Artificial Intelligence - No Trust, no Use?

Marisa Tschopp
Prisca Quadroni-Renella (external)
Marc Ruef
on November 11, 2021
time to read: 8 minutes

Keypoints

Can and should we trust AI?

  • Trust and trustworthiness are popular themes in the AI industry and in research
  • Whether we are actually capable of placing our trust in machines has been hotly debated
  • Asking whether we should trust shifts the focus to trustworthiness
  • When talking about machines and their properties, the focus should be on trustworthiness
  • Laws and ethics labels do not solve the trust problem

You have probably heard that in April 2021, the European Commission put forward a proposal to regulate Artificial Intelligence (AI) so that the European Union can reap the benefits of AI. The words trust and trustworthiness together appear over 100 times in their proposal. But do we really understand the role of trust in the AI context?

In Switzerland, the Digital Trust Label, the Trust Valley or the Center for Digital Trust have also gained momentum from the popular trust theme, suggesting that trust is a catalyst for the successful deployment of artificial intelligence technologies. However, none other than Prof. Dr. Joanna Bryson once wrote: No one Should Trust AI. Now you may wonder what all this fuss about trust in AI and the digital world is about. And will blockchain, crypto, or the label industry solve all our trust problems? (Spoiler alert: No, no, maybe kind of.)

First Question about Trusting AI: Can we?

Short answer: Yes. Long answer: It’s complicated.

The question of whether we are actually capable of placing our trust in machines has been hotly debated between the trust believers (think: Yes, we can!) and the trust skeptics (think: AI is nothing to be trusted!). We position ourselves on the trust believer side. We assume that human-to-human trust is different, but that it can be translated to, and is in many ways similar to, the way humans say they trust machines.

In human relationships, trust is best understood as a coping strategy to deal with risk and uncertainty. This coping strategy is a form of evaluation of the characteristics of the trust receiver, for instance, evaluating the other person's competence. On the trust giver's side, vulnerability is the critical element: in a relationship, there is always a certain risk of getting hurt. If there is no risk, we no longer speak of trust.

Vulnerability is also critical when it comes to human-machine trust. The latter can be measured with questionnaires evaluating the extent of trust in a certain technology. Based on their trust, we expect a behavioral outcome: whether and how people use and rely on a certain technology. Poor performance or unclear processes of a system and other factors of trustworthiness can be such vulnerabilities that influence people's trust attitudes. The key here is to understand that trust and trustworthiness are two distinct concepts, which are unfortunately often conflated.
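
To make the measurement idea more concrete, here is a minimal sketch of how such a questionnaire could be scored. The four items and the 1-5 Likert scale are purely illustrative assumptions, not a validated trust instrument:

    from statistics import mean

    # Hypothetical items, each rated from 1 (strongly disagree) to 5 (strongly agree).
    ITEMS = [
        "The system is dependable.",
        "The system performs its task competently.",
        "I understand how the system reaches its results.",
        "I am comfortable relying on the system's output.",
    ]

    def trust_score(responses):
        """Aggregate Likert responses into a single trust score between 1.0 and 5.0."""
        if len(responses) != len(ITEMS):
            raise ValueError("one response per item is required")
        if any(not 1 <= r <= 5 for r in responses):
            raise ValueError("responses must be on the 1-5 Likert scale")
        return mean(responses)

    # Example: a respondent who rates performance highly but transparency low.
    print(trust_score([5, 5, 2, 4]))  # -> 4.0

A real instrument would of course be validated psychometrically; the point is only that trust, as an attitude, is quantified on the human side and then related to behavior such as reliance.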

What we take from all this research is that three things are important to understand: first, trust is an attitude toward an involved other (which can be a machine); second, this other is entrusted to help reach a specific goal; and third, the whole situation is characterized by uncertainty (think: I trust Amazon to deliver my package on time, but I do not trust Amazon to respect my privacy).

⇒ So, yes, we can trust AI, in the sense that we are capable of doing so, if we define it in a concrete human-machine trust context. But just because we technically can, should we?

Second Question about Trusting AI: Should we?

Short answer: No. Long answer: It’s complicated.

Practically and normatively speaking, the question Should we trust AI? is much more interesting, because it shifts the discussion to the topic of trustworthiness. While trust is the attitude of a human and a complicated latent variable in psychometric terms, trustworthiness is much more of a technical term and refers to the properties of the technology. Bryson's statement No one should trust AI is utterly effective at communicating a message: Do not use AI systems (or many other systems) blindly.

An often-quoted example of blind trust gone wrong is the well-educated Tesla driver who died in an accident because he was gaming and not watching the road at all. He trusted the system too much to reach its goal. Whether this is a problem of overtrust, of misleading Muskian marketing strategies, or a matter of the driver's intelligence, or a mix of all three, will remain a mystery. Nevertheless, educating people towards a zero-trust approach is most likely the safest way to not get hurt by machines.

However, not trusting and refusing to use a system, even though it may lead to better results, is not ideal either. Ideally, we would promote the concept of calibrated trust, where the user adapts the level of trust (and whether and how the user will rely on a system) along (or against?) the performance of the respective system. Why the against? Because we know that many companies either exaggerate or hide the true capabilities of a system to sell their products (= hype, which must be put to the test). The calibration of trust can take place both on the machine side (e.g., at the level of design) and on the human side (e.g., through cognitive effort).
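
As a toy illustration of calibration on the human side, consider the following sketch. The update rule, the numbers, and the risk thresholds are all illustrative assumptions; the point is only that trust is nudged toward observed performance rather than advertised performance, and that reliance also depends on the risk at stake:

    def calibrate(trust, observed_success, rate=0.1):
        """Nudge trust toward the observed outcome (1.0 = success, 0.0 = failure)."""
        outcome = 1.0 if observed_success else 0.0
        return trust + rate * (outcome - trust)

    def should_rely(trust, risk):
        """Only rely on the system when calibrated trust clearly exceeds the risk."""
        return trust > risk

    trust = 0.9  # inflated starting point, e.g. shaped by marketing claims
    for success in [True, False, False, True, False]:  # actual system performance
        trust = calibrate(trust, success)

    print(round(trust, 2))               # -> 0.69: trust drifts toward reality
    print(should_rely(trust, risk=0.8))  # -> False: high stakes, stay zero-trust
    print(should_rely(trust, risk=0.3))  # -> True: low stakes, reliance is fine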

⇒ So, calibrating trust can save lives, but in cases of uncertainty and high risk in human-machine relationships, we are most likely better off with a zero-trust approach (better safe than sorry).

Third Question: Should we Stop Talking about Trust?

Short answer: Yes. Long answer: It’s complicated.

We think the most important message of saying you should not trust AI is: think before you act. But thinking is exhausting. Wouldn't it be great if we could in fact blindly trust a company to respect our privacy and deliver our products on time? Well, no blockchain will help here, and don't even start to crypto us with a solution. A good label may be a good start, covering all the things that are not regulated by law. But aren't we making things even more complicated by inserting another actor into a trust equation we already don't fully understand? Will we be investigating trust in labels in the future as another proxy for trust in machines?

⇒ So, trust as an attitude is interesting for psychologists. But when talking about machines and their properties, use the terms correctly and focus on trustworthiness, because that is what we can control best.

Follow-up Question: What about Law and Trust?

To ensure trustworthiness, labels are good, but aren't laws better? Shouldn't we put all our efforts into laws and regulations? Are they our only true indicator of trustworthiness? Firstly, yes, we must put a lot of effort into laws and regulations to ensure accountability. Secondly, no: equating law with trust is a false conclusion. We believe that increasing trust should not be the motivator behind creating laws. Instead, it should be accountability and a functioning society. The fundamental purpose of a law is still to establish standards, maintain order, resolve disputes, and protect liberties and rights, not to build personal trust, nor trust in AI per se.

Conclusion

Laws and ethics labels do not solve the trust equation no trust, no use. In fact, there might not even be such an equation, as in: the more we trust, the more we use. People rely on the most untrustworthy products for the most irrational reasons, because the rational homo oeconomicus died long ago. Homo sociologicus now prefers convenience and sociality. Humans are inclined to be social down to the deepest cell. We love humans, we love connection, and because we have no other source of behavioral knowledge to draw on, we even humanize machines.

But anthropomorphism is not all bad, unless you humanize agents in order to manipulate people by design. Yes, the term trustworthy AI is anthropomorphic language. Yet it communicates a message instantly, understood by nearly everyone who has an idea of that fuzzy feeling of trust. If you said explainable or accountable AI instead, only a very small fraction of people would understand.

So, as much as the terms trust and trustworthiness have earned legitimate criticism in the AI context, they have also earned legitimate praise. The people using these terms aim to make the main reasons for building and using these complex sets of technologies, and their impact on society, implicitly understandable for everyone. Maybe we would all be a little better off if we relaxed a little and saw trustworthy AI as a vision rather than a statement of technical accuracy.
