Trust and AI: Three Wrong Questions

by Marisa Tschopp

Key Points

  • Trust is often promoted as a success factor for new technologies
  • Certain fundamentally wrong questions about trust amount to a kind of "trust-washing"
  • These include "Does the user trust our AI?", "How can we increase trust?" and "Should society trust AI?"
  • Each of these questions must be reframed to achieve its actual goal

No trust, no use: trust is often put forward as a critical success factor for the acceptance and use of new technologies. But it's not that simple. Do the use and acceptance of new technologies really go hand in hand with trust? To bring trust and AI together more successfully, you first need one thing: better questions.

The spread of ubiquitous smart technologies is increasing tensions between humans, machines and society. Passionate debates rage in which the extremes of the-end-is-nigh Nostradamus followers and fervent tech evangelists seem to hold sway.

Uncertainty and scepticism are growing, which inevitably puts the spotlight on trust: how can we convince consumers to trust us? It's a clever diversionary manoeuvre that distracts from the weaknesses of a company's own culture or the quality of its products. Use of the word trust is soaring in design guidelines, advertising, image campaigns and the codes of ethics of tech firms, banks and AI start-ups. But it mostly serves as a filler word intended to evoke positive connotations. It could also be called "trust-washing".

It’s high time to clear away the myths, speak plainly and stop asking the wrong questions:

Do you [the user] trust AI?

The question of whether someone trusts AI, or to what extent, is pointless when asked in isolation. Trust is always a three-part relation: someone who trusts, someone or something that is trusted, and a goal that the trust refers to. For example: I trust Amazon to deliver my order promptly. But I don't trust Amazon to use my personal data "ethically", or to refrain from misusing it for marketing purposes and analysing me by questionable "psychographic" means.

⇒ A better question would be: do you trust this [AI-based product] to achieve objective X?

How can we [the tech company] increase trust in AI?

The folks in marketing and sales departments will be clamouring to work out how to control, influence or manipulate the consumer so that trust in AI product X, and in turn the likelihood of adoption, increases. Here, a clear demarcation and a change in focus are needed: trust and trustworthiness are fundamentally different concepts. Trust on the part of consumers is a mindset, whilst trustworthiness is a property of products, processes or a company. Guidelines for working on these aspects are popping up en masse. It's clear that trust cannot be bought; it must be earned by demonstrating that you are worthy of trust.

⇒ A better question would be: how can we be trustworthy?

Should we [society] trust AI?

Never – as Joanna Bryson would say. AI-based programs are not a matter of trust. Software needs to be trustworthy, i.e. built in such a way that its developers can be held accountable. This means we need to know, and be able to verify, what a particular system can and cannot do. Trust is irrelevant, just as it is in bookkeeping: we don't trust the books, we audit them. From an ethical perspective, the question is simply misplaced.

⇒ A better question would be: how can we better understand AI?

Countless psychological research groups are rightly working to decode the mystery of trust and technology: how does trust influence the way in which we adopt and use technology? What role do other factors, such as an understanding of AI or a perceived sense of agency, play in user behaviour? Fatal accidents have been documented in which people placed too much or too little trust in technology: for example, the well-known "death by GPS" phenomenon, or the engineer who crashed a Tesla because he trusted the system completely to take him to his destination without any input, which led him to believe it was safe to play video games during the trip.

Conclusion

To summarise, we need a nuanced view of the situation and a transdisciplinary approach that integrates science, practice, politics and other stakeholders. It's high time we asked the right questions and discussed them together.

About the Author

Marisa Tschopp

Marisa Tschopp studied Organizational Psychology at the Ludwig-Maximilians-University in Munich. She conducts research on artificial intelligence from a humanities perspective, focusing on psychological and ethical aspects. She has given various talks, including for TEDx, and is a board member of the global Women in AI (WAI) initiative. (ORCID 0000-0001-5221-5327)
