Human and AI
Marisa Tschopp
Three Wrong Questions about AI
The spread of ubiquitous smart technologies is heightening tensions between humans, machines and society. Passionate debates rage, in which the extremes, from the-end-is-nigh Nostradamus followers to fervent tech evangelists, seem to hold sway.
Uncertainty and scepticism are growing, which inevitably puts the spotlight on trust: how can we convince consumers to trust us? Asked this way, the question is a clever diversionary manoeuvre that distracts from weaknesses in a company's own culture or the quality of its products. Use of the word trust is soaring in design guidelines, advertising, image campaigns and the codes of ethics of tech firms, banks and AI start-ups. Yet it mostly serves as a meaningless filler, intended to evoke vaguely positive connotations. One could also call it "trust-washing".
It’s high time to clear away the myths, speak plainly and stop asking the wrong questions:
Wrong question 1: Do you trust AI?
The question of whether someone trusts AI, and to what extent, is in fact pointless. Trust always has three dimensions: who trusts, who or what is trusted, and what the goal of that trust is. For example: I trust Amazon to deliver my order promptly. But I don't trust Amazon to use my personal data "ethically", not to misuse it for marketing purposes, or not to analyse me using questionable "psychographic" means.
⇒ A better question would be: do you trust this [AI-based product] to achieve objective X?
Wrong question 2: How can we increase trust in AI?
The folks in marketing and sales will be clamouring to work out how to steer, influence or manipulate consumers so that trust in AI product X, and in turn the likelihood of adoption, increases. Here a clear demarcation and a change in focus are needed: trust and trustworthiness are fundamentally different concepts. Trust is a mindset of the consumer, whilst trustworthiness is a property of a product, process or company. Guidelines for working on these aspects are popping up en masse. One thing is clear: trust cannot be bought; it must be earned by demonstrating that you are worthy of it.
⇒ A better question would be: how can we be trustworthy?
Wrong question 3: When should we trust AI?
Never – as J. Bryson would say. AI-based programs are not a matter of trust. Software needs to be trustworthy, i.e. built in such a way that its developers can be held accountable. That means we must know, and be able to verify, what a particular system can and cannot do. Trust is irrelevant here, just as it is in bookkeeping. From an ethical perspective, the question is simply misplaced.
⇒ A better question would be: how can we better understand AI?
Countless psychological research groups are rightly working to decode the relationship between trust and technology: how does trust influence the way we adopt and use technology? What role do other factors, such as understanding of AI or a perceived sense of agency, play in user behaviour? Fatal accidents have been documented in which people placed too much or too little trust in technology: the well-known "death by GPS" phenomenon, for instance, or the engineer who had a fatal accident in a Tesla because he trusted the system completely to take him to his destination without any input – so much so that he thought it safe to play video games during the trip.
To summarize: we need a nuanced view of the situation and a transdisciplinary path that integrates science, practice, politics and other stakeholders. It's high time we asked the right questions and discussed them together.