
AI & Trust

Stop asking how to increase trust in AI

by Marisa Tschopp
time to read: 19 minutes

Keypoints

This is how we trust technology

  • This report offers a theoretical background on trust, outcomes of a workshop on trust in AI, and personal reflections by the author
  • Confusion about terminology (trust, reliance, trustworthiness) prevails and impedes progress
  • Trustworthiness as a property is distinct from, but related to, trust as an attitude
  • Interpersonal trust can be partially translated to trust in automation and AI
  • Trust is critical because it mediates reliance on automation and AI
  • Users must be skeptical and find the right level of trust to use technology properly, avoiding under- or over-reliance
  • Technology providers must earn user trust by demonstrating trustworthiness

For over two and a half years, the Titanium research department has been working on the role of trust in the context of artificial intelligence. The first idea was to develop an Artificial Intelligence Quotient, a behavioral measurement method to test the skills of conversational AI; in other words, it should tell how smart Siri is. The rationale was that if there was some sort of proof of performance, such as a trust mark or stamp, people would be more likely to trust, e.g., Siri, and therefore more likely to use it (see Tschopp & Ruef, 2019a). The underlying assumption is the no trust, no use hypothesis: if users do not trust a technology, they are not going to use it (Hancock et al., 2011). Performance, referring to a reliable, robust product, is one important dimension of trust in automation (Lee & See, 2004), but it is not the only dimension that influences trust. To better understand the nature of trust and trustworthiness, and specifically the no trust, no use hypothesis, we developed the Titanium Trust Report 2019 (Tschopp & Ruef, 2019b,c), a mixed-methods analysis with 111 participants from heterogeneous backgrounds and differing levels of expertise. It investigates trust in and usage of artificial intelligence.

At the NZZ Future Health Conference in Basel, we had the opportunity to put years of theoretical research into practice. With over 30 participants, we held a workshop working towards a mutual understanding of trust in artificial intelligence: what it means, what it does, and how to deal with it. The organizers called the session: How can we strengthen people's trust in AI? This report presents the theoretical background, the outcomes of the workshop, and personal reflections on why we should stop asking how to increase trust.

Trust, Reliance, and Accountability – Can we count on an algorithm?

No one should trust Artificial Intelligence is the headline of an article by Dr. J. Bryson (2018), Professor of Ethics and Technology at the Hertie School in Germany. In the article, she states that we should not need to trust AI because it is both possible and desirable to engineer AI for accountability (Bryson, 2018). Does this mean that we do not need trust when we rely on AI and can hold someone accountable in case of failure? Designing for accountability often means transparency in processes: it should be clear how an algorithm came to a conclusion (e.g. why person X got a loan and person Y did not). However, there is a misconception about transparency and trust. The prevailing myth is that when we enhance transparency, we enhance trust. In fact, the opposite is the case: when we design for transparency, we give up on trust (Botsman, 2020) and replace it with control.

There seems to be a lot of confusion about the nature of trust in the context of technology, which urgently needs clarifying. Trust is more and more used as a buzzword (#trustwashing), a nice-to-have or something to advertise with. However, you simply cannot advertise for trust. Doesn't your level of trust automatically decrease when someone says: Trust me!? Trust is an effort, and it must be earned by demonstrating trustworthy behavior (Botsman, 2020). Trust as an attitude is just as important as trustworthiness as a property, and both influence how we use technology.

Towards Understanding Trust and Trustworthiness

In interpersonal relationships, trust is a coping mechanism that helps humans deal with uncertainty and risk. It requires that a trustor is vulnerable to a potential betrayal by the trustee, whom the trustor entrusts with a certain task (for example, being a loyal partner or a competent mechanic) in a specific context at a specific point in time. According to A. Baier, understanding betrayal is critical if we want to distinguish trust from reliance. In her example, she explains that you can rely on inanimate objects such as an alarm clock; but if it does not wake you as expected, you are disappointed, not betrayed. Reliance without the possibility of betrayal is not trust (for criticism of this view see McLeod, 2015). There are different views on the respective roles of trust and reliance. Within this report, we take the position that a trustor relies on artificial intelligence, and that this reliance is mediated by trust in the designers of the product; the product, in turn, should possess trustworthy properties.

Trustworthiness is distinct from trust as an attitude, but ideally, the trustor trusts a trustee who is trustworthy. Trustworthiness refers to a range of properties of the trusted person; it can take the form of visible characteristics or features. Trust is largely based on the observation of cues, which are processed on analytic, analogical, and affective levels (Lee & See, 2004). Snap judgments of trustworthiness can also be made rather subconsciously in the early stages of trust formation, when no previous information about the trustee is available. Humans perform quite well at using subtle cues to judge the trustworthiness of other humans, which has been investigated with fMRI in experimental settings. However, this rather subconscious assessment does not transfer well to trust in technology (Hoff & Bashir, 2015). Such fMRI-based measurements of trust might work in the context of human-robot interaction, but their results should be viewed with great caution.

Translating Interpersonal Trust into Trust in Technology

Vulnerability is at the heart of all trust theories

Vulnerability is at the heart of all trust theories. In interpersonal trust, psychological vulnerability refers to the willingness to be vulnerable to others. The level of risk humans are willing to take differs and is often investigated under the concept of dispositional trust, or the propensity to trust. It can best be described as an individual trait, often correlated with optimism or pessimism. Researchers look at gender or cultural differences to explain why some people trust more easily than others. For instance, Hofstede's power distance dimension has been used to explain some of the variance in interpersonal trust reluctance among participants with a Japanese background (Hoff & Bashir, 2015; Lee & See, 2004).

When translating interpersonal trust into trust in automation, vulnerability is also at the core. However, not only the psychological vulnerability of the trustor (what risk is the trustor willing to take?) but also technological vulnerabilities play a key role, as they affect the trustworthiness of a product. Technological vulnerabilities can best be described as weaknesses in performance or process: Is it working reliably? Does it do what it is supposed to do, every time I want it to?

It would be desirable to design a perfect machine that always does exactly what it is supposed to do, every time it is asked to. But the perfect machine is an illusion. We are talking about software and hardware, which are always faulty, never perfect. Research suggests that many humans have a positivity bias towards machines: people overestimate their capabilities, tend to believe machines are perfect and, as a result, over-rely on them. This is one major distinction from interpersonal trust, where humans have been shown to be rather skeptical at the beginning (Dzindolet et al., 2003). Consequently, the dynamics of trust in automation and trust in humans proceed in reverse order. Psychological biases are difficult to overcome, but learning through experience (experiencing a mistake yourself), awareness and training (e.g. reflecting in heterogeneous teams) are possible countermeasures.

In short, trust in automation is not the same as merely relying on automation; nevertheless, interpersonal trust is the dominant paradigm in a substantial corpus of research, as it shares critical features with trust in automation. Trust is an important variable when investigating how people interact with, use, and rely on technology. This is the core of the significant work by Parasuraman et al. (2000) and Parasuraman & Riley (1997), who distinguished between misuse and disuse of automation, i.e. relying on technology too much, too little, or not at all. An often-quoted example is how the captain of the cruise ship Costa Concordia under-trusted the ship's navigation system and decided to steer manually, which may have been one cause of a disaster that killed many passengers. On the other hand, many examples have also shown how over-trusting a system has led to fatal errors (Hoff & Bashir, 2015).

Of course, inconsistencies remain regarding the exact definitions of trust and trustworthiness, the distinction from reliance, and how interpersonal trust translates into trust in automation. Another important topic is how trust can be measured: through questionnaires, behavioral observation, or physiological measurements (e.g. from neuropsychological research). This will be the subject of the next report. For this report, the underlying understanding of the terms should be in focus: Human-automation trust can be viewed as a specific type of interpersonal trust in which the trustee is one step removed from the trustor (Hoff & Bashir, 2015, p. 10).

Interpersonal Trust and Trust in Technology are both three-way relationships

Last but not least, it is important to keep in mind that trust in technology always has three components. The trustor trusts the trustee (human or technology) to do X but not Y. For example, I trust Amazon to deliver my package on time, but I do not trust that Amazon respects my privacy. Generalization of trust and imprecise usage of terminology are counterproductive. Simplification is the enemy.

Personal reflection: Why we should care about trust

Heidi Dohse, becoming a professional heart patient (picture used with permission)

Automation, defined as technology that actively seeks data, transforms information and makes decisions (often based on inherent AI, see Hengstler et al., 2015), has the potential to improve efficiency and safety in many areas of application. However, an appropriate level of trust is key to relying on automation or other kinds of technology, including AI-based systems. Ideally, we avoid over- and under-confidence, because otherwise we run the risk of missing opportunities, opportunities not only to save someone's life but also to make it more worth living. Trust mediates how humans rely on technology. The case of Heidi Dohse, a professional heart patient, demonstrates the intricacies of trust in technology. I had an interesting conversation with Heidi, and this is her story. For over 35 years, Heidi Dohse has been 100% dependent on her pacemaker and would not have survived without this technology.

Although Heidi said that at one point she had to trust her pacemaker, the context was one where there was clearly no real choice (given that death is not a desirable alternative). She had to rely on technology to survive; she had to trust that it works well. In this case, trust did not affect adoption, but usage after her pacemaker was implanted. In the beginning, she checked her pulse manually very often, many times per day, even per hour. As time went by, Heidi got used to the pacemaker and checked less often. In her own words, she learned to trust that everything works well (compare the concept of learned trust by Hoff & Bashir, 2015). Can these trust questions be transferred to AI and automation?

Outcomes of the Workshop at Future Health Basel

Marisa Tschopp explaining the three dimensions of trust in automation: Performance, process, and purpose

Heidi Dohse, Loubna Bouarfa (CEO of OKRA), and I co-led a workshop at the NZZ Future Health Conference 2020 in Basel, a deep-dive session in which we worked on various questions around trust with attendees from the health sector. The question of the workshop, formulated by the organizers, was: How can we strengthen people's trust in AI? Around 30 people attended the session, where they reflected on their personal, critical challenges around trust. It soon became very clear that terminology is an issue. What does trust mean? How does it affect me, my company, my products and clients? The distinction between trust as an attitude and trustworthiness as a property was a critical point, which led to a lot of confusion.

The attendees wrote their questions and challenges on sticky notes in small groups, which we discussed together at the end. The questions and comments could be structured into at least two categories: tech-related and tech-unrelated. The tech-related questions refer more to trustworthiness as a property; topics such as transparency, regulation, data management, and data sharing were examples.


No trust without transparency or How to know if your AI application is not biased?

How to know if an AI is not biased?

The technology-related questions can be linked to the cognitive influences on trust. Cognitive influences relate to the hypothesis that people think about whether they trust artificial intelligence or not; it is a more conscious, rational perspective. The technology-unrelated topics deal with questions such as the fear of being replaced by algorithms or how a company can create a cultural environment to unlock the full potential of artificial intelligence. These are rather affective (emotional) influences on trust. This perspective means that people not only think about whether they trust AI or not, but also, or perhaps rather, feel it.


How can we create a cultural environment in a company to unleash the full potential of AI?

According to Lee & See (2004), we tend to overestimate the cognitive influences on trust (we focus on rules, guidelines, or explainability) and to underestimate the emotional influences. We tend to forget that trust, ultimately, is an emotional response. Furthermore, we also need to consider that trust comes in layers and is dynamic. Hoff & Bashir differentiated three layers of human-automation trust: dispositional, situational, and learned trust. Defining all of them would go beyond the scope of this article, but in the context of artificial intelligence and within the discussion of the workshop, the distinction between initial and dynamic learned trust is relevant: initial learned trust represents trust prior to interacting with a system, while dynamic learned trust represents trust during an interaction (2015, p. 30). This distinction matters because trust-related measures have different meanings and effects depending on the layer.

The fact that trust relationships are unstable and dynamic is comforting and discomforting at the same time. Discomforting, because there is no control, no consistency, no stability we can count on. Comforting, because the dynamics of trust force us to constantly keep in touch with our inner selves and reflect on the status quo; and if something goes wrong, there is always a chance to undo the damage and regain trust, if we only work hard enough.

Personal Reflections on Heidi Dohse’s Case

Heidi Dohse, a professional heart patient, takes part in an Ironman triathlon

Heidi Dohse not only survived thanks to great hardware, reliable algorithms, and connected devices. She has since connected her pacemaker to various IoT devices and data-sharing platforms, so that she is now able to compete in the Ironman (a long-distance triathlon) thanks to constant monitoring and adaptation of the pacemaker during training sessions. Now, 20 years after the first heart surgery, automation not only helps her to survive but to thrive and live a life in which she can pursue her dreams and goals, unthinkable 10 years ago. As much as the privacy advocate in me would like to disapprove of data exchange platforms like Strava, and the researcher in me warns not to generalize from an individual case, it was comforting and moving to hear her story. I am sincerely happy that, thanks to these technological developments, she can live a happy, fulfilled life.

Final thoughts

Artificial plus Human Intelligence

As always, AI and automation are not only about replacing human capabilities, be it the heartbeat or intelligence. They are about adequate use. Trust is one of the most important mediators when it comes to reliance on technology, and it is critical to invest in all of its facets. If the potential of artificial intelligence is to be harnessed, it needs something more intangible than good software, hardware, or privacy laws: it needs integrity. Laws and technological security are necessary conditions to build trust in AI, but they are not sufficient. Furthermore, the dimensions named above influence each other in ways that still need exploration. Frison et al. have demonstrated in their study that drivers infer the trustworthiness of vehicles from design aspects that have nothing to do with objective performance. Referring to the halo effect, the authors recommend that designers, in their case vehicle designers, should carefully consider halo effects and [avoid giving] users the impression that systems perform better than they actually do (Frison et al., 2019). This can have dangerous consequences.

In order to create an adequate level of trust in AI, users need to learn how to think critically. Skepticism and trust are not opponents, but rather dance partners that can guide users to the right decision. Users must start asking questions now and, with the help of policymakers and educational institutions, find answers to questions such as: How far can and should I trust artificial intelligence, and where do I have to set clear boundaries? Developers, companies and the like must stop asking: How can we increase users' trust in AI products? Trust is an effort; you cannot advertise for it or buy it. Trust must be earned. The role of tech companies is to earn trust by demonstrating that they are trustworthy (Botsman, 2020).

References

About the Author

Marisa Tschopp

Marisa Tschopp (Dr. rer. nat., University of Tübingen) is actively engaged in research on Artificial Intelligence from a human perspective, focusing on psychological and ethical aspects. She has shared her expertise on TEDx stages, among others, and represents Switzerland on gender issues within the Women in AI Initiative. (ORCID 0000-0001-5221-5327)

Links
