Human and AI - A Matter of Trust

by Marisa Tschopp
on November 04, 2024
time to read: 12 minutes

Keypoints

Trust shapes whether and how we use artificial intelligence (AI)

  • Human-AI relationships are shifting from tools to collaborators, like co-pilots
  • Trust and trustworthiness play a key role in human-AI interaction
  • Distinguishing between actual and perceived trustworthiness is important
  • Technical standards define trustworthiness, but users also trust non-technical cues like design
  • Building trustworthy AI requires transparency and aligning trust with reality

As AI shifts from simple tools to co-pilots and companions, trusting AI becomes even more complex. Trust depends on a system’s real trustworthiness, but most of us can only judge trustworthiness by what we see, like the design or tone of a chat, or a label the system has been awarded. So, how can we bridge the gap between what AI can actually do and how trustworthy it feels?

Trust in AI: A Psychologist’s Journey in Cybersecurity

Seven years ago, I found myself at an unusual intersection: working as a psychologist in a cybersecurity company. My goal was to study the relationship between humans and technology, particularly artificial intelligence, and to work in practical settings where human factors play a major role. Unlike academic research, where the work can feel distant, I wanted a role that would directly impact people. To be honest, I did not think psychology would be especially useful in cybersecurity, apart from understanding why people fall for spam or social engineering tricks.

When I first joined, we explored how psychology could support cybersecurity’s core work. I was not there to hack systems; they had experts for that. Maybe I was there to hack humans, though that term feels strange to me. But in a way, it describes the core of what psychologists do: We observe human behavior, analyze data, and try to predict what people might do next. Through these discussions, one theme stood out: Trust. It was something to observe in human-AI interaction and to evaluate as a predictor of how humans use or rely on AI systems. This theme has stayed relevant globally. In cybersecurity, trust is often discussed but not always welcomed, especially since zero trust is a common strategy. However, trust remains fundamental. It shapes how we interact with each other and with technology, even though it can bring anxiety when it is so easily misused by people and machines alike.

Trusting AI: An Attitude Built on Cognitive and Emotional Grounds

In recent years, we have dedicated a lot of work to understanding trust in AI. Trust, essentially, evolves from a mix of thoughts and feelings; it is more like some sort of attitude. These thoughts and feelings help us decide whether an AI can assist us in reaching a goal, especially when there is uncertainty or risk involved. For example, should we trust ChatGPT to write something important, or Alexa to make a purchase? Often, we do not know all the details about these systems, so trusting them requires a leap of faith. Full certainty, after all, is an illusion.

When using AI, we can either go in blind or resist. Or, we can find a middle ground, what some have coined calibrated trust, a term we adopted in our past efforts in the company. This approach combines gut feeling with logical judgments about how trustworthy a machine is. Recently, I heard the term informed trust from Professor Markus Langer, which focuses on making thoughtful choices. Although, to me, both terms may feel too rational. We should not forget that trusting AI is not only a mental calculation; it also involves emotions. Maybe more than we want to acknowledge, because we still somehow cling to the image of ourselves as homo economicus, all too smart and rational in our decision-making.

People do not only think about whether they trust AI; they also feel it. And with these feelings comes a sense of vulnerability, which we rarely talk about when it comes to machines. In cybersecurity, vulnerability is usually seen as a technical issue: weak points that hackers can exploit. In human terms, trust involves opening ourselves to risk, or the possibility of disappointment. This is why some see trust as a valuable gift we give to others, an openness that we find in everyday relationships, like trusting a partner, boss, or doctor. Trusting AI brings its own form of vulnerability. But in cybersecurity, we are often encouraged to avoid this overly humanized term; trust, we are told, only happens between humans. Instead, we focus on control, reliability, and zero trust, to stay on the safe side. These approaches make us feel stronger but often ignore the emotional complexity of human trust. Trusting AI means recognizing this emotional edge alongside the cognitive part, a vulnerability we are only starting to understand.

Consider personal assistants like Alexa or Google Assistant. Trusting them goes beyond privacy settings. It involves allowing a machine into our private lives, to answer questions and even predict our needs. This type of trust requires us to lower our guard, admitting that allowing technology this close has risks. In high-stakes situations, like autonomous driving or medical AI, trust brings even deeper vulnerability. We are placing our safety in the hands of systems we don’t fully understand, raising tough questions: Can we trust AI to make the right choices for our health? What happens if it fails? Who do we turn to? Trusting AI – or humans – means letting go of complete control, even when we only partially understand the risks. And this letting go of control, whatever the reason or excuse, is certainly not a good idea when the stakes are high. Thinking about this, perhaps trust itself is the vulnerability in cyberspace. When we choose to trust, we open ourselves to possible harm. So, why do we do it? Is it all for convenience, efficiency, or the need to keep up?

Beyond Trust: Human-AI Relationships and Collaboration

Maybe, deep down, we hope that trust will lead to new ways of working with AI, so that AI truly enhances our thinking and doing. We seem to be moving toward an era where AI is more than just a tool. This shift is marked by a new marketing paradigm: The rise of AI as co-pilots, peers, and companions. A few years ago, I predicted that digital assistants would die out. It started with the death of Cortana, Microsoft’s digital assistant. Alexa and Siri, still around, were once designed to assist but were limited to wake words, awkward pauses, and simple, impolite commands. Then new AI models like GPT came along. These systems no longer fit the old idea of tools under our control. Instead, they are presented as collaborators, engaging with us at eye level and responding (supposedly) intelligently, and much more smoothly.

This reminds me of Professor Joanna Bryson’s provocative stance in her well-known article, Robots Should Be Slaves. The main point, for me, is that robots are mere tools, not partners, nothing you can or should trust or collaborate with. Yet today, AI is marketed as much more than a simple instrument. It is sold as a companion, a co-pilot, and even a confidant. Microsoft’s Copilot promises to co-create documents and presentations, ChatGPT pitches itself as a brainstorming aid for daily tasks, and Replika AI even meditates with you, subtly shifting from tool to emotional partner or guru. These systems are not just marketed as productivity tools. They are positioned as collaborators or even surrogate relationships.

We should approach this shift with caution. Are we muddying the line between assistance and autonomy, growing dependent on systems that are not, and arguably should not be, true partners? Have we stopped simply commanding these tools and instead started working, thinking, even loving, alongside them? The question about collaboration was sharply raised in an insightful article by Katie Evans et al. last year (2023). Since reading it, I have adopted a habit: Whenever I see AI, I mentally replace it with toaster or hammer. It is a grounding reminder that these systems are tools. After all, I’d find it rather odd if someone claimed they co-baked their toast with their toaster.

And yet, for now, AI at least feels different from ordinary tools, despite the fact that it is not. Its responsiveness and adaptability give it a presence that feels more dynamic than simple machines and software. And with ongoing discussions about advanced AI that might surpass human intelligence, comparing AI to hammers feels outdated, unthinkable, or awkward to many (while in fact it helps a lot to put many claims into perspective).

The Thing About Trustworthiness

Recently, I had the opportunity to collaborate (well, and make friends) with Nadine Schlicker from Marburg University, and we engaged in deep and fruitful discussions on trust and trustworthiness in AI (also guinea pigs, Lord of the Rings, IPA, and volleyball, but that’s a different story). Trust – the willingness to be vulnerable – is complex, influenced by many factors, right? So, Nadine and her co-authors offer a model that focuses on one specific factor that influences trust: The perceived trustworthiness of the other, in this case, the system. One of the main ideas in their model is that people with different values and goals may see the trustworthiness of a system differently.

Basically, we can assume that the actual trustworthiness of a system is hidden and cannot be fully observed. This is partly because the system is like a black box, meaning we cannot see everything inside it, and also because we cannot test it on every possible dataset in the world. Instead, we rely on, for instance, expert assessments, but they also only capture part of the system’s actual trustworthiness, since they are based on limited data, specific scenarios, and varying standards.

I appreciate this model’s distinction between actual and perceived trustworthiness. In cybersecurity, trustworthiness is often defined by technical standards like safety protocols. Although these standards are essential, most people lack the expertise to evaluate them. For many, trust still mostly forms based on perceived qualities, not the system’s actual features. This raises a big question: How can we build trust if actual and perceived trustworthiness do not align? Ideally, we should only trust systems we fully understand, but this is rarely possible. The challenge is bridging the gap, finding ways to align actual and perceived trustworthiness. This alignment is crucial to protect users and allow them to make informed decisions. By understanding this, we can create AI that not only is trustworthy but also feels trustworthy.

Practical Takeaways: Trustworthy Systems

It is important to understand the two sides of trustworthiness: True trustworthiness, which is how reliable a system actually is, and perceived trustworthiness, which is how reliable it seems to users. True trustworthiness depends on solid technical standards, like a medical AI that has been carefully tested. Perceived trustworthiness, though, can be influenced by other things, like a chatbot’s friendly tone or a financial app’s security labels, which do not always reflect the system’s real reliability. We should aim to base our trust on relevant sources, like technical reports or validation, instead of emotions triggered by the system’s appearance. However, perceived trustworthiness can be accurate if it is based on the right signals, so we should not assume it is always wrong.

As Nadine Schlicker puts it, current efforts to build trustworthy AI should consider more than just how to make AI reliable. We should also ask how to design systems that let different users assess AI’s trustworthiness based on their own needs. This requires clear standards and cues to help diverse user groups evaluate AI effectively.

Conclusion

Trust in AI has become a central issue, especially in my work as a psychologist within cybersecurity. Trust is a complex interaction between what we consider trustworthy and how we perceive the trustee, the system. It’s a mix of rational judgment and emotional response, shaped by our willingness to take risks and be vulnerable, an aspect that’s easy to overlook in the world of technology.

With AI evolving from simple tools to what we now call co-pilots or companions, our perception of these systems might be changing fast. Tools like ChatGPT and Microsoft’s Copilot aren’t just there to follow orders, they are now presented as partners. This shift leads to new questions: Can (or should) we truly trust these systems, and how do we know if they are reliable? For most people, technical evaluations are difficult to understand, so we rely on cues, like design or user reviews, which don’t always reflect reality. To make AI genuinely trustworthy, we need more than just good technology. We have to make sure that users can see and understand what makes a system reliable. Only when true trustworthiness and what people feel align can we build AI that evokes well-placed trust and helps us navigate this complex new world.


About the Author

Marisa Tschopp

Marisa Tschopp (Dr. rer. nat., University of Tübingen) is actively engaged in research on Artificial Intelligence from a human perspective, focusing on psychological and ethical aspects. She has shared her expertise on TEDx stages, among others, and represents Switzerland on gender issues within the Women in AI Initiative. (ORCID 0000-0001-5221-5327)
