Vulnerability of Humans and Machines

A Paradigm Shift

by Marisa Tschopp
on June 02, 2022
time to read: 6 minutes

Keypoints

Vulnerabilities of Humans and Machines are Important

  • Trust influences how we use machines and is widely discussed in research and practice
  • Vulnerability is at the heart of many trust theories
  • Nevertheless, vulnerability receives too little attention in human-machine interaction research
  • A shift in perspective on human-machine vulnerabilities has the potential to yield more effective and safer recommendations for action in the long run

How much we trust artificial intelligence (AI) systems influences whether we rely on them and how we use them. The factors that influence trust attract a great deal of research interest, while vulnerability in human-machine interaction receives comparatively little attention. Yet vulnerability is at the heart of theories of trust between humans. It is not at the heart of theories of trust in human-machine interaction. Not yet. What is vulnerability? Among humans, being vulnerable means accepting the risk of being disappointed by the other party in an interaction: people make themselves vulnerable to others. Only trust enables people to dare the so-called leap of faith into unknown waters with unknown people, despite poor predictability and uncertainty.

There is currently much discussion about trustworthy AI, which aims to reduce the risk of humans getting hurt. To achieve trustworthy AI, ideals are set for what an AI system should look like: among other things, AI must be technically robust and secure, and its models must be transparent, explainable, or auditable. These guidelines are useful because AI systems never work perfectly; they have weaknesses that make them vulnerable. In IT, by contrast, the term vulnerable is even more fitting and already has an established meaning, especially in IT security.

Vulnerabilities are Everywhere

These weaknesses (or vulnerabilities) are one of many reasons why interactions with AI systems are characterized by uncertainty. They are also a reason why trust relationships between humans and machines arise in the first place. The crux, however, is that in the end only humans can really get hurt. A human trusts an AI system to do something. Predictability is lacking, so the situation is characterized by uncertainty and possibly even great risk. The human makes himself vulnerable when he takes the risk of relying on the machine and acting on its output. If the AI system does not perform, the goal of the human-machine interaction is not achieved, and the human is hurt.

Technical vulnerabilities may damage a machine, but it is a human who is hurt in the end. The machine does not suffer in the proper sense. One could conclude that the trust relationship in human-machine interaction is unidirectional: only the human can trust, take risks, and be hurt. AI systems are susceptible to technical vulnerabilities, but they do not suffer. Only humans suffer, and perhaps in more ways than one: the user, who is hurt because he has been harmed, and the developer, who is hurt because he feels guilty and holds himself responsible. Maybe the trust relationship in human-machine interaction is not so unidirectional after all?

Time for a Paradigm Shift?

While trust emerges only in human actors, vulnerability emerges in all actors, and it does so in different forms that influence each other in ways that are still unknown. As an example, consider a hacker who intentionally exploits software vulnerabilities to harm an individual, or a company that maliciously develops manipulative design strategies to promote trust. Human-machine interaction quickly becomes more complex. It might be simpler to look at it from the perspective of vulnerabilities. The AI system then acts as a kind of necessary middleman between the human actors. After all, in the end, a vulnerability in the machine only works in conjunction with a vulnerability in the human.

This novel idea of vulnerability-focused management is based on the following theses:

  1. Humans are vulnerable no matter which side of the AI system they are on
  2. Humans have weaknesses that make them vulnerable, e.g. too much trust (overtrust) or the cognitive bias that machines always work perfectly (automation bias)
  3. The AI system is vulnerable because it is either poorly built (multiple technical vulnerabilities) and/or poorly used (e.g., technical vulnerabilities are exploited)
  4. The victims are always the humans: Negative consequences, blame, responsibility, etc.

The goal of trustworthy AI is to minimize negative consequences. However, it is worth venturing a paradigm shift: away from a focus on trust and trustworthiness, toward a focus on vulnerability. This could result in a view of AI systems that holistically considers the vulnerability of both humans and machines. After all, neither humans nor machines ever work perfectly, even if our brains or advertisements sometimes lead us to believe they do. We need to remain constantly vigilant and, to stay in tech jargon, keep patching. This applies to IT systems on the one hand, and to our level of human trust on the other.

Conclusion

The vision is to establish paradigms for holistic vulnerability management. Developing guidelines on vulnerability is a visionary endeavor: there is much room for interpretation, and guidelines might prove too rigid. Rigid rules would not reflect the fact that the trust relationship between humans and machines is a dynamic, continuous process; this relationship must be continuously monitored, reconsidered, and adjusted. Focusing on the vulnerabilities of humans and machines has the potential to provide a better understanding and better recommendations for action, so that we can use AI systems effectively, sustainably, and, most importantly, safely in the future.

About the Author

Marisa Tschopp

Marisa Tschopp studied Organizational Psychology at the Ludwig-Maximilians-University in Munich. She conducts research on Artificial Intelligence from a humanities perspective, focusing on psychological and ethical aspects. She has given various talks, including at TEDx, and is a board member of the global Women in AI (WAI) initiative. (ORCID 0000-0001-5221-5327)
