Human and AI
Marisa Tschopp
Vulnerabilities of Humans and Machines are Important
There is currently much discussion about trustworthy AI, with the aim of reducing the risk of humans getting hurt. To achieve trustworthy AI, ideals are defined for what an AI system should look like: among other things, AI must be technically robust and secure, and the models must be transparent, explainable, or auditable. These guidelines are useful because AI systems have weaknesses. They never work perfectly, which also makes an AI system vulnerable. In IT, however, the term vulnerability is more precise and already has an established meaning, especially in IT security.
These weaknesses (or vulnerabilities) are one of many reasons why situations involving AI systems are characterized by uncertainty. They are also the reason why trust relationships between humans and machines arise in the first place. The crux, however, is that in the end only humans can really get hurt. A human trusts an AI system to do something. Predictability is lacking, so the situation is characterized by uncertainty and possibly even considerable risk. The human makes himself vulnerable by taking the risk of relying on the machine and acting on its output. If the AI system does not perform, the goal of the human-machine interaction is not achieved, and the human is hurt.
Technical vulnerabilities may damage a machine, but it is a human who is hurt in the end. The machine does not suffer in the proper sense. It can be concluded that the trust relationship in human-machine interaction is unidirectional: only the human can trust, take risks, and be hurt. AI systems are susceptible to technical vulnerabilities, but they do not suffer. Only humans suffer, and perhaps in more ways than one: the user who has been hurt because he was harmed, and the developer who has been hurt because he feels guilty and holds himself responsible. Maybe the trust relationship in human-machine interaction is not so unidirectional after all?
While trust only emerges in human actors, vulnerability emerges in all actors, and it does so in different forms that influence each other in ways that are still unknown. As an example, consider a hacker who intentionally tries to exploit software vulnerabilities to harm an individual, or a company that maliciously develops manipulative design strategies to promote trust. We quickly notice how human-machine interaction becomes more complex. It might be simpler to look at it from the perspective of vulnerabilities. The AI system then acts as a kind of necessary middleman between the human actors. After all, in the end, a vulnerability in the machine only takes effect in conjunction with the vulnerability in the human.
This novel idea of vulnerability-focused management is based on the following theses:
The goal of trustworthy AI is to minimize negative consequences. However, it is worth venturing a paradigm shift: away from a focus on trust and trustworthiness, toward a focus on vulnerability. This could result in a view of AI systems that holistically considers the vulnerability of both humans and machines. After all, neither humans nor machines ever work perfectly, even if our brains or advertisements sometimes lead us to believe they do. We need to remain constantly vigilant and, to stay in tech jargon, we have to keep patching. On the one hand, this concerns IT systems; on the other hand, it also concerns our level of human trust.
The vision is to establish paradigms for holistic vulnerability management. Developing guidelines on vulnerability is a visionary endeavor: there is much room for interpretation, and fixed guidelines might be too rigid. Rigid rules would not reflect the fact that the trust relationship between humans and machines is a dynamic, continuous process. This relationship must be continuously monitored, reconsidered, and tinkered with. Focusing on the vulnerabilities of humans and machines has the potential to provide a better understanding and better recommendations for action, so that we can use AI systems effectively, sustainably, and above all safely in the future.