Human and AI
Marisa Tschopp
Agency in Human-Machine Interaction
Trust and agency in the context of technology are discussed in various fields, for instance human-robot interaction, human factors, and user experience (UX), and through the lens of various disciplines, with philosophy, psychology, sociology, and ethics at the forefront of these discussions. However, there seems to be little agreement on the definitions and methods used in research and practice, or on interpretations of trust and reliance, that is, the way people rely on machines. Trust has even been questioned as a whole by ‘trust skeptics’, who postulate that AI systems are nothing to be trusted – or, in the famous words of Joanna Bryson: AI is nothing to be trusted! The IEEE Trust and Agency Committee aims to provide a basis to openly discuss and reassess the ideas of trust, reliance, and agency in the AI context. Understanding these concepts and how they relate to other factors (a great variety, from dispositional to situational variables, is waiting to be explored) will lead to better reliance behaviors of users, understood as the way people use these systems. It shall also inspire designers of such systems in a positive way, for instance regarding if and how they want to ‘design for agency’ and educate their users in how to use their systems – more precisely, how to market products more accurately and, above all, more ethically. Ultimately, the committee wants to be among the first to draft standards to enable end-user agency.
Lee & See have provided significant, far-reaching knowledge on the topic of trust in automation, focusing on (at least) three components: (1) Trust being an attitude, (2) trust being goal-directed, and (3) trusting as being vulnearble to getting hurt. This means the situation is characterized by uncertainty leaving the trustor vulnerable to the trustee. Can we translate human trust and trustworthiness directly into the human machine interaction? Some people say yes, but with modifications: Interpersonal trust, meaning trust in or between humans can in principle be translated into trust in machines. Human-machine trust, however, is a rather unidirectional relationship.
Human automation trust can be viewed as a specific type of interpersonal trust in which the trustee is one step removed from the trustor. (Hoff & Bashir, 2015)
Predominantly, trust is explored by looking at the nature of humans or, in human-machine interaction, at the characteristics of machines. This is when we talk about trustworthiness: trustworthiness as a characteristic of humans or as a property of machines. In interpersonal trust, three properties have been identified as components of trustworthiness: ability, integrity, and benevolence. Similarly, machine trustworthiness depends on performance-based attributes (i.e., how good the product is), process characteristics (i.e., how understandable the system is to an operator), and benevolence, a purpose-based attribute referring to the intent of the designers, that is, why the system was built. Trust and trustworthiness have an intricate relationship that does not always function in the most logical way. Ideally, people would place their trust in people, or rely on machines, that are deemed trustworthy, and would reject people or machines that are not worthy of their trust, thereby avoiding the consequences of being vulnerable to them. For instance, a company or system (it is yet to be explored who the actual receiver of trust is) can be deemed trustworthy, and the operator trusts and uses the system. In another case, the operator has no clue about the company or system, does not trust it, yet uses it anyway. Various scenarios are possible that lead to different trust-use relationships, which may or may not be associated with or dependent on characteristics of trustworthiness.
Although it is logical to focus on trustworthiness, it is not necessarily the magic bullet for trust-building, and it does not guarantee reliance behaviors. The concrete conditions of trustworthiness and its correlates remain unclear. The prevailing mystery of the role of transparency is a good example of how messy and contradictory these relationships can be. A substantial amount of empirical evidence has shown that in human-automation interactions, transparency is positively correlated with trust (mostly referred to as cognitive trust): the more we can understand and control a system, the more we trust it and the more likely we are to use it. However, researchers have also found evidence for the contrary, namely, that transparency can actually have a negative effect on trust. Following this rationale, one could ask: how can we create an effective trust and reliance relationship with machines while keeping an adequate level of authority over the machine? We theorize that enabling user agency, as a means of control, could be one key to answering this question.
Most people will never understand how AI systems (AIS) work. However, some do, and one can ask them for help and advice. Furthermore, some laws protect users from maleficent actions, for instance violations of data privacy. There is a third option we want to highlight. It is impractical, even impossible, to educate everyone on the details of AIS, just as you cannot teach everyone how a car functions. However, you can teach the basics of mechanics, and everyone needs to understand the rules and guidelines for how to use a car safely. The same may apply to AIS. Human agency means that people have the power to shape their life circumstances, create their futures, and modify alternative courses of action if the current status quo does not respect their own values or goals (Bandura, 2006).
Humans ‘should’ have the power to shape how they use technology. The ‘should’ is in quotation marks because many systems are built in ways that harm human decision-making, for instance through addictive or anthropomorphic design features or overhyped marketing, just to name a few. Enabling end-user agency as a standard for designing and developing products will give people the opportunity to develop self-regulatory skills and beliefs in their own efficacy in order to generate alternatives, enhance their freedom of action, and thus be more successful in choosing reliance on technologies and strategies that they truly desire. What makes human agency such a delicate topic is its dynamic interaction with machine agency. We have yet to explore and understand how to resolve the tension between the power of machines and the power of humans: Which decisions lie in the machine’s hands and which are in the hands of the user? Are those stable or dynamic processes, and do they depend on the reliability of the machine or on the risk involved in the decision? Does more machine agency automatically mean less end-user agency, and vice versa? What are the exact connections, conditions, correlations, or dynamics here?
Many open questions remain about trust and agency in the human-AI interaction context: How do we map the temporal and dynamic context, trust trajectories, or agency thresholds? What is the influence of laws, culture, education, and much more? Is trust an antecedent of human agency, or is human agency an antecedent of trust? Is trust relevant after all, or should we stop talking about trust altogether? Synthesizing these topics and trying to model them is quite a challenge, especially given the novelty and dynamics of the situation and given that the nature of the artifact of interest, namely artificial intelligence systems, is in dispute. The newly formed IEEE Trust and Agency Committee aims to support the development of successful transdisciplinarity around AIS, combined with the desire not to leave the practical field of technology to a supposedly sublime perspective alone.
Thank you to Shyam Sundar and John Havens for their support in writing this article. A short version of the article, with additional insights into the committee’s work, can be found on the IEEE Beyond Standards homepage.