Image, Compliance, and Resistance

In the AI Context

by Marisa Tschopp

Keypoints

How an AI can overcome resistance

  • AI causes motivational conflicts in many people
  • Attitudes toward AI influence consumer and user behavior
  • Persuasion and compliance can influence attitudes and behavior
  • Reciprocity, obligation, scarcity, and modeling are crucial influencing techniques
  • Resistance to AI is mainly due to irritation and reactance

What is your opinion on Artificial Intelligence? A person's attitude towards AI encompasses both cognitive and affective processes and can be equated with the term image. Perhaps you are positive about self-driving cars but rate lethal autonomous weapons negatively? How do you rate the image of Siri as opposed to Alexa?

Attitude represents a combination of motivation and cognitive assessment and is of great importance, as it affects behavior and the construction of social reality (Kroeber-Riel et al., 2009; Zimbardo & Gerrig, 2004). There are different theories of how people try, consciously or unconsciously, to influence attitudes and consumer behavior. In the AI context, today's society faces the challenge of acting like a mature consumer who can think critically without becoming cynical and thereby missing out on opportunities.

Image of Artificial Intelligence

Siri, Alexa, Google Home, and many more products and programs have entered our daily lives. Have you ever given much thought to how you feel about them?

Take the chance for a small self-test. How much do you agree with the following statements?

Rating an AI

For example, suppose you rated statement 1, Siri is intelligent, with a 3: you rather disagree. The image you have of Siri's intelligence can be a combination of three different sources of information.

The question consumer behavior researchers are most interested in is whether the attitude to a product influences behavior:

Under what circumstances does a particular attitude predict a certain behavior?

According to Zimbardo & Gerrig (2004), availability is an important link between attitude and behavior. Did you have to think long about whether Siri is intelligent? Then your attitude was not readily available. If you quickly ticked a value between 7 and 9, your attitude was readily available, and the likelihood that it actually guides your behavior is greater.
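To make the idea of availability tangible, here is a minimal Python sketch that records a rating together with its response latency. The statement, the 1-9 scale, and the 2-second cutoff are assumptions for illustration only, not empirical thresholds from the cited research.

```python
import time

def timed_rating(statement: str) -> tuple[int, float]:
    """Present a statement, read a 1-9 agreement rating, and measure response latency."""
    start = time.monotonic()
    rating = int(input(f"{statement} (1 = fully disagree ... 9 = fully agree): "))
    return rating, time.monotonic() - start

rating, latency = timed_rating("Siri is intelligent")

# A fast, clear-cut answer suggests a readily available attitude. The 2-second
# cutoff and the 7-9 band are arbitrary illustration values, not empirical thresholds.
readily_available = latency < 2.0 and rating >= 7
print(f"rating={rating}, latency={latency:.1f}s, readily available: {readily_available}")
```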

Among the general public there is a lack of immediate experience that would make an attitude or image of voice-operated systems more accessible. Moreover, your answer would probably have changed if the question had been asked more specifically: it makes a difference whether you are asked about AI in general, about Siri or Alexa, about voice control using Siri, or about using Siri to make a phone call (very specific).

The examples that come to mind when you are asked about your attitude towards AI also play an important role: Are you thinking about Amazon buyer recommendations or programs for diagnosing cancer? About self-driving cars or killer robots? These associations and attitudes are based on different subsets of information and are currently very unstable due to the great hype, which makes predicting the image and the subsequent action (use) very difficult.

Changing the Image of AI through Persuasion

The attitude people have toward certain things is not always a flag in the wind, but neither is it as stable as many would like. We live in an age of massive information overload, social media consumption, and fake news, paired with a rapid, unpredictable evolution of technology. It is not always easy, and perhaps not even meaningful, to form a stable, rational picture of a difficult topic like AI or climate change. Politicians, companies, activists, and parents make conscious efforts to change the attitudes of others – it is the art of persuasion.

The most important theory in the context of persuasion is the Elaboration Likelihood Model, a framework that describes how likely individuals are to engage deeply with a persuasive message (Kroeber-Riel et al., 2009; Zimbardo & Gerrig, 2004).

The theory distinguishes two routes:

  • The central route: the recipient processes the message carefully, and attitude change depends on the quality of the arguments
  • The peripheral route: the recipient processes the message superficially, relying on cues such as the attractiveness or authority of the source

The model assumes that high-involvement information processing via the central route leads to a deeper attitude change, taking into account the personal relevance (is AI important to me?) and the ability (what do I know about AI?) of the person who is to be persuaded.

Consider the following TEDxSanFrancisco talk by Rachel Thomas with the appeal: Artificial intelligence needs all of us.

Let's assume Rachel Thomas wants to convince a sociologist to participate in the AI discussion. The AI-critical sociologist wants to know more and decides to watch the talk; he chooses the central route and engages intensively with her arguments. If the arguments are good, a change in attitude is very likely and relatively resistant to future persuasion attempts. If the arguments are weak, there is a boomerang effect and his critical position is even strengthened!

One of the biggest discrepancies in the context of AI lies between the cognitive and the affective components of attitudes toward it. Thinking about AI can be cognitive (How much does the program cost? How powerful is it?) as well as affective (Does the program scare you? Does it affect your quality of life?).

Empirical studies have shown that cognitive-based attitudes must be met with cognitive arguments on the same level; only then can they convince. Vice versa, affective-based attitudes must be met with affective arguments. For example, if a company wants to create a positive attitude towards digital voice assistants, which in turn increases the likelihood of buying or using the product, it should ideally know whether the target group's attitudes are more cognitive or affective.

It is very unclear where the attitude of the population towards AI lies on the affective-cognitive continuum. One hypothesis suggests that the attitudes of people with a science and technology background are based on cognitive experiences, while people with a social-artistic background are more receptive to affective arguments. One possibility would be to measure the image empirically in the desired target group before implementing communication measures.

Such an image test can be carried out, for example, in the form of a semantic differential (Kroeber-Riel et al., 2009). This method allows an affective assessment, for example of positive or negative attitudes towards voice control in online banking. It also makes it possible to compare the mean values of products with one another, e.g. Alexa and Cortana.

Semantic differentials of an AI
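As a concrete illustration of such a mean-value comparison, the following Python sketch computes semantic differential profiles for two products. The bipolar adjective pairs and all ratings are invented for demonstration purposes.

```python
import statistics

# Hypothetical semantic differential data: bipolar adjective pairs rated from
# 1 (left pole) to 7 (right pole). All items and ratings are invented.
ratings = {
    "Alexa": {
        "unreliable-reliable":     [5, 6, 4, 5],
        "cold-warm":               [3, 4, 2, 3],
        "threatening-trustworthy": [3, 4, 4, 5],
    },
    "Cortana": {
        "unreliable-reliable":     [4, 4, 5, 4],
        "cold-warm":               [3, 3, 4, 3],
        "threatening-trustworthy": [4, 5, 3, 4],
    },
}

def profile(product: str) -> dict[str, float]:
    """Mean rating per item, i.e. the product's semantic differential profile."""
    return {item: statistics.mean(values) for item, values in ratings[product].items()}

# Compare the two products' mean profiles item by item.
for product in ratings:
    print(product)
    for item, mean in profile(product).items():
        print(f"  {item:24s} {mean:.2f}")
```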

Influencing Behavior through Compliance

While persuasion focuses on long-term image change, compliance is about bringing about changes in concrete behavior. Whether it is the doctor whose medical advice you follow, the charity you should donate to, or the president you should vote for. How do you get someone to take climate change seriously and commit to the environment? How do you get someone to take AI seriously and, for example, sign a ban on autonomous weapons? How do you get someone to change the password 123456 or stop posting photos of their kids on social media?

Empirically examined techniques, studied especially in sales but adaptable to the AI context, are (see the framework in Zimbardo & Gerrig, 2004):

  • Reciprocity: people feel obliged to return favors and concessions
  • Obligation and commitment: people strive to act consistently with commitments they have already made
  • Scarcity: options appear more desirable when they seem rare or about to disappear
  • Modeling: people imitate the behavior of others, especially role models

In the context of AI, there are subtler and more obvious strategies to bring about compliance: for example, the subtle invasion of digital assistants such as Siri, through which people slowly get used to an AI in their daily lives. The foot is thus in the door, resistance is reduced, and the ubiquity of AI becomes acceptable.

Especially in marketing, the FOMO (fear of missing out) technique is, unfortunately, overused. Everything is “washed” with AI: AI becomes a brand or a USP, and whoever does not have AI will supposedly not survive in the market.

Not infrequently, such attempts at persuasion are met with resistance and skepticism.

Resistance towards AI

Not all people are easily influenced. However, the explanations for this still lack empirical evidence. The few available empirical studies suggest that some people are easier to influence than others.

People with type A behavior or low self-monitoring, on the other hand, are more difficult to influence. Perceived control plays a critical role in the development of resistance to persuasion attempts (Kroeber-Riel et al., 2009).

In the context of AI, forms of resistance are understandably very pronounced and equally alarming. Resistance is generated above all by:

  • Irritation, e.g. through overexposure to exaggerated AI marketing and AI-washing
  • Reactance, the motivation to restore a freedom of choice that is perceived as threatened

If AI really affects our entire society and we all have something to contribute, then we are in an uncomfortable do-or-die situation. The threat to our behavioral freedom and the compulsion to trust the big tech giants to do the right thing can trigger reactance. We are in a similar situation regarding the climate debate: the compulsive search for arguments against global warming is a form of reactance. The same applies to the rebellion against mandatory vaccination in Germany. Even if the decision to resist or reject is irrational, restoring cognitive freedom has top priority. Who wants to be threatened or patronized? In general, the fear of losing control is a very powerful driver. It tempts people to behave irrationally or to go into complete denial.

It is extremely difficult to say which type of communication is effective when it comes to AI. This has several reasons.

The mother of all problems is probably the fact that nobody knows exactly where the path will lead us. What we can do is let our discussions pass through four gates: an examination of the topic should at least consider four major pillars, let us call it the E-triple-I approach.

Conclusion

To date, both the horror and the panacea scenarios about AI still lie in the future. Although much is possible, good or bad, the big stories have yet to materialize. What remains is a strange aftertaste and the warnings of some prominent people, like Stephen Hawking or Elon Musk. More and more people are dealing with the topic in research, practice, schools, and now also at home, when kids are talking to Alexa. The amount of information will continue to increase, as will its quality. What is needed is for the AI-terminator hype to flatten out so that real opportunities are evaluated and dangers are regulated. Meaningless AI-washing, exaggerated predictions, and threats are being demystified piece by piece. People are more than ever required to become well-informed, mature consumers.

References

Kroeber-Riel, W., Weinberg, P., & Gröppel-Klein, A. (2009). Konsumentenverhalten (9th ed.). Munich: Vahlen.

Zimbardo, P. G., & Gerrig, R. J. (2004). Psychologie (16th ed.). Munich: Pearson Studium.

About the Author

Marisa Tschopp

Marisa Tschopp studied Organizational Psychology at Ludwig-Maximilians-University in Munich. She conducts research on Artificial Intelligence from a humanities perspective, focusing on psychological and ethical aspects. She has given various talks, e.g. at TEDx, and is a board member of the global Women in AI (WAI) initiative. (ORCID 0000-0001-5221-5327)
