Human and AI Art
Although the storyline may not seem surprising at first glance, the film highlights numerous current problems in our society that are worth deeper consideration. It takes on topics such as artificial intelligence, human emotions, and the ethical questions raised by technology. These issues matter greatly to us and directly affect our research and work in this area. We therefore deliberately took the time to look at these aspects of M3GAN in more detail and to reflect on how we will deal with these challenges in the future.
The talented programmer's idea seems to work at first: to create a lovable, intelligent robot friend for her niece, one that resembles the girl, is always there for her, and protects her. But then the robot, M3gan, unexpectedly develops consciousness and becomes aggressively protective, turning on anything it perceives as a threat to the girl. In an eerie twist, M3gan spirals out of control, triggering a series of events in which several people die. A whimsical dance performance goes viral, and a final conflict between human and machine unfolds. In the end, though, a more or less harmonious balance is restored between all involved.
The film is about a young girl who lives with her aunt after losing her parents in a car accident. The aunt, however, an inventive toy designer and programmer, has little experience with parenting and puts more emphasis on her work, which compounds the girl's sadness. To solve the problem, the aunt builds an intelligent robot friend for the girl. The doll has a human name, a childlike appearance similar to the girl's, and is ready to assist her at any time. Problem solved. What could possibly go wrong?
No matter where we are and what we do, solving problems is part of our human existence. But how do we do it? Every generation has its own way of dealing with the problems that arise in social life. Our generation seems particularly inclined to choose technical solutions. The tendency to primarily or even exclusively resort to technology in order to solve problems is known as techno-solutionism (or tech solutionism), a term that often carries a rather negative connotation. Closely related is AI solutionism, which refers specifically to the exaggerated hopes of those who see AI as a universal cure-all for almost all of our problems. In the course of our work on gender equality in AI, we have experienced this firsthand. Our book chapter AI For Gender Equality addresses how AI can be used to improve gender equality. Unfortunately, reality often delivers a sobering result: products developed with great enthusiasm, such as a bracelet intended to help against domestic violence, can quickly do more harm than good, for instance by being easily misused as a stalking tool. The example highlights how important it is to carefully consider the implications and potential negative consequences of technologies, especially AI, before they are deployed.
The story in M3gan also shows what it means when a new technology is deployed without proper ethical assessment, under time pressure, and with a predominant focus on market success. In a sense, the doll is in a beta phase, the girl is the first test user, and the aunt is under pressure because competition in the market is fierce. This practice of introducing new technologies without sufficiently reviewing their impact on society follows the logic of "move fast and break things": get to market as quickly as possible, regardless of possible negative consequences for users. The negative consequences for the company (e.g. fines for non-compliance with certain regulations, such as those on security and data protection) are simply budgeted for. Needless to say, this attitude is highly problematic, especially for disadvantaged or vulnerable groups. What the film highlights here is a real problem, masked behind a pretty face with big, gleaming eyes: technological products are thrown onto the market without due consideration, and responsibility is preferably shifted onto the user.
Along with our relentless desire to solve problems, human existence is characterized by the central role that relationships with other humans play. However, we build relationships not only with other humans and animals, but also with objects that are not alive in any biological sense. Humans tend to develop emotions for non-human agents. From children who love their teddy bears to teenagers of the 90s who had their first digital relationship with a Tamagotchi, the tendency to endow things with human characteristics is evident. The same phenomenon manifests itself in the worship of statues, gods, and goddesses. Even with moving triangles and dots, we tend to suspect some will or intention behind the movements. Try it out: can you really resist seeing a story behind the movements?
In the case of the Heider-Simmel illusion, it does not really matter whether we resist interpreting the symbolic movements as a story with a message. After all, nobody comes to harm if you think the triangle is chasing the dot. The stakes are much higher, though, with so-called AI companions such as M3gan, which is meant to replace a friend for the girl. AI companions are digital, synthetic entities based (mostly) on AI technology and designed to provide human-like companionship and interaction. They can take the form of chatbots, virtual assistants, interactive toys, or humanoid robots, and are designed to offer emotional or practical companionship to the user. In contrast to the Heider-Simmel illusion, it is safe to assume that people seeking companionship are in an emotionally vulnerable state. The consequences are therefore much more severe for them when they misinterpret the AI companion's behavior or become overly attached to it.
When it comes to the robot girl M3gan, the first encounter can provoke very different reactions: joy? Fascination? Fear? Curiosity? Often a certain dissonance arises in people's minds. Confusion sets in when the robot seems so realistic that we automatically ask: what is human, what is a machine, and where do we still differ? Our brain is confused when it can no longer properly classify a machine as a machine. This is due to a remarkable cognitive tendency, anthropomorphism: our brain perceives non-human things as human in order to better understand the world or to satisfy other needs. Loneliness and isolation play a particularly large role in this scenario.
Discussions in the media often focus on the design of the robot M3gan (played by a real child), but there is one moment in the film that points at something else: the moment when the girl lashes out in panic at her aunt, who is trying to take the robot away from her. Admittedly, a stubborn toddler would react the same way if any toy were taken away, and with small children this behavior has other causes as well. But what the film again points out is a real problem, masked behind a pretty face with big, shining eyes: humans develop human-AI relationships with agents, governed by mechanisms similar to those found in human-to-human interaction. While deliberate social design can, at its best, improve user experience and fun, it is becoming increasingly apparent that it can also have negative psychological consequences for the user.
There are people who build such a strong emotional bond with their AI chatbots, such as Microsoft's Xiaoice, that they suffer when the connection breaks down, to the point of depression. Or they become addicted and forget what normal life is like and how we deal with real people and relationships, which also involve conflict and are not available 24/7. It is thus plausible that such a close association with a chatbot has negative effects on social behavior in real life. The data are not yet sufficient for a conclusive statement, but initial studies, in addition to newspaper reports, point to the problem. In the worst case, these relationship mechanisms can be exploited to entice users to invest more money, share more data, and stay longer (as Kate Darling's work describes). This can be achieved through purposeful design that exploits the user's need for interaction and engagement. In this context, we refer to this practice as the relationship exploit.
As a film, M3gan has moderate entertainment value, and even those who want to use it as the basis for an ethical debate will have to dig deep: amid the special effects and logical inconsistencies, the deeper issues somehow get lost. Far too much discussion revolves around whether it would be feasible to build such a robot and whether or when AI will develop consciousness. This is a problem we know from the AGI debates, especially among AGI believers: people who are convinced that it is possible to create artificial intelligence just as advanced and powerful as human intelligence. They believe in the possibility of an Artificial General Intelligence (AGI) capable of solving a wide variety of tasks and making decisions much like a human brain. These literally fantastic questions and discussions, as is so often the case, distract from the real problems associated with the introduction of new technologies and their often unreflective commercialization. This distraction has consequences: important issues and potential risks, such as privacy, "subtle" ethical questions, and the impact on society, may go unnoticed. It is therefore of utmost importance that we continuously address these issues and engage in critical discourse, debunking hype and pipe dreams, to ensure that we use technology for our good instead of letting it, and its masters of creation, take advantage of us.