Human and AI
Marisa Tschopp
TEDx discussing the Future of AI
TEDx events are usually organized around a specific theme and include a lineup of speakers who are experts in their field or who have an inspiring story to share. Talks are typically 18 minutes long, designed to encourage dialogue and inspire action, and often feature opportunities for audience members to engage with the speakers and each other. There is a playlist of the most popular TED Talks of all time, featuring Bill Gates, Elon Musk and Brené Brown. Sir Ken Robinson’s TED Talk from 2006 has over 74 million views.
TEDxBoston is, according to John Werner, one of the largest and most successful TEDx events worldwide. The latest event took place on March 6th at the Quin House in central Boston and was a special edition dedicated to Artificial Intelligence. Provocatively titled Countdown to Artificial Intelligence, it featured 61 speakers, each given five minutes to talk about their specific topic: from technological developments and business opportunities to societal impact. The new format is both a challenge and an opportunity: it makes room for a wider diversity of topics and speakers and reaches a broader audience, at a time when everyone is fighting for attention in the swiping TikTok generation.
Abstract: 2022 was a breakthrough year for progress in Artificial Intelligence (AI). This year’s advances in generative models include large language models like GPT-3.5 and ChatGPT, cross-modal models like DALL-E 2 and Stable Diffusion, and multi-modal models like Gato. The Metaculus community estimate for the arrival date of “Weak AGI” collapsed from 2042 to 2027. In 2023, on the cusp of an inflection point in the capabilities of AI, we stand on the shoulders of these accelerating technologies, ready to influence every part of human society. Together, we gather to discuss what this future could look like – overcoming outstanding challenges that exist in deep learning and AI, and unlocking new heights in our own human intelligence. We have curated ideas/visions of what this future could look like.
The diversity of topics was immense: AI for smart homes, game development, or programming. Researchers and practitioners shared their ideas or startups. Healthcare was a very prominent topic, with hopes of fighting cancer, improving CT scans, or accelerating clinical trials. The theme of trustworthiness and explainable AI was also recurring. It was also fascinating to see how people of indigenous backgrounds help their communities stay on top of the game and teach children how to code. The full list of topics (which are working titles) can be found online, and the respective talks will be published within the next 2-3 months (YouTube or TED website).
Marisa Tschopp was invited to give a talk in the afternoon about our research on human-AI relationships. Here is a summary of it:
Every day, 19,000 people express their love to their voice assistants, and 6,000 people in India even propose to them. The media often brings attention to these not-so-serious studies, such as one that found that 14% of male smart home speaker users in the UK desire a sexual relationship with their systems. This fuels the public discourse on how people really feel about these talking machines that are, at their core, simply hardware and software.
But despite the fact that these machines are not human, a wealth of empirical research has shown that people tend to humanize them and treat them as social actors. However, just because someone says they love their AI system, does that mean they are truly in love with it as they would be with another human? So, if people don’t actually love their voice assistant but also don’t see “her” simply as a tool, how do they perceive their relationship with conversational AI? And what can we learn from better understanding human-AI relationships?
To answer these questions, we took a multidimensional relationship theory from psychology and adapted it to the context of human-AI relationships. We studied around 1,000 conversational AI users and found that they relate to their conversational AI systems in three ways: the traditional master-servant relationship, a rational relationship with no hierarchies, and a friend-like, emotional relationship. Only a few characterize their relationship as friend-like and emotional. Most users, unsurprisingly, perceive the relationship to their voice assistant as the traditional master-servant relationship. What we found surprising is that the non-hierarchical relationship was almost as popular as the master-servant relationship.
This finding could be interesting for developers of voice user interfaces, because how much agency users attribute to a system may impact how they use it or what they use the voice assistant for. Consider a simple task such as setting an alarm: it requires little engagement, just short, simple interactions. A complex task, such as online shopping with a voice assistant, requires more engagement: multi-turn dialogues, and potentially more data sharing and financial risk. Put differently, more “shared” responsibility. Thus, if we want to use digital assistants for more complex tasks as well, it would make sense to see or design them more as partners in decision-making rather than as mere order-takers. Does that mean the role of digital assistants as our servants will somehow go out of fashion?
We explored this question by looking at voice shopping decisions. Specifically, we asked whether the way people relate to their voice assistant influences the kind of products they buy via their voice assistant. The results showed that seeing your voice assistant as a friend or servant promoted voice shopping for cheap and simple products. Surprisingly, we also found that the friend-like assumption mattered most when it comes to shopping for expensive and complex products. In other words: friendship sells!
However, there are moral concerns that speak strongly against recommending that voice user interface developers design their interfaces as AI friends to increase sales. Humanizing machines can have negative consequences, such as oversharing personal data or nudging people toward decisions that may not be in their best interest. On top of that, some have uncovered instances where users reported symptoms of depression upon losing their connection with a chatbot. What do we do when our attachment to these technologies turns into a pathological addiction, or into an inability to deal with real people in real life?
Marisa Tschopp concluded the talk by strongly emphasizing the need to better understand the relationship between humans and AI, because how we relate to these systems can influence user behavior in ways we do not yet fully comprehend. These psychological mechanisms can be exploited or manipulated far too easily, in ways that may be difficult to control. The talk will be available online soon.
The goal of the TEDxBoston event was to probe current challenges and unlock new insights into human intelligence with a well-chosen series of 5-minute talks, presented by esteemed speakers on topics such as the impact of AGI on ethics, creativity, intelligence augmentation, the future of work and education, and innovation in health, among others. In our opinion, the topics were well curated, as were the speakers, ranging from newcomers to experienced experts in the field. We were particularly impressed by the heterogeneity among the speakers in terms of origin and culture. This enrichment is immensely valuable for meaningful exchange, for broadening one’s horizons and, in short, for building better AI for a better future.