In Michael Ende’s fantasy novel The Neverending Story, a troubled boy starts reading a dark, scary, and yet exciting book about a beautiful place threatened by the so-called Nothing. Not knowing whether he is merely an observer or an actual part of this adventure, the boy decides to go on, dives into the adventurous undertaking and (spoiler alert!) in the end saves the world.
Does this resemble our current situation with Artificial Intelligence? Nobody really knows their part in the story. The major difference is that this time we cannot choose whether to dive into the adventure or not. We cannot simply close the book and walk away. Almost everybody plays a part in this story: the cynics dismissing it all as absolute, overrated nonsense; the influencers proclaiming an apocalypse by AI robots; and the excited optimists who feel strangely and inexplicably attracted to the topic. However, it may not be as absolute as that. You are not a victim of technology, forced to unconsciously obey the dictate of growth and development.
You still have a clear choice on the individual level. Of course you are not forced to use any AI if you do not want to: you can go AI-free, just as a mother or father in the industrialized world can choose to parent screen-free within their bubble at home. In rural, very remote areas the impact on the individual level is naturally not as massive. On a societal level, though, denial of the impact of AI will not work in the long run.
Looking back in history over a hundred years, there was arguably a similar mega-trend around human intelligence, when researchers first began to measure it and boil it down to numbers in order to compare human abilities with each other. It was likewise exciting, and yet scary and very dark when you consider how Adolf Hitler used so-called intelligence tests as legitimization to eradicate certain people.
It took about one hundred years for at least some researchers to agree on a definition of (human) intelligence, and you might think intelligence has lost its appeal; but with the remarkable advances in big data, the Internet of Things, deep learning/machine learning and the like, the many-faceted discussion about intelligence – human and machine – has been reignited, and attention in academia and the press has reached an all-time high.
In 2016, Microsoft’s chatbot Tay was shut down after less than 24 hours of operation because Twitter users had turned it into an insulting Nazi-lover.
And in 2017, German police had to break into a house because Amazon’s Alexa was throwing a party on its own.
In many reported cases, AI systems have been accused of being racist, sexist or biased: in an AI system predicting future crime, black offenders were rated at higher risk of committing a future crime; in an AI-judged beauty contest, mostly white women won; and Pokémon Go stops were predominantly located in white neighborhoods, to the disadvantage of other players.
Intentionally or not, AI tends to be wrapped in inventive mysteries embedded in spectacular stories, which complicates the public discourse but also keeps it alive.
The interest in AI seems interminable. In 2016 alone, over fifteen thousand academic papers across disciplines were published on the topic. In addition, an innumerable corpus of articles online and in print adds to the almost epidemic spread of information about AI, from rock-solid science to trashy urban legends and fake news.
What are the reasons for the continuous, passionate interest in AI? The stories above, only a few out of a myriad of AI tales, all have one thing in common: they lead us to the darker places of the human mind. In these stories, the AI exhibits maleficent, immoral, dubious and questionable behavior. And we all know, on some level, that it is we humans who have laid the foundations for this behavior; it reminds us of the weaknesses, darkest fears or false attitudes we have but are unable to admit or handle. The AI mirrors human behavior, and so it cruelly reminds us of how imperfect human nature and human behavior are. We simply must accept that it taps not only into the brighter but also into the darkest parts of the human mind and behavior. Furthermore, AI strikes humanity where it hurts most: our fear of being vulnerable, imperfect and replaceable.
Humanity constantly strives to evolve, to grow and develop, to accommodate and adapt to current circumstances. And again, it is out of the same motivation as named above: the fear of being replaced by another species. There is a constant striving to reach, or stay at, the top of the food chain, because this is where humans can protect one of the most basic and necessary needs: the need for security. This evolutionary ideology after Charles Darwin, the survival of the fittest, can be transferred to many areas of human life, especially when you consider current economies. There is no panacea for the harm and problems which have arisen and will continue to arise in our fast and ever-changing world, influenced among other things by a massive amount of information which a human mind is no longer able to cope with. Yet if you want to make “good” decisions, for example in business, in order to survive, the current imperative is that you must have accurate information, reliable facts, realistic figures and empirical evidence to make a rational decision (for now, not considering the imperative of creativity and innovativeness as one of the most important competitive edges). So we talk ourselves into believing that we can make rational decisions, although, unsurprisingly, we all know that this is impossible.
(It is not even certain that this kind of human has ever lived!) Humankind must cope with the constant burden of bounded rationality and has to admit that no one will ever be able to make a purely rational choice. More than ever, we will face and must deal with unreliable information, our limited mental capacity to process it, and less time and fewer resources to make a decision. Among other things, this may constitute the underlying rationale for why we invest in technology: it patches our ever-hurting wound of being imperfect. AI represents a logical consequence of the information-overload challenge, a means of coping with complex problems produced by the massive amount of information. It is built as an extension of human mental capacities, an assistant for unpleasant work, and additional manpower – in this case, machine power – to complete multiple tasks at once and make faster decisions.
There is absolutely no way that AI can remain a topic solely for the computer sciences and movie makers.
A discourse across scientific disciplines is absolutely essential to examine AI not merely as a matter of programming languages, but as a concept with all its intricacies and its major impact on society as a whole. There is no AI without humanity. Hence, psychology as a scientific discipline is one of the major, essential lenses to use, alongside others such as philosophy and ethics, sociology and political science, health and the neurosciences.
Psychology is especially well-suited to start a discourse and work on interdisciplinary concepts, as it starts at the human level (versus sociology or political science, which start at the systems level). At the same time, ethical and biological views are very much integrated into psychology, which is why a sharp distinction between the disciplines may be neither possible nor useful anyway. Psychology investigates the mind, life and behavior of humans. As an academic discipline it has an immense scope, from cognitive psychology to social psychology, from clinical psychology to organizational psychology, and many more. When it comes to AI, one thing is clear: you cannot take the human out of AI. Whether it is human–agent interaction, perception, language, cognitive processes, or soft skills such as empathy, emotions or communication, there is always a human side involved: writing the program, editing the data, or interacting with the system. As of now there is no AI Psychology or Artificial Psychology (a term coined by Dan Curtis in 1963) in the sense of an AI with its own mind, or even consciousness, that makes decisions without any human interaction or input. Yet the progress in imitating human behavior and various mental processes is quite amazing, and much more can be expected, since this road is not a dead end and many more are joining the journey.
The more people join the discourse, though, the more complexity (and competition!) is added; and complexity is the enemy: complexity separates, disconnects and isolates. If AI is to help create a “better” life, it is essential to reduce complexity; we need to find common ground. We need a common base and a common language, and we need the ability to set up stop signs if the path leads us into dangerous territory, and to look for signposts before we get lost.
This is the first paper to establish why psychology and AI are, so to speak, inseparable. Over the course of the coming year, this series will touch upon the basic and profound concepts and theories of psychology as a science, integrated into the AI environment. Relevant psychological constructs will be explained, along with how they connect to AI. For example: what are human perception and attention (reality, ambiguity and deception), and how does AI resemble these processes? How much perceptual bias is in AI, and what are the implications? For instance, if you use an AI system in a personnel recruiting process (Human Resources), how can you make sure that applicants of a specific race or sex are not discriminated against?
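To make the recruiting question a bit more concrete, here is a minimal sketch of one common first check for such discrimination: comparing selection rates between applicant groups using the "four-fifths rule" (disparate impact ratio). The function names and all the data below are hypothetical and purely illustrative; a real audit would use the system's actual decisions and more rigorous statistical methods.

```python
# Illustrative bias check on hypothetical screening decisions
# from an AI recruiting system. All data below is made up.

def selection_rate(decisions):
    """Fraction of applicants in a group who were selected."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are commonly treated as a red flag
    (the so-called four-fifths rule)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    low, high = sorted([rate_a, rate_b])
    return low / high

# 1 = invited to interview, 0 = rejected (hypothetical outcomes)
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # selection rate 0.7
group_b = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]   # selection rate 0.3

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.70 ≈ 0.43
if ratio < 0.8:
    print("Warning: possible adverse impact, review the model.")
```

Such a check only measures one narrow notion of fairness at the outcome level; it says nothing about why the rates differ, which is exactly where the interdisciplinary discussion this series calls for becomes necessary.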
The topics have a wide range, yet they are chosen based on relevance and use in day-to-day practice. The goal is to create a general understanding, find common ground and thereby reduce complexity. Key questions and answers about the understanding, measurement and comparison of human and AI skills will be the focus of this series, which incorporates both invisible processes (the so-called black box of the human brain) and visible behaviors. The following set of issues will be covered (concrete content subject to change):
The challenge is to find the right (in this context: adequate) level of depth and complexity, so that we gain further insights and yet keep everybody across disciplines, research and practice in the same boat. After all, it is a topic that affects almost all of us. Inclusiveness and an adequate degree of comprehensiveness are critical success factors if we are to avoid failures or, even worse, disasters.
The “fail” of Microsoft’s Tay mentioned at the beginning truly is an interesting story, as you can look at it from many different angles. It provides perfect evidence that an objective, interdisciplinary approach to the development and implementation of AI is absolutely necessary. It is so straightforward and yet so complex that apparently it cannot be handled by any single person. The Tay case raises a multitude of questions, a few examples being:
The list of questions goes on and on if you start thinking about it thoroughly and as neutrally as possible. As is most often the case, when you actually start doing research you do not get answers; you raise more questions, more problems, more dilemmas. It would be far easier to stay at the surface and brush aside the case of Microsoft’s Tay as a major, disastrous AI fail, rejoicing in the suffering of others. But the easy way out is not always the best, and it is not the way to reduce complexity. We shy away from complex topics. We are scared to ask dumb questions. We do not want to lose face, and sometimes we choose to humiliate or focus on others instead of standing up to the challenge and taking responsibility for our flaws and the mistakes we make. The only way to reduce complexity is to shed light where it is dark and to decompose the huge blocks, step by step. We need a shared approach to a common understanding, using a common language and common ground.
More than ever, we need to focus on interdisciplinary, cooperative discourse in the best possible way. We even need to have a discourse about discourse per se! If we have so many people from different disciplines, with very heterogeneous backgrounds and knowledge, how do we want to approach these topics after all?
Our research endeavors within this series comprise technical and non-technical considerations to prepare our community in the best possible way. Topics, problems and dilemmas, whatever may come up, are all examined through an interdisciplinary lens to measure their social-psychological impact and ethical implications, and additionally to forecast future developments for the greater purpose, which eventually is the public good.