AI and Leadership - Stop or Go?

by Marisa Tschopp
on February 18, 2021
time to read: 11 minutes

Keypoints

How leaders can shape Digital Transformation

  • AI influences organizations and collaboration
  • It is unclear what role leadership plays in shaping digital transformation
  • This paper summarizes the findings of the AI & Leadership track at the Applied Machine Learning Days conference
  • A transdisciplinary dialogue is essential to meaningfully integrate AI and leadership

Artificial Intelligence (AI) influences the way organizations structure work and collaboration. It is unclear, however, what role leadership must play in shaping the digital transformation. In this paper, we argue that a broad, transdisciplinary dialogue is the means of choice for effectively integrating AI and leadership in the workplace. To this end, the central findings of the AI & Leadership track at the Applied Machine Learning Days conference (EPFL, CH) are summarized and critically discussed. The findings are intended to serve as guidelines on the basis of which a transdisciplinary dialogue on the design of technology implementation in companies can be productively continued.

Recent developments in location-independent cloud technologies, so-called ubiquitous computing, and self-learning algorithms have enormous potential to change 21st-century jobs, including their inherent (social) structures, in lasting ways (see Cascio & Montealegre, 2016). Artificial intelligence (AI) and machine learning, for example, promise precise control of employees that does not seem to respect even their most personal spheres of life (Schafheitle et al., 2020). Some hail them as the long-awaited blessing for business excellence or the fail-safe manual for building high-performance teams, while critics fear the emergence of a pervasive Big Brother or a primacy of the algorithm.

The influence of smart technologies on the way organizations structure work and collaboration cannot be denied. However, it remains unclear how these technologies can be integrated in the workplace and what role leadership plays. Algorithms as new teammates, or even the complete automation of leadership as suggested by Amazon’s Firing by Algorithm practice (Lecher, 2019), are examples that raise pressing questions: Can a robot be a good boss or colleague and, if so, how? Where are the limits of dehumanizing work? How can a technological panopticon in Foucault’s sense be prevented?

Answers are difficult for at least two reasons. First, systematic and reliable knowledge about the relationship between AI and leadership is still being developed. One reason for this is that doubt, as a basic scientific principle, is at odds with producing knowledge on an assembly line. Second, answering these questions requires a transdisciplinary dialogue across different perspectives. This seems particularly difficult, and at the same time particularly needed, against the backdrop of growing AI echo chambers (spaces that reinforce one’s own views) and a gap between science and practice (see also Fisher, 2020).

We want to contribute to answering the Stop or Go?! question of AI and leadership. To this end, we summarize the central findings of the AI & Leadership track (Tschopp et al., 2019) at the Applied ML Days 2020. We then aim to stimulate the debate about AI and leadership by discussing these findings and pointing out paths that may be worth taking in the search for answers.

Perspectives from Science and Practice

The aim of the track was to bring together perspectives from research and practice and to enable a transdisciplinary dialogue. Current research in the fields of technology and trust, innovation and leadership, and humanoid robots was presented, enriched with the practical perspective of business ethics and illustrated with concrete case studies from the financial and technology industries.

Simon Schafheitle (University of St. Gallen, CH) presented findings from the SNF-NFP75 research project on how AI/ML algorithms affect processes and practices of human resource management in the workplace. He presented a framework of eleven (socio-)technological design options that can be used not only to categorize algorithm-based personnel management but also to design it with a view to the important relationship of trust between employees and employer. In summary: (1) Algorithmic workforce management and employee trust can go hand in hand when properly designed and managed. (2) In many areas of the employee life cycle, leadership is already automated, and (3) leadership in the context of AI will require continuous weighing of complex decisions and a certain degree of technical understanding in order to break the primacy of technology in the workplace.

Stephanie Kaudela-Baum (Lucerne University of Applied Sciences and Arts, CH) discussed the role of algorithms as new actors in the decision-making processes of a participatory leadership style. She emphasized the Leadership-as-Practice perspective and discussed the resulting dilemma of responsibility in everyday management when algorithms are able not only to prepare decisions but also to make them. In summary: (1) Algorithms transform the established decision-making modes of a participatory management style; for example, when and in which management task is the human being or the algorithm in charge? (2) What is needed is a constant, critical dialogue about the limits of the algorithm and (3) about who has the right to make the final decision in this mixture of interests.

Jamie Gloor (University of Zurich, CH) focused on the progressing automation of leadership, in particular whether and, if so, how leadership can be automated as an act of social influence. Using the example of a humanoid robot with programmed humor, she showed that this is possible in principle: not only can the often-cited routine activities be automated, but social influence can also be automated through programmed soft skills. In summary: (1) By humanizing robots, tasks can be automated that were previously reserved exclusively for humans. (2) This has consequences for education and training: how do managers and robo-managers differ when machines can, in theory, also cheer people up or motivate them?

John Havens (IEEE, USA) took up the concept of the triple bottom line in order to connect the tension between AI and leadership to the triad of human well-being, the environment, and business success. He presented the Ethically Aligned Design Framework, a standard reference for the certification and regulation of AI development and deployment. In summary: (1) There is no silver bullet, but the Ethically Aligned Design Framework can be useful as a guard rail for an ethical approach to AI by (2) identifying concrete benchmarks for human-centred design and (3) preventing managers from blindly following the dictates of the algorithm.

Pascal Strölin (UBS, CH) presented experiences from the Trudi pilot project, in which internal employee reporting was automated using voice recognition. The central thesis was that Trudi posed new challenges for managers because practical benefits, error-free operation, and regular feedback opportunities proved difficult to combine. In summary: Initial scepticism among users can be overcome by (1) open communication and (2) systematic documentation of what the algorithm can and may do. In terms of leadership, it is important to (3) define the limits of human-machine cooperation, e.g. when it comes to evaluating the quality of work.

Afke Schouten (AI Consultant, CH) presented findings from consulting practice and argued that a lack of AI literacy among managers can lead to exaggerated expectations in AI projects. In summary: (1) Successful AI project management requires leaders with technical skills and a sense for ethical dilemmas. (2) AI increases the complexity of the management task, so technology experts, too, should invest in business literacy, i.e. an understanding of complex business interrelationships.

Benedikt Ramsauer (Swiss Statistical Design & Innovation, CH) presented a project management tool that, as decision-making becomes increasingly automated, is intended to help define goals and milestones with the involvement of different stakeholders. In summary: (1) Managers should dampen excessive expectations of technology and instead (2) increase their knowledge of the technical blueprint, so that (3) the integration of AI into corporate culture can succeed.

Ulli Waltinger (Siemens AI Lab, DE) described how AI poses challenges for managers, e.g. through the transformation of business models or the ability to analyze target groups precisely. He presented the Siemens Responsible AI Framework, which emphasizes data security and codes of values. In summary: (1) Managers should continuously expand their experience with AI tools in a trial-and-error process in order to (2) integrate AI into the corporate culture and (3) protect employees from invasions of their privacy.

AI & Leadership – What’s it to be: Stop or Go?

AI and leadership – what’s it to be? Where is it headed? Whichever direction one takes, left or right, for or against: do AI and leadership fit together at all? This article argues that they do. It contributes to a debate that seems stuck between its extremes, between Nostradamus followers proclaiming the end of the world and tech evangelists promising salvation. Stop or go? Stop, but why? Go, but how fast? The findings provide at least two starting points for bringing the discussion together.

First, the transdisciplinary approach is worthwhile because it helps managers deal with the increased complexity of their leadership tasks and with the managing on sight that the technology entails. Otherwise, managers run the risk of failing, of being stopped, so to speak. This approach promotes the often-demanded modesty of managers and the admission that not everything can be known and planned in advance. Last but not least: transdisciplinarity creates trust, and without trust the integration of AI and leadership cannot succeed (see Schafheitle et al., 2020): the trust of employees in the employer, in colleagues and superiors, and in the technology itself.

Second, this article adds nuance to questions such as which competencies managers will need in the future, how to deal with increasing complexity in the workplace, or how people analytics can go hand in hand with a vital culture of trust. Is an external ethics certificate worthwhile? What about the AI literacy of managers? How much knowledge about AI is advisable given the dynamics of AI development?

Conclusion

We have pointed out various dials that can be tuned for a successful integration of AI and leadership. What exactly such fine-tuning looks like, however, will keep future research and practice busy for some time. A transdisciplinary approach that combines different methods, schools of thought, and experiences is certainly promising, if only to realize that there are other driving commands beyond stop and go waiting to be discovered. There is no ideal way to integrate AI and leadership. The findings are suitable as guidelines on the basis of which a transdisciplinary dialogue on the design of technology implementation in companies can be conducted. The videos can be downloaded from the conference homepage. We hope that this contribution supports the development of successful transdisciplinarity around AI and leadership, combined with the wish not to leave the practical field of technology in the workplace to a single, supposedly superior perspective.

Citation

This is a peer-reviewed publication co-authored by Marisa Tschopp and Simon Schafheitle in Sonderband Zukunft der Arbeit. Citation:

Tschopp, M., & Schafheitle, S. (2020). KI & Führung – Heute Hü, Morgen Hott. In J. Nachtwei & A. Sureth (Eds.), Sonderband Zukunft der Arbeit (HR Consulting Review, Vol. 12, pp. 420-423). VQP.

References

Cascio, W. F., & Montealegre, R. (2016). How technology is changing work and organizations. Annual Review of Organizational Psychology and Organizational Behavior, 3, 349-375. https://doi.org/10.1146/annurev-orgpsych-041015-062352

Fisher, G. (2020). Why every business professor should write practitioner-focused articles. Business Horizons, 63(4), 417-419. https://doi.org/10.1016/j.bushor.2020.03.004

Lecher, C. (2019, April 25). How Amazon automatically tracks and fires warehouse workers for ‘productivity’. The Verge. https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-productivity-firing-terminations

Schafheitle, S. D., Weibel, A., Ebert, I. L., Kasper, G., Schank, C., & Leicht-Deobald, U. (2020). No stone left unturned? Towards a framework for the impact of datafication technologies on organizational control [Manuscript in press]. Academy of Management Discoveries. https://doi.org/10.5465/amd.2019.0002

Tschopp, M., Weibel, A., Schmidlin, D., & Schafheitle, S. (2019). AI & Leadership [Conference Proposal]. Applied Machine Learning Days, Lausanne, Switzerland. https://www.researchgate.net/publication/337424480_Conference_Proposal_AI_Leadership_Submission_for_Applied_Machine_Learning_Days_EPFL_accepted_2019_date_of_conference_2020

About the Author

Marisa Tschopp

Marisa Tschopp (Dr. rer. nat., University of Tübingen) is actively engaged in research on Artificial Intelligence from a human perspective, focusing on psychological and ethical aspects. She has shared her expertise on TEDx stages, among others, and represents Switzerland on gender issues within the Women in AI Initiative. (ORCID 0000-0001-5221-5327)
