Invitation to the Dagstuhl Seminar "Interactive Visualization for Fostering Trust in ML"


Tuesday, August 9, 2022

In the last week of August, the Dagstuhl Seminar on Interactive Visualization for Fostering Trust in ML will take place at the Leibniz Center for Informatics. Schloss Dagstuhl – Leibniz-Zentrum für Informatik GmbH aims to promote computer science research at the highest international level by providing infrastructure for scientific communication and exchange among researchers. The overarching goal of the five-day intensive seminar, which brings together participants from research and industry around the world, is to explore how AI systems can be made more understandable and trustworthy.

All Dagstuhl Seminars and Dagstuhl Perspectives Workshops are documented in the Dagstuhl Reports series. The list of participants can be viewed online. To guarantee the high quality of the seminars, topics for Dagstuhl Seminars and their participants are reviewed and selected by the Scientific Directorate in an application process. Participation is therefore possible by personal invitation only.

Organizers of the Dagstuhl Seminar

Abstract

Artificial intelligence, and in particular machine learning algorithms, are of increasing importance in many application areas. However, interpretability, understandability, responsibility, accountability, and fairness of the algorithms' results – all crucial for increasing humans' trust in the systems – are still largely missing. All major industrial players, including Google, Microsoft, and Apple, have become aware of this gap and recently published some form of guidelines for the use of AI.

While it is clear that the level of trust in AI systems depends not only on technical but on many other factors, including sociological and psychological ones, interactive visualization is one of the technologies with strong potential to increase trust in AI systems. In our Dagstuhl Seminar, we want to comprehensively discuss the requirements for trustworthy AI systems, including sociological and psychological aspects, as well as the technological possibilities provided by interactive visualizations to increase human trust in AI. As a first step, we will identify the factors influencing the organizational and sociological as well as psychological aspects of AI. Next, the role that visualizations play in increasing trust in AI systems will be illuminated. This includes questions such as: Which mechanisms exist to make AI systems trustworthy? How can interactive visualizations contribute? Under which circumstances are interactive visualizations the decisive factor for enabling responsible AI? And what are the research challenges that still have to be solved – in the area of machine learning or interactive visualization – to leverage this potential in real-world applications?

The planned outcome of this seminar is a better understanding of how interactive visualizations can help to foster trust in artificial intelligence systems by making them more understandable and responsible. This should encourage innovative research and help to start joint research projects tackling the issue. Concrete outcomes may be a position paper describing the research challenges identified in the seminar or a special issue featuring interactive visualizations for fostering trust in AI.
