
Review: Three days of responsible AI – our first international symposium

What does it mean to develop, regulate, and apply artificial intelligence responsibly? With this central question in mind, the Center for Responsible AI Technologies (CReAITech), together with the Bavarian Academy of Sciences and Humanities (BAdW), hosted the first international symposium “Responsible AI: Promises, Pitfalls, and Practices” from April 8 to 10, 2025, at the Carl Friedrich von Siemens Foundation in Munich.

Featuring six keynotes from distinguished speakers across disciplines and five diverse, thematically curated sessions — each combining short impulse talks with joint discussions — the symposium created a vibrant space for interdisciplinary exchange. The event sent a clear message: critical reflection in AI research is not optional, but essential.

Many people contributed to the success of this symposium. Special thanks go to the Bavarian Academy of Sciences and Humanities (BAdW) and the Carl Friedrich von Siemens Foundation for their generous support and hospitality.

Day 1: Public engagement and opening keynote

One full day of the symposium was dedicated to bringing the ethical and societal dimensions of AI into broader public discourse. In two interactive workshops — one for school students, another for interested members of the public — participants used the Lego Serious Play® method to develop their own ideas of what “responsible AI” might mean. These sessions, led by Dr. Marietta Menner, Head of MINT Education at the University of Augsburg, and her team, sparked inspiring visions and discussions.

On the evening of April 8, Jutta Haider (University of Borås) launched the academic program with her keynote “Infrastructures of Denial: The Banality of AI-powered Climate Obstruction.” She demonstrated how generative AI systems, through algorithmic bias, flawed sources, or economic interests, can contribute to the reproduction of societal ignorance about the climate crisis. Her central question: How can we ethically and politically confront these dynamics?

Day 2: Bias, responsibility, and applications

The second day began with the session “Bias and Responsibilities.” Yang Yiran (Radboud University) showed how deeply racial biases are embedded in AI image generators. Dalia Yousif (Technical University of Munich) emphasized that cultural background significantly shapes how AI-generated content is perceived. Marina Tropmann-Frick (HAW Hamburg) introduced VERIFAI, a tool designed to support the development of trustworthy and explainable AI.

Two keynotes addressed institutional responsibility in greater depth. Sandra Soo-Jin Lee (Irving Medical Center, Columbia University), in her talk “Responsible AI in Practice: Institutional Morality, Power, and Ethical Cultures,” stressed the importance of embedding ethical values early in AI development — as an ongoing process grounded in societal engagement. AI systems, she argued, are not just technical artifacts; they must reflect social values.

Jacob Metcalf (Data & Society Research Institute, New York), in his keynote “Auditing Work: Lessons for AI Regulation from NYC’s Hiring Algorithm Auditing Law,” examined the challenges of legal AI regulation. Using New York City’s bias-auditing law for hiring algorithms as a case study, he critiqued the law’s vague provisions — such as the lack of a clear definition of “independent auditors” — as hindrances to effective regulation and protection for affected individuals.

In the “Applications” session, the practical use of AI took center stage. Annemarie Friedrich (University of Augsburg) explained that current large language models still fall short of true multilingual capability — posing major challenges, especially in migration contexts. Eike Düvel and Michael Schmidt (Karlsruhe Institute of Technology) warned that generative AI can subtly undermine political autonomy and opinion formation through biased outputs and manipulative dialogue paths. Hermann Diebel-Fischer (Technical University of Dresden) explored the ethical minimum standards required for an LLM-based chatbot in pastoral care.

The afternoon shifted focus to AI and medicine. In his keynote “The European Health Data Space and the Global Politics of Data Dreams and Desires,” Klaus Lindgaard Høyer (University of Copenhagen) critiqued the commercial exploitation of health data within the European Health Data Space (EHDS). He warned against the dehumanizing tendencies of technological systems and urged that clinical responsibility and accountability remain central to data-driven, AI-supported decisions.

Noam Shomron (Tel Aviv University) opened the medical session by illustrating how AI aids genetic research in uncovering disease mechanisms, outlining what “responsibility” means in the context of “responsible genomics.” Dominic Lammar (Technical University of Munich) countered the prevailing AI hype in medicine, calling for more realistic communication about the capabilities and limitations of AI applications in healthcare. Paul Trauttmansdorff (Technical University of Munich) closed the session by addressing the challenge of explainability in AI-powered medical diagnostics, especially concerning technologies like facial recognition — emphasizing that “explainable” does not automatically mean “responsible.”

Day 3: Sustainability, infrastructures, and frameworks

Sabina Leonelli (Technical University of Munich) opened the third day with her keynote “AI for Democratic Societies: Convenience, Misinformation and the Struggle for Planetary Health.” She addressed the often-invisible social and ecological costs of AI, encouraging critical reflection on “convenience AI” systems that simplify daily life while externalizing their societal and environmental impacts.

In the following session on sustainability, Theresa Willem and Marie Piraud (Technical University of Munich & Helmholtz Munich) presented a study accompanying the introduction of the “PERUN” tool at Helmholtz Munich. The tool allows developers to monitor the energy consumption of AI computations, and the study provided insights into users’ experiences and perspectives. Cian O’Donovan (University College London) critically examined “hopeful imaginaries” of AI and how these often unwittingly reproduce colonial structures.

In her talk “Taming the Environmental Fallout of AI: Between Strategic Ignorance and Enhanced Innovationism,” Ulrike Felt (University of Vienna) focused on the often-overlooked environmental consequences of AI — data waste, energy use, e-waste, and rare earth mining. She argued that this invisibility is structurally embedded, not accidental. Her analysis was based on EU policy documents, such as the EU Ethics Guidelines and the Commission’s “Eyes on the Future” strategy paper.

The final session, “Transformations & Frameworks,” was opened by Philipp Neudert (RWTH Aachen), who emphasized the need for systemic reflection and for embedding ethics in the innovation systems that aim to produce responsible AI. Thomas Gremsl (University of Graz) introduced an “ethics compass” for evaluating AI systems in corporate settings. Job Timmermans (Netherlands Defence Academy) concluded with his concept of “Networked Responsibility” — a model of shared responsibility particularly relevant for autonomous systems such as drones, where responsibility is distributed across complex networks.

Conclusion: Responsibility is a process, not a label

The first international CReAITech symposium, with its wide range of interdisciplinary talks and topics, clearly demonstrated: responsible AI is not merely a technical challenge, but one deeply rooted in societal, ethical, and political contexts. Responsibility is not a label — it is a process: one of dialogue, reflection, structural change, and transparency.

The interdisciplinary exchange between the technical, social, and human sciences opened up new perspectives and underscored a central point: building a just, inclusive, and sustainable AI future begins with the questions we choose to ask today.