
Research project MedAIcine

AI technologies and machine learning are already widely used and researched in medicine, yet in both practice and theory they still resemble an opaque black box more than a trustworthy, controllable tool. Our pilot research project MedAIcine addresses key challenges and tensions in the responsible design and use of AI in medical imaging.

The possible applications of artificial intelligence (AI) appear almost unlimited: AI-supported processing and rapid evaluation of large amounts of data enable robot-assisted surgery, chatbots and screening apps as diagnostic aids, and the continuous monitoring of chronic diseases with medical wearables such as fitness trackers, which measure, record and interpret a patient’s vital signs. In clinical decision-making, AI thus acts as a kind of “sparring partner” for physicians within human-machine interaction (Helmholtz 2022). Thanks to specially developed deep-learning algorithms and software, AI has the potential to substantially improve medical care in terms of individual prevention, screening, diagnostics, prognosis and therapy.

Although the use of AI in medicine sounds promising at first, ethical analysis points to risks in its current design and use: lack of transparency, explainability and fairness, as well as insufficient protection of patients’ privacy and their sensitive health data, are just a few of the specific challenges. For example, on which data set is an AI-assisted diagnosis based? Are the training data representative of the individuals being treated, or do they carry implicit bias? Was the General Data Protection Regulation (GDPR) complied with when the data were collected? Yet it is not only technological aspects that matter in assessing responsible AI. Genuinely philosophical questions, such as those about the good life, good coexistence and freedom of action, must also be considered in AI research and critically reflected upon in light of today’s digital environment.

Project participants

Bernhard Bauer

Bernhard Bauer is Professor of Software Methodologies for Distributed Systems at the University of Augsburg.

Svenja Breuer

Svenja Breuer is a research associate and doctoral candidate in Science and Technology Studies at the Technical University of Munich and researches the social aspects of artificial intelligence in the healthcare sector.

Alena Buyx

Alena Buyx is Professor of Ethics in Medicine and Health Technologies, Director of the Institute for the History and Ethics of Medicine and holder of the Chair of Ethics in Medicine and Health Technologies at the Technical University of Munich.

Christopher Koska

Christopher Koska is a senior researcher at the Chair of Practical Philosophy at the Munich School of Philosophy. He conducts research on data and algorithm ethics, corporate digital responsibility (CDR) and digital learning and educational media.

Ruth Müller

Ruth Müller is Associate Professor of Science & Technology Policy and Vice Dean for Talent Management & Diversity at the Technical University of Munich.

Benjamin Rathgeber

Benjamin Rathgeber is Professor of Philosophy of Science, Philosophy of Nature and Philosophy of Technology, with a focus on Artificial Intelligence, at the Munich School of Philosophy.

Michael Reder

Michael Reder is Professor of Practical Philosophy, holds the Chair of Practical Philosophy with a focus on International Understanding, and serves as Vice President for Research at the Munich School of Philosophy.

Kerstin Schlögl-Flierl

Kerstin Schlögl-Flierl holds the Chair of Moral Theology at the University of Augsburg.

Paula Ziethmann

Paula Ziethmann is a research assistant and doctoral candidate specializing in the ethics of artificial intelligence at the Chair of Moral Theology at the University of Augsburg.