Ethical Innovation Lab
Autonomous and artificial intelligence systems are increasingly pervasive in economics, advertising, healthcare, and the digitization of everyday life in general. Learning- and data-based algorithms are being used in ways that were almost inconceivable only a few decades ago. While many researchers apply these algorithms with great care and only the best of intentions, a thorough investigation into the potential societal and ethical implications is required to ensure fair, balanced, and safe algorithmic designs from the outset, resulting in what we call, in short, "ethically-aligned algorithms".
Consequently, at the IME we are committed to an inclusive and transdisciplinary approach to the design of autonomous intelligent systems. The Ethical Innovation Lab investigates how such design processes can be established and how ethical implications can be identified and addressed at the lowest possible implementation levels. In keeping with the nature of this approach, we reach out to all stakeholders and interested partners to work cooperatively towards the transdisciplinary design of ethical algorithms.
In education, we aim to endow students both with the ability to explain technical approaches and with the ability to engage in discourse on ethical matters. Engineers of the future need to be able to anticipate, comprehend, and take an active part in shaping society in inclusive, fair, and democratic ways. To this end, the Ethical Innovation Lab develops inter- and transdisciplinary forms of scientific communication and participative formats that involve the public in important technological matters.
Please do not hesitate to contact us if you are interested in collaborations!
Projects and Theses
Click here to check out our current research projects and theses topics.
2022
On the Ethical and Epistemological Utility of Explicable AI in Medicine, Philosophy & Technology, vol. 35, no. 50, pp. 31, 2022.
DOI: | 10.1007/s13347-022-00546-y |
File: | s13347-022-00546-y |
Bibtex: | @article{He22, author = {Herzog, Christian}, doi = {10.1007/s13347-022-00546-y}, journal = {Philosophy \& Technology}, number = {50}, pages = {31}, title = {On the Ethical and Epistemological Utility of Explicable AI in Medicine}, url = {https://doi.org/10.1007/s13347-022-00546-y}, volume = {35}, year = {2022} } |
2021
On formal ethics versus inclusive moral deliberation, AI and Ethics, 2021.
DOI: | 10.1007/s43681-021-00045-4 |
File: | s43681-021-00045-4 |
Bibtex: | @article{He21b, author = {Herzog, Christian}, doi = {10.1007/s43681-021-00045-4}, issn = {2730-5953}, journal = {AI and Ethics}, month = {March}, title = {On formal ethics versus inclusive moral deliberation}, url = {http://link.springer.com/10.1007/s43681-021-00045-4}, year = {2021}, keyword = {ResearchTopicEthics;KeyPub}, keywords = {ResearchTopicEthics;KeyPub} } |
On the risk of confusing interpretability with explicability, AI and Ethics, 2021.
DOI: | 10.1007/s43681-021-00121-9 |
File: | s43681-021-00121-9 |
Bibtex: | @article{He21c, title = {On the risk of confusing interpretability with explicability}, copyright = {All rights reserved}, issn = {2730-5953, 2730-5961}, url = {https://link.springer.com/10.1007/s43681-021-00121-9}, doi = {10.1007/s43681-021-00121-9}, abstract = {Abstract This Comment explores the implications of a lack of tools that facilitate an explicable utilization of epistemologically richer, but also more involved white-box approaches in AI. In contrast, advances in explainable artificial intelligence for black-box approaches have led to the availability of semi-standardized and attractive toolchains that offer a seemingly competitive edge over inherently interpretable white-box models in terms of intelligibility towards users. Consequently, there is a need for research on efficient tools for rendering interpretable white-box approaches in AI explicable to facilitate responsible use.}, language = {en}, urldate = {2021-12-17}, journal = {AI and Ethics}, author = {Herzog, Christian}, month = {Dec}, year = {2021} } |
Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use, Science and Engineering Ethics, vol. 27, no. 1, pp. 3, 2021.
DOI: | 10.1007/s11948-021-00283-z |
File: | s11948-021-00283-z |
Bibtex: | @article{He21, author = {Herzog, Christian}, doi = {10.1007/s11948-021-00283-z}, issn = {1353-3452}, journal = {Science and Engineering Ethics}, month = {feb}, number = {1}, pages = {3}, title = {Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use}, url = {http://link.springer.com/10.1007/s11948-021-00283-z}, volume = {27}, year = {2021}, keyword = {ResearchTopicEthics}, keywords = {ResearchTopicEthics} } |
2019
Technological Opacity of Machine Learning in Healthcare, in 2nd Weizenbaum Conference - Challenges of Digital Inequality, Berlin, Germany, 2019.
DOI: | 10.34669/wi.cp/2.7 |
Bibtex: | @inproceedings{He19b, address = {Berlin, Germany}, author = {Herzog, Christian}, year = {2019}, booktitle = {2nd Weizenbaum Conference - Challenges of Digital Inequality}, title = {{Technological Opacity of Machine Learning in Healthcare}}, doi = {10.34669/wi.cp/2.7} } |
Members
Christian Herzog, né Hoffmann
Building 64, Room 129
christian.herzog(at)uni-luebeck.de
+49 451 3101 6211