Organizers
We are both experienced lecturers and senior researchers focusing on the subfield of concept-based explainable AI. Our bios at the time of the tutorial are given below.
Dr. Gesina Schwalbe
University of Lübeck
Gesina Schwalbe is a postdoctoral researcher at the University of Lübeck, where since September 2024 she has led her own junior research group, Correctable Hybrid AI, funded by a German federal ministry. The group investigates how C-XAI can be used for model verification and correction of computer vision DNNs. This builds upon her general research on safety assurance and trustworthiness of AI using XAI techniques, which she pursued throughout her PhD (2018–2022, University of Bamberg) and subsequently as a postdoctoral researcher at Continental AG (until 2023). Since joining the University of Lübeck she has taken over two lecture series. She received her master's degree in mathematics from the University of Regensburg.
Dr. Jae Hee Lee
University of Hamburg
Jae Hee Lee is a postdoctoral researcher in the Knowledge Technology Group at the University of Hamburg, Germany. His research interest is in building robust multimodal language models that generalize to new tasks while retaining previously learned knowledge, using XAI techniques to improve robustness. To pursue this XAI research, he was awarded funding from the German Research Foundation (project title: Lifelong Multimodal Language Learning by Explaining and Exploiting Compositional Knowledge), which starts in mid-2025. Previously, he was a postdoc at Cardiff University, the University of Technology Sydney, and the Australian National University. He obtained his PhD and Diplom degrees from the University of Bremen, Germany.