I am Gesina Schwalbe, a researcher in the field of responsible and trustworthy artificial intelligence (AI). My main research interest is combining symbolic (i.e., human-interpretable) knowledge with deep neural networks (DNNs); in particular, I am curious about how to extract, verify, integrate, and correct such knowledge in DNNs. My favorite approaches towards this are explainable AI methods that unravel the information stored in DNN structures and latent spaces into a human-understandable and -controllable form, most notably concept embedding analysis. My guiding use case for this is safe and trustworthy perception modules for automated driving and robotics; further applications include planning tasks and generative-AI-based human-machine interfaces.

This research journey started with my doctoral thesis on the safety assurance of automotive DNN-based perception, advised by Prof. Dr. Ute Schmid at the University of Bamberg, and conducted at and funded by Continental AG. One of my favorite projects in this context was KI-Absicherung. Since January 1, 2024, I have been working on my habilitation on methods of neuro-symbolic integration for self-verifying and correctable AI in the Hybrid AI group at the Institute for Software Engineering and Programming Languages, University of Lübeck. See my CV for more details.

Besides through my publications, you can also find me under water or hear me online: check out the interests tab.