Schedule
The tutorial is split into two 90-minute sessions with a break in between.
Session 1:
- (5 min) Welcome
- Introduction to XAI
  - (20 min) Background
  - (30 min) Overview of XAI approaches
- Practical Session
  - (5 min) Discussion
  - (30 min) II.1 Hands-on
Session 2:
- Deep Dive
  - (30 min) Deep dive: C-XAI (for verification of CV)
  - (30 min) Deep dive: LLM explanations
- (10 min) Outlook
- (5 min) Conclusion
Some Details
- Background: What is a (good) explanation? Following the definitions collected and established by Schwalbe and Finzel (2023)¹, we first introduce the main notions of explainability, transparency, and trustworthiness, as well as their motivation. This culminates in an overview of typical explainability problems and the aspects to consider for each (such as local versus global, model-agnostic versus model-specific). Lastly, we visit desiderata for good explanations, following the classical work by Miller².
- Overview of explainability methods: We give a broad overview of the different types of explainability approaches, covering common use cases such as local feature importance explanations and global explanations of learned latent features. Advantages and disadvantages of the methods are discussed.
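To give a flavor of the local feature importance explanations mentioned above, here is a minimal occlusion-based sketch: patches of the input are masked out and the drop in the model's score is recorded as that patch's importance. The toy `model` and all parameter choices below are hypothetical, for illustration only, and do not correspond to any specific method covered in the tutorial.

```python
import numpy as np

def occlusion_importance(model, x, patch=4, baseline=0.0):
    """Local feature importance by occlusion: mask patches of the
    input and record how much the model's score drops."""
    base_score = model(x)
    h, w = x.shape
    heat = np.zeros_like(x, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = x.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            # Positive values: masking this patch hurts the score.
            heat[i:i + patch, j:j + patch] = base_score - model(occluded)
    return heat

# Toy "model": the score is the mean of the top-left 8x8 quadrant,
# so only that quadrant should receive nonzero importance.
model = lambda img: img[:8, :8].mean()
x = np.ones((16, 16))
heat = occlusion_importance(model, x)
```

Real implementations differ mainly in how they perturb the input (occlusion, blurring, sampling) and how they aggregate the resulting score changes; the principle stays the same.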
- Deep Dive: Concept-based explainability: As a highlight of the overview part, we dive deeper into the subfield of global latent space introspection. The audience learns how to explore the concepts learned and represented internally by deep neural networks (DNNs), and gets a peek at applications to debugging and verifying DNNs.
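One common recipe for such latent space introspection is to fit a linear direction in a layer's activation space that separates examples showing a concept from random examples, and then score new activations by projecting onto that direction. The sketch below illustrates the idea on synthetic activations; the data, dimensions, and the least-squares fit are all illustrative assumptions, not the specific technique taught in the session.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: activations of an intermediate DNN layer for
# images showing a concept (e.g. "striped") vs. random images.
dim = 32
concept_acts = rng.normal(loc=1.0, size=(100, dim))  # concept examples
random_acts = rng.normal(loc=0.0, size=(100, dim))   # random examples

# Fit a linear direction separating the two sets (least squares on
# +/-1 labels); the weight vector serves as the concept direction.
X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(100), -np.ones(100)])
Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
cav = w[:dim] / np.linalg.norm(w[:dim])    # unit-norm concept vector

# Score a new activation by projecting onto the concept direction:
new_act = rng.normal(loc=1.0, size=dim)
score = new_act @ cav
```

Projections of concept examples land higher along `cav` than those of random examples, which is what makes the direction usable for inspecting what a layer encodes.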
- Outlook: Some interesting open challenges, both from XAI¹ and from C-XAI³, are presented to spark interest in XAI research.
1. Schwalbe, Gesina, and Bettina Finzel. 2023. “A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts.” Data Mining and Knowledge Discovery, January. https://doi.org/10.1007/s10618-022-00867-8.
2. Miller, Tim. 2019. “Explanation in Artificial Intelligence: Insights from the Social Sciences.” Artificial Intelligence 267 (February): 1–38. https://doi.org/10.1016/j.artint.2018.07.007.
3. Lee, Jae Hee, Georgii Mikriukov, Gesina Schwalbe, Stefan Wermter, and Diedrich Wolter. 2024. “Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go?” In . Milano, Italy. https://doi.org/10.48550/ARXIV.2409.13456.