The attention mechanism has revolutionized deep learning research across many disciplines, starting from NLP and expanding to vision, speech, and beyond. Unlike many alternatives, attention is elegant and general: it adapts easily to new domains and eliminates modality-specific inductive biases. As attention becomes increasingly popular, it is crucial to develop tools that allow researchers to understand and explain its inner workings, facilitating better and more responsible use. This tutorial focuses on understanding and interpreting attention in the vision and multi-modal settings. We present state-of-the-art research on representation probing, interpretability, and attention-based semantic guidance, alongside hands-on demos to facilitate interactivity. Additionally, we discuss open questions arising from recent works and future research directions.
The following is an outline of the topics we will cover in the tutorial. A detailed description can be found in this document.
Our tutorial will be conducted in a hybrid manner on June 18, 2022, from 9:00 AM to approximately 12:00 PM. Due to visa issues, Sayak won't be able to present in person, so he will join and present virtually; Hila will present in person. The tutorial will take place at the Vancouver Convention Center.
 Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, Chefer et al.
 Do Vision Transformers See Like Convolutional Neural Networks?, Raghu et al.
 What do Vision Transformers Learn? A Visual Exploration, Ghiasi et al.
 Quantifying Attention Flow in Transformers, Abnar et al.
 Optimizing Relevance Maps of Vision Transformers Improves Robustness, Chefer et al.
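As a taste of the methods covered above, the attention-rollout procedure from "Quantifying Attention Flow in Transformers" (Abnar et al.) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' reference implementation: per-layer attention maps are averaged over heads, augmented with the identity to account for residual connections, row-renormalized, and multiplied across layers. The function name and the toy shapes are ours for demonstration.

```python
import numpy as np

def attention_rollout(attentions):
    """Sketch of attention rollout (Abnar et al., 2020).

    attentions: list of per-layer attention maps, each of shape
    (num_heads, seq_len, seq_len), with rows summing to 1.
    Returns a (seq_len, seq_len) map of accumulated attention
    from output tokens (rows) to input tokens (columns).
    """
    seq_len = attentions[0].shape[-1]
    rollout = np.eye(seq_len)
    for attn in attentions:
        a = attn.mean(axis=0)                    # average over heads
        a = a + np.eye(seq_len)                  # model the residual connection
        a = a / a.sum(axis=-1, keepdims=True)    # renormalize rows
        rollout = a @ rollout                    # accumulate across layers
    return rollout

# Toy example: 2 layers, 3 heads, 4 tokens, random row-stochastic maps.
rng = np.random.default_rng(0)
attns = [rng.random((3, 4, 4)) for _ in range(2)]
attns = [a / a.sum(axis=-1, keepdims=True) for a in attns]
r = attention_rollout(attns)
print(r.shape)  # (4, 4); each row of r sums to 1
```

In a ViT, the first row of the resulting map (the CLS token) is typically reshaped into a patch grid and visualized as a heatmap over the image.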