2022 Intelligent Sensing Winter School (12-14 and 19 December)

The 2022 Intelligent Sensing Winter School is a fully remote event on explainable AI sensing, featuring international experts presenting their research related to explainable artificial intelligence and interpretable machine learning.

Theme: Explainable AI Sensing.

Target audience: PhD and MSc students; Postdocs and Researchers. QMUL PhD students will receive Skills Points for participating.

Platform: Zoom

Registration: free, but mandatory. You can register [here].

Call for Short Presentations: Winter School participants are invited to submit an expression of interest for a short (5-10 minute) presentation related to their work, in the context of explainable AI sensing. Topics can include the application of explainable AI methods to participants’ research topics, or critical overviews of explainable AI in specific domains related to sensing. Presentations will be given on 19 December and will be attended by Winter School participants and a panel of experts on explainable AI sensing, and feedback will be provided. Please email cis-web@eecs.qmul.ac.uk with your expression of interest for a short presentation at the Winter School (including a talk title and a 2-3 sentence description of the content of your talk) by 12 December 2022.

Programme at a glance

GMT times | Monday, 12 December | Tuesday, 13 December | Wednesday, 14 December | Monday, 19 December
14:00     | Tutorial            | Tutorial             | Tutorial               | 2 talks + short talks
15:00     | 3 talks             | 3 talks              | Tutorial               | Short talks (continued)


Programme (all times are in GMT)

12 December
13:50 Welcome and introduction
14:00 Explanation paradigms for deep neural networks

Explainable AI has proven a useful tool to build more trustworthy deep neural network models and to extract insights about the data the model has been trained on. This tutorial will provide an overview of paradigms for explaining deep neural networks, in particular, self-explainable models, perturbation-based explanations, and backpropagation approaches, discussing their advantages and limitations. The tutorial will also showcase some recent work on higher-order and hierarchical explanations.
Grégoire Montavon
Freie Universität Berlin, Germany
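
The tutorial above mentions perturbation-based explanations. As a minimal, illustrative sketch of that paradigm (not code from the tutorial), one can occlude patches of an input and record how much the class score drops; the predict function below is a hypothetical stand-in for any image classifier.

import numpy as np

def occlusion_saliency(image, predict, target_class, patch=8, baseline=0.0):
    """Perturbation-based explanation: score drop when each patch is occluded."""
    base_score = predict(image)[target_class]
    h, w = image.shape[:2]
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline    # grey out one patch
            drop = base_score - predict(occluded)[target_class]
            heatmap[i // patch, j // patch] = drop           # large drop = important region
    return heatmap

# Toy usage with a dummy "classifier" that just averages pixel intensity
rng = np.random.default_rng(0)
img = rng.random((32, 32))
dummy_predict = lambda x: np.array([x.mean(), 1 - x.mean()])
print(occlusion_saliency(img, dummy_predict, target_class=0).shape)      # (4, 4)
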
14:45 Q&A with the speaker
15:00 GraphNEx XAI canvas

I will present value proposition canvases for XAI that emerged from collaborative and blended design thinking activities in the context of the GraphNEx project. I will cover why and how these canvases were elicited, and discuss how they can be used to develop explainability interfaces that are suitable for users.

Denis Gillet
EPFL, Switzerland
From local to global: explainable AI for computer audition in digital health

Computer audition (CA) is emerging as a crucial tool for the non-invasive, low-cost early screening of various diseases, such as respiratory and cardiovascular disorders. Developing explainable CA models is essential for providing clinicians and patients with interpretations they can comprehend. However, it is difficult to generalise from interpretations given by previous explainable artificial intelligence approaches in CA. To this end, I will discuss the most current developments in global explanations.
Zhao Ren
Leibniz Universität Hannover, Germany
Self-labelling images for interpretable representation learning

We will explore how self-supervised learning can be combined with optimal transport to "self-label" a dataset. This method is very generalisable and builds on the foundational principles of augmentation invariance and entropy regularisation. The resulting method can also be used for modalities besides images, such as videos or audio.
Yuki M. Asano
University of Amsterdam, The Netherlands
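
A rough sketch of the optimal-transport flavour of self-labelling described above: starting from a model's affinities to K clusters, alternately normalise rows and columns (Sinkhorn-style) so that every sample receives a label while labels stay roughly equally used. Variable names here are illustrative assumptions, not the speaker's implementation.

import numpy as np

def balanced_self_labels(scores, n_iters=50):
    """Sinkhorn-style balanced assignment of pseudo-labels.
    scores: (N, K) positive affinities of N samples to K clusters."""
    Q = scores / scores.sum()                        # joint "transport plan" over samples x labels
    n, k = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(axis=0, keepdims=True); Q /= k    # each label gets total mass 1/K
        Q /= Q.sum(axis=1, keepdims=True); Q /= n    # each sample gets total mass 1/N
    return Q.argmax(axis=1)                          # hard pseudo-labels for the next training round

# Toy usage: random affinities for 12 samples and 3 clusters
rng = np.random.default_rng(0)
labels = balanced_self_labels(rng.random((12, 3)) + 1e-6)
print(np.bincount(labels, minlength=3))              # roughly balanced cluster sizes
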
15:45 Q&A with the speakers
16:15 Closing remarks for Day 1


13 December
13:50 Welcome
14:00 Explanation paradigms and methods for comparing explanations

The role of explainability has expanded, and neural networks are now asked to answer 'What if?' counterfactual and 'Why P, rather than Q?' contrastive questions that they were not explicitly trained to answer (where P is the prediction of the network). This allows explanations to act as reasons for making further predictions. This tutorial presents a reasoning framework that allows robust machine learning and trustworthy AI to be accepted in everyday life.

Ghassan AlRegib
Georgia Tech, USA
14:45 Q&A with the speaker
15:00 ProtoVAE: a trustworthy self-explainable prototypical variational model

The need for interpretable models has fostered the development of self-explainable classifiers. Prior approaches are either based on multi-stage optimization schemes, impacting the predictive performance of the model, or produce explanations that are not transparent or trustworthy, or that do not capture the diversity of the data. To address these shortcomings, we propose ProtoVAE, a variational autoencoder-based framework that learns class-specific prototypes in an end-to-end manner and enforces trustworthiness and diversity by regularizing the representation space and introducing an orthonormality constraint. Finally, the model is designed to be transparent by directly incorporating the prototypes into the decision process.
Srishti Gautam
UiT The Arctic University of Norway, Norway
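
As a toy illustration of two ingredients mentioned in the abstract above, classification by similarity to class prototypes and an orthonormality penalty encouraging diverse prototypes, the sketch below uses assumed shapes and names and is not the ProtoVAE architecture.

import numpy as np

def prototype_logits(z, prototypes):
    """Score each class by the negative squared distance from an embedding z (d,)
    to its closest prototype. prototypes: (n_classes, n_protos, d)."""
    d2 = ((prototypes - z) ** 2).sum(axis=-1)        # (n_classes, n_protos)
    return -d2.min(axis=-1)                          # closer prototype -> higher class score

def orthonormality_penalty(prototypes):
    """Encourage diverse, non-redundant prototypes: sum over classes of ||P P^T - I||_F^2."""
    return sum((((P @ P.T) - np.eye(len(P))) ** 2).sum() for P in prototypes)

rng = np.random.default_rng(0)
protos = rng.normal(size=(3, 4, 16))                 # 3 classes, 4 prototypes each, 16-dim embeddings
z = rng.normal(size=16)
print(prototype_logits(z, protos), orthonormality_penalty(protos))
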
Using Explainable AI to Decipher Biological Rules

The phenotype of an organism emerges from the expression of a large set of genes (~20k in humans). This phenomenon can be formulated as a classification problem: the relationships between variables (gene expression) and classes (phenotypes) are inferred by deciphering the network decision using explainable AI techniques. An understanding of the biological rules can be achieved by attributing to each network prediction (the phenotype of an individual) the variables most important for the decision. However, the attributed importance does not reflect the relationships between individuals. I will discuss how we aim to overcome this and other limitations using clustering and graph signal processing.
Myriam Bontonou
ENS Lyon, France
Scalable trustworthy AI -- beyond "what", towards "how"

ML models are often not trustworthy because they focus too much on "what" rather than "how". That is, they care only about whether they are solving the task at hand ("what") but not so much about solving it right ("how"). Having recognised this issue, the ML field has been shifting its focus from "what" to "how" for the last five years. Arguably, the most common approach to address "how" is to extend the familiar benchmarking approach that used to work well for the "what" phase: build a benchmark dataset and perform "fair" comparisons by fixing the allowed ingredients. This encourages more and more complex tricks that are likely to simply overfit to the given benchmark (e.g. ImageNet). However, for the "how" problem, I believe it is more important to transfer the way humans recognise the world (human explanations) into the computational domain and eventually utilise it for model training. I will give an overview of my previous search for such ingredients that make models more explainable and more robust to distribution shifts. I will then discuss exciting future sources of such ingredients.
Seong Joon Oh
University of Tübingen, Germany
15:45 Q&A with the speakers
16:15 Closing remarks for Day 2


14 December
13:50 Welcome
14:00 Explainable AI in neuroimaging

Explainable artificial intelligence methods are emerging as an enabling technology in different fields, and biomedicine is no exception. Within this framework, I will present some showcases of the exploitation of explainable AI in the neuroimaging and imaging genetics fields, and provide hints for future investigations.

Gloria Menegaz
University of Verona, Italy
14:45 Q&A with the speaker
15:00 Methods for explaining deep neural networks and evaluating explanations

Being able to explain the predictions of machine learning models is important in critical applications such as medical diagnosis or autonomous systems. In this tutorial I will discuss methodologies to evaluate explanations as well as applications of XAI. I will also cover recent developments in XAI-based model improvement.
Wojciech Samek
Fraunhofer Heinrich Hertz Institute, Germany
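
One common way to evaluate explanations, within the tutorial's scope above, is itself perturbation-based: delete the pixels an explanation ranks as most relevant and track how quickly the class score drops (a faithful explanation makes the curve fall fast). A minimal sketch, assuming a generic predict function and a precomputed relevance map, both of which are placeholders:

import numpy as np

def deletion_curve(image, relevance, predict, target_class, steps=20, baseline=0.0):
    """Remove the most-relevant pixels first and record the class score after each step."""
    order = np.argsort(relevance.ravel())[::-1]          # most relevant pixels first
    per_step = max(1, order.size // steps)
    perturbed = image.copy().ravel()
    scores = [predict(perturbed.reshape(image.shape))[target_class]]
    for s in range(steps):
        perturbed[order[s * per_step:(s + 1) * per_step]] = baseline   # "flip" a batch of pixels
        scores.append(predict(perturbed.reshape(image.shape))[target_class])
    return np.array(scores)

# Toy usage with a dummy classifier and a random relevance map
rng = np.random.default_rng(0)
img, rel = rng.random((16, 16)), rng.random((16, 16))
dummy_predict = lambda x: np.array([x.mean(), 1 - x.mean()])
print(deletion_curve(img, rel, dummy_predict, target_class=0)[:3])
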
15:45 Q&A with the speaker
16:00 Closing remarks for Day 3


19 December
13:50 Welcome
14:00 Rethinking interpretable machine learning for audio

Most techniques used to interpret machine learning models for audio are borrowed from other domains such as image recognition and natural language processing. In this talk, I will go over the drawbacks of borrowing techniques from other domains, how to approach interpretable machine learning in ways that can be domain-independent, and how to design interpretable machine learning algorithms specifically for audio.
Vinod Subramanian
Queen Mary University of London, UK
Interpretable machine learning for sound classification

Explaining the decision-making process of modern AI-based systems is a challenging task, especially when deep architectures are considered. After constructing a deep neural network for urban sound classification, this work analyzes its decisions via layer-wise relevance propagation towards forming a suitable etiology framework.
Stavros Ntalampiras
University of Milan, Italy
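
For reference, layer-wise relevance propagation (mentioned above) redistributes the network output backwards layer by layer; a commonly used form is the epsilon-rule for a dense layer with activations a_j, weights w_jk and bias b_k (standard textbook form, not necessarily the exact variant used in this work):

z_{jk} = a_j w_{jk}, \qquad z_k = b_k + \sum_j z_{jk}, \qquad
R_j = \sum_k \frac{z_{jk}}{z_k + \epsilon\,\operatorname{sign}(z_k)}\, R_k

Up to the stabiliser \epsilon, the relevance R_k of each higher-layer neuron is shared among lower-layer neurons in proportion to their contributions z_{jk}.
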
14:30 Q&A with the speakers
14:45 Authoring Explanations for Clinical Risk Prediction

Our work focuses on developing clinical risk prediction and decision support models using Bayesian networks (BNs). We aim to add explainability and transparency to our models for better adoption. We are developing an environment with capabilities to generate explanation content and convert it to a human-readable natural language narrative.

Erhan Pisirir
Queen Mary University of London, UK
Explainable AI in Coastal Monitoring and Beach Litter Detection

The talk will address the explainability and rationale behind neural models used for coastal monitoring tasks, such as beach litter detection using computer vision. Some experiments may also be presented, including saliency maps for instance segmentation applied to waste retrieval.

Vincenzo Mariano Scarrica
University of Naples Federico II, Italy
Fast Hierarchical Games for Image Explanations

In this work, we present a model-agnostic explanation method for image classification based on a hierarchical extension of Shapley coefficients, Hierarchical Shap (h-Shap), that resolves some of the limitations of current approaches. Unlike other Shapley-based explanation methods, h-Shap is scalable and can be computed without the need for approximation. Under certain distributional assumptions, such as those common in multiple instance learning, h-Shap retrieves the exact Shapley coefficients with an exponential improvement in computational complexity.

Jacopo Teneggi
Johns Hopkins University, USA
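
As context for the scalability claim above: the exact Shapley value of a feature averages its marginal contribution over all subsets of the remaining features, which is exponential in the number of features. The brute-force sketch below (illustrative only, not h-Shap itself) makes that cost explicit for a generic set-value function.

from itertools import combinations
from math import factorial

def exact_shapley(n_features, value):
    """Exact Shapley values for a value function v(S) defined on frozensets of feature indices."""
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for size in range(len(others) + 1):
            for S in combinations(others, size):          # exponentially many subsets
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n_features - len(S) - 1) / factorial(n_features)
                phi[i] += weight * (value(S | {i}) - value(S))
    return phi

# Toy game: a coalition is worth the sum of its members' indices, so feature i gets value i
print(exact_shapley(4, lambda S: sum(S)))                 # approximately [0, 1, 2, 3]
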
A critical comparison of existing approaches in explainable AI

This talk will give an overview of various explainable AI methods, such as CLEVR-XAI, Layer-wise Relevance Propagation (LRP), Integrated Gradients, and a few others. The visualisation heatmaps obtained by these methods help us analyse the contributions of individual pixels to the prediction and understand where the methods can be used.

Vishal Yadav
Queen Mary University of London, UK
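
For reference, one of the methods named above, Integrated Gradients, attributes a prediction F(x) to input dimension i by integrating gradients along a straight path from a baseline x' to the input x; the standard definition, with an m-step Riemann approximation, is:

\mathrm{IG}_i(x) = (x_i - x_i') \int_0^1 \frac{\partial F\bigl(x' + \alpha\,(x - x')\bigr)}{\partial x_i}\, d\alpha
\;\approx\; \frac{x_i - x_i'}{m} \sum_{k=1}^{m} \frac{\partial F\bigl(x' + \tfrac{k}{m}\,(x - x')\bigr)}{\partial x_i}
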
From Shapley back to Pearson: Hypothesis Testing via the Shapley Value

In this work, we show that Shapley-based explanation methods and conditional independence testing are closely related. We introduce the SHAPley Local Independence Test (SHAPLIT), a novel testing procedure inspired by the Conditional Randomization Test (CRT) for a specific notion of local (i.e., on a sample) conditional independence. With it, we prove that for binary classification problems, each marginal contribution in the Shapley value is an upper bound to the p-value of this conditional independence test. Furthermore, we show that the Shapley value itself provides an upper bound to the p-value of a global SHAPLIT null hypothesis. As a result, we grant the Shapley value a precise statistical sense of importance with false positive rate control.

Beepul Bharti
Johns Hopkins University, USA
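
For reference, the two classical objects the abstract above connects are the Shapley value of feature i for a set-value function v over features N, and the randomisation-test p-value estimated from K resampled copies \tilde{X}_j^{(k)} of feature j drawn conditionally on the remaining features (standard definitions, not the SHAPLIT-specific bound):

\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl( v(S \cup \{i\}) - v(S) \bigr),
\qquad
\hat{p}_j = \frac{1 + \sum_{k=1}^{K} \mathbf{1}\!\left[ T\bigl(\tilde{X}_j^{(k)}, X_{-j}, Y\bigr) \ge T\bigl(X_j, X_{-j}, Y\bigr) \right]}{K + 1}
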
Explainable AI for Smart Cities

In this talk, I will first discuss whether state-of-the-art XAI methods are ready to be applied to smart city applications and what challenges smart city applications bring to the table. Next, I will present a recent user study we conducted to understand end-user perception and expectations of XAI.

Sandareka Wickramanayake
University of Moratuwa, Sri Lanka
15:40 Q&A with the speakers
16:00 Closing remarks for Day 4


Organisers
Emmanouil Benetos

Changjae Oh

Andrea Cavallaro

Olena Hrynenko

Sponsors
GraphNEx
SNF
ANR
EPSRC


