27756 AI explainability with visual counterfactuals: Application to X-ray image analysis

Problem statement:

Explainable Artificial Intelligence (XAI) has become a major research area [1].
Industry, healthcare professionals, and legislators demand trustworthy algorithms and guidance for responsible automatic classification. Complex classification models, particularly those based on deep learning, are not as intuitive as simpler ones and leave the user out of the decision-making process. Explainable methods strengthen the user's confidence and give them full control over the final decision. Some of these methods produce counterfactuals: realistic alterations of a target sample. The counterfactual should be as similar as possible to the original sample, and the elements in which they differ should be intuitive for a human; at the same time, the model should classify the counterfactual differently from the original sample.
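In symbols, and as a generic formulation rather than one taken from a specific reference, a counterfactual x' of an input x under a classifier f is the closest admissible sample whose predicted class differs from that of x; in practice the hard constraint is usually relaxed into a penalized objective with a chosen target class y' and a trade-off weight lambda:

    % Generic counterfactual objective (standard formulation, for illustration only)
    x' = \operatorname*{arg\,min}_{\tilde{x}} \; d(x, \tilde{x})
         \quad \text{s.t.} \quad f(\tilde{x}) \neq f(x)
    % relaxed, penalized form that is optimized in practice:
    x' = \operatorname*{arg\,min}_{\tilde{x}} \;
         \mathcal{L}\bigl(f(\tilde{x}),\, y'\bigr) + \lambda\, d(x, \tilde{x})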

 

Recent advances in generative models for images have made image counterfactuals possible [2]. The similarity with the original sample is then measured not in pixel space but in the latent representation of the image. These promising methods have been successful on generic, standard datasets. The challenge now, and the scope of this thesis, is to apply them to real, smaller datasets and to assess their concrete potential for a realistic user.
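A minimal sketch of such a latent-space search is given below, assuming pre-trained encoder, decoder, and classifier modules (all three names are placeholders, and the losses and weights are illustrative choices; the actual method of [2] uses a hierarchical VAE and a more elaborate objective):

    import torch
    import torch.nn.functional as F

    def latent_counterfactual(x, target_class, encoder, decoder, classifier,
                              steps=200, lr=0.05, dist_weight=1.0):
        """Search for a counterfactual of image x in the latent space of a
        pre-trained generative model. encoder, decoder and classifier are
        placeholder modules; hyperparameters are illustrative.
        """
        with torch.no_grad():
            z0 = encoder(x)                       # latent code of the original image
        z = z0.clone().requires_grad_(True)
        optimizer = torch.optim.Adam([z], lr=lr)

        target = torch.tensor([target_class])
        for _ in range(steps):
            x_cf = decoder(z)                     # candidate counterfactual image
            # push the classifier prediction towards the desired (different) class ...
            cls_loss = F.cross_entropy(classifier(x_cf), target)
            # ... while measuring similarity in latent space, not pixel space
            dist_loss = F.mse_loss(z, z0)
            loss = cls_loss + dist_weight * dist_loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        return decoder(z).detach()                # the generated counterfactual image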

 


Objective:

This thesis aims to apply a method for generating visual counterfactuals to chest X-ray images, in order to improve the explainability of predictions in X-ray image analysis.

  1. Familiarize yourself with the goals of XAI and the current literature on visual counterfactuals and how they aid explainability, especially for medical images. Also become familiar with the current literature on medical image analysis, in particular for chest X-ray images, and gain an understanding of the anomalies that are predicted during chest X-ray analysis and how they present on X-ray images.
  2. Become familiar with the VinDr-CXR dataset [3], its statistical properties, class distribution, etc.
  3. Reproduce the results of an existing deep learning-based model for X-ray image analysis using the VinDr-CXR dataset; the predictions of this model will later be explained using visual counterfactuals (a sketch of such a classifier follows this list).
  4. Based on existing work with hierarchical VAEs [2], train a generative neural network model that generates X-ray images for this dataset (a sketch of a small hierarchical VAE also follows this list).
  5. Again based on [2], use the generative model you trained to generate counterfactual examples, i.e. images that look similar to the original input image but are changed so that the model's prediction for the counterfactual image differs from the original prediction. The goal is to generate a plausible fake input image that makes visible which features in the original image made the model predict a certain class, and how changing those features would change the model's prediction.
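For step 3, a typical starting point (a sketch only, not the specific baseline whose results are to be reproduced) is an ImageNet-pretrained convolutional network fine-tuned as a multi-label classifier, since a single chest X-ray can show several findings at once; the number of output classes below is an assumption:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Sketch of a multi-label chest X-ray classifier (illustrative only, not the
    # exact baseline from the literature). A VinDr-CXR image can carry several
    # findings at once, so each class gets its own independent binary output.
    NUM_CLASSES = 14  # assumption: one output per finding/label of interest

    model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
    model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

    criterion = nn.BCEWithLogitsLoss()  # multi-label: independent sigmoid per class
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def training_step(images, labels):
        """One optimization step. images: (B, 3, H, W) tensors (grayscale X-rays
        replicated to 3 channels); labels: (B, NUM_CLASSES) multi-hot targets."""
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(images), labels.float())
        loss.backward()
        optimizer.step()
        return loss.item()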
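For step 4, the sketch below shows a deliberately small, fully-connected two-level VAE on flattened, downsampled images, only to make the hierarchical structure p(z2) p(z1|z2) p(x|z1) with inference q(z2|x) q(z1|x,z2) concrete; the model in [2] is convolutional and far deeper, and all layer sizes here are arbitrary assumptions:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TwoLevelVAE(nn.Module):
        """Toy two-level hierarchical VAE on flattened images scaled to [0, 1].
        Generative model: p(z2) p(z1 | z2) p(x | z1); inference: q(z2 | x) q(z1 | x, z2).
        All layer sizes are arbitrary; the model of [2] is convolutional and deeper.
        """

        def __init__(self, x_dim=64 * 64, h_dim=512, z1_dim=64, z2_dim=16):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
            self.q_z2 = nn.Linear(h_dim, 2 * z2_dim)            # q(z2 | x)
            self.q_z1 = nn.Linear(h_dim + z2_dim, 2 * z1_dim)   # q(z1 | x, z2)
            self.p_z1 = nn.Linear(z2_dim, 2 * z1_dim)           # learned prior p(z1 | z2)
            self.dec = nn.Sequential(nn.Linear(z1_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))   # p(x | z1)

        @staticmethod
        def reparameterize(stats):
            mu, logvar = stats.chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            return z, mu, logvar

        def forward(self, x):
            h = self.enc(x)
            z2, mu2, lv2 = self.reparameterize(self.q_z2(h))
            z1, mu1, lv1 = self.reparameterize(self.q_z1(torch.cat([h, z2], dim=-1)))
            x_logits = self.dec(z1)

            # KL of q(z2 | x) against the standard-normal prior p(z2)
            kl2 = -0.5 * torch.sum(1 + lv2 - mu2.pow(2) - lv2.exp(), dim=-1)
            # KL of q(z1 | x, z2) against the learned conditional prior p(z1 | z2)
            p_mu1, p_lv1 = self.p_z1(z2).chunk(2, dim=-1)
            kl1 = 0.5 * torch.sum(
                p_lv1 - lv1 + (lv1.exp() + (mu1 - p_mu1).pow(2)) / p_lv1.exp() - 1,
                dim=-1)
            # Bernoulli reconstruction term; the method returns the negative ELBO to minimize
            recon = F.binary_cross_entropy_with_logits(
                x_logits, x, reduction="none").sum(dim=-1)
            return (recon + kl1 + kl2).mean()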

References

[1] Y. Chou, C. Moreira, P. Bruza, C. Ouyang, and J. Jorge, "Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications," Information Fusion, 2022.
[2] N. Vercheval and A. Pižurica, "Hierarchical Variational Autoencoder for Visual Counterfactuals," IEEE International Conference on Image Processing (ICIP), 2021.
[3] H. Q. Nguyen et al., "VinDr-CXR: An open dataset of chest X-rays with radiologist's annotations," 2020.