Explainability in Artificial Intelligence (XAI) has become a major research area.
Industry, healthcare professionals, and legislators demand trustworthy algorithms and guidance for responsible automatic classification. Complex classification models, particularly those based on deep learning, are less intuitive than simpler ones and leave the user out of the decision-making process. Explainable methods strengthen the user's confidence and give them full control over the final decision. Some of these methods produce counterfactuals: realistic alterations of a target sample. A counterfactual should be as similar as possible to the original sample, and the elements where they differ should be intuitive to a human; at the same time, the model must classify the counterfactual differently from the original sample.
Recent advances in generative models for images have made image counterfactuals possible. The similarity to the original sample is measured not in pixel space but in the latent representation of the image. These promising methods have been successful on generic, standard datasets. The challenge now, and the scope of this thesis, is to apply them to real, smaller datasets and assess their concrete potential for a real-world user.
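To make the counterfactual criterion concrete, the sketch below illustrates it in the simplest possible setting: a hypothetical linear classifier operating directly on a latent code (a stand-in for the representation a generative encoder would produce; the weights, bias, and latent vector are all illustrative assumptions, not part of any cited method). For a linear decision boundary, the smallest latent shift that flips the prediction lies along the weight vector, so the counterfactual stays maximally close to the original in latent space while receiving a different label.

```python
import numpy as np

# Hypothetical linear classifier in a 4-dimensional latent space
# (illustrative stand-in for a classifier on a generative model's latent code).
w = np.array([1.0, -2.0, 0.5, 0.0])   # classifier weights (assumed)
b = 0.25                               # bias (assumed)

def predict(z):
    """Class 1 if the linear score is positive, else class 0."""
    return int(z @ w + b > 0)

def counterfactual(z, margin=1e-3):
    """Smallest latent shift (in Euclidean norm) that flips the prediction.

    For a linear classifier the minimal perturbation is along the weight
    vector: move z just past the decision boundary z @ w + b = 0.
    """
    score = z @ w + b
    step = -(score + np.sign(score) * margin) / (w @ w)
    return z + step * w

z = np.array([0.5, -0.5, 1.0, 2.0])   # latent code of the original sample
z_cf = counterfactual(z)

print(predict(z), predict(z_cf))       # → 1 0  (label flips)
print(np.linalg.norm(z_cf - z))        # latent distance stays small
```

For a deep nonlinear classifier there is no closed-form solution; methods in this line of work instead search the latent space (e.g. by gradient descent on the classifier's output) and then decode the shifted latent code back into an image, so the difference remains visually intuitive.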
This thesis aims to apply a method for generating visual counterfactuals to chest X-ray images, in order to improve the explainability of predictions in X-ray image analysis.
 Y. Chou, C. Moreira, P. Bruza, C. Ouyang, J. Jorge, "Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications".
 N. Vercheval, A. Pižurica, "Hierarchical Variational Autoencoder for Visual Counterfactuals", ICIP 2021.
 H. Q. Nguyen et al., "VinDr-CXR: An Open Dataset of Chest X-rays with Radiologist's Annotations", 2020.