In the context of automatic visual inspection of infrastructures by drones, Deep Learning (DL) models are used to automatically process images for fault diagnostics. While explainable Artificial Intelligence (AI) algorithms can provide explanations to assess whether the DL models focus on relevant and meaningful parts of the input, examining all of these explanations can become exceedingly tedious for domain experts, especially when dealing with a large number of captured images. In this work, we propose a novel framework to identify misclassifications of DL models by automatically processing the related explanations. The proposed framework comprises a supervised DL classifier, an explainable AI method, and an anomaly detection algorithm that can distinguish between explanations generated by correctly classified images and those generated by misclassified images.
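The anomaly-detection stage of such a framework could be sketched as follows. This is a minimal illustration, not the project's actual implementation: it assumes explanations arrive as fixed-size saliency maps (e.g. Grad-CAM-style heatmaps) and uses scikit-learn's `IsolationForest` as a stand-in anomaly detector; the function names and the synthetic data are hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def fit_explanation_detector(correct_explanations, seed=0):
    """Fit an anomaly detector on explanations (flattened saliency maps)
    produced by images the classifier got right."""
    X = np.asarray(correct_explanations).reshape(len(correct_explanations), -1)
    return IsolationForest(random_state=seed).fit(X)

def flag_suspected_misclassifications(detector, explanations):
    """Return indices of explanations the detector deems anomalous,
    i.e. predictions that should be routed to a domain expert."""
    X = np.asarray(explanations).reshape(len(explanations), -1)
    return np.where(detector.predict(X) == -1)[0]

# Synthetic stand-ins for saliency maps: "normal" explanations are
# low-variance patterns; anomalous ones are diffuse, high-variance noise.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.1, size=(200, 8, 8))
odd = rng.normal(0.5, 1.0, size=(5, 8, 8))

detector = fit_explanation_detector(normal)
flagged = flag_suspected_misclassifications(
    detector, np.concatenate([normal[:20], odd])
)
print(flagged)  # indices >= 20 correspond to the anomalous explanations
```

In a real pipeline the detector would be fit on explanations of validated, correctly classified training images, so that at inspection time only the flagged subset needs manual review.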
Vinitra Swamy, Mirko Marras, Sijia Du
Vinitra Swamy, Jibril Albachir Frej