Concept

Explainable artificial intelligence

Related publications (117)

Explainable Fault Diagnosis of Oil-Immersed Transformers: A Glass-Box Model

Yi Zhang, Wenlong Liao, Zhe Yang

Recently, remarkable progress has been made in the application of machine learning (ML) techniques (e.g., neural networks) to transformer fault diagnosis. However, the diagnostic processes employed by these techniques often suffer from a lack of interpreta ...
Piscataway, 2024

Ethological computational psychiatry: Challenges and opportunities

Mackenzie Mathis

Studying the intricacies of individual subjects' moods and cognitive processing over extended periods of time presents a formidable challenge in medicine. While much of systems neuroscience appropriately focuses on the link between neural circuit functions ...
Current Biology Ltd, 2024

Towards Trustworthy Deep Learning for Image Reconstruction

Alexis Marie Frederic Goujon

The remarkable ability of deep learning (DL) models to approximate high-dimensional functions from samples has sparked a revolution across numerous scientific and industrial domains that cannot be overemphasized. In sensitive applications, the good perform ...
EPFL, 2024

InterpretCC: Intrinsic User-Centric Interpretability through Global Mixture of Experts

Martin Jaggi, Vinitra Swamy, Jibril Albachir Frej, Julian Thomas Blackwell

Interpretability for neural networks is a trade-off between three key requirements: 1) faithfulness of the explanation (i.e., how perfectly it explains the prediction), 2) understandability of the explanation by humans, and 3) model performance. Most exist ...
2024

It’s All Relative: Learning Interpretable Models for Scoring Subjective Bias in Documents from Pairwise Comparisons

Matthias Grossglauser, Aswin Suresh, Chi Hsuan Wu

We propose an interpretable model to score the subjective bias present in documents, based only on their textual content. Our model is trained on pairs of revisions of the same Wikipedia article, where one version is more biased than the other. Although pr ...
2024

The future of human-centric eXplainable Artificial Intelligence (XAI) is not post-hoc explanations

Vinitra Swamy, Jibril Albachir Frej

Explainable Artificial Intelligence (XAI) plays a crucial role in enabling human understanding and trust in deep learning systems, often defined as determining which features are most important to a model's prediction. As models get larger, more ubiquitous ...
2023

Trusting the Explainers: Teacher Validation of Explainable Artificial Intelligence for Course Design

Vinitra Swamy, Mirko Marras, Sijia Du

Deep learning models for learning analytics have become increasingly popular over the last few years; however, these approaches are still not widely adopted in real-world settings, likely due to a lack of trust and transparency. In this paper, we tackle th ...
2023

Exploiting Explanations to Detect Misclassifications of Deep Learning Models in Power Grid Visual Inspection

Olga Fink, Giovanni Floreale

In the context of automatic visual inspection of infrastructures by drones, Deep Learning (DL) models are used to automatically process images for fault diagnostics. While explainable Artificial Intelligence (AI) algorithms can provide explanations to asse ...
Research Publishing, 2023
