Journal article. Proceedings of the ACM on Human-Computer Interaction, Year: 2025

Impact of Explanation Technique and Representation on Users' Comprehension and Confidence in Explainable AI

Julien Delaunay
Luis Galárraga
Christine Largouët
Niels Van Berkel

Abstract

Local explainability, an important sub-field of eXplainable AI, focuses on describing the decisions of AI models for individual use cases by providing the underlying relationships between a model's inputs and outputs. While the machine learning community has made substantial progress in improving explanation accuracy and completeness, these explanations are rarely evaluated by the final users. In this paper, we evaluate the impact of various explanation and representation techniques on users' comprehension and confidence. Through a user study on two different domains, we assessed three commonly used local explanation techniques (feature-attribution, rule-based, and counterfactual) and explored how their visual representation (graphical or text-based) influences users' comprehension and trust. Our results show that the choice of explanation technique primarily affects user comprehension, whereas the graphical representation impacts user confidence.
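To make the three explanation techniques named in the abstract concrete, here is a minimal, hypothetical sketch on a toy linear loan-approval model. It is not the study's material or any specific XAI library; the model, feature names, and thresholds are invented purely to illustrate what a feature-attribution, rule-based, and counterfactual explanation of a single prediction can look like.

```python
# Hypothetical illustration of the three local explanation styles (not the paper's code).

def predict(x, w, b):
    """Toy loan-approval model: approve when the linear score is positive."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return score > 0, score

features = ["income_k", "debt_k", "years_employed"]   # invented feature names
w, b = [0.05, -0.10, 0.20], -2.0                       # invented model weights
x = [40, 15, 3]                                        # one individual case

approved, score = predict(x, w, b)

# 1. Feature attribution: each feature's signed contribution to the score.
attribution = {f: wi * xi for f, wi, xi in zip(features, w, x)}

# 2. Rule-based: a local decision rule consistent with the model around x
#    (holding the other feature values fixed).
rule = "IF income_k <= 58 THEN reject"

# 3. Counterfactual: smallest change to one feature that flips the decision
#    (here we only search over income, for simplicity).
counterfactual = None
for delta in range(1, 200):
    flipped, _ = predict([x[0] + delta, x[1], x[2]], w, b)
    if flipped != approved:
        counterfactual = f"income_k: {x[0]} -> {x[0] + delta}"
        break

print("decision:", "approve" if approved else "reject", f"(score={score:.2f})")
print("attribution:", attribution)
print("rule:", rule)
print("counterfactual:", counterfactual)
```

Each of these outputs could in turn be shown graphically (e.g., a bar chart of attributions) or as text, which is the representation dimension the study varies.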

CCS Concepts: • Human-centered computing → Empirical studies in HCI; • Computing methodologies → Artificial intelligence.

Main file
delaunay et al.-2025.pdf
Origin: Publisher files authorized on an open archive

Dates and versions

hal-04948723 , version 1 (14-02-2025)

Cite

Julien Delaunay, Luis Galárraga, Christine Largouët, Niels Van Berkel. Impact of Explanation Technique and Representation on Users' Comprehension and Confidence in Explainable AI. Proceedings of the ACM on Human-Computer Interaction, 2025, 9 (2), Article CSCW113. ⟨10.1145/3711011⟩. ⟨hal-04948723⟩