
Research Article
User Study on the Effects of Explainable AI Visualizations on Non-experts
@INPROCEEDINGS{10.1007/978-3-030-95531-1_31,
  author={Sophia Schulze-Weddige and Thorsten Zylowski},
  title={User Study on the Effects of Explainable AI Visualizations on Non-experts},
  proceedings={ArtsIT, Interactivity and Game Creation. Creative Heritage. New Perspectives from Media Arts and Artificial Intelligence. 10th EAI International Conference, ArtsIT 2021, Virtual Event, December 2-3, 2021, Proceedings},
  proceedings_a={ARTSIT},
  year={2022},
  month={2},
  keywords={Explainable AI; Human-centric AI; User study},
  doi={10.1007/978-3-030-95531-1_31}
}
- Sophia Schulze-Weddige
- Thorsten Zylowski
Year: 2022
User Study on the Effects of Explainable AI Visualizations on Non-experts
ARTSIT
Springer
DOI: 10.1007/978-3-030-95531-1_31
Abstract
Artificial intelligence is drastically changing the process of creating art. However, in art, as in many other domains, algorithms and models are not immune to generating discriminatory and unfair artifacts or decisions. Explainable Artificial Intelligence (XAI) makes it possible to look into the “black box” and to identify biases and discriminatory behaviour. One of the main problems of XAI is that state-of-the-art explanation tools are usually tailored to AI experts. This paper evaluates how intuitively understandable the same tools are to laypeople. Using the prototypical use case of predictive sales and testing the results with users, the abstract ideas of XAI are transferred to a real-world setting to study their understandability.
Based on our analysis, it can be concluded that explanations are easier to understand if they are presented in a way that is familiar to the users. A presentation in natural language is favorable because it presents facts unambiguously. All relevant information should be accessible in an intuitive manner that avoids sources of misinterpretation. It is desirable to design the system in an interactive way that allows the user to request further details on demand. This makes the system more flexible and adjustable to the use case. The results presented in this paper can guide the development of explainability tools that are adapted to a non-expert audience.
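As a loose illustration of the natural-language recommendation above (not the paper's actual tooling), the following Python sketch turns a model's feature importances into short plain-language sentences for a hypothetical predictive-sales model; the feature names, data, and the helper explain_in_words are invented for illustration only.

```python
# Hypothetical sketch: phrasing a model's feature importances as plain
# sentences for a predictive-sales-style use case. Names and data are
# illustrative, not taken from the study.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["ad_budget", "store_visits", "season_index", "discount_rate"]
X = rng.random((200, len(features)))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def explain_in_words(model, feature_names, top_k=2):
    """Rank features by importance and phrase the result as sentences."""
    ranked = sorted(zip(feature_names, model.feature_importances_),
                    key=lambda pair: pair[1], reverse=True)
    return " ".join(
        f"The sales prediction is driven mainly by '{name}' "
        f"(importance {weight:.2f})."
        for name, weight in ranked[:top_k]
    )

print(explain_in_words(model, features))
```

Such a text-based rendering could serve as one baseline condition in a study like the one described here, alongside the visual explanation formats that the paper evaluates.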