ArtsIT, Interactivity and Game Creation. Creative Heritage. New Perspectives from Media Arts and Artificial Intelligence. 10th EAI International Conference, ArtsIT 2021, Virtual Event, December 2-3, 2021, Proceedings

Research Article

User Study on the Effects of Explainable AI Visualizations on Non-experts

Cite
BibTeX / Plain Text
  • @INPROCEEDINGS{10.1007/978-3-030-95531-1_31,
        author={Sophia Schulze-Weddige and Thorsten Zylowski},
        title={User Study on the Effects of Explainable AI Visualizations on Non-experts},
        proceedings={ArtsIT, Interactivity and Game Creation. Creative Heritage. New Perspectives from Media Arts and Artificial Intelligence. 10th EAI International Conference, ArtsIT 2021, Virtual Event, December 2-3, 2021, Proceedings},
        proceedings_a={ARTSIT},
        year={2022},
        month={2},
        keywords={Explainable AI, Human-centric AI, User study},
        doi={10.1007/978-3-030-95531-1_31}
    }
    
  • Sophia Schulze-Weddige
    Thorsten Zylowski
    Year: 2022
    User Study on the Effects of Explainable AI Visualizations on Non-experts
    ARTSIT
    Springer
    DOI: 10.1007/978-3-030-95531-1_31
Sophia Schulze-Weddige1, Thorsten Zylowski1
  • 1: Future Labs, CAS Software AG, CAS-Weg 1-5, Karlsruhe, Germany

Abstract

Artificial intelligence is drastically changing the process of creating art. However, in art, as in many other domains, algorithms and models are not immune to generating discriminatory and unfair artifacts or decisions. Explainable Artificial Intelligence (XAI) makes it possible to look into the “black box” and to identify biases and discriminatory behavior. One of the main problems of XAI is that state-of-the-art explanation tools are usually tailored to AI experts. This paper evaluates how intuitively understandable the same tools are to laypeople. Using predictive sales as a prototypical use case and testing the results with users, we transfer the abstract ideas of XAI to a real-world setting and study their understandability.
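The paper does not specify which explanation tools it evaluated. As an illustration of the kind of “black box” inspection XAI enables, the following is a minimal sketch that computes feature attributions for a hypothetical predictive-sales classifier using scikit-learn's permutation importance; the feature names and data are invented for the example.

    # Minimal sketch: feature attributions for a hypothetical
    # predictive-sales classifier. Features and data are invented for
    # illustration; the paper does not name its exact tooling.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    features = ["deal_size", "days_since_contact", "num_meetings", "customer_age"]
    X = rng.normal(size=(500, len(features)))
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Permutation importance: how much does shuffling each feature hurt accuracy?
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, score in sorted(zip(features, result.importances_mean),
                              key=lambda t: -t[1]):
        print(f"{name}: {score:.3f}")

Numeric importance scores like these are the raw material that expert-oriented explanation tools present, and they are the kind of output whose intelligibility to laypeople the study examines.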

Based on our analysis, it can be concluded that explanations are easier to understand if they are presented in a way that is familiar to the users. A presentation in natural language is favorable because it presents facts unambiguously. All relevant information should be accessible in an intuitive manner that avoids sources of misinterpretation. It is desirable to design the system in an interactive way that allows the user to request further details on demand; this makes the system more flexible and adjustable to the use case. The results presented in this paper can guide the development of explainability tools that are adapted to a non-expert audience.
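The paper states these guidelines without prescribing an implementation. One possible way to act on the “natural language, details on demand” recommendation is sketched below; the template wording, the two detail levels, and the attribution values are illustrative assumptions, not the paper's design.

    # Sketch: render feature attributions as a short natural-language
    # explanation, with a fuller breakdown available on request.
    # Templates and detail levels are illustrative assumptions.
    def explain(attributions, top_k=2, detailed=False):
        """attributions: list of (feature_name, importance) pairs."""
        ranked = sorted(attributions, key=lambda t: -abs(t[1]))
        top = ", ".join(name for name, _ in ranked[:top_k])
        summary = f"The prediction was mainly driven by: {top}."
        if not detailed:
            return summary
        lines = [f"  {name}: importance {score:+.3f}" for name, score in ranked]
        return summary + "\nFull breakdown:\n" + "\n".join(lines)

    attrs = [("deal_size", 0.41), ("num_meetings", 0.22),
             ("days_since_contact", -0.05), ("customer_age", 0.01)]
    print(explain(attrs))                 # short summary for non-experts
    print(explain(attrs, detailed=True))  # further details on demand

Keeping the default output to a single familiar sentence, and gating the full breakdown behind an explicit request, mirrors the finding that relevant information should stay accessible without overwhelming a non-expert at first glance.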

Keywords
Explainable AI, Human-centric AI, User study
Published
2022-02-10
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-030-95531-1_31