• Technology-focused: Here, the algorithm and the intended use are dominant. These applications are used almost exclusively by domain experts (e.g., clinicians), for whom explainability is vital to comprehend the ongoing processes on a technical level.
• Management-focused: This targets the whole organization and the administration of digital health sectors rather than individual patients. The user group includes health professionals as well as administrative staff. Here, explainability helps to retrace decisions, e.g., when they have to be explained to outside parties.
• Patient-focused: This revolves around the patient, either providing a diagnosis and assisting doctors in the process or monitoring the patient's body, not necessarily involving a physician. This strong patient focus offers good preconditions for the envisaged personalisation approaches.
Our paper focuses on the third dimension, patient-focused AI-based health applications. Here, we investigate user preferences in a mobile monitoring task for stress recognition.
1.4 Human-Centered XAI
Authors like Miller [24] and Hoffman et al. [25] stressed that the same explanation can affect recipients differently due to cognitive and social variations and the desired context. For example, an explainee's goal could be to understand why this particular explanation was received or how the AI came to its conclusion. Hence, one explanation cannot satisfy these heterogeneous goals. Failing to address these needs and interests can reduce or prevent the success of XAI applications in practice. Barredo Arrieta et al. [26] refer to this issue as the approximation dilemma: explanations must match the audience's requirements. This is addressed by Human-Centered AI (HCAI), which provides a perspective on AI that highlights the necessity of incorporating stakeholders' abilities, beliefs, and perceptions into the design of AI applications [27].
A promising attempt to reach human-centered XAI is the personalisation of explanations described by Schneider and Handali [28]. Personalisation is achieved by adapting, among other things, to the explainee's knowledge, intent, and preferences (see the sketch after this paragraph). First attempts, shown in Schneider and Handali [28] and Arya et al. [4], focus on static explanations, often supported by text. So far, interactive visual approaches, which are potentially better suited for personalisation, have not been the focus of research or are limited to concepts and techniques closely tied to AI developers [3]. Shneiderman [5] highlights that interactive explanations could enable greater user involvement and thus help users understand an ML system's behaviour better than static explanations.
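As a minimal sketch of what such personalisation could look like in code, consider an explanation component that selects its presentation based on an explainee profile. The profile fields, selection rules, and names below are illustrative assumptions, not the method of Schneider and Handali [28]:

```python
from dataclasses import dataclass


@dataclass
class ExplaineeProfile:
    """Hypothetical profile capturing the adaptation targets named above."""
    knowledge: str        # e.g., "layperson", "clinician", "developer"
    intent: str           # e.g., "verify a result", "learn about the model"
    prefers_visual: bool  # presentation preference


def select_explanation_style(profile: ExplaineeProfile) -> str:
    """Pick a presentation style for the explainee (illustrative rules only)."""
    if profile.knowledge == "developer":
        return "feature-attribution plot with raw model scores"
    if profile.prefers_visual:
        return "interactive visual summary"
    return "short textual rationale"


# Example: a layperson who prefers visual explanations.
print(select_explanation_style(ExplaineeProfile("layperson", "verify a result", True)))
```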
Finally, it is worth noting that most XAI techniques are static in their explainability [4]. They do not change in response to feedback or reactions from the receiver. In contrast, an interactive explanation allows consumers to immerse themselves in the explanation, e.g., by asking questions. Arya et al. [4] and Shneiderman [5] highlight the importance of interactive explanations, especially for end-users, but also note the lack of evaluations. Existing static explanations should support the development of interactive explanations, since the communication medium, e.g., pictures, can, in principle, remain the same. Putnam and Conati [29] present one possibility for designing interactive explanations for intelligent tutoring systems, where users can receive more detailed explanations by asking the system why something happened or how it happened.
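A dialogue of this kind could be sketched as a simple question dispatch. The handler, messages, and dispatch logic below are illustrative assumptions, not Putnam and Conati's implementation:

```python
def answer_followup(question: str, prediction: str, top_features: list[str]) -> str:
    """Answer a 'why' or 'how' follow-up question about a prediction.

    A sketch of the why/how interaction pattern described above;
    the wording and structure are invented for illustration.
    """
    if question == "why":
        return (f"The app inferred '{prediction}' mainly from: "
                + ", ".join(top_features))
    if question == "how":
        return ("The app combines these sensor features in a trained "
                "classifier and reports the most likely state.")
    return "Ask 'why' or 'how' for more detail."


# Example: a user drills into a stress-recognition result.
print(answer_followup("why", "stressed", ["heart rate", "skin conductance"]))
```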
For mobile health apps, we investigate three different variations of interactive explanations (i.e., live explanation, feature
explanation, and ask-the-app explanation).
2 Persona Concept
Holzinger et al. [30] emphasize the importance of asking stakeholders about their attitudes towards AI. This information is essential, as it influences whether and how users will utilize an AI system later. Personas represent fictional stakeholders [31]; they help developers understand their target users, empathize with them, and make better decisions concerning the usage of a system. Cooper [32] was one of the first to present this concept as part of requirements engineering, shifting the focus to who uses the system. The literature proposes different templates for the guided construction of personas [33]. Usually, these are textual descriptions that include information about the person's background, behaviors, and personal traits [34]. Ferreira et al. [33] identified two limitations of such persona templates. First, the techniques neglect requirements elicitation in favor of empathy aspects. Second, certain survey methods do not relate to the application domain and therefore yield general, less context-specific characteristics for behavior identification. To address both, Ferreira et al. [33] propose the 'PATHY 2.0' technique (Personas empATHY). Its purpose is to enable the identification of potential application requirements derived from user needs. Among other things, it aims at creating profiles that represent system stereotypes. The technique is divided into six fields, each providing guiding questions to help describe the respective section: who, context, technology experience, problems, needs, and existing solutions (a sketch of such a template follows below).
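For illustration, such a template could be captured as a structured record with the six PATHY 2.0 fields. The field names mirror the technique's sections, but the concrete example values are invented, not taken from Ferreira et al. [33]:

```python
from dataclasses import dataclass


@dataclass
class PathyPersona:
    """Persona template following the six PATHY 2.0 fields."""
    who: str
    context: str
    technology_experience: str
    problems: str
    needs: str
    existing_solutions: str


# Invented example persona for a stress-monitoring app user.
persona = PathyPersona(
    who="Office worker in their forties, no medical background",
    context="Wears a smartwatch during the workday",
    technology_experience="Comfortable with smartphone apps, new to AI",
    problems="Does not understand why the app flags moments as stressful",
    needs="Short, visual explanations tied to recent activity",
    existing_solutions="Generic fitness apps without explanations",
)
```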