WHAT DO END-USERS REALLY WANT? INVESTIGATION OF HUMAN-CENTERED XAI FOR MOBILE HEALTH APPS
Katharina Weitz
Chair for Human-Centered AI
University of Augsburg
Universitätsstraße 6a
86159 Augsburg
katharina.weitz@uni-a.de
Alexander Zellner
Chair for Human-Centered AI
University of Augsburg
Universitätsstraße 6a
86159 Augsburg
alexander.zellner@informatik.uni-augsburg.de
Elisabeth André
Chair for Human-Centered AI
University of Augsburg
Universitätsstraße 6a
86159 Augsburg
elisabeth.andre@uni-a.de
ABSTRACT
In healthcare, AI systems support clinicians and patients in diagnosis, treatment, and monitoring, but many systems' poor explainability remains challenging for practical application. Overcoming this barrier is the goal of explainable AI (XAI). However, the same explanation can be perceived differently by different recipients and, thus, does not solve the black-box problem for everyone. The domain of Human-Centered AI deals with this problem by adapting AI to its users. We present a user-centered persona concept to evaluate XAI and use it to investigate end-users' preferences for various explanation styles and contents in a mobile health stress monitoring application. The results of our online survey show that users' demographics and personality, as well as the type of explanation, impact explanation preferences, indicating that these are essential features for XAI design. We condensed the results into three prototypical user personas: power-, casual-, and privacy-oriented users. Our insights bring an interactive, human-centered XAI closer to practical application.
Keywords Human-Centered AI, Explainable AI, Personalisation, Personas
1 Introduction
AI has become ubiquitous, and we have become used to AI as a decision support system. However, such decisions are not always equally impactful: an AI recommending movies or music and an AI diagnosing (or failing to diagnose) life-threatening diseases affect our lives very differently. Knowing the reasons behind such important decisions is essential for not blindly trusting the AI. The General Data Protection Regulation (GDPR), passed by the European Parliament in 2016 and applicable since 2018, further supports this need for explanations [1] and states in Art. 12 that information addressed to a data subject has to be provided "in a concise, transparent, intelligible and easily accessible form, using clear and plain language" [1]. The area of Explainable Artificial Intelligence (XAI) aims to make AI more understandable and transparent. XAI for ML focuses on supporting users to "appropriately trust, and effectively manage" AI [2, p. 44].
Despite this large body of research, a gap exists between what society needs and what researchers provide. Miller et al. [3] claim that developers mostly design explanations for other developers. A reason for this gap is that different people might require different kinds of explanations [4]. Following the postulation of Arya et al. [4] that 'one explanation does not fit all', explanations in demanding fields like the health sector need to adapt to their receiver to be satisfactory: a doctor will require a different explanation than a patient or an ML developer. This calls for the investigation of XAI that satisfies end-users, of which personalized explanations will be one foothold.
Figure 1: Including stakeholders into the design of human-centered XAI by creating personas based on data from real users
Shneiderman [5] suggests interactive explanations that enable greater user involvement and help users understand an ML system's
behavior. Therefore, besides a more algorithmic-driven evaluation of XAI, user-focused studies exploring user goals,
knowledge, preferences, and intentions have gained increasing importance in XAI research. A personalized XAI system that adapts to the explanation recipient and explains the AI prediction in a way that fits their needs can increase the practicality of XAI and allow its usage beyond AI researchers, towards becoming an integral part of everyday life.
To come closer to this goal, we present a user-centered concept to evaluate XAI for practical applications using personas. This concept gathers knowledge of end-users' preferences regarding XAI and provides an approach for creating empirically based personas that represent end-user groups. Our approach supports XAI designers in gaining insights into their users' preferences and using this information to foster human-centered design (see Figure 1). Based on our user-centered XAI persona concept, in this paper we investigate end-users' preferences for different explanation styles and contents in an online survey. From the collected data, we derive personas that describe prototypical end-users for mobile health applications.
1.1 Related Work
1.2 Taxonomy of Explanations
A variety of taxonomies can be used to classify XAI methods. One of the most commonly used divides XAI techniques into model-agnostic and model-specific approaches. Model-agnostic refers to algorithms like LIME [6] and SHAP [7] that are applied independently of the model's characteristics, so that they can be used for different AI models [8]. In contrast, model-specific approaches are developed for specific ML methods, for example, the LRP approach [9] and Grad-CAM [10], which work particularly well on deep neural networks. A diverse repertoire of XAI approaches has been developed utilizing varying techniques. These broadly separate into four categories [11, 12]:
Visualisation: A natural way of bridging ML models' complexity and algebraic nature is by using visualization techniques. For example, Cortez and Embrechts [13] present a portfolio of visualizations to improve the explainability of black-box models, building upon the Global SA technique.
Feature-based: This technique estimates the importance, influence, or relevance of individual features for the prediction (see the code sketch below). For example, Lundberg and Lee [7] present the SHAP (SHapley Additive exPlanations) framework. It calculates an additive feature importance score that satisfies additional properties such as accuracy and consistency, distinguishing it from its antecedents.
Knowledge extraction: Learning algorithms modify cells in the hidden layers of a model. The task of knowledge-based explanation is to extract, in a comprehensible representation, the knowledge acquired by the network during the training phase [12]. A commonly seen approach is a rule-based explainer, such as the rule extraction proposed by Hailesilassie [14] or aLIME [6], which provide if-then rules in a model-agnostic manner.
Example-based: This represents a unique technique among the previously mentioned. Example-based explanations are model-agnostic since they can improve the interpretability of any ML model. However, they interpret a model based on instances of the dataset, not on features or model transformations. One of the most promising approaches is the explanation through counterfactuals [11, 15].
The presented classification of techniques reflects an area of active research; new methods are continuously proposed, and existing ones are extended.
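To make the feature-based category above more concrete, the following Python sketch computes SHAP attributions for a small, hypothetical stress classifier. The feature names, the synthetic data, and the model are invented for illustration and are not taken from the study; only the shap and scikit-learn calls reflect the actual libraries, and the exact shape of the returned attributions differs between shap versions.

# Sketch: feature-based explanation with SHAP for a hypothetical stress
# classifier. Feature names and data are synthetic, for illustration only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["heart_rate", "sleep_hours", "step_count", "screen_time"]
X = rng.normal(size=(300, len(feature_names)))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)  # synthetic "stressed" label

model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP assigns each feature an additive contribution to a single prediction,
# so the attributions (plus a base value) sum to the model's output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print(shap_values)  # per-feature contributions for the first instance

Such per-feature contributions are the kind of content a feature-based explanation in a mobile health app could present, e.g., "your heart rate contributed most to today's stress estimate".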
Besides the development of XAI approaches, researchers like Doshi-Velez and Kim [16] point to the necessity of evaluating XAI to investigate the usefulness of these methods. They proposed a taxonomy for evaluating XAI. In their three-step approach, they start with functionally-grounded evaluation using proxy tasks without humans. In these proxy tasks, a formal notion of explanation quality is evaluated. The following two steps, namely human-grounded evaluation and application-grounded evaluation, involve users and focus either on simple tasks or real-world application tasks. Doshi-Velez and Kim [16] highlight that from step to step, the evaluation approach becomes more cost-intensive and more specific to the application of XAI.
We focus on a human-grounded evaluation as proposed by Doshi-Velez and Kim [16]. In doing so, we investigate end-users' preferences for a mobile health stress monitoring app. In accordance with Doshi-Velez and Kim [16], we investigate the impact of different types and contents of an explanation in a mock-up setup. This allows us to focus on the explanation design without being limited by the outcome or performance of current ML models.
1.3 XAI for Medical Decision Support
In the early days of AI, rule-based systems like MYCIN [17] provided a first decision support system for clinicians. Since then, ML has gained momentum in the medical field, leading to significant advancements in automated diagnosis and forecasting [18, 19]. The platform 'grand-challenge.org' (https://grand-challenge.org/) lists over 328 different classification and image analysis tasks with medical backgrounds and the corresponding AI solutions. However, critical voices highlight that AI helped in experimental settings but failed in practice due to a poor fit to real-world requirements [20]. One of these shortcomings is the black-box problem of ML, which has increased the research community's interest in XAI and interpretability within the medical field. The questions of responsibility and risk management are closely coupled to the usage of AI in medical contexts. Since lives are at stake when diagnoses are made, leaving such decisions to an opaque ML model would be irresponsible on many levels. Therefore, much work has been dedicated to exploring and improving the transparency and explainability of AI in medicine [21]. In most of the ongoing research, diagnosis and classification tasks are the most apparent use cases when applying AI in the medical domain. In contrast, a system developed by Kostopoulos et al. [22] detects the user's stress level by analyzing smartphone data. This trend of registering and analyzing body conditions has gained increasing attention with smart wearables allowing more accessible and continuous tracking. Consequently, users can now track their heart rate, stress level, or physical activity throughout the day. According to recent studies, smart wearables will significantly impact digital health; Perez-Pozuelo et al. [23] categorize them in a subclass called mobile health, which stands out through its monitoring capabilities. In Figure 2, we summarize common digital health use cases found in the literature and structure them further according to similarities in their goal, applicability, or depth of intervention in medicine.
Figure 2: Overview of different purposes of AI in medical applications
The concrete health use cases are thus summarized in six different categories, which are, in turn, further condensed into three dimensions:
Technology-focused: Here, the algorithm and the intended use are dominant. These applications are almost exclusively used by domain experts (e.g., clinicians), and explainability is vital for comprehending the ongoing processes on a technical level.
Management-focused: This targets whole organizations and the administration of digital health sectors instead of individual patients. The user group includes health professionals as well as administrative staff. Thus, explainability helps to retrace decisions, e.g., when they have to be explained to outside parties.
Patient-focused: This revolves around the patient, either by providing diagnoses and assisting doctors in the process or by monitoring the patient's body, not necessarily involving a physician. This strong patient focus offers good preconditions for the envisaged personalisation approaches.
Our paper focuses on the third dimension, patient-focused AI-based health applications. Here, we investigate user preferences in a mobile monitoring task for stress recognition.
1.4 Human-Centered XAI
Authors like Miller [24] and Hoffman et al. [25] stressed that the same explanation could affect recipients differently due to cognitive and social variations and the desired context. For example, an explainee's goal could be to understand why a particular outcome was received or how the AI came to this conclusion. Hence, one explanation cannot satisfy these heterogeneous goals. Failing to address these needs and interests can decrease or prevent the success of XAI applications in practice. Barredo Arrieta et al. [26] refer to this issue as the approximation dilemma: explanations must match the audience's requirements. This is addressed by Human-Centered AI (HCAI). HCAI provides a perspective on AI that highlights the necessity of incorporating stakeholders' abilities, beliefs, and perceptions into the design of AI applications [27].
A promising attempt to reach human-centered XAI is the personalisation of explanations described by Schneider and Handali [28]. Personalisation is incorporated by adapting, among other things, to the explainee's knowledge, intent, and preferences. First attempts, shown in Schneider and Handali [28] and Arya et al. [4], focus on static explanations that are often supported by text. So far, interactive visual approaches – potentially better suited for personalisation – have not been the focus of research or are limited to concepts and techniques closely related to AI developers [3]. Shneiderman [5] highlights that interactive explanations could enable greater user involvement and thus help users understand an ML system's behaviour better than static explanations.
Finally, it is worth noting that most XAI techniques are static in their explainability [4]. They do not change in response to feedback or reactions from the receiver. In contrast, an interactive explanation allows consumers to immerse themselves in the explanation, e.g., by asking questions. Arya et al. [4] and Shneiderman [5] highlight the importance of interactive explanations, especially for end-users, but also point to missing evaluations. The existing static explanations should support the development of interactive explanations, since the means of communication, e.g., pictures, can, in principle, remain the same. Putnam and Conati [29] present one possibility for designing interactive explanations for intelligent tutoring systems, where users can receive more detailed explanations by asking the system why something happened or how it happened.
For mobile health apps, we investigate three different variations of interactive explanations (i.e., live explanation, feature
explanation, and ask-the-app explanation).
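As a rough illustration of how an ask-the-app explanation could route a user's 'why' and 'how' questions to different explanation contents, consider the following Python sketch. The class name, method names, and explanation texts are hypothetical and do not correspond to the app evaluated in this study or to any existing library; the feature importances are assumed to come from some XAI backend.

# Minimal, hypothetical sketch of an "ask-the-app" style interactive explanation.
FEATURE_IMPORTANCE = {        # assumed output of an XAI backend (e.g., SHAP)
    "heart_rate": 0.52,
    "sleep_hours": 0.31,
    "step_count": 0.17,
}

class StressExplainer:
    def __init__(self, prediction: str, importances: dict[str, float]):
        self.prediction = prediction
        self.importances = importances

    def answer(self, question: str) -> str:
        """Route a free-text user question to an explanation content."""
        q = question.lower()
        if "why" in q:
            # feature-oriented content: which inputs drove the prediction
            top = max(self.importances, key=self.importances.get)
            return (f"You were classified as '{self.prediction}' mainly "
                    f"because of your {top.replace('_', ' ')}.")
        if "how" in q:
            # process-oriented content: how the app reaches a decision
            return ("The app combines your wearable signals (heart rate, "
                    "sleep, activity) in an ML model and compares the result "
                    "with your personal baseline.")
        return "You can ask me 'Why am I stressed?' or 'How do you decide?'"

explainer = StressExplainer("stressed", FEATURE_IMPORTANCE)
print(explainer.answer("Why am I stressed today?"))
print(explainer.answer("How did you decide that?"))

A richer implementation would additionally let users drill down after each answer, which is the kind of interactivity Putnam and Conati [29] describe.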
2 Persona Concept
Holzinger et al. [30] emphasize the importance of asking stakeholders about their attitudes towards AI. This information is essential, as it influences whether and how users will utilize an AI system later. Personas represent fictional stakeholders [31]; they help developers understand their target users, empathize with them, and make better decisions concerning the usage of a system. Cooper [32] was one of the first to present this concept as a part of requirements engineering to focus on who uses the system. The literature proposes different templates for the guided construction of personas [33]. Usually, these are textual descriptions that include information about the person's background, behaviors, and personal traits [34]. Ferreira et al. [33] identified two limitations concerning such persona templates. First, some techniques neglect to elicit requirements because they focus more on the empathy aspects. Second, some survey methods do not relate to the application domain and provide only general, less context-specific characteristics for behavior identification. To address both, Ferreira et al. [33] propose the 'PATHY 2.0' technique (Personas empATHY). Its purpose is to enable the identification of potential application requirements, which are derived from user needs. Among other things, it aims at creating profiles that represent system stereotypes. The technique is separated into six fields, each providing guiding questions to help describe the respective section: who, context, technology experience, problems, needs, and existing solutions.
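To illustrate how such persona information could be organized, the sketch below captures the six PATHY 2.0 fields in a simple Python data structure. The field layout follows the six sections named above, while the example values are invented for illustration and do not stem from our survey.

# Sketch of a persona record following the six PATHY 2.0 fields.
# The example values are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    who: str                      # background and personal traits
    context: str                  # situation in which the app is used
    technology_experience: str    # prior experience with technology and AI
    problems: list[str] = field(default_factory=list)
    needs: list[str] = field(default_factory=list)
    existing_solutions: list[str] = field(default_factory=list)

power_user = Persona(
    name="Power user",
    who="Tech-savvy, curious about how the AI works",
    context="Checks the stress monitoring app several times a day",
    technology_experience="High; uses wearables and several health apps",
    problems=["Generic explanations feel too shallow"],
    needs=["Detailed, feature-level explanations", "Interactive follow-up questions"],
    existing_solutions=["Reads app documentation and online forums"],
)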