Exploring Effectiveness of Explanations for Appropriate Trust: Lessons from Cognitive Psychology
Ruben S. Verhagen*, Siddharth Mehrotra*†, Mark A. Neerincx‡, Catholijn M. Jonker§, Myrthe L. Tielman¶
Delft University of Technology, The Netherlands
*e-mail: r.s.verhagen@tudelft.nl; author sharing first authorship.
†e-mail: s.mehrotra@tudelft.nl; author sharing first authorship.
‡e-mail: m.a.neerincx@tudelft.nl; secondary affiliation: TNO, Soesterberg, The Netherlands.
§e-mail: c.m.jonker@tudelft.nl; secondary affiliation: Leiden University, The Netherlands.
¶e-mail: m.l.tielman@tudelft.nl
ABSTRACT
The rapid development of Artificial Intelligence (AI) requires developers and designers of AI systems to focus on the collaboration between humans and machines. AI explanations of system behavior and reasoning are vital for effective collaboration by fostering appropriate trust, ensuring understanding, and addressing issues of fairness and bias. However, various contextual and subjective factors can influence an AI system explanation's effectiveness. This work draws inspiration from findings in cognitive psychology to understand how effective explanations can be designed. We identify four components to which explanation designers can pay special attention: perception, semantics, intent, and user & context. We illustrate the use of these four explanation components with an example of estimating food calories, combining text with visuals, probabilities with exemplars, and intent communication with both user and context in mind. We propose that a significant challenge for effective AI explanations is the additional step between explanation generation, by algorithms that do not produce inherently interpretable explanations, and explanation communication. We believe this extra step will benefit from carefully considering the four explanation components outlined in our work, which can positively affect the explanation's effectiveness.
Index Terms: Human-centered computing—Visualization—Visualization techniques; Human-centered computing—Visualization—Visualization design and evaluation methods
1 INTRODUCTION
Humans and Artificial Intelligence (AI) systems increasingly collaborate on tasks ranging from the medical to the financial domain. For such human-machine collaboration to be effective, mutual understanding and trust are of paramount importance [15, 18, 31]. AI explanations are a crucial and powerful way to increase human understanding of and trust in the system. Explanations can help clarify the decisions and inner workings of "black-box" data-driven machine learning algorithms, as well as the actions and reasoning of goal-driven agents [2, 10, 11, 19, 25, 27].
Unfortunately, AI systems often lack transparency and explainability, prompting a broad call for explainable AI (XAI). For example, the EU GDPR requires organizations that deploy AI systems to provide affected people with relevant information about the inner workings of the algorithms [37]. Furthermore, in addition to helping people
understand AI systems, AI explanations can also contribute to identifying and addressing issues of fairness and bias, which are difficult to tackle otherwise. Explanations can also help people form appropriate trust in the AI system. For example, Dodge et al. [8] point out that when people trust the explanation, they are more likely to trust the underlying ML system.
A common challenge for popular explanation methods such as LIME (Local Interpretable Model-agnostic Explanations) by Ribeiro et al. [29], SHAP (SHapley Additive exPlanations) by Lundberg & Lee [24], and the global-local explanations of Lundberg et al. [23] is how effectively the resulting explanation helps users calibrate their trust and improve their understanding of the system (a minimal, illustrative generation sketch follows the research questions below). Various aspects can impact the effectiveness of AI explanations, such as the user's domain or system expertise and cognitive and perceptual biases; however, this list is far from exhaustive. It is therefore crucial to investigate the following two research questions:
1. What should be the content of the explanation to make it effective?
2. How can the visual delivery or design of the explanation make it effective?
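As a rough illustration of the gap between generating an explanation and communicating it, the sketch below produces SHAP feature attributions for a tabular classifier. This is not taken from the paper: the shap and xgboost libraries, the adult dataset, and the waterfall plot are illustrative assumptions. The raw attributions are the output of the generation step; presenting them to a particular user in a particular context is the separate communication step that the four components described below are meant to inform.

```python
# Minimal sketch (not from the paper): the "explanation generation" step with SHAP.
# Model, dataset, and plot choice are illustrative assumptions, not the authors' setup.
import shap
import xgboost

# Train a simple classifier on a public tabular dataset bundled with the shap package.
X, y = shap.datasets.adult()
model = xgboost.XGBClassifier().fit(X, y)

# Generation: compute per-feature Shapley-value attributions for some instances.
explainer = shap.Explainer(model, X)
shap_values = explainer(X[:100])

# Communication still has to be designed: a vector of attributions is not yet an
# effective explanation for an end user; the waterfall plot is one possible encoding.
shap.plots.waterfall(shap_values[0])
```

In terms of the research questions above, the attribution values mainly speak to question 1 (content), while the choice of visual encoding, wording, and level of detail speaks to question 2 (delivery).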
2 EXPLANATION EFFECTIVENESS
In this work, we take inspiration from cognitive psychology, design, and data visualization techniques to explore how effective explanations can be designed. A large body of work in cognitive psychology focuses on explanations in human-human interaction, what makes them effective, and how context affects this. For example, Lombrozo [22] identifies two properties of the structure of explanations that support reasoning: (1) explanations accommodate novel information in the context of prior beliefs, and (2) they do so in a way that fosters generalization. The author shows that explanations provide a unique window onto the mechanisms of learning and inference in human reasoning.
In his book “Evaluating Explanations”, David B. Leake draws on cognitive psychology theories to describe how context, involving both the explainer's beliefs and goals, helps determine an explanation's goodness [21]. Similarly, Khemlani et al. [17] show how mental models represent causal assertions, and how these models underlie the deductive, inductive, and abductive reasoning that yields effective explanations. Since explanations fundamentally determine how humans understand the world, Tworek and Cimpian [35] explore human biases in people's explanations and their role in socio-moral understanding.
The previously mentioned works and recent research in human-AI interaction [12, 26, 32, 36, 43] help us explore our two research questions along four components: Perception, Semantics, Intent, and User & Context. These components become uniquely visible at the intersection of human-AI interaction and cognitive psychology research. We now describe these components in detail.
2.1 Perception
Perception in XAI can refer to the set of mental processes we use to make sense of an explanation given by an AI system. Perception