The Influence of Explainable Artificial Intelligence: Nudging Behaviour or Boosting Capability?
Matija Franklin 1
Abstract
This article aims to provide a theoretical account and corresponding paradigm for analysing how explainable artificial intelligence (XAI) influences people's behaviour and cognition. It uses insights from research on behaviour change. Two notable frameworks for thinking about behaviour change techniques are nudges - aimed at influencing behaviour - and boosts - aimed at fostering capability. It proposes that local and concept-based explanations are more adjacent to nudges, while global and counterfactual explanations are more adjacent to boosts. It outlines a method for measuring XAI influence and argues for the benefits of understanding it for optimal, safe and ethical human-AI collaboration.
1. Introduction
Deep Learning (DL) is a subfield of Machine Learning (ML) that focuses on developing Deep Neural Network (DNN) models (Shahroudnejad, 2021). DNNs are complex models that are capable of achieving high performance on a variety of tasks. Many deep neural network models are uninterpretable black boxes, which often leads to users having less trust in them (Miller, 2019). This lack of interpretability and trust can have negative consequences - people might rely on an AI that makes mistakes, or fail to use an AI that could improve their chances of a desired outcome. To improve the interpretability and trustworthiness of black-box models, Explainable Artificial Intelligence (XAI) research focuses on developing methods to explain the behaviour of these models in terms that are comprehensible to humans (Molnar, 2020).
Explanations provided by XAI can impact human behaviour and cognition (Donadello et al., 2020; Dragoni et al., 2020). This paper aims to apply frameworks from research on behaviour change within the broader field of behavioural science (Ruggeri, 2018). Two notable frameworks for thinking about and categorising behaviour change techniques are 'nudges' and 'boosts' (Grüne-Yanoff & Hertwig, 2016). A nudge is any aspect of choice architecture - the context in which people make decisions - aimed at influencing the behaviours of individuals without limiting or forcing options (Sunstein, 2014). Boosts are interventions that foster people's competencies through changes in knowledge and skills, so that they can make their own choices more effectively (Hertwig, 2017). In doing so, this paper seeks to put forward a framework and a paradigm for evaluating whether, within the context of human-AI interaction, an XAI is nudging performance or boosting capability. It will discuss the implications of each in relation to human-machine collaboration as well as the ethics of influence.

1 Department of Experimental Psychology, University College London, London, UK. Correspondence to: Matija Franklin <matija.franklin@ucl.ac.uk>.

Proceedings of the 39th International Conference on Machine Learning, 2022. Copyright 2022 by the author(s).
2. Different explanations, different outcomes

A great deal of XAI research has gone into developing methods to improve the explainability of DNN models (Molnar, 2020). As it stands, it is not fully clear how different methods may influence human behaviour.
Feature importance methods (such as saliency methods) generate scores that reveal how important a feature (like a word vector or pixel) is to the AI's decision-making process (Bhatt et al., 2020). Explanations generated by these methods can be either global or local in scope (Lundberg et al., 2020). Global explanations, like those provided by SHAP (SHapley Additive exPlanations) models, give a quantitative indication of the importance of each input variable on the model's output across the dataset (Lubo-Robles et al., 2020). Local explanation methods, such as LIME (Local Interpretable Model-agnostic Explanations), generate numeric scores showing how much each input variable contributed to a single prediction (Lee et al., 2019).
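To make the global/local distinction concrete, below is a minimal sketch, assuming the widely used shap and lime Python libraries, a scikit-learn random forest, and the diabetes dataset - all illustrative choices, none prescribed by this paper. The global SHAP summary aggregates importance over the whole dataset; the LIME explanation weights features for one specific prediction.

```python
# Illustrative sketch: global (SHAP) vs. local (LIME) feature importance.
# Dataset, model and parameters are assumptions made for the example only.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
X, y, feature_names = data.data, data.target, list(data.feature_names)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Global view: mean absolute SHAP value per feature over the whole dataset.
shap_values = shap.TreeExplainer(model).shap_values(X)  # (n_samples, n_features)
global_importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, global_importance), key=lambda p: -p[1]):
    print(f"global  {name}: {score:.2f}")

# Local view: LIME weights for one specific prediction.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")
local_exp = lime_explainer.explain_instance(X[0], model.predict, num_features=5)
for rule, weight in local_exp.as_list():
    print(f"local   {rule}: {weight:+.2f}")
```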
Counterfactual explanations show what the model's output would have been if one or more of the inputs had been different (Verma et al., 2020). This can be helpful in understanding why the model arrived at a particular output.
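As a rough illustration of the idea - not any of the specific methods surveyed by Verma et al. - the following toy sketch searches for a change to a single feature that flips a classifier's prediction; the dataset, model and helper function are hypothetical choices for this example.

```python
# Toy counterfactual search (illustrative assumption, not the paper's method):
# nudge one feature at a time until the classifier's prediction flips.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
feature_names = load_breast_cancer().feature_names
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

def one_feature_counterfactual(x, model, step=0.05, max_steps=200):
    """Return (feature_index, new_value) for a single-feature change that flips the prediction."""
    original = model.predict([x])[0]
    for j in range(len(x)):
        for direction in (+1, -1):
            cf = x.copy()
            for _ in range(max_steps):
                cf[j] += direction * step * (abs(x[j]) + 1e-6)
                if model.predict([cf])[0] != original:
                    return j, cf[j]
    return None

x = X[0]
result = one_feature_counterfactual(x, model)
if result is not None:
    j, new_value = result
    print(f"Prediction flips if '{feature_names[j]}' changes from {x[j]:.2f} to {new_value:.2f}")
```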
Finally, concept-based explanations attempt to explain a model's output by referencing pre-defined or automatically generated sets of concepts that are comprehensible to humans (Kazhdan et al., 2020).
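A minimal sketch of the mechanics, in the spirit of concept-activation-vector approaches: a linear probe is fit on a model's internal activations for examples that do and do not show a human concept, and new inputs are scored against that concept direction. The synthetic activations and probe details below are illustrative assumptions, not the methods of Kazhdan et al.

```python
# CAV-style concept probe (illustrative sketch; activations are synthetic
# placeholders standing in for a hidden layer of a real model).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder activations for examples that show / do not show a concept
# (e.g. "striped") - in practice these come from an intermediate layer.
concept_acts = rng.normal(loc=1.0, size=(100, 64))
random_acts = rng.normal(loc=0.0, size=(100, 64))

probe = LogisticRegression(max_iter=1000).fit(
    np.vstack([concept_acts, random_acts]),
    np.array([1] * 100 + [0] * 100),
)
cav = probe.coef_[0] / np.linalg.norm(probe.coef_[0])  # concept activation vector

# Score a new input's activations by alignment with the concept direction:
# a large positive value suggests the concept is strongly present for it.
new_activation = rng.normal(loc=0.8, size=64)
print("concept alignment:", float(new_activation @ cav))
```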
Even though there has been progress in the development of XAI models, it is still not completely clear which model should be used for human-machine collaboration, and for what purpose. There are still many open questions. For example, do local explanations nudge performance in the short term but not necessarily provide enough information to educate a user and boost their capability in the long term? Do global explanations provide enough information to educate? Can counterfactual explanations teach people how their AI works or allow them to identify errors?
A systematic literature review of 241 papers looked at how the validity and usefulness of explanations have been evaluated by the authors of XAI methods (Anjomshoae et al., 2019). Most studies only conducted user studies in simple scenarios or completely lacked evaluations. The results show that 32% of the research papers did not include any type of evaluation. Furthermore, 59% of the research papers conducted a user study to evaluate the usefulness of the explanation (with a small minority also evaluating user trust towards the AI system). Finally, 9% of the papers used an algorithmic evaluation, which did not involve any empirical user research. These initial findings suggest that different explanations will lead to variations in performance on a task (Lage et al., 2019; Narayanan et al., 2018), and will not always necessarily improve performance and understanding (Kindermans et al., 2019). If viewed as a behaviour change intervention, when does an XAI serve as a nudge, changing behaviour, and when does it serve as a boost, improving capability?
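One way this closing question could be operationalised is sketched below: a nudge effect would show up as better decisions while explanations are being shown, a boost effect as better decisions that persist after explanations are withdrawn. The within-subjects-style design, the synthetic accuracy data, and the simple t-tests are illustrative assumptions for exposition, not the measurement paradigm this paper goes on to propose.

```python
# Illustrative analysis sketch for separating nudge from boost effects of XAI.
# All data below are simulated; the design is an assumption for exposition.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated accuracy of users deciding whether to accept an AI's advice.
baseline           = rng.binomial(1, 0.60, size=200)  # no explanations shown
with_explanations  = rng.binomial(1, 0.75, size=200)  # explanations shown
after_explanations = rng.binomial(1, 0.62, size=200)  # explanations removed again

nudge = stats.ttest_ind(with_explanations, baseline)
boost = stats.ttest_ind(after_explanations, baseline)
print(f"nudge effect (during exposure): diff={with_explanations.mean() - baseline.mean():.2f}, p={nudge.pvalue:.3f}")
print(f"boost effect (after exposure):  diff={after_explanations.mean() - baseline.mean():.2f}, p={boost.pvalue:.3f}")
```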
3. Local and concept-based explanations as nudges
The common aim of nudges is to predictably change targeted behaviours. A nudge is any aspect of choice architecture aimed at influencing people's behaviour, without limiting or forcing options, or significantly changing their economic incentives (Thaler & Sunstein, 2008). All environments influence behaviour to some extent, even when people are not aware of it. Intentionally changing choice architecture is nudging. Nudges take many shapes (Sunstein, 2014). Default rules, such as automatic enrollment in programs, automate decision-making for people by setting a default. Simplification nudges reduce the amount of information presented to people to avoid information overload. The use of descriptive social norms - telling people what most other people are doing - influences behaviour. As a policy tool, nudging has been used in over 80 countries worldwide, and by major supranational institutions such as the World Bank and UN (OECD, 2017).
Nudges have been heavily influenced by Daniel Kahneman's dual-process account of reasoning (Kahneman, 2003). He proposed that people have "two systems" in their mind - System 1 and System 2. System 1 thinking is heuristic. It reacts intuitively and effortlessly, without analysing all available information. System 2 is an analytical and effortful, rationalising process. System 1 thinking is fast, and thus accounts for most behaviour. System 2 can re-evaluate System 1 thinking, so using System 2 thinking leads to fewer erroneous decisions. However, this is difficult, as it requires more cognitive effort. Importantly, some factors and contexts are more likely to trigger System 1 or System 2 thinking than others. Per Sunstein (2016), nudges work by targeting either System 1 thinking, thus influencing behaviour without the awareness of the decision maker, or System 2 thinking, thus promoting deliberative thinking.
A famous example of nudging is the use of disclosure nudges. Disclosure nudges disclose decision-relevant information (Sunstein, 2014). They are educative, because they provide a learning experience, and target System 2 by promoting deliberative thinking. Disclosure nudges are rooted in three insights. First, as uncertainty promotes erroneous decision making (Kochenderfer, 2015), disclosure nudges seek to reduce uncertainty with decision-relevant information. Second, when too much decision-irrelevant information is present, people find decision-making more challenging (Rogers et al., 2013); disclosing only decision-relevant information can, therefore, reduce error. Finally, disclosure nudges create an emphasis frame, making relevant information more salient (Chong & Druckman, 2007).
Viewed through this framework, local feature importance explanations and concept-based explanations can be seen as a type of disclosure nudge. They provide a small amount of decision-relevant information at the moment a person needs to decide whether or not to trust an AI's advice or prediction. Findings from the extensive body of research on disclosure nudges may therefore generalise to local and concept-based explanations. Such comparisons can guide future practice and research directions.
4. Global and counterfactual explanations as boosts

Boosts are interventions that aim to promote people's competencies, so they can make better decisions (Grüne-Yanoff & Hertwig, 2016). Proponents of boosts aim to foster skills and decision heuristics that can persist over time, throughout different decision contexts (Hertwig & Grüne-Yanoff, 2017). An example of a boost is teaching people better decision-making skills with the use