
Finally, concept-based explanations attempt to explain a model's
output by referencing pre-defined or automatically generated sets of
concepts that are comprehensible to humans (Kazhdan et al., 2020).
Even though there has been progress in the development of XAI
methods, it is still not entirely clear which method should be used
for human-machine collaboration, and for what purpose. Many
questions remain open. For example, do local explanations nudge
performance in the short term without providing enough information
to educate a user and boost their capability in the long term? Do
global explanations provide enough information to educate? Can
counterfactual explanations teach people how their AI works, or
allow them to identify its errors?
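To make the notion of a local explanation concrete, the following
minimal sketch fits a weighted linear surrogate model around a
single instance, in the spirit of LIME; the dataset, model, and
hyperparameters are illustrative placeholders rather than a
recommended setup.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import Ridge

    # Train a black-box model on a standard dataset (placeholder choices).
    data = load_breast_cancer()
    X, y = data.data, data.target
    model = RandomForestClassifier(random_state=0).fit(X, y)

    def local_explanation(predict_proba, x, scale, n_samples=1000,
                          kernel_width=1.0, seed=0):
        """Fit a weighted linear surrogate around x; return one weight
        per feature, i.e. each feature's local importance."""
        rng = np.random.default_rng(seed)
        # Sample a neighbourhood around the instance being explained.
        Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
        # Query the black box for its predictions on the neighbourhood.
        p = predict_proba(Z)[:, 1]
        # Weight neighbours by proximity to x with an exponential kernel.
        d = np.linalg.norm((Z - x) / scale, axis=1)
        w = np.exp(-(d ** 2) / kernel_width ** 2)
        # The surrogate's coefficients are the local explanation.
        return Ridge(alpha=1.0).fit(Z, p, sample_weight=w).coef_

    weights = local_explanation(model.predict_proba, X[0],
                                scale=X.std(axis=0))
    for i in np.argsort(-np.abs(weights))[:3]:
        print(f"{data.feature_names[i]}: {weights[i]:+.3f}")

Only the surrogate's largest coefficients, not the black-box model
itself, would be shown to the user, which is what makes the
explanation both local and compact.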
A systematic literature review of 241 papers examined how the
validity and usefulness of explanations have been evaluated by the
authors of XAI methods (Anjomshoae et al., 2019). Most studies
either conducted user studies only in simple scenarios or lacked
evaluations entirely: 32% of the papers did not include any type of
evaluation, 59% conducted a user study to evaluate the usefulness of
the explanation (with a small minority also evaluating user trust
towards the AI system), and 9% used an algorithmic evaluation that
did not involve any empirical user research. These initial findings
suggest that different explanations lead to variations in
performance on a task (Lage et al., 2019; Narayanan et al., 2018),
and do not necessarily improve performance and understanding
(Kindermans et al., 2019). If viewed as behaviour change
interventions, when do XAI explanations serve as nudges, changing
behaviour, and when do they serve as boosts, improving capability?
3. Local and concept-based explanations as
nudges
The common aim of nudges is to predictably change targeted
behaviours. A nudge is any aspect of choice architecture that aims
to influence people's behaviour without limiting or forcing options,
or significantly changing their economic incentives (Thaler &
Sunstein, 2008). All environments influence behaviour to some
extent, even when people are not aware of it; intentionally changing
the choice architecture is nudging. Nudges take many shapes
(Sunstein, 2014). Default rules, such as automatic enrollment in
programs, automate decision-making for people by setting a default.
Simplification nudges reduce the amount of information presented to
people to avoid information overload. Descriptive social norms,
telling people what most other people are doing, also influence
behaviour. As a policy tool, nudging has been used in over 80
countries worldwide, and by major supranational institutions such as
the World Bank and the UN (OECD, 2017).
Nudges have been heavily influenced by Daniel Kahneman's
dual-process account of reasoning (Kahneman, 2003). He proposed that
people have "two systems" in their mind: System 1 and System 2.
System 1 thinking is heuristic: it reacts intuitively and
effortlessly, without analysing all available information. System 2
is an analytical, effortful, rationalising process. System 1
thinking is fast and thus accounts for most behaviour. System 2 can
re-evaluate System 1 thinking, so using System 2 thinking leads to
fewer erroneous decisions. However, this is difficult, as it
requires more cognitive effort. Importantly, some factors and
contexts are more likely than others to trigger System 1 or System 2
thinking. According to Sunstein (2016), nudges work by targeting
either System 1 thinking, thus influencing behaviour without the
decision maker's awareness, or System 2 thinking, thus promoting
deliberative thinking.
A well-known example is the disclosure nudge. Disclosure nudges
disclose decision-relevant information (Sunstein, 2014). They are
educative, because they provide a learning experience, and they
target System 2 by promoting deliberative thinking. Disclosure
nudges are rooted in three insights. First, since uncertainty
promotes erroneous decision making (Kochenderfer, 2015), disclosure
nudges seek to reduce uncertainty with decision-relevant
information. Second, when too much decision-irrelevant information
is present, people find decision-making more challenging (Rogers et
al., 2013); disclosing only decision-relevant information can
therefore reduce error. Finally, disclosure nudges create an
emphasis frame, making relevant information more salient (Chong &
Druckman, 2007).
Through this framework, local feature importance explanations and
concept-based explanations can be viewed as a type of disclosure
nudge. They provide a small amount of decision-relevant information
at the moment a person needs to decide whether or not to trust an
AI's advice or prediction. Findings from the extensive body of
research on disclosure nudges may therefore generalise to local and
concept-based explanations, and such comparisons can guide future
practice and research directions.
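To illustrate this framing, the hypothetical sketch below surfaces
only the few most decision-relevant features at the moment a user
must accept or reject a prediction; the feature names and weights
are placeholders standing in for the output of a local explanation
method, such as the surrogate sketched earlier.

    # Placeholder weights standing in for a local explanation's output.
    weights = {"mean radius": +0.42, "mean texture": -0.17,
               "mean area": +0.08, "smoothness error": +0.01}

    def disclosure(weights, prediction, k=3):
        # Rank features by the magnitude of their local importance and
        # disclose only the top k, keeping the message short and salient.
        top = sorted(weights, key=lambda f: -abs(weights[f]))[:k]
        lines = [f"The model predicts: {prediction}. Main factors:"]
        for f in top:
            direction = "raises" if weights[f] > 0 else "lowers"
            lines.append(f"- {f} ({direction} the predicted probability)")
        return "\n".join(lines)

    print(disclosure(weights, prediction="benign"))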
4. Global and counterfactual explanations as
boosts
Boosts are interventions that aim to promote people's competencies,
so they can make better decisions (Grüne-Yanoff & Hertwig, 2016).
Proponents of boosts aim to foster skills and decision heuristics
that can persist over time, throughout different decision contexts
(Hertwig & Grüne-Yanoff, 2017). An example of a boost is teaching
people better decision-making skills with the use