
In fact, cognitive biases have traditionally been commercially leveraged in different sectors to manipulate
human behavior. Examples include casinos (5), addictive apps (6), advertisement and marketing strategies
to drive consumption (7; 2), and social media campaigns to impact the outcome of elections (8). However,
we advocate in this paper for a constructive and positive use of cognitive biases in technology, moving from
manipulation to collaboration. We propose that considering our cognitive biases in AI systems could lead to
more efficient human-AI collaboration.
Nonetheless, there has been limited research to date on the interaction between human biases and AI systems,
as recently highlighted by several authors (9; 10; 11; 12). In this context, we highlight the work by Akata et
al. (13), who propose a research agenda for the design of AI systems that collaborate with humans, going
beyond a human-in-the-loop setting. They pose a set of research questions related to how to design AI
systems that collaborate with and adapt to humans in a responsible and explainable way. In their work, they
note the importance of understanding humans and leveraging AI to mitigate biases in human decisions.
In this paper, we build on previous work by proposing a taxonomy of cognitive biases that is tailored to
the design of AI systems. Furthermore, we identify a subset of 20 cognitive biases that are suitable to be
considered in the development of AI systems and outline three directions of research to design cognitive
bias-aware AI systems.
2 A Taxonomy of Cognitive Biases
Since the early studies in the 1950s, approximately 200 cognitive biases have been identified and classified
(14; 15). Several taxonomies of cognitive biases have been proposed in the literature, particularly in specific
domains such as medical decision making (16; 17), tourism (18), or fire evacuation (19). Alternative taxonomies
classify biases based on their underlying phenomenon (20; 21; 22). However, given that there is no widely
accepted theory of the source of cognitive biases (23), classifying them according to their hypothesized source
might be misleading.
Dimara et al. (24) report similar limitations with existing taxonomies and propose a new taxonomy of cognitive
biases based on the experimental setting where each bias was studied and with a focus on visualization. While
this taxonomy is of great value for visualization, our focus is the interplay between AI and cognitive biases.
Thus, we propose classifying biases according to five stages in the human decision making cycle as depicted
in Figure 1.
The left part of Figure 1 represents the physical world that we perceive, interpret and interact with. The
right part represents the internal models and memories that we create based on our experience. As seen in
Figure 1, we propose classifying biases according to five main stages in the human perception, interpretation
and decision making process: presentation biases, associated with how information or facts are presented to
humans; interpretation biases that arise due to misinterpretations of information; value attribution biases
that emerge when humans assign values to objects or ideas that are not rational or based on an underlying
factual reality; recall biases, associated with how we recall facts from our memory; and decision biases that
have been documented in the context of human decision making.
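As an illustrative sketch only (the stage names follow the five categories described above, while the short descriptions and the `describe` helper are our own hypothetical phrasing, not the paper's tables), the taxonomy could be encoded as a simple data structure:

```python
from enum import Enum

class BiasStage(Enum):
    # Five stages in the human perception, interpretation and
    # decision making process, as proposed in the taxonomy.
    PRESENTATION = "how information or facts are presented to humans"
    INTERPRETATION = "misinterpretations of information"
    VALUE_ATTRIBUTION = "values assigned without an underlying factual basis"
    RECALL = "how facts are recalled from memory"
    DECISION = "documented effects in human decision making"

def describe(stage: BiasStage) -> str:
    """Return a short human-readable label for a taxonomy stage."""
    label = stage.name.replace("_", " ").capitalize()
    return f"{label} biases: {stage.value}"
```

Such an explicit encoding would let a bias-aware AI system tag each of the selected biases with the stage of the decision cycle it affects.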
Figure 1 also illustrates how AI systems (represented as an orange undirected graph) may interact with
humans in this context. First, AI systems could be entities in the external world that humans perceive or
interact with (e.g. chatbots, robots, apps...). Second, they may be active participants and assist humans in
their information processing and decision-making processes (e.g. cognitive assistants, assistive technologies...).
Finally, AI systems could be observers that model our behavior and provide feedback without directly being
involved in the decision making process. Note that these three forms of interaction with AI systems may
occur simultaneously.
We also present four representative cognitive biases for each category. These biases were chosen according to
the amount of evidence in the literature about the existence of the bias and their relevance for the design of
AI systems. Tables 1, 2 and 3 summarize the selected biases, their description, supporting literature and
relevance to AI. Additionally, Table 4 illustrates how AI could potentially provide support in detecting and
mitigating some of these biases using the confirmation bias as an example.