BIASeD: Bringing Irrationality into Automated System
Design
Aditya Gulati aditya@ellisalicante.org
ELLIS Alicante
Miguel Angel Lozano malozano@ua.es
Universidad de Alicante
Bruno Lepri lepri@fbk.eu
Fondazione Bruno Kessler
Nuria Oliver nuria@ellisalicante.org
ELLIS Alicante
Abstract
Human perception, memory and decision-making are impacted by tens of cognitive biases
and heuristics that influence our actions and decisions. Despite the pervasiveness of such
biases, they are generally not leveraged by today’s Artificial Intelligence (AI) systems that
model human behavior and interact with humans. In this theoretical paper, we claim that
the future of human-machine collaboration will entail the development of AI systems that
model, understand and possibly replicate human cognitive biases. We propose the need for a
research agenda on the interplay between human cognitive biases and Artificial Intelligence.
We categorize existing cognitive biases from the perspective of AI systems, identify three
broad areas of interest and outline research directions for the design of AI systems that have
a better understanding of our own biases.
1 Introduction
A cognitive bias is a systematic pattern of deviation from rationality that occurs when we process, interpret
or recall information from the world, and it affects the decisions and judgments we make. Cognitive biases
may lead to inaccurate judgments, illogical interpretations and perceptual distortions. Thus, they are also
referred to as irrational behavior (1; 2).
Since the 1970s, scholars in social psychology, cognitive science, and behavioral economics have carried
out studies aimed at uncovering and understanding these apparently irrational elements in human decision
making. As a result, different theories have been proposed to explain the source of our cognitive biases.
In 1955, Simon proposed the theory of bounded rationality (3). It posits that human decision making is rational, but limited by our computational abilities, which results in sub-optimal decisions because we are unable to accurately solve the utility function of all the options available at all times. Alternative theories include the dual process theory and the prospect theory, both proposed by Kahneman (4; 1).
Even though there is no unified theory of our cognitive biases, it is clear that we use multiple shortcuts
or heuristics¹ to make decisions, which might lead to sub-optimal outcomes. However, and despite these
limitations, cognitive biases and heuristics are a crucial part of our decision making.

¹ While a heuristic typically refers to a simplifying rule used to make a decision and a cognitive bias refers to a consistent pattern of deviation in behavior, in this paper both terms are used interchangeably, as both impact human decisions in a similar way.
arXiv:2210.01122v3 [cs.HC] 1 Dec 2023
In fact, cognitive biases have traditionally been commercially leveraged in different sectors to manipulate
human behavior. Examples include casinos (5), addictive apps (6), advertisement and marketing strategies
to drive consumption (7; 2) and social media campaigns to impact the outcome of elections (8). However,
we advocate in this paper for a constructive and positive use of cognitive biases in technology, moving from
manipulation to collaboration. We propose that considering our cognitive biases in AI systems could lead to
more efficient human-AI collaboration.
Nonetheless, there has been limited research to date on the interaction between human biases and AI systems,
as recently highlighted by several authors (9; 10; 11; 12). In this context, we highlight the work by Akata et
al. (13), who propose a research agenda for the design of AI systems that collaborate with humans, going
beyond a human-in-the-loop setting. They pose a set of research questions related to how to design AI
systems that collaborate with and adapt to humans in a responsible and explainable way. In their work, they
note the importance of understanding humans and leveraging AI to mitigate biases in human decisions.
In this paper, we build on previous work by proposing a taxonomy of cognitive biases that is tailored to
the design of AI systems. Furthermore, we identify a subset of 20 cognitive biases that are suitable to be
considered in the development of AI systems and outline three directions of research to design cognitive
bias-aware AI systems.
2 A Taxonomy of Cognitive Biases
Since the early studies in the 1950s, approximately 200 cognitive biases have been identified and classified
(14; 15). Several taxonomies of cognitive biases have been proposed in the literature, particularly in specific
domains, such as medical decision making (16; 17), tourism (18) or fire evacuation (19). Alternative taxonomies
classify biases based on their underlying phenomenon (20; 21; 22). However, given that there is no widely
accepted theory of the source of cognitive biases (23), classifying them according to their hypothesized source
might be misleading.
Dimara et al. (24) report similar limitations with existing taxonomies and propose a new taxonomy of cognitive
biases based on the experimental setting where each bias was studied, with a focus on visualization. While
this taxonomy is of great value for visualization, our focus is the interplay between AI and cognitive biases.
Thus, we propose classifying biases according to five stages in the human decision-making cycle, as depicted
in Figure 1.
The left part of Figure 1 represents the physical world that we perceive, interpret and interact with. The
right part represents the internal models and memories that we create based on our experience. As seen in
Figure 1, we propose classifying biases according to five main stages in the human perception, interpretation
and decision making process: presentation biases, associated with how information or facts are presented to
humans; interpretation biases, which arise due to misinterpretations of information; value attribution biases,
which emerge when humans assign values to objects or ideas that are not rational or based on an underlying
factual reality; recall biases, associated with how we recall facts from our memory; and decision biases, which
have been documented in the context of human decision making.
Figure 1 also illustrates how AI systems (represented as an orange undirected graph) may interact with
humans in this context. First, AI systems could be entities in the external world that humans perceive or
interact with (e.g. chatbots, robots, apps...). Second, they may be active participants and assist humans in
their information processing and decision-making processes (e.g. cognitive assistants, assistive technologies...).
Finally, AI systems could be observers that model our behavior and provide feedback without directly being
involved in the decision making process. Note that these three forms of interaction with AI systems may
occur simultaneously.
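For system designers, the five stages and the three (non-exclusive) AI roles above can be thought of as a simple labeling scheme for biases. The following Python sketch is purely illustrative: the `CognitiveBias` structure and the stage assigned to the confirmation bias are assumptions made here for the example, not this paper's classification.

```python
from enum import Enum, auto
from dataclasses import dataclass

class BiasStage(Enum):
    """Five stages of the decision-making cycle used to classify biases (Figure 1)."""
    PRESENTATION = auto()       # how information or facts are presented to humans
    INTERPRETATION = auto()     # misinterpretations of information
    VALUE_ATTRIBUTION = auto()  # irrational value assignment to objects or ideas
    RECALL = auto()             # how facts are recalled from memory
    DECISION = auto()           # biases documented in decision making itself

class AIRole(Enum):
    """Three simultaneous ways an AI system may interact with humans."""
    EXTERNAL_ENTITY = auto()     # e.g. chatbots, robots, apps
    ACTIVE_PARTICIPANT = auto()  # e.g. cognitive assistants, assistive technologies
    OBSERVER = auto()            # models behavior and gives feedback from outside

@dataclass(frozen=True)
class CognitiveBias:
    """Hypothetical record type pairing a bias with its stage in the taxonomy."""
    name: str
    stage: BiasStage

# Illustrative instance; the stage label is an assumption for this sketch.
confirmation = CognitiveBias("confirmation bias", BiasStage.INTERPRETATION)
```

An observer-role system could then use such labels to decide, per stage, which detection or mitigation strategy applies.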
We also present four representative cognitive biases for each category. These biases were chosen according to
the amount of evidence in the literature about the existence of the bias and their relevance for the design of
AI systems. Tables 1, 2 and 3 summarize the selected biases, their description, supporting literature and
relevance to AI. Additionally, Table 4 illustrates how AI could potentially provide support in detecting and
mitigating some of these biases, using the confirmation bias as an example.
Figure 1: Stages of the human perception, interpretation and decision-making process that are impacted by
cognitive biases. AI systems (represented by an orange undirected graph) could observe our behavior, detect
biases and help us mitigate them.
3 Cognitive Biases and AI: Research Directions
Given the ubiquity of AI-based systems in our daily lives –from recommender systems to personal assistants
and chatbots– and the pervasiveness of our cognitive biases, there is an opportunity to leverage cognitive
biases to build more efficient AI systems.
In this section, we propose three research directions to further explore the interplay between cognitive biases
and AI: (1) Human-AI interaction, (2) Cognitive biases in AI algorithms and (3) Computational modeling of
cognitive biases.
3.1 Area I. Human-AI Interaction
Cognitive biases have been studied since the 1970s in experiments where human participants interacted with
other humans, animals or inanimate objects. However, as Hidalgo et al. (92) note, we do not necessarily
perceive, interact with and evaluate machines in the same way as we do with humans, animals or objects.
Thus, it is unclear today whether these cognitive biases exist when humans interact with AI systems, and if
so, with which degree of intensity and under what circumstances.
This is especially the case with biases related to presentation and decision-making, as per Figure 1. Previous
work has reported that humans are influenced by observing machine behavior. For example, Hang, Ono, and
Yamada (93) showed that participants who saw a video of robots exhibiting altruistic behavior were more
likely to demonstrate altruistic behavior themselves. Others suggest that humans regard machines as social
entities if they display “sufficient interactive and social cues” (94); and a third set of studies propose that
humans view machines as being different from themselves in social interactions (95). Given the impact that
cognitive biases have on many of our daily tasks and given the increased presence of AI algorithms to tackle
many of these tasks, it becomes important to understand whether interactions with AI systems exhibit the
same biases as those observed in human-to-human interactions.