Why people judge humans differently from
machines: The role of perceived agency and
experience
Jingling Zhang∗, Jane Conway†, César A. Hidalgo∗‡
∗Center for Collective Learning, ANITI, IAST, TSE, IRIT, University of Toulouse
†Centre for Creative Technologies and School of Psychology, University of Galway
‡Center for Collective Learning, CIAS, Corvinus University
Abstract—People are known to judge artificial intelligence using a utilitarian moral philosophy and humans using a moral philosophy emphasizing perceived intentions. But why do people judge humans and machines differently? Psychology suggests that people may hold different mind perception models of humans and machines, and thus, may treat human-like robots more similarly to the way they treat humans. Here we present a randomized experiment where we manipulated people’s perception of machine agency (e.g., the ability to plan and act) and experience (e.g., the ability to feel) to explore whether people judge machines that are perceived as more similar to humans along these two dimensions more similarly to the way they judge humans. We find that people’s judgments of machines become more similar to those of humans when they perceive machines as having more agency, but not more experience. Our findings indicate that people’s use of different moral philosophies to judge humans and machines can be explained by a progression of mind perception models in which the perception of agency plays a prominent role. These findings add to the body of evidence suggesting that people’s judgment of machines becomes more similar to their judgment of humans as machines are perceived as more human-like, motivating further work on the dimensions that modulate people’s judgment of human and machine actions.
I. INTRODUCTION
Do people judge human and machine actions equally? Recent empirical studies suggest they do not. In fact, several studies have shown that people draw sharp distinctions when judging humans and machines.
Consider the recent experiments by Malle et al. (2015) asking people to judge a trolley problem [10], [15]. In a trolley problem, people can pull a lever to divert an out-of-control trolley, sacrificing a few people to save many. Malle et al. (2015) found that people expected robots to pull the lever and act in a utilitarian manner (sacrifice one person to save four), whereas humans were not judged as severely for not pulling the lever [21]. This idea was expanded by [14]. Using a set of over 80 randomized experiments comparing people’s reactions to the actions of humans and machines, the authors concluded that people judge humans and machines using different moral philosophies: a consequentialist philosophy (focused on outcomes) for machines and a moral philosophy focused more on intention when it comes to humans.
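Stated schematically (our own illustrative paraphrase of that conclusion, not notation taken from [14]), the asymmetry can be summarized as
$$J_{\text{machine}} \approx f(\text{outcome}), \qquad J_{\text{human}} \approx g(\text{intention}, \text{outcome}),$$
where the judgment $J$ of a machine’s action depends primarily on the action’s outcome, while the judgment of a human performing the same action also weighs the actor’s perceived intention.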
But why do people use different moral philosophies to
judge humans and machines? Psychology suggests that people
may perceive the minds of machines and humans differently
[8], [11], and therefore, may treat more human-like robots
more similarly to the way they treat humans [9]. This idea
is related to various experiments where robots were endowed
with human-like features [19], [27], [16], [24], [22], [28], [29],
[23]. For instance, Powers and Kiesler (2006) used a robot with
tunable chin length and tone of voice to explore the connection
between the robot’s appearance and its perceived personality
[24]. Waytz et al. (2014) compared anthropomorphized and
non-anthropomorphized self-driving cars to show that people
trust the anthropomorphized self-driving cars more [27]. Malle et al. (2016) explored the impact of a robot’s appearance on people’s judgment of moral actions (trolley problem), finding that people judge more human-like robots more similarly to the way they judge humans [22]. Yet, these experiments did not provide an explicit quantitative mind perception model explaining people’s judgment of more and less human-like machines.
Here we explore how perceived agency and experience,
two key dimensions of mind perception [11], affect people’s
judgments of machines.
Agency is related to an agent’s ability to plan (e.g., to create a strategy for action that considers potential consequences) and to act (e.g., the capacity to affect or control the immediate environment). Thus, agency is related to moral responsibility for performed actions (higher agency implies higher expected responsibility) [17].
Experience, in the context of this paper, is used to describe
the ability to feel (e.g., the ability to experience sensations
such as pain, sadness, guilt, or anger). It is, thus, related to
the concept of moral status (not to be confused with the idea
of expertise) and to the right of an agent to be treated with
dignity.
These two dimensions represent a basic mind perception model that has been used previously to explain the cognition and behavior of other agents (alters) using representations of their perceived mental abilities [3], [4], [8], [11]. Usually, mind perception models involve low-dimensional representations of an alter’s characteristics, such as the warmth and competence model used to explain stereotypes [5]. That model, for instance, says that people tend to protect those high in warmth and low in competence (e.g., babies) but fear those high in competence and low in warmth (e.g., killer robots).
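To fix ideas, an explicit quantitative version of this two-dimensional model could take the form of a simple regression (a minimal sketch; the linear specification and the symbols $J_i$, $A_i$, and $E_i$ are our illustration, not the exact model estimated in this paper):
$$J_i = \beta_0 + \beta_A A_i + \beta_E E_i + \varepsilon_i,$$
where $J_i$ is the judgment that participant $i$ assigns to a machine’s action, and $A_i$ and $E_i$ are that participant’s perceptions of the machine’s agency and experience. In this notation, the question we explore is whether higher perceived agency or experience moves $J_i$ toward the judgments people assign to humans performing the same action, that is, whether $\beta_A$ or $\beta_E$ differ from zero in that direction.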