The Guilty (Silicon) Mind: Blameworthiness and Liability in Human-Machine Teaming
Dr Brendan Walker-Munro and Dr Zena Assaad*
As human science pushes the boundaries towards the development of artificial intelligence (AI), the
sweep of progress has caused scholars and policymakers alike to question the legality of applying or
utilising AI in various human endeavours. For example, debate has raged in international scholarship
about the legitimacy of applying AI to weapon systems to form lethal autonomous weapon systems
(LAWS). Yet the argument holds true even when AI is applied to a military autonomous system that
is not weaponised: how does one hold a machine accountable for a crime? What about a tort? Can an
artificial agent understand the moral and ethical content of its instructions? These are thorny questions, and in many cases they have been answered in the negative, as artificial entities lack any contingent moral agency. So what if the AI is not alone, but linked with or overseen by a
human being, with their own moral and ethical understandings and obligations? Who is responsible
for any malfeasance that may be committed? Does the human bear the legal risks of unethical or
immoral decisions by an AI? These are some of the questions this manuscript seeks to engage with.
Introduction
Automation has been a key result of mankind’s technological development over the last two centuries. Rather than a reliance on manual labour, as a society we have developed mechanised tools which replace our efforts with streamlined and optimised acts, preferably themselves undertaken by machines. Even in the most sensitive and value-driven theatre of human endeavour, that of decision-making, the march of progress has not slowed, such that we now have computer programs capable of making decisions on everything from restaurant orders and hotel bookings to the delivery of healthcare and social welfare programs.[1]
* <AUTHOR DETAILS>. The research for this paper received funding from the Australian Government through the
Defence Cooperative Research Centre for Trusted Autonomous Systems.
[1] Igor Bikeev, Pavel Kabanov, Ildar Begishev, Zarina Khisamova, ‘Criminological Risks and Legal Aspects of Artificial Intelligence Implementation’ (Proceedings of the International Conference on Artificial Intelligence, Information Processing and Cloud Computing, Sanya, December 2019).
Yet that automation is not without its controversy. Discussion has raged in the international community regarding the legitimacy of merging the “hard” processing capabilities of a computer with the “soft” processing abilities of a human.[2] Whilst the reality of such a concept might previously have been restricted to the pages of popular fiction,[3] this is no longer the case. Human-machine interfaces, where a system operates to modulate a human’s sensory connection with a machine, are already being used in contemporary applications such as piloting drones and other autonomous and semi-autonomous platforms.[4] Scholars are now examining the next step of this inclusion of machines in the human realm of decision-making, with an increased research interest in “human-machine teaming” (HMT).[5]
Conceptually, HMT will take human-machine collaboration to another level by more closely fusing the processing capabilities of machines and humans. Yet despite the research interest, the literature still lacks a cohesive framework which adequately reflects the legal responsibility for HMT. Imagine a car assembly line, where human workers and robotic workers complete their tasks side by side, assembling the components of a vehicle as part of a smoothly operating team. However, both the humans and the machines are also given a particular values framework imposed by the factory owner: vehicles must be completed to a certain standard, within a certain time. What happens when the machines realise that their human counterparts are the ones slowing down the process, making mistakes, and costing time and resources? A human worker might seek to disobey the restrictions imposed on him or her by the factory owner, by striking or perhaps just by going at their own pace and risking dismissal. Robots have no such flexibility in their programming: what happens if they decide, for coldly logical reasons, that it would be more efficient to kill off their human co-workers?
[2] Linda Skitka, Kathleen L. Mosier, Mark Burdick, ‘Does automation bias decision-making?’ (1999) 51(5) International Journal of Human-Computer Studies, 991; Ericka Rovira, Kathleen McGarry, Raja Parasuraman, ‘Effects of imperfect automation on decision making in a simulated command and control task’ (2007) 49(1) Human Factors, 76; Gustav Markkula, Richard Romano, Ruth Madigan, Charles W. Fox, Oscar T. Giles, Natasha Merat, ‘Models of human decision-making as tools for estimating and optimizing impacts of vehicle automation’ (2018) 2672(37) Transportation Research Record, 153; Monika Zalnieriute, Lyria Bennett Moses, George Williams, ‘The rule of law and automation of government decision-making’ (2019) 82(3) The Modern Law Review, 425.
[3] Alan Turing, ‘Computing Machinery and Intelligence’ (1950) 59 Mind 236, 433-460.
[4] Jennifer Riley, Laura D. Strater, Sheryl L. Chappell, Erik S. Connors, Mica Endsley, ‘Situation Awareness in Human-Robot Interaction: Challenges and User Interface Requirements’, in Michael Barnes and Florian Jentsch (Eds.), Human-Robot Interactions in Future Military Operations (CRC Press, Boca Raton), 180.
[5] The terms “human-machine team” and “human-machine teaming” are functionally the same for present purposes, and are used interchangeably throughout this paper.
This might sound like the plot of a particularly ridiculous Hollywood blockbuster, yet some semblance of these facts can be found in reality. Kenji Urada is widely recognised as the first human to “die by robot”. In 1981, Urada was performing maintenance on an automated hydraulic arm which, despite written safety protocol, was still powered on. The system misinterpreted Urada’s actions as an attempt to damage the arm, and reacted by knocking Urada into an adjacent machine. Urada was crushed and died instantly.[6] A similarly horrifying (though less serious) incident occurred in 2022, when a 7-year-old chess player had his finger broken by a robotic opponent.[7] In both cases, blame was laid squarely on the human for violating safety protocol; no charges were laid and no justice was served.
Nowhere should this development be more concerning than in the military and armed forces, given the rapid development of research into the ‘deployment of AI-infused systems (e.g. drone swarming, command and control decision-making support systems and a broader range of autonomous weapon systems)’.[8] Whilst the idea of HMT presents obvious benefits to military operations, controversy arises where it inflames existing risks or generates new challenges. Of relevance to military commanders and systems designers (and also the thesis of this article) is a conceptual question about the attribution of responsibility for unlawful actions committed within HMT operations: do those actions give rise to civil liability (where the remedy is usually compensation or some remedial order of the court) or criminal liability (where the remedy is usually imprisonment for natural persons, both as a form of punishment and to protect innocent members of society)?
We therefore set out in this article to advance the proposition that, for HMT, the specifics of the dynamic interactions between the human and machine elements will dictate how liability will be attributed. For the context of this paper, HMT is defined as a bi-directional combination of human and machine capabilities which work together with a dynamic directedness towards an aligned goal.[9]
[6] Yueh-Hsuan Weng, Chien-Hsun Chen, Chuen-Tsai Sun, ‘Toward the Human-Robot Co-Existence Society: On Safety Intelligence for Next Generation Robots’ (2009) 1 International Journal of Social Robotics, 273.
[7] Jon Henley, ‘Chess robot grabs and breaks finger of seven-year-old opponent’, The Guardian (online, 24 July 2022) <https://amp-theguardian-com.cdn.ampproject.org/c/s/amp.theguardian.com/sport/2022/jul/24/chess-robot-grabs-and-breaks-finger-of-seven-year-old-opponent-moscow>.
[8] James Johnson, ‘The AI-cyber nexus: implications for military escalation, deterrence and strategic stability’ (2019) Journal of Cyber Policy, https://doi.org/10.1080/23738871.2019.1701693, 1.
We intend to approach the problem in the following way. Part I will involve an exploration of the issues of HMT operations. This Part will identify that the bi-directionality of communication between the human and machine elements serves to blur the perceptions and observations of both, and may have legal and regulatory ramifications. Part II will then introduce some key terms in the context of both civil and criminal law around the establishment of liability, with reference to the idea of blameworthiness. In Part III, we explore how a specific mechanism for approaching blameworthiness and liability might be applied in the future of HMT.
This Article will also specifically focus on HMT in a military context. There are three reasons for such a focus. The first is that HMT is a significant component of the technological research of many Western military forces, including the US, UK and Australia,[10] but also of other nations such as China.[11] Secondly, like their comparative cousins in the form of autonomous weapon systems, the application of AI to military decision-making in HMT is already recognised as a challenge to the rules-based order of international and comparative domestic law.[12] And thirdly, the military are often a testbed for emerging technologies, with armed forces standing as the entity which commonly responds to the legal and regulatory challenges that arise from their implementation.[13]
[9] Zena Assaad, work in progress.
[10] Ministry of Defence, Human Machine Teaming (Joint Concept Note 1/18, May 2018) <https://www.gov.uk/government/publications/human-machine-teaming-jcn-118>; Chad C. Tossell, Boyoung Kim, Bianca Donadio, Ewart de Visser, ‘Appropriately Representing Military Tasks for Human-Machine Teaming Research’, in Constantine Stephanidis, Jessie Y. C. Chen, Gino Fragomeni (Eds.), HCI International 2020 Late Breaking Papers: Virtual and Augmented Reality (Springer, 2020), 245-265; Alex Neads, David J. Galbreath, Theo Farrell, From Tools to Teammates: Human Machine Teaming and the Future of Command and Control in the Australian Army (Australian Army Occasional Paper No. 7, 20 September 2021).
[11] Department of Defense, Military and Security Developments Involving the People’s Republic of China (2021), 146-148, <https://media.defense.gov/2021/Nov/03/2002885874/-1/-1/0/2021-CMPR-FINAL.PDF>.
[12] Aiden Warren, Alek Hillas, ‘Lethal Autonomous Weapons Systems: Adapting to the Future Unmanned Warfare and Unaccountable Robots’ (2017) 12(1) Yale Journal of International Affairs, 71; Aiden Warren, Alek Hillas, ‘Friend or frenemy? The role of trust in human-machine teaming and lethal autonomous weapons systems’ (2020) 31(4) Small Wars & Insurgencies, 822.
[13] See for example how drone regulation has emerged in military contexts: Ferran Giones, Alexander Brem, ‘From toys to tools: The co-evolution of technological and entrepreneurial developments in the drone industry’ (2017) 60(6) Business Horizons, 875; Matthieu J. Guitton, ‘Fighting the locusts: implementing military countermeasures against drones and drone swarms’ (2021) 4(1) Scandinavian Journal of Military Studies, 1.
Part I: Definitional issues of human-machine teaming
One of the most significant challenges facing the academic and industrial community is the lack of a shared definition of exactly what comprises a HMT. Definitions are vitally important for legal and regulatory purposes, not just as academic or theoretical constructs. The blurring of responsibility between the human and machine elements in a HMT (indeed, the very concept of identifying where a human ends and a machine begins) has the capacity to present significant challenges to the legal and regulatory framework for future HMT operations. If a legal principle cannot apply to the emergence of HMT operations, or applies weakly or ambiguously, the danger of an unregulated system is plainly apparent. Even absent the possibility that HMT (especially military HMT) might be operating without a proper form of legal control or oversight, the absence of a proper regulatory system has the capacity to diminish public trust in the operations of the armed forces which deploy such systems. Worse, the deliberate unregulated deployment of such systems may in fact expose those same armed forces to liability themselves.[14]
One such example defines a HMT as ‘a purposeful combination of human and cyber-physical elements that collaboratively pursue goals that are unachievable by either individually’.[15] The broader literature on HMT shows similarities in its proposed definitions, with many expressing notions of sharing authority to pursue common goals.[16] Such a definition clearly articulates the connection and bi-directionality between the human (natural) and the machine (artificial), yet articulates these by reference to a frame in which goals cannot be achieved by one or the other in isolation. Applying such a definition to the simple act of driving a vehicle highlights the definitional issues clearly: both humans and machines can operate, steer and control a vehicle without necessary recourse to the other.[17]
[14] Consider for example the application of Article 36 of Additional Protocol I to the Geneva Conventions: Damian P. Copeland, ‘Legal Review of New Technology Weapons’, in Hitoshi Nasu, Robert McLaughlin (Eds.), New Technologies and the Law of Armed Conflict (Springer, 2014) 43-55.
[15] Azad M. Madni, Carla C. Madni, ‘Architectural framework for exploring adaptive human-machine teaming options in simulated dynamic environments’ (2018) 6 Systems 4, 49.
[16] Joseph B. Lyons, Katia Sycara, Michael Lewis, August Capiola, ‘Human-Autonomy Teaming: Definitions, Debates, and Directions’ (2021) 12 Frontiers in Psychology, 1932, DOI 10.3389/fpsyg.2021.589585.
[17] J. Levinson et al., ‘Towards fully autonomous driving: Systems and algorithms’ (2011) Proceedings of the IEEE Intelligent Vehicles Symposium (IV), 163-168, doi: 10.1109/IVS.2011.5940562.