
From plane crashes to algorithmic harm: applicability of
safety engineering frameworks for responsible ML
SHALALEH RISMANI, Google Research, McGill University, Canada
RENEE SHELBY, Google, JusTech Lab, Australian National University, U.S.A
ANDREW SMART, Google Research, U.S.A
EDGAR JATHO, Naval Postgraduate School, U.S.A
JOSH A. KROLL, Naval Postgraduate School, U.S.A
AJUNG MOON, McGill University, Canada
NEGAR ROSTAMZADEH, Google Research, Canada
Inappropriate design and deployment of machine learning (ML) systems leads to negative downstream social
and ethical impacts – described here as social and ethical risks – for users, society, and the environment. Despite
the growing need to regulate ML systems, current processes for assessing and mitigating risks are disjointed
and inconsistent. We interviewed 30 industry practitioners on their current social and ethical risk management
practices and collected their first reactions on adapting safety engineering frameworks into their practice
– namely, System Theoretic Process Analysis (STPA) and Failure Mode and Effects Analysis (FMEA). Our
findings suggest STPA/FMEA can provide an appropriate structure for social and ethical risk assessment
and mitigation processes. However, we also find nontrivial challenges in integrating such frameworks into
the fast-paced culture of the ML industry. We call on the ML research community to strengthen existing
frameworks and assess their efficacy, ensuring that ML systems are safer for all people.
CCS Concepts: • Social and professional topics → Computing / technology policy; • General and
reference → Evaluation; Surveys and overviews.
Additional Key Words and Phrases: empirical study, safety engineering, machine learning, social and ethical
risk
ACM Reference Format:
Shalaleh Rismani, Renee Shelby, Andrew Smart, Edgar Jatho, Josh A. Kroll, AJung Moon, and Negar Ros-
tamzadeh. 2023. From plane crashes to algorithmic harm: applicability of safety engineering frameworks for
responsible ML. 1, 1 (October 2023), 25 pages. https://doi.org/XXXXXXX.XXXXXXX
1 INTRODUCTION
During a panel at the 1994 ACM Conference on Human Factors in Computing Systems (CHI),
prominent scholars from different disciplines convened to discuss "what makes a good computer
system good." Panelists highlighted considerations for safety, ethics, user perspectives, and societal
structures as critical elements for making a good system [41]. Almost 28 years later, we posit that
Authors’ addresses: Shalaleh Rismani, Google Research, McGill University, Montreal, Canada; Renee Shelby, Google, JusTech
Lab, Australian National University, San Francisco, U.S.A; Andrew Smart, Google Research, San Francisco, U.S.A; Edgar
Jatho, Naval Postgraduate School, Monterey, U.S.A; Josh A. Kroll, Naval Postgraduate School, Monterey, U.S.A; AJung Moon,
McGill University, Montreal, Canada; Negar Rostamzadeh, Google Research, Montreal, Canada.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee
provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and
the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored.
Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires
prior specific permission and/or a fee. Request permissions from permissions@acm.org.
©2023 Association for Computing Machinery.
XXXX-XXXX/2023/10-ART $15.00
https://doi.org/XXXXXXX.XXXXXXX
, Vol. 1, No. 1, Article . Publication date: October 2023.
arXiv:2210.03535v1 [cs.HC] 6 Oct 2022