Good Intentions, Bad Inventions: How Employees Judge Pervasive Technologies in the Workplace
Marios Constantinides
Nokia Bell Labs, Cambridge (UK)
Daniele Quercia
Nokia Bell Labs, Cambridge (UK)
Abstract—Pervasive technologies combined with powerful AI have recently been introduced to enhance work productivity. Yet, some of these technologies are judged to be invasive. To identify which ones, we should understand how employees tend to judge these technologies. We considered 16 technologies that track productivity, and conducted a study in which 131 crowd-workers judged these technologies. We found that whether a technology was judged to be right depended on three aspects of increasing importance, that is, whether the technology: 1) was currently supported by existing tools; 2) did not interfere with work or was fit for purpose; and 3) did not cause any harm or infringe on any individual rights. Ubicomp research currently focuses on how to design better technologies by making them more accurate, or by increasingly blending them into the background. It might be time to design better ubiquitous technologies by unpacking AI ethics as well.
Index Terms: H.1.2.b Human-centered computing; C.3.h Ubiquitous computing
INTRODUCTION
New pervasive technologies in the workplace have been introduced to enhance productivity (e.g., a tool that provides an aggregated productivity score based on, for example, email use on the move, network connectivity, and exchanged content). Yet, some of them are judged to be so invasive that they make it hard to build a culture of trust at work, often impacting workers' productivity and well-being in negative ways [2]. While these technologies hold the promise of enabling employees to be productive, report after report has highlighted outcries over AI-based tools being biased and unfair, and lacking transparency and accountability [6]. Systems are now being used to analyze footage from security cameras in the workplace to detect, for example, when employees are not complying with social distancing rules;1 while there are good intentions behind such a technology (e.g., ensuring a safe return to the office after the COVID-19 pandemic), the very same technology could be used to surveil employees' movements or their time away from the desk.
1 https://www.ft.com/content/58bdc9cd-18cc-44f6-bc9b-8ca4ac598fc8
Companies now hold patents on technologies that use ultrasonic sound pulses to detect workers' locations and monitor their interactions with inventory bins in factories.2
As we move towards a future likely ruled by big data and powerful AI algorithms, important questions arise relating to the psychological impacts of surveillance, data governance, and compliance with ethical and moral concerns (https://social-dynamics.net/responsibleai). To take the first steps in answering such questions, we set out to understand how employees judge pervasive technologies in the workplace and, accordingly, determine how desirable technologies are supposed to behave both onsite and remotely.
In so doing, we made two sets of contributions. First, we considered 16 pervasive technologies that track workplace productivity based on a variety of inputs, and conducted a study in which 131 US-based crowd-workers judged these technologies along the five well-established moral dimensions of harm, fairness, loyalty, authority, and purity [9]. We found that the judgments of a scenario were based on specific heuristics reflecting whether the scenario: was currently supported by existing technologies; interfered with current ways of working or was not fit for purpose; and was considered irresponsible by causing harm or infringing on individual rights. Second, we measured the moral dimensions associated with each scenario by asking crowd-workers to associate words reflecting the five moral dimensions with it. We found that morally right technologies were those that track productivity based on task completion, work emails, and audio and textual conversations during meetings, whereas morally wrong technologies were those that involved some kind of body-tracking, such as tracking physical movements and facial expressions.
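To make this measurement concrete, the sketch below shows one way word associations could be tallied into per-scenario scores along the five moral foundations. It is a minimal Python illustration, not the study's actual analysis pipeline; the word lists, scenario, and example words are hypothetical.

```python
from collections import Counter

# Hypothetical word lists for the five moral foundations (illustrative only;
# the study's actual vocabulary is not reproduced here).
MORAL_FOUNDATIONS = {
    "harm":      {"hurt", "suffer", "cruel", "protect", "care"},
    "fairness":  {"unfair", "biased", "justice", "equal", "rights"},
    "loyalty":   {"betray", "loyal", "team", "collective"},
    "authority": {"obey", "permission", "duty", "respect"},
    "purity":    {"disgusting", "indecent", "wholesome"},
}

def score_scenario(associated_words):
    """Count how many associated words fall under each moral foundation."""
    counts = Counter()
    for word in associated_words:
        for foundation, vocabulary in MORAL_FOUNDATIONS.items():
            if word.lower() in vocabulary:
                counts[foundation] += 1
    # Normalize by the number of matched words so scenarios with many
    # responses are comparable to scenarios with few.
    total = sum(counts.values()) or 1
    return {f: counts[f] / total for f in MORAL_FOUNDATIONS}

# Example: words crowd-workers might associate with a (hypothetical)
# facial-expression-tracking scenario.
print(score_scenario(["cruel", "unfair", "biased", "disgusting"]))
```

Under this toy scoring, a scenario whose associated words cluster on harm and fairness would be read as eliciting predominantly harm- and fairness-related judgments.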
RELATED WORK
On a pragmatic level, organizations adopted "surveillance" tools mainly to ensure security and boost productivity [4]. In a fully remote work setting, organizations had to adopt new security protocols [5] due to the increased volume of online attacks,3 and they ensured productivity by tracking the efficient use of resources [4].
2 Wrist band haptic feedback system: https://patents.google.com/patent/WO2017172347A1/
3 https://www.dbxuk.com/statistics/cyber-security-risks-wfh
However, well-meaning technologies could inadvertently be turned into surveillance tools. For example, a technology that produces an aggregated productivity score4 based on diverse inputs (e.g., email, network connectivity, and exchanged content) can be a double-edged sword (a toy sketch of such a score follows this paragraph). On the one hand, it may provide managers and senior leadership visibility into how well an organization is doing. On the other hand, it may well be turned into an evil tool that puts employees under constant surveillance and unnecessary psychological pressure.5 More worryingly, one in two employees in the UK thinks that it is likely that they are being monitored at work [18], while more than two-thirds are concerned that workplace surveillance could be used in a discriminatory way, if left unregulated. Previous studies also found that employees are willing to be 'monitored' but only when a company's motivations for doing so are transparently communicated [14]. Technologies focused on workplace safety typically receive the highest acceptance rates [11], while technologies for unobtrusive and continuous stress detection receive the lowest, with employees mainly raising concerns about tracking privacy-sensitive information [12].
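As a purely illustrative toy example (not any vendor's actual formula), an aggregated productivity score can be thought of as a weighted combination of normalized workplace signals; the signal names and weights below are hypothetical.

```python
# Toy sketch of an "aggregated productivity score" combining diverse signals.
# Signal names and weights are hypothetical, chosen only for illustration.
WEIGHTS = {
    "emails_sent": 0.4,                 # e.g., email activity
    "network_connectivity_hours": 0.3,  # e.g., time connected to the corporate network
    "documents_shared": 0.3,            # e.g., exchanged content
}

def aggregated_score(signals):
    """Combine signals (each normalized to [0, 1]) into a single score in [0, 1]."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

# The same number that gives leadership an organizational overview can,
# if reported per employee without safeguards, become a surveillance metric.
print(aggregated_score({"emails_sent": 0.8,
                        "network_connectivity_hours": 0.6,
                        "documents_shared": 0.5}))  # 0.65
```

The double-edged nature lies less in the arithmetic than in its granularity and governance: reported at the organizational level it is an overview, reported per employee it becomes monitoring.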
To take a more responsible approach in designing new technologies, researchers have recently explored which factors affect people's judgments of these technologies. In his book "How humans judge machines" [10], Cesar Hidalgo showed that people do not judge humans and machines equally, and that the differences were the result of two principles. First, people judge humans by their intentions and machines by their outcomes (e.g., "in natural disasters like the tsunami, fire, or hurricane scenarios, there is evidence that humans are judged more positively when they try to save everyone and fail—a privilege that machines do not enjoy" [10], p. 157). Second, people assign extreme intentions to humans and narrow intentions to machines, and, surprisingly, they may excuse human actions more than machine actions in accidental scenarios (e.g., "when a car accident is caused by either a falling tree or a person jumping in front of a car,
4 https://www.theguardian.com/technology/2020/nov/26/microsoft-productivity-score-feature-criticised-workplace-surveillance
5 https://twitter.com/dhh/status/1331266225675137024