now hold protected intellectual property on technologies that use ultrasonic sound pulses to detect workers' locations and monitor their interactions with inventory bins in factories.2
As we move towards a future likely ruled by big data and powerful AI algorithms, important questions arise concerning the psychological impacts of surveillance, data governance, and compliance with ethical and moral norms (https://social-dynamics.net/responsibleai). To take the first steps in answering such questions, we set out to understand how employees judge pervasive technologies in the workplace and, accordingly, to determine how desirable technologies should behave both onsite and remotely.
In so doing, we made two sets of contributions. First, we considered 16 pervasive technologies that track workplace productivity based on a variety of inputs, and conducted a study in which 131 US-based crowd-workers judged these technologies along the five well-established moral dimensions of harm, fairness, loyalty, authority, and purity [9]. We found that judgments of a scenario rested on specific heuristics reflecting whether the scenario: was already supported by existing technologies; interfered with current ways of working or was not fit for purpose; and was considered irresponsible because it caused harm or infringed on individual rights. Second, we measured the moral dimensions associated with each scenario by asking crowd-workers to associate words reflecting the five moral dimensions with it. We found that morally right technologies were those that track productivity based on task completion, work emails, and audio and textual conversations during meetings, whereas morally wrong technologies were those that involved some kind of body-tracking, such as tracking physical movements and facial expressions.
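To make this word-association measurement concrete, the following Python sketch shows one way such responses could be aggregated into per-scenario scores along the five moral dimensions. It is a minimal illustration under our own assumptions, not the study's actual pipeline: the lexicon, the response format, and the normalization are all hypothetical.

from collections import Counter

# Hypothetical lexicon mapping each moral dimension to words that
# crowd-workers might associate with a scenario (illustrative only).
MORAL_LEXICON = {
    "harm":      {"hurtful", "cruel", "unsafe"},
    "fairness":  {"unjust", "biased", "equal"},
    "loyalty":   {"disloyal", "betrayal", "faithful"},
    "authority": {"obedient", "defiant", "lawful"},
    "purity":    {"indecent", "repulsive", "decent"},
}

def score_scenario(associated_words):
    """Return the share of associated words that fall under each
    moral dimension, normalized by the total number of matches."""
    counts = Counter()
    for word in associated_words:
        for dimension, lexicon in MORAL_LEXICON.items():
            if word.lower() in lexicon:
                counts[dimension] += 1
    total = sum(counts.values()) or 1  # guard against no matches
    return {dim: counts[dim] / total for dim in MORAL_LEXICON}

# Example: words a crowd-worker might associate with a body-tracking scenario.
print(score_scenario(["cruel", "unsafe", "biased", "repulsive"]))
# {'harm': 0.5, 'fairness': 0.25, 'loyalty': 0.0, 'authority': 0.0, 'purity': 0.25}

Normalizing by the number of matched words yields a moral profile that is comparable across scenarios regardless of how many words each worker supplied.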
RELATED WORK
On a pragmatic level, organizations adopted "surveillance" tools mainly to ensure security and boost productivity [4]. In a fully remote work setting, organizations had to adopt new security protocols [5] due to the increased volume of online attacks,3 and they ensured productivity by tracking the efficient use of resources [4].

2Wrist band haptic feedback system: https://patents.google.com/patent/WO2017172347A1/
3https://www.dbxuk.com/statistics/cyber-security-risks-wfh
However, well-meaning technologies could inadvertently be turned into surveillance tools. For example, a technology that produces an aggregated productivity score4 based on diverse inputs (e.g., email, network connectivity, and exchanged content) can be a double-edged sword. On the one hand, it may give managers and senior leadership visibility into how well an organization is doing. On the other hand, it may well be turned into an instrument that puts employees under constant surveillance and unnecessary psychological pressure.5 More worryingly, one in two employees in the UK thinks it likely that they are being monitored at work [18], while more than two-thirds are concerned that workplace surveillance could be used in a discriminatory way if left unregulated. Previous studies also found that employees are willing to be 'monitored', but only when a company's motivations for doing so are transparently communicated [14]. Technologies focused on workplace safety typically receive the highest acceptance rates [11], while technologies for unobtrusive and continuous stress detection receive the lowest, with employees mainly raising concerns about the tracking of privacy-sensitive information [12].
To take a more responsible approach to designing new technologies, researchers have recently explored which factors affect people's judgments of these technologies. In his book "How Humans Judge Machines" [10], Cesar Hidalgo showed that people do not judge humans and machines equally, and that the differences are the result of two principles. First, people judge humans by their intentions and machines by their outcomes (e.g., "in natural disasters like the tsunami, fire, or hurricane scenarios, there is evidence that humans are judged more positively when they try to save everyone and fail—a privilege that machines do not enjoy" [10], p. 157). Second, people assign extreme intentions to humans and narrow intentions to machines, and, surprisingly, they may excuse human actions more than machine actions in accidental scenarios (e.g., "when a car accident is caused by either a falling tree or a person jumping in front of a car,
4https://www.theguardian.com/technology/2020/nov/26/microsoft-productivity-score-feature-criticised-workplace-surveillance
5https://twitter.com/dhh/status/1331266225675137024