
Research in fields such as pervasive computing, networking, and systems is usually driven by results that push forward the state of the art (or improve its efficiency) or provide demonstrable contributions. To achieve novel and practical solutions, researchers often invest significant effort into designing and evaluating several iterations of design choices in the hope of achieving the desired performance. However, the probability of such efforts leading to a negative outcome can be quite high due to incomplete understanding, limited foresight, or unconvincing design choices, some of which may be harder to rectify at later stages. Such efforts are rarely rewarded, since negative results are usually hard to publish [1], [2], being considered not novel or lacking significant new knowledge. However, we argue that negative results, if properly leveraged, can be quite beneficial to the research community at large, from the perspective of lessons learned and knowledge transfer between research groups tackling the same topics. While our arguments in this article are made from the perspective of pervasive computing research, many of our insights are valid in other fields of computing, such as human-computer interaction, network/mobility measurements, and security and privacy, thanks to the inherently interdisciplinary nature of pervasive computing.
Most pervasive computing systems require multiple, often non-standard components, such as hardware prototypes for sensing, new computation paradigms, novel energy-saving communication protocols, adaptive middleware, or multi-modal user interfaces. Often, a new research idea or hypothesis covers only one part of the system, but to complete the experiment, one must (at least partially) get everything to work. Each component might be a source of error or the bottleneck that prevents the expected result. However, sometimes the interplay of different components highlights the symptom of an underlying, more significant challenge that could lead to new research ideas. Hence, in this article, we assert that we must treat negative results as a potential source of new scientific insights.
While negative results might not directly contribute to the advancement of the state of the art, the wisdom of hindsight could be an essential contribution in itself, helping other researchers avoid falling into similar pitfalls. Such "what not to do" insights can also foster good research practices, especially for handling gray areas such as ethical boundaries [3]. We consider negative results to be outcomes of studies that are run correctly (in light of the current state of the art) and in good practice, but that fail to prove the hypothesis with statistical significance. The "badness" of the work might also emerge from data collection that was properly designed but ill-suited to the problem, or from non-trivial lapses of foresight, especially in studies involving real-world, uncontrolled measurements. Furthermore, the experience of spotting negative outcomes in intermediate results benefits the systems and networking community at large.
Moreover, it is crucial to share, especially with junior researchers, the knowledge that not all experiments will be successful, that failures can happen, and that knowing how to overcome unexpected outcomes is a skill to develop as a researcher. The interest in this topic was evident from the participation in a questionnaire distributed to PerCom 2022 attendees, which received ≈70 responses. The results of the questionnaire revealed that ≈85% of the respondents had already experienced failure in their research work. The perspectives presented in this article are drawn from discussions with the pervasive computing community during the First International Workshop on Negative Results in Pervasive Computing (PerFail 2022)¹ and from the authors' past experiences. Specifically, this article discusses a taxonomy of failures in pervasive computing, considers mitigation and avoidance strategies, and presents our call to arms for a healthy failure culture in pervasive computing research.
What is a Failure?
Traditional applied research aims to fundamentally understand how systems work or how to make them work. Within pervasive computing, systems, and networking research, solutions that demonstrably work are naturally more attractive to the community. Therefore, it becomes essential to gauge the usefulness of research that failed to work. Failure can be viewed as a necessary catalyst for research: if there is no possibility to fail, the
¹ https://perfail-workshop.github.io/2022/