
A Design Space for Human Sensor and Actuator Focused In-Vehicle Interaction Based on a Systematic Literature Review •56:3
IEEE Xplore [161], and ScienceDirect [97], to search for relevant publications, resulting in an initial set of 2534 publications. After an abstract screening and a subsequent full-text screening, we considered 327 publications relevant for the synthesis to answer RQs 1-3.
Our SLR shows which approaches for in-vehicle interaction modalities were used by the included publications.
They primarily used visual, auditory, and tactile input and output modalities, while few considered (novel)
modalities, such as electrodermal, thermal, olfactory, or cerebral. Furthermore, our proposed combination matrix
for multimodal interaction reveals that most publications utilized visual modalities for multimodal input and
output, e.g., in combination with auditory, kinesthetic, or tactile modalities. Gaps regarding multimodal input
containing olfactory modalities and output containing vestibular, electrodermal, or gustatory modalities highlight
future multimodal interaction research opportunities. We then present a design space for in-vehicle interaction
that extends [177] regarding the set of input and output modalities, interior locations, and involved human
sensors and actuators. The design space reveals little to no utilization of thermal, olfactory, gustatory, cerebral,
and cardiac input modalities, only a few approaches for vestibular, kinesthetic, and thermal output, and none for
electrodermal and gustatory output. Our design space further shows that publications mainly used the front as
output location, e.g., for displays [50] or vibration [376], while other locations are not frequently considered, e.g., table, door, rear, floor, or ceiling. To answer RQ 4, we assessed the feasibility of possible in-vehicle interaction
modalities and locations in an online user study (N=48) by presenting concept images deduced from related
work and gaps in our design space (see Figure 4 and Figure 5). The study results reveal that input modalities
were more accepted regarding usefulness, usage, and comfort than output modalities. In addition, well-established
input and output modalities in current vehicles, such as auditory or tactile, were generally perceived as more
acceptable. However, novel input and output modalities, e.g., vestibular stimuli, were also perceived as useful.
While participants perceived interaction in some nomadic and anchored interior locations as useful, e.g., handheld,
wearable, rear, seat, or table, they deemed other locations less useful, e.g., ceiling, floor, door, or VR. The results
highlight the importance of considering novel modalities and locations in future in-vehicle interaction design.
Contribution Statement: First, we report the results of an SLR on in-vehicle UI research and the analysis of
multimodal interaction and utilization of interior locations, leading to a combination matrix for multimodal
in-vehicle interaction and visualization of interaction locations accompanied by a self-developed interactive
website¹. Second, we propose a design space for in-vehicle interaction, considering several novel vehicle interior
locations and including an extensive set of human sensors and actuators. Third, we provide the results of an
image-based online user study (N=48) on perceived usefulness, real-world usage, and comfort of possible in-
vehicle interaction approaches deduced from our design space and related work and discuss implications for
future interaction design.
2 BACKGROUND AND RELATED WORK
Our work is grounded on (1) a definition of the in-vehicle interaction scheme based on a classification of human
sensors and actuators and (2) previous literature reviews on automotive UIs.
2.1 Human Sensors and Actuators
Based on [33, 41, 165, 340], we distinguish seven human sensor/actuator categories: (1) visual, (2) auditory,
(3) haptic, (4) olfactory, (5) gustatory, (6) cerebral, and (7) cardiac. We follow the definition of sensors and
actuators and the respective interaction scheme shown in Figure 2. The in-vehicle interaction scheme includes
two agents, human and vehicle, and describes an input-output feedback loop between them [340]. The human body intentionally or subconsciously uses actuators such as the fingers, brain, or heart to generate
¹ https://in-vehicle-interaction-design-space.onrender.com/ | Interactive tool to support investigation of in-vehicle interaction research and design.
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., Vol. 6, No. 2, Article 56. Publication date: June 2022.