A Design Space for Human Sensor and Actuator Focused In-Vehicle
Interaction Based on a Systematic Literature Review
PASCAL JANSEN,Institute of Media Informatics, Ulm University, Germany
MARK COLLEY,Institute of Media Informatics, Ulm University, Germany
ENRICO RUKZIO,Institute of Media Informatics, Ulm University, Germany
Fig. 1. Anchored (white outline) and nomadic (red outline) interaction locations in a concept vehicle.
Automotive user interfaces constantly change due to increasing automation, novel features, additional applications, and user
demands. While in-vehicle interaction can utilize numerous promising modalities, no existing overview includes an extensive
set of human sensors and actuators and interaction locations throughout the vehicle interior. We conducted a systematic
literature review of 327 publications, leading to a design space for in-vehicle interaction that outlines existing work and gaps regarding input and output modalities, locations, and multimodal interaction. To investigate user acceptance of possible
modalities and locations inferred from existing work and gaps unveiled in our design space, we conducted an online study
(N=48). The study revealed users’ general acceptance of novel modalities (e.g., brain or thermal activity) and interaction with
locations other than the front (e.g., seat or table). Our work helps practitioners evaluate key design decisions, exploit trends,
and explore new areas in the domain of in-vehicle interaction.
CCS Concepts: • General and reference → Surveys and overviews; • Human-centered computing → HCI theory, concepts and models; Empirical studies in HCI.
Additional Key Words and Phrases: systematic literature review; design space; in-vehicle interaction; human sensors and
actuators
Authors’ addresses: Pascal Jansen, pascal.jansen@uni-ulm.de, Institute of Media Informatics, Ulm University, Ulm, Germany; Mark Colley,
mark.colley@uni-ulm.de, Institute of Media Informatics, Ulm University, Ulm, Germany; Enrico Rukzio, enrico.rukzio@uni-ulm.de, Institute
of Media Informatics, Ulm University, Ulm, Germany.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that
copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
©2022 Copyright held by the owner/author(s).
2474-9567/2022/6-ART56
https://doi.org/10.1145/3534617
ACM Reference Format:
Pascal Jansen, Mark Colley, and Enrico Rukzio. 2022. A Design Space for Human Sensor and Actuator Focused In-Vehicle
Interaction Based on a Systematic Literature Review. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6, 2, Article 56
(June 2022), 51 pages. https://doi.org/10.1145/3534617
1 INTRODUCTION
With the increasing integration of automation technology into vehicle systems, the scope of in-vehicle interaction is broadening. According to the Society of Automotive Engineers (SAE) taxonomy J3016 [324], there are six levels of driving automation, ranging from level 0 (no driving automation) to level 5 (full driving automation) in the context of motor vehicles and their operation on roadways. With automated vehicles (AVs) (SAE levels 3-5) changing the role of the driver, automotive user interfaces (UIs) undergo a paradigm shift [80] and the vehicle transforms into a mobile office or living space [167]. Accordingly, the driver can perform non-driving related tasks (NDRTs) [82, 396], such as working, using the smartphone [77], or gaming in virtual reality (VR) [249]. Hence, the interior design of AVs will adapt, and new input and output locations will emerge that are anchored (e.g., door, table, or seat) or nomadic (e.g., handheld or wearable); see Figure 1. For example, future AVs may feature a 4-seat configuration in which passengers face each other and, like passengers of non-AVs, benefit from UIs located throughout the interior, e.g., rear-seat entertainment [78, 129]. As the interior is a closed space surrounding the passengers, all human senses could be stimulated by output modalities and, to some extent, used as input. We consider input and output from the human perspective (see Figure 2), i.e., human actuators (e.g., mouth or skin) intentionally generate explicit input modalities (e.g., speech) or subconsciously produce implicit input modalities (e.g., electrodermal activity (EDA)) sensed by vehicle sensors (e.g., microphone), and vehicle actuators (e.g., speaker) generate output modalities (e.g., sound) perceived by human sensors (e.g., ears).
Despite the large body of work, concepts, and prototypes regarding in-vehicle input and output, current automotive UI research does not, or only partly, consider the full range of input and output modalities (which also contains, e.g., vestibular stimuli, EDA, gustatory stimuli, brain, or heart activity) and novel vehicle interior locations (e.g., rear, floor, or ceiling). Thus, a new perspective on the in-vehicle interaction space is required, unconstrained by a front-focused design and including such modalities. Besides, it is partly unknown which modalities and interior locations were already considered in previous work concerning vehicles of any SAE level (0-5). For manual or assisted driving (SAE 0-2), knowledge about possible input and output modalities and their placement may help in designing interactions with minimal driver distraction [299] and workload (physical, visual, and mental) [53]. In the context of AVs (SAE 3-5), there exist human factors issues, such as mistrust [107], loss of control [109], or safety concerns [333]. Therefore, it is essential to design in-vehicle interactions that will be accepted [80]. Besides, passengers of AVs can perform NDRTs while interacting with modalities/locations that were previously impractical or dangerous regarding the driving task, e.g., due to sensory overload or reduced takeover readiness. Still, the usability of such modalities/locations (e.g., swivel seats or VR) in an AV context is underexplored.
To investigate these problems, we defined the following research questions (RQs):
RQ 1: How does current automotive UI research leverage human sensors and actuators for in-vehicle interaction?
RQ 2: What vehicle interior locations can be utilized for in-vehicle interaction?
RQ 3: What is the design space for in-vehicle interaction, including an extensive set of human sensors and actuators?
RQ 4: How do users perceive the usefulness, real-world usage, and comfort of in-vehicle modalities and locations?
To answer RQs 1-3, we conducted a systematic literature review (SLR). We gathered a set of keywords for in-vehicle interaction to define the search query for the SLR, which was based on the PRISMA guidelines [255, 279]. Our SLR considered vehicles of any SAE level (0-5). We selected the databases ACM Digital Library (DL) [20],
IEEE Xplore [161], and ScienceDirect [97] to search for relevant publications, resulting in an initial set of 2534 publications. After an abstract screening and a subsequent full-text screening, we considered 327 publications relevant for the synthesis to answer RQs 1-3.
Our SLR shows which approaches for in-vehicle interaction modalities were used by the included publications. They primarily used visual, auditory, and tactile input and output modalities, while few considered (novel) modalities such as electrodermal, thermal, olfactory, or cerebral ones. Furthermore, our proposed combination matrix for multimodal interaction reveals that most publications utilized visual modalities for multimodal input and output, e.g., in combination with auditory, kinesthetic, or tactile modalities. Gaps regarding multimodal input containing olfactory modalities and output containing vestibular, electrodermal, or gustatory modalities highlight future multimodal interaction research opportunities. We then present a design space for in-vehicle interaction that extends the design space of Kern and Schmidt [177] regarding the set of input and output modalities, interior locations, and involved human sensors and actuators. The design space reveals little to no utilization of thermal, olfactory, gustatory, cerebral, and cardiac input modalities, only a few approaches for vestibular, kinesthetic, and thermal output, and none for electrodermal and gustatory output. Our design space further shows that publications mainly used the front as an output location, e.g., for displays [50] or vibration [376], while other locations, e.g., table, door, rear, floor, or ceiling, are rarely considered. To answer RQ 4, we assessed the feasibility of possible in-vehicle interaction modalities and locations in an online user study (N=48) by presenting concept images deduced from related work and gaps in our design space (see Figure 4 and Figure 5). The study results reveal that input modalities were more accepted regarding usefulness, usage, and comfort than output modalities. Besides, well-established input and output modalities in current vehicles, such as auditory or tactile, were generally perceived as more acceptable. However, novel input and output modalities, e.g., vestibular stimuli, were also perceived as useful. While participants perceived interaction in some nomadic and anchored interior locations as useful, e.g., handheld, wearable, rear, seat, or table, they deemed other locations less useful, e.g., ceiling, floor, door, or VR. The results highlight the importance of considering novel modalities and locations in future in-vehicle interaction design.
Contribution Statement: First, we report the results of an SLR on in-vehicle UI research and the analysis of multimodal interaction and the utilization of interior locations, leading to a combination matrix for multimodal in-vehicle interaction and a visualization of interaction locations, accompanied by a self-developed interactive website¹. Second, we propose a design space for in-vehicle interaction, considering several novel vehicle interior locations and including an extensive set of human sensors and actuators. Third, we provide the results of an image-based online user study (N=48) on the perceived usefulness, real-world usage, and comfort of possible in-vehicle interaction approaches deduced from our design space and related work, and discuss implications for future interaction design.
¹https://in-vehicle-interaction-design-space.onrender.com/ | Interactive tool to support investigation of in-vehicle interaction research and design.
2 BACKGROUND AND RELATED WORK
Our work is grounded in (1) a definition of the in-vehicle interaction scheme based on a classification of human sensors and actuators and (2) previous literature reviews on automotive UIs.
2.1 Human Sensors and Actuators
Based on [33, 41, 165, 340], we distinguish seven human sensor/actuator categories: (1) visual, (2) auditory, (3) haptic, (4) olfactory, (5) gustatory, (6) cerebral, and (7) cardiac. We follow the definition of sensors and actuators and the respective interaction scheme shown in Figure 2. The in-vehicle interaction scheme includes two agents, human and vehicle, and describes an input-output feedback loop between both agents [340]. The human body intentionally or subconsciously uses actuators such as fingers, brain, or heart to generate
input modalities. In this work, intentionally performed input is considered explicit (e.g., touch or speech) and subconsciously performed input implicit (e.g., brain or heart activity). The input modalities are sensed by specific vehicle sensors (e.g., microphones or touchscreens) at vehicle input locations that are either nomadic (e.g., VR, handheld, or wearable) or anchored throughout the interior (e.g., front, seat, door, or rear). Vehicles use actuator devices, such as screens, speakers, or vibration motors, at nomadic or anchored output locations to generate output modalities (e.g., display, sound, or vibration). The output modalities are sensed by specific human sensors (e.g., eye, ear, or skin). In this work, the sensor/actuator categories are used to categorize input and output modalities.
Fig. 2. In-vehicle interaction scheme including two agents: a human and a vehicle. A feedback loop is created between both
agents, which sense the external world using natural (human) and artificial (vehicle) sensors. Both human and vehicle act
upon the environment with their actuators. One possible example is shown after each sensor, actuator, input modality, and
output modality.
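To make the data flow in this scheme concrete, the following minimal Python sketch (our illustration, not an artifact of the reviewed work) encodes one input event and one output event of the feedback loop. The class and field names (InputEvent, OutputEvent, etc.) are assumptions; the example values mirror the examples named in the text and Figure 2.

```python
# Illustrative encoding of the in-vehicle interaction scheme (Figure 2).
# Assumption: class/field names and the location list are ours; the category and
# location terms follow the vocabulary used in this paper.
from dataclasses import dataclass
from typing import Literal

Category = Literal["visual", "auditory", "haptic", "olfactory",
                   "gustatory", "cerebral", "cardiac"]
Location = Literal[
    "front", "seat", "door", "table", "rear", "floor", "ceiling",  # anchored
    "handheld", "wearable", "vr",                                  # nomadic
]

@dataclass
class InputEvent:
    human_actuator: str   # e.g., "mouth", "finger", "brain"
    modality: str         # e.g., "speech", "touch", "brain activity"
    category: Category
    explicit: bool        # True = intentional (explicit), False = subconscious (implicit)
    vehicle_sensor: str   # e.g., "microphone", "touchscreen", "EEG headset"
    location: Location

@dataclass
class OutputEvent:
    vehicle_actuator: str  # e.g., "speaker", "display", "vibration motor"
    modality: str          # e.g., "sound", "display", "vibration"
    category: Category
    human_sensor: str      # e.g., "ear", "eye", "skin"
    location: Location

# Explicit speech input sensed by a microphone in the front (example from the text).
speech = InputEvent("mouth", "speech", "auditory", True, "microphone", "front")
# Implicit electrodermal input; placing the EDA sensor in the seat is an assumption.
eda = InputEvent("skin", "electrodermal activity (EDA)", "haptic", False, "EDA sensor", "seat")
# Auditory output generated by a speaker and perceived by the ear.
sound = OutputEvent("speaker", "sound", "auditory", "ear", "front")
```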
The visual category entails the eye as a sensor enabling passengers to perceive any light-based output modalities and as an actuator that produces explicit input modalities, e.g., gaze, pupil dilation, or blink rate [340]. Auditory output modalities are sensed by the ears, while auditory actuators are body parts that can explicitly generate sounds, e.g., the mouth for voice or hands/fingers for clapping [111]. According to Benyon [33] and the ISO standard 9241-910 [272], we divide haptic into kinesthetic, cutaneous, and vestibular. We did not consider proprioception, the sense of one's body position and movement [272], as such sensation is already covered by kinesthetic and vestibular sensors/actuators [165]. Kinesthetic sensors in the human joints and muscles detect body motion, while kinesthetic actuators generate explicit input modalities, such as muscle activity or body movement. Such body activity can also be performed implicitly. Cutaneous is subdivided into electrodermal, tactile, thermal, and pain, which each have specific skin sensors to perceive output modalities such as pressure, temperature, or pain stimuli. Cutaneous actuators generate skin-related input modalities explicitly via touch and implicitly via EDA or skin temperature. The vestibular category is adapted from [33] and describes a sensor that detects balance and general body motion. However, as the vestibular system is a passive sensor and active body motion is a kinesthetic actuator, there is no vestibular actuator. Similar to [33, 340], we include the olfactory and gustatory categories, which each have a dedicated sensor organ (nose and tongue). However, olfactory actuators are any source of body scent, and gustatory actuators are any source of body flavor, e.g., sweat taste. Olfactory and gustatory actuators implicitly produce input. Besides, we include the cerebral and cardiac categories, similar to [25, 364]. We consider brain activity measured by, e.g., functional near-infrared
spectroscopy (fNIRS) or electroencephalography (EEG) as an implicit input modality and brain stimuli as output modalities "sensed" by the brain. Likewise, heart activity (e.g., heart rate) is an implicit input modality, and heart stimuli (e.g., via a defibrillator) are output modalities "sensed" by the heart.
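The category hierarchy above can also be read as a small tree. The following sketch is an illustrative encoding of that hierarchy under our own labeling assumptions; the sensor and actuator entries are taken from the descriptions in this section, and None marks categories for which no human actuator exists (e.g., vestibular).

```python
# Illustrative encoding (our assumption, not from the paper) of the seven
# sensor/actuator categories, with haptic subdivided into kinesthetic, cutaneous,
# and vestibular, and cutaneous further subdivided. "actuator" is None where the
# category only senses and has no corresponding human actuator.
TAXONOMY = {
    "visual":    {"sensor": "eye",    "actuator": "eye (gaze, pupil dilation, blink rate)"},
    "auditory":  {"sensor": "ear",    "actuator": "mouth, hands/fingers (voice, clapping)"},
    "haptic": {
        "kinesthetic": {"sensor": "joints/muscles", "actuator": "muscle activity, body movement"},
        "cutaneous": {
            "electrodermal": {"sensor": "skin", "actuator": "EDA (implicit)"},
            "tactile":       {"sensor": "skin", "actuator": "touch (explicit)"},
            "thermal":       {"sensor": "skin", "actuator": "skin temperature (implicit)"},
            "pain":          {"sensor": "skin", "actuator": None},
        },
        "vestibular": {"sensor": "vestibular system (balance, body motion)", "actuator": None},
    },
    "olfactory": {"sensor": "nose",   "actuator": "body scent (implicit)"},
    "gustatory": {"sensor": "tongue", "actuator": "body flavor, e.g., sweat taste (implicit)"},
    "cerebral":  {"sensor": "brain",  "actuator": "brain activity via fNIRS/EEG (implicit)"},
    "cardiac":   {"sensor": "heart",  "actuator": "heart activity, e.g., heart rate (implicit)"},
}
```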
2.2 Literature Reviews on Automotive User Interfaces
There are several reviews on human-vehicle interaction and considerations for the design of automotive UIs. For example, Lee [216] analyzed 50 years of driving safety research, including an overview of vehicle technology, and Akamatsu et al. [8] presented a detailed description of the history of vehicle UIs and related human factors. Still, these papers give little information on higher automation levels (SAE levels 3, 4, and 5) and future in-vehicle interaction regarding novel modalities. A review including AVs was conducted by Kun et al. [197], who identified problem fields for automotive research regarding the transition to higher automation levels. Similarly, Ayoub et al. [22] identified various trends, e.g., the transition towards AVs or the increasing relevance of NDRTs. They also summarized a broad range of input and output modalities, including (novel) approaches like augmented reality (AR), VR, or emotion recognition. An overview of technologies that are being used or developed to perceive users' intentions for natural and intuitive in-vehicle interaction was presented by Murali et al. [261]. They found that novel multimodal sensing devices replace legacy display interfaces and haptic devices such as buttons and knobs. However, their overview was not based on an SLR, and they did not consider some sensor/actuator categories, e.g., vestibular, olfactory, and gustatory. Besides, there are reviews regarding human factors issues (e.g., distraction, awareness, trust, or acceptance) [76, 164] and technical challenges [157]. The first design space for driver-based automotive UIs was introduced by Kern and Schmidt [177]; it describes in-vehicle input and output modalities concerning their location in the interior. However, since driving automation was not yet an omnipresent research topic at the time of publication (2009), their design space focused on a subset of possible in-vehicle modalities (i.e., visual, auditory, and haptic) and locations (i.e., subdivisions of the front). In a later work (2021), Detjen et al. [80] discussed the requirements and challenges of interaction with AVs regarding user acceptance, namely security & privacy, trust & transparency, safety & performance, competence & control, and positive experiences. They also classified current in-vehicle interaction literature by their contribution to one of the acceptance challenges and by the interaction modality used. However, they did not consider novel in-vehicle locations and mainly focused on SAE 3-5 vehicles.
In combination, these works already indicate the direction for future in-vehicle interaction. However, they are limited because they do not include AVs (e.g., [8, 216]) or novel interior locations (e.g., [80]), or consider only a subset of possible human sensors and actuators (e.g., [177, 261]). In this work, our SLR includes any SAE level (i.e., from manual to highly automated driving) and approaches for input and output modalities while considering an extensive set of human sensors and actuators. Besides, we propose a comprehensive design space that extends previous design spaces (such as [80, 177]) and includes not only the driver but also other passengers as users.
3 SYSTEMATIC LITERATURE REVIEW ON IN-VEHICLE INTERACTION
To answer RQ 1, "How does current automotive UI research leverage human sensors and actuators for in-vehicle interaction?", and RQ 2, "What vehicle interior locations can be utilized for in-vehicle interaction?", our goal was to elaborate a detailed and comprehensive overview of research on in-vehicle interaction. Therefore, we employed an SLR. The process of this SLR is based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) proposed by Moher et al. [254] and Page et al. [279]. Our multistage process is depicted in Figure 3 and consists of an identification step, a two-step publication screening part, and a synthesis.
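As a rough illustration of the bookkeeping behind this multistage process, the sketch below tallies the flow from identification to synthesis. Only the totals already reported above (2534 identified publications, 327 included) are taken from the paper; the intermediate count after abstract screening is a placeholder, and the helper itself is our assumption rather than the authors' tooling.

```python
# Illustrative PRISMA-style flow tally (not the authors' tooling).
stages = [
    ("identified via ACM DL, IEEE Xplore, and ScienceDirect", 2534),  # reported total
    ("retained after abstract screening", 800),                       # placeholder count
    ("included after full-text screening (synthesis)", 327),          # reported total
]

# Print how many publications were excluded at each screening step.
for (name, count), (_, kept) in zip(stages, stages[1:]):
    print(f"{name}: {count} (excluded in the next step: {count - kept})")
print(f"{stages[-1][0]}: {stages[-1][1]}")
```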