Mixed-Reality Robot Behavior Replay: A System Implementation
Zhao Han,1*Tom Williams,1Holly A. Yanco2
1MIRRORLab, Department of Computer Science, Colorado School of Mines, 1500 Illinois St., Golden, CO, USA 80401
2HRI Lab, Department of Computer Science, University of Massachusetts Lowell, 1 University Ave., Lowell, MA, USA 01854
zhaohan@mines.edu, twilliams@mines.edu, holly@cs.uml.edu
Abstract
As robots become increasingly complex, they must explain
their behaviors to gain trust and acceptance. However, it may
be difficult through verbal explanation alone to fully convey
information about past behavior, especially regarding objects
no longer present due to robots’ or humans’ actions. Humans
often try to physically mimic past movements to accompany
verbal explanations. Inspired by this human-human interac-
tion, we describe the technical implementation of a system
for past behavior replay for robots in this tool paper. Specifi-
cally, we used Behavior Trees to encode and separate robot
behaviors, and schemaless MongoDB to structurally store
and query the underlying sensor data and joint control mes-
sages for future replay. Our approach generalizes to different
types of replays, including both manipulation and navigation
replay, and visual (i.e., augmented reality (AR)) and auditory
replay. Additionally, we briefly summarize a user study to fur-
ther provide empirical evidence of its effectiveness and effi-
ciency. Sample code and instructions are available on GitHub
at https://github.com/umhan35/robot-behavior-replay.
1 Introduction
Robots used in domains like collaborative manufacturing,
warehousing, and assistive living stand to have benefits such
as improving productivity, reducing work-related injuries,
and increasing the standard of living. Yet the increasing
complexity of the manipulation and navigation tasks needed
in these domains can be difficult for users to understand,
especially when users need to ascertain the reasons behind
robot failures. As such, there is a surge of interest in
improving robot understandability by enabling robots to
explain themselves, e.g., through function annotation (Hayes
and Shah 2017), encoder-decoder deep learning framework
(Amir, Doshi-Velez, and Sarne 2018), interpretable task
representation (Han et al. 2021), and software architecture
(Stange et al. 2022). Different dimensions of robot expla-
nations have also been explored, such as proactive explana-
tions (Zhu and Williams 2020), preferred explanations (Han,
Phillips, and Yanco 2021), and undesired behaviors (Stange
and Kopp 2020). However, these works focused on explain-
ing a robot’s current behaviors.
*Most of this work was completed while Zhao Han was affili-
ated with the University of Massachusetts Lowell.
Presented at the AI-HRI Symposium at AAAI Fall Symposium Se-
ries (FSS) 2022
Figure 1: Manipulation replay using the replay technique
described in this paper. The robot’s arm movement and the
green projection (bottom) to indicate the object to be grasped
were being replayed to clarify a perception failure: A torn-
up wood chip was unknowingly misrecognized as one of the
gearbox bottoms. Key frames from the same replay and two
other types of replays are illustrated in Figure 2–4.
One challenge within this space is enabling robots to
explain their past behavior after their environment has
changed. This is an interesting yet challenging problem be-
cause objects present in the past might have already been
replaced or removed from the scene, making the task of re-
ferring to those objects during explanation particularly chal-
lenging (see also Han, Rygina, and Williams 2022). More-
over, a robot may not be capable of reasoning and explaining
its past behaviors due to unawareness of failures (see Figure
2 and 4), and limited semantic reasoning about objects like
ground obstacles or tabletop objects (see also Figure 3).
To help explain a robot’s past behaviors, we describe in
this tool paper the implementation of a mixed-reality robot
behavior replay system that builds on previous work on
visualization-based Virtual Design Elements (VDEs) (Walker
et al. 2022). While previous VDEs in this category have
primarily sought to visualize future robot behaviors (Rosen
et al. 2019), we instead use this technique to visualize previously
executed behaviors. Our technique generalizes to the replay of
both manipulation and navigation behaviors (see Figures 2–4).
Our replay technique can also handle replay of non-physical
cues: verbalization (e.g., sound and speech) and visualization
(e.g., projector-based augmented reality) (Han et al.
2020b, 2022). Empirical evidence of the effectiveness and
efficiency of our approach in explaining past behavior has
arXiv:2210.00075v1 [cs.RO] 30 Sep 2022
Figure 2: Manipulation replay of picking a misrecognized object: Start, perceive, reach above, pick, reset. Both arm move-
ment and AR visualizations are replayed. The rectangular green area (bottom) shows the grasped object. The white area, projected
onto the two gearbox bottoms, shows correctly recognized objects. (Video: https://youtu.be/pj7-LqEsb94)
Figure 3: Navigation replay of a detour path: Start, rotate, detour, reach position, reach orientation. Both wheel movement and
AR visualizations were replayed. The yellow area (spheres of laser scan points; bottom middle) was projected to show a ground
obstacle, and purple arrows (path poses; bottom) were projected to show the past detour path. (Video: https://youtu.be/hV6jsA42YYY)
been presented in our previous work (Han and Yanco, under
review). While beyond the scope of this tool paper, we will
briefly mention the experimental results in Section 4.
We demonstrate our technique on a mobile manipula-
tor Fetch robot (Wise et al. 2016) using the widely-used
Robot Operating System (ROS) (Quigley et al. 2009), with
the robot behavior encoded in hierarchical behavior trees
(Colledanchise and Ögren 2018). Our use of ROS means
that our implementation is more-or-less platform agnostic,
as most current robots used in research and development
have ROS support (OpenRobotics 2022) or bridges (Scheutz
et al. 2019).
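The hierarchical behavior trees mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation (which follows Colledanchise and Ögren 2018 on a Fetch robot); it only shows the pattern of composing a task from separable, individually tickable sub-behaviors, with all action names hypothetical:

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2

class Action:
    """Leaf node wrapping a single robot sub-behavior."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def tick(self):
        return self.fn()

class Sequence:
    """Composite node: ticks children left to right, stopping at the
    first child that does not succeed."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

# A pick task decomposed into separable sub-behaviors (names hypothetical).
log = []

def step(name):
    log.append(name)
    return Status.SUCCESS

pick = Sequence([
    Action("perceive", lambda: step("perceive")),
    Action("reach", lambda: step("reach")),
    Action("grasp", lambda: step("grasp")),
])
result = pick.tick()  # log now records the executed order
```

Separating behaviors into such leaf nodes is what makes it possible to store, query, and replay each sub-behavior's messages independently.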
This work is beneficial to both manipulation and naviga-
tion researchers. In addition, our replay technique is helpful
for visual debugging for robot developers (Ikeda and Szafir
2022), and for explaining past behaviors to non-expert users.
2 Related Work:
Choosing Underlying Technologies
2.1 Robot Data Storage
To replay robot behavior, the first step is to store robot data.
One popular tool is rosbag1, which uses filesystems (bag
files) to store and play back ROS messages. Although bag
files persist on disk, the filesystem approach, unlike the
databases discussed below, makes it challenging to query specific
behaviors for replay, because related data
1https://wiki.ros.org/rosbag
in different bag files are unstructured and unlinked, requiring
custom code and logic to correlate them.
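To make this correlation burden concrete, here is a minimal sketch with hypothetical payloads: two topic streams persisted separately, as two bag files would be, need hand-written timestamp-matching logic before they can be replayed together.

```python
# Hypothetical payloads: two message streams persisted separately,
# as rosbag would store two topics in unlinked bag files.
joint_msgs = [(0.00, "arm@home"), (0.52, "arm@reach"), (1.01, "arm@grasp")]
marker_msgs = [(0.50, "ar:target_on"), (1.00, "ar:target_off")]

def correlate(stream_a, stream_b, tol=0.05):
    """Hand-written join logic: pair messages from the two streams
    whose timestamps fall within `tol` seconds of each other."""
    pairs = []
    for t_a, msg_a in stream_a:
        for t_b, msg_b in stream_b:
            if abs(t_a - t_b) <= tol:
                pairs.append((msg_a, msg_b))
    return pairs

pairs = correlate(joint_msgs, marker_msgs)
# pairs: [("arm@reach", "ar:target_on"), ("arm@grasp", "ar:target_off")]
```

A database with queryable, linked records removes the need for this kind of ad hoc join code, which motivates the choice discussed next.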
Thus, roboticists have been exploring database tech-
nologies. The schemaless MongoDB database is a popular
and justified choice among many researchers, e.g., Beetz,
Mösenlechner, and Tenorth (2010); Niemueller, Lakemeyer,
and Srinivasa (2012); Beetz, Tenorth, and Winkler (2015), to
store data from sensors or communication messages. Being
schemaless allows storing different data types without creat-
ing different data structures for different data messages, such
as tables in relational Structured Query Language (SQL)
databases, e.g., MySQL. Besides being numerous, robotics
data messages are often hierarchical/nested, as is common
in ROS messages such as the
PoseStamped message in the geometry_msgs package2. The
hierarchical PoseStamped message contains a Header mes-
sage to include a reference coordinate frame and a times-
tamp, and a Pose message to include a hierarchical Point
message for position information and a Quaternion message
for orientation information. Creating relational tables for
such nested data messages one by one would be tedious. The
advantage of a schemaless database is also known as minimal
configuration, allowing evolving data structures to support
innovation and development (Niemueller, Lakemeyer, and
Srinivasa 2012). In this work, we used the mongodb_log
library, open-sourced by Niemueller, Lakemeyer, and Srinivasa
(2012), with slight modifications to synchronize timing
of different timestamped ROS messages for replay.
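As a rough sketch of why schemaless storage suits such nested messages, the following stores a PoseStamped-like document and queries it with a MongoDB-style dotted path. It uses an in-memory list instead of a real MongoDB collection, and the field values are hypothetical; with pymongo, the same document shape would be passed directly to insert_one() and find():

```python
# In-memory stand-in for a schemaless collection; with pymongo the
# same nested document would go straight into insert_one()/find().
collection = []

def insert_one(doc):
    collection.append(doc)

def find(query):
    """Match MongoDB-style dotted paths, e.g. {"header.frame_id": ...}."""
    def get(doc, path):
        for key in path.split("."):
            doc = doc[key]
        return doc
    return [d for d in collection
            if all(get(d, k) == v for k, v in query.items())]

# A nested PoseStamped-like document (values hypothetical): no table
# schema is declared anywhere, unlike a relational database.
insert_one({
    "header": {"frame_id": "base_link", "stamp": 1664496000.0},
    "pose": {
        "position": {"x": 0.4, "y": 0.0, "z": 0.9},
        "orientation": {"x": 0.0, "y": 0.0, "z": 0.0, "w": 1.0},
    },
})

matches = find({"header.frame_id": "base_link"})
```

The nested document maps one-to-one onto the hierarchical message; a relational design would instead require separate tables for Header, Pose, Point, and Quaternion joined by foreign keys.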
2https://wiki.ros.org/geometry_msgs