In-Hand Manipulation of Unknown Objects with Tactile Sensing for Insertion
Chaoyi Pan, Marion Lepert, Shenli Yuan, Rika Antonova, Jeannette Bohg
Abstract—In this paper, we present a method to manipulate
unknown objects in-hand using tactile sensing without relying
on a known object model. In many cases, vision-only approaches
may not be feasible; for example, due to occlusion in cluttered
spaces. We address this limitation by introducing a method to
reorient unknown objects using tactile sensing. It incrementally
builds a probabilistic estimate of the object shape and pose
during task-driven manipulation. Our approach uses Bayesian
optimization to balance exploration of the global object shape
with efficient task completion. To demonstrate the effectiveness
of our method, we apply it to a simulated Tactile-Enabled Roller
Grasper, a gripper that rolls objects in hand while collecting
tactile data. We evaluate our method on an insertion task with
randomly generated objects and find that it reliably reorients
objects while significantly reducing the exploration time.
I. INTRODUCTION
This work studies how robots can reorient objects in-hand
with limited prior knowledge about object shape. Manipu-
lating objects of unknown shapes is a foundational skill to
perform tasks in unstructured environments. For example,
to be useful in a kitchen, a robot must be able to pick
up and manipulate a breadth of objects such as fruits and
vegetables whose shapes cannot be known beforehand and
fit them in constrained spaces such as a tightly packed drawer
in a fridge. While vision is a powerful modality to gain
information about novel objects, there are many scenarios
where vision cannot be used, such as cluttered spaces with
high occlusion and during manipulation of small objects
where the gripper will often occlude the object. Therefore,
robots must learn to manipulate objects with sensors other
than vision. Tactile sensing provides high-resolution local
information about contact between the object and gripper,
which is complementary to global vision information.
Object reorientation with tactile sensing is challenging be-
cause tactile data gives information about the object shape for
only a small contact patch area, so the object’s global shape
needs to be pieced together from limited local information.
In addition, the object may only have a limited set of features
that the tactile sensor can detect to distinguish between
different locations on the object. Moreover, the object may
shift slightly in-between tactile readings, which increases
uncertainty regarding the location of these readings. We seek
to overcome some of these challenges by leveraging the
The authors contributed equally; names are in random order. Chaoyi
Pan is with Tsinghua University. Marion Lepert, Shenli Yuan, Rika
Antonova, and Jeannette Bohg are with Stanford University [lepertm,
shenliy, rika.antonova, bohg]@stanford.edu.
Toyota Research Institute provided funds to support this work. We are
grateful to Shaoxiong Wang for help with the Roller Grasper. Rika Antonova
is supported by the National Science Foundation grant No.2030859 to the
Computing Research Association for the CIFellows Project.
Fig. 1: Left: Simulation of the Tactile-Enabled Roller Grasper demonstrating
a sequence of exploration steps that lead to successful insertion. Right: The
Tactile-Enabled Roller Grasper that we simulate and that inspired our work.
Tactile-Enabled Roller Grasper. This gripper rolls objects in
hand and continuously collects tactile data on the surface of
the rollers. Staying in contact with the object reduces uncer-
tainty between tactile measurements, and enables us to piece
together a sequence of local contact patches into a global
estimate of the object’s shape. Given this gripper, we propose
a method to reorient unknown objects by incrementally
building a probabilistic estimate of the object’s shape during
manipulation. Our method leverages Bayesian optimization
to strategically trade off exploration of the global object
shape with efficient task completion. We demonstrate our
approach on a simulated Tactile-Enabled Roller Grasper as
shown in Fig. 1.
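As a rough illustration of this exploration/exploitation trade-off, an upper-confidence-bound-style acquisition score can weigh predicted task progress against shape-estimate uncertainty. The sketch below is a minimal, hypothetical Python illustration; the UCB form, the function names, and the numbers are assumptions for exposition, not the paper’s exact acquisition function.

```python
import numpy as np

def ucb_acquisition(task_reward_mean, shape_uncertainty, kappa=1.0):
    """UCB-style score: prefer candidate actions that look promising for
    the task (exploitation) plus a bonus for uncertain, unexplored
    regions of the shape estimate (exploration)."""
    return task_reward_mean + kappa * shape_uncertainty

# Hypothetical numbers: three candidate reorientation actions.
mean = np.array([0.2, 0.5, 0.4])    # predicted task progress
sigma = np.array([0.8, 0.1, 0.3])   # shape-estimate uncertainty
best = int(np.argmax(ucb_acquisition(mean, sigma)))
```

With these numbers the first action wins: it is the least promising on predicted progress alone, but its high uncertainty makes it worth a touch.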
We evaluate our approach on an insertion task. Insertion
tasks are ubiquitous in many environments, such as assembly
tasks in factories, dense box packing in warehouses, and
plugging cables in the home. As a result, insertion tasks
continue to be heavily studied in robotics. In this paper, we
focus on finding the correct object orientation that will allow
the object to fit into a target hole. The robot has access to a parameterization of the hole’s contour, which provides a well-defined reorientation target without revealing the object’s 3D shape, preserving our assumption that the robot is working with an unseen object. We
evaluate our method in simulation on a set of randomly
generated objects and find that our method reliably completes
the insertion task while significantly reducing the exploration
time needed to do so.
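The fit test implied by this setup can be sketched as a purely geometric check: does the object’s cross-section, at a candidate orientation, lie inside the hole contour? The version below assumes a convex, counter-clockwise hole polygon and a point-sampled silhouette; both are simplifications of the parameterized contour described above.

```python
import numpy as np

def fits_in_convex_hole(silhouette, hole):
    """True iff every silhouette point lies inside a convex hole contour
    (vertices in counter-clockwise order), via a half-plane test on each
    contour edge."""
    silhouette = np.asarray(silhouette, float)
    hole = np.asarray(hole, float)
    for i in range(len(hole)):
        a, b = hole[i], hole[(i + 1) % len(hole)]
        edge = b - a
        rel = silhouette - a
        # cross product must be >= 0 for points on/left of each CCW edge
        if np.any(edge[0] * rel[:, 1] - edge[1] * rel[:, 0] < 0):
            return False
    return True

square_hole = np.array([[0, 0], [2, 0], [2, 2], [0, 2]])    # CCW square
small_part = np.array([[0.5, 0.5], [1.5, 0.5], [1.0, 1.5]])  # fits
wide_part = np.array([[0.5, 0.5], [2.5, 0.5]])               # pokes out
```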
To summarize, our main contributions are: (i) A system
to reorient unknown objects that does not rely on vision and
instead leverages tactile sensing; and (ii) A task-driven, in-hand method for simultaneous 3D shape reconstruction and localization that estimates an object’s shape and pose during manipulation.
arXiv:2210.13403v3 [cs.RO] 10 Mar 2023
II. RELATED WORK
A. In-hand manipulation
In-hand manipulation has been studied extensively [1]–[7].
Many approaches require object pose which can be obtained
with a marker tag [3], [4] or a 6D object pose estimator
[5]. Alternatively, deep learning methods can obtain an end-
to-end policy that does not require pose estimation. For
example, [1] learns a controller to reorient a Rubik’s cube
and [7] presents a controller that can reorient many distinct
objects, but both rely on vision and complex multi-fingered
grippers. In contrast, [2] demonstrates how simpler hardware
(parallel-jaw grippers) can use extrinsic dexterity to reorient
objects in-hand. Their approach is limited to simple cuboids
and requires a 6D object pose estimator. [6] shows how a
compliant gripper can robustly reorient objects in-hand using
handcrafted open-loop primitives that do not require object
pose or shape estimation.
B. Tactile Sensing for shape reconstruction and localization
Reconstructing object shape from vision and tactile data is
a common research objective [8]–[10]. These works typically
rely heavily on vision and tend to use tactile data simply
to detect contact. However, many scenarios with heavy
occlusion preclude good vision data, leading to a growing
interest in reconstructing and/or localizing objects with just
tactile data.
Low resolution tactile sensors [11]–[13] are typically used
to obtain binary contact information only. The development
of vision-based high-resolution tactile sensors, such as GelSight [14], has greatly increased our ability to reconstruct
and localize objects without vision. The GelSight outputs an
RGB image of the tactile imprint, and photometric stereo
is used to convert this image to a depth image. Others
learn this mapping from data [15], or choose instead to
learn a binary segmentation mask [16] to reduce noise. [14]
showed one of the first uses of the GelSight for small object
localization, further improved in [15], [17]. However, they
require building a complete tactile map of the object before
using it for localization.
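To illustrate the RGB-to-depth step mentioned above, the following minimal sketch inverts an assumed calibrated linear reflectance model to recover per-pixel surface gradients, then integrates them along a single path into a height map. Real GelSight pipelines use per-pixel lookup tables and Poisson integration; the linear model `L` and the single-path integration here are simplifying assumptions.

```python
import numpy as np

def rgb_to_depth(rgb, L):
    """Invert an assumed calibrated linear model rgb = L @ [p, q, 1] to
    recover per-pixel surface gradients (p, q), then integrate them into
    a height map along one path (down column 0, then across each row).
    rgb: (H, W, 3) tactile image, L: (3, 3) calibration matrix."""
    H, W, _ = rgb.shape
    g = rgb.reshape(-1, 3) @ np.linalg.inv(L).T  # rows ~ [p, q, 1]
    g = g / g[:, 2:3]                            # normalize homogeneous term
    p = g[:, 0].reshape(H, W)                    # dz/dx
    q = g[:, 1].reshape(H, W)                    # dz/dy
    return np.cumsum(q[:, :1], axis=0) + np.cumsum(p, axis=1)
```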
Most works assume each tactile imprint is collected by
making and breaking contact with the object. This introduces
uncertainty in the relative position of tactile imprints. Instead,
[13] maintains constant contact with the object to reduce
robot movement and obtain higher fidelity data. However,
they use a low resolution sensor and only consider a bowl
shaped object fixed in space. [18] slides along a freely moving cable with a GelSight sensor to estimate the cable’s pose, but their
method is specific to cables. By rolling objects in-hand, we
also continuously collect tactile data, but do so for a more
general set of freely-moving objects.
Most of these works narrow their scope either by recon-
structing the shape of an unknown object in a fixed pose,
or by localizing an object of known shape and unknown
pose. Recently, [19] proposed removing these limitations by
simultaneously doing shape reconstruction and localization
from tactile data. However, they only show results for
pushing planar objects.
Gaussian process implicit surfaces (GPIS) are commonly
used to build a probabilistic estimate of an object’s shape
[20]. [13], [21] use the variance of the GPIS to guide
the exploration towards regions with highest uncertainty.
However, the goal of these approaches is to reconstruct
the object shape as accurately as possible. In contrast, we
focus on exploration that prioritizes regions of the object
that are likely to aid in solving an insertion task, making
our exploration more efficient. [22] also prioritizes exploring
regions that help solve a task, but their approach is limited
to grasping an object that is fixed during exploration.
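A minimal GPIS sketch: Gaussian process regression over implicit-surface observations, where contact points carry value 0 (on the surface) and an interior point anchors the sign. The posterior variance is the quantity that variance-guided approaches such as [13], [21] use to pick the next touch. The kernel length-scale, noise level, and the toy circle data below are assumptions for illustration.

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel between two 2-D point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gpis_posterior(X_obs, y_obs, X_query, noise=1e-4):
    """GP regression on implicit-surface observations. Returns the
    posterior mean and variance of the surface value at X_query."""
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf(X_query, X_obs)
    mean = Ks @ np.linalg.solve(K, y_obs)
    cov = rbf(X_query, X_query) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Toy 2-D data: four contacts on a unit circle (value 0) plus an
# interior anchor point (value -1).
contacts = np.array([[1, 0], [0, 1], [-1, 0], [0, -1], [0, 0]], float)
values = np.array([0.0, 0.0, 0.0, 0.0, -1.0])
query = np.array([[1, 0], [2, 2]], float)  # a touched point vs. an unexplored one
mean, var = gpis_posterior(contacts, values, query)
```

The variance is near zero at the touched point and near the prior far from all contacts; task-driven exploration differs from pure shape reconstruction in how it scores that variance, not in the GPIS machinery itself.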
C. Insertion Task
Much prior work on the peg insertion task is object-geometry specific [23]–[25]. For example, [26] assumes a
cylindrical peg shape; [27] uses vision and tactile data to
enable insertion of complex shapes but assumes known peg
geometry; [28] learns from forces measured during human
demonstration, but requires successful demonstration of the
specific peg shape. In contrast, our work assumes no prior
knowledge of the peg shape, and instead learns an explicit
peg object model. There is some prior work that aims to be
robust to different peg geometries, such as [29], but the pegs
are fixed to the end-effector.
D. Roller Grasper
In this work, we use a simulated version of the Tactile-
Enabled Roller Grasper hardware from [30]. This gripper
has 7 degrees of freedom, shown in Fig. 3, and each roller is
equipped with a custom GelSight sensor [31]. The sensor’s
camera is inside the roller, fixed to the stator such that it
points towards the grasping point, while the elastomer covers
the rotating roller.
Because the Roller Grasper rolls objects in-hand, it can
continuously collect tactile information of an object’s sur-
face. This continuity reduces uncertainty in relative position
between tactile imprints, simplifying reconstruction of ob-
ject shape compared to linkage-based grippers that capture
discontinuous data.
Several works have studied the Roller Grasper. [3] pro-
posed a velocity controller and an imitation learning con-
troller to reorient objects. [30] built a closed loop controller
that keeps an object centered in the Roller Grasper during
manipulation using contact patch data from the tactile sensor.
We build on these works by using the velocity controller
to determine the roller action to produce a desired object
angular velocity. However, we do not assume a known
object model or use marker tags, and instead simultaneously
reconstruct and localize the object in a task-driven manner.
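The kinematic core of such a velocity controller can be sketched with the rolling-without-slip constraint: the roller surface velocity at the contact must match the object’s surface velocity there. The sketch below covers only this v = ω × r relation; mapping it onto the grasper’s pivot and base joints (as done in [3], [30]) is omitted.

```python
import numpy as np

def roller_surface_velocity(omega_obj, r_contact):
    """Rolling-without-slip: the roller surface velocity at the contact
    equals the object's surface velocity there, v = omega x r, where r
    points from the object center to the contact point."""
    return np.cross(omega_obj, r_contact)

# Spin the object at 1 rad/s about z with the contact 3 cm from the
# object center along x: the roller surface must move along y at 3 cm/s.
v = roller_surface_velocity(np.array([0.0, 0.0, 1.0]), np.array([0.03, 0.0, 0.0]))
```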
III. APPROACH
We propose a method that leverages tactile sensing to
reorient objects of unknown shapes in order to complete a
task. We focus on settings with high occlusion where the
robot cannot see the object it is manipulating, and must
instead rely on its sense of touch and proprioception. We