In-Hand Gravitational Pivoting Using Tactile Sensing
Jason Toskov
Monash University
jtos0003@student.monash.edu
Rhys Newbury
Monash University
rhys.newbury@monash.edu
Mustafa Mukadam
Meta AI
mukadam@fb.com
Dana Kulić
Monash University
dana.kulic@monash.edu
Akansel Cosgun
Monash University
akansel.cosgun@monash.edu
Abstract: We study gravitational pivoting, a constrained version of in-hand ma-
nipulation, where we aim to control the rotation of an object around the grip
point of a parallel gripper. To achieve this, instead of controlling the grip-
per to avoid slip, we embrace slip to allow the object to rotate in-hand. We
collect two real-world datasets, a static tracking dataset and a controller-in-the-
loop dataset, both annotated with object angle and angular velocity labels. Both
datasets contain force-based tactile information on ten different household ob-
jects. We train an LSTM model to predict the angular position and velocity of
the held object from purely tactile data. We integrate this model with a con-
troller that opens and closes the gripper allowing the object to rotate to de-
sired relative angles. We conduct real-world experiments where the robot is
tasked to achieve a relative target angle. We show that our approach outper-
forms a sliding-window based MLP in a zero-shot generalization setting with
unseen objects. Furthermore, we show a 16.6% improvement in performance
when the LSTM model is fine-tuned on a small set of data collected with both
the LSTM model and the controller in-the-loop. Code and videos are available at
https://rhys-newbury.github.io/projects/pivoting/
Figure 1: We study in-hand manipulation to rotate an object with gravitational pivoting. We design
an LSTM model to predict the position and velocity of an object purely from tactile sensing. Our
grip controller uses this prediction to modulate the width of the gripper and achieve a target angle.
1 Introduction
The majority of past works in robotic manipulation either assume fixed grasps [1, 2] or aim to avoid
any slipping during manipulation [3]. We instead embrace slip, using it to increase the dexterity of
simple grippers. This idea was explored by Chen et al. [4], where the grip on an object is loosened
to let it slip downwards under gravity. This paper explores a
method of inducing rotation for in-hand manipulation using gravity. Such manipulation, known as
pivoting, is key to performing tasks requiring an object to be at a specific relative angle to a gripper,
such as stacking shelves [5].
This paper proposes a method for a robot with a parallel gripper to rotate a long object, grasped away
from its center of mass, to a desired final relative orientation. This allows parallel grippers to
robustly reorient objects without regrasping them. To achieve this task, we address two challenges:
tracking the angular position of the object and controlling the gripper to allow gravitational
pivoting towards the target angle.
Vision-based methods to track the object often make use of an eye-in-hand camera. However, the
gripper will often occlude the object, making it difficult to estimate the angle of the object accu-
rately [6]. An alternative is to use an externally placed camera. However, this necessitates the robot
moving to a fixed position in front of the camera for each manipulation. We use purely tactile infor-
mation to track the object to avoid these issues. We design an LSTM-based neural network model,
RSE-LSTM, which uses tactile information to predict a held object's relative angular position and
angular velocity.
Previous approaches to controlling the gripper are often model-based, requiring
information about the object, such as shape, mass, and friction. In contrast, we design a simple
gripper controller that assumes no a priori knowledge about the object parameters to allow for gen-
eralization to unseen objects.
We collect a real-world force-based tactile dataset on ten household objects. This dataset is anno-
tated with both angular position and velocity measurements. RSE-LSTM is trained on this dataset,
and the results are reported with respect to both unseen data and unseen objects. We further validate
our approach experimentally on unseen objects.
The contributions of our paper are threefold:
• An annotated dataset containing gravitational pivoting with 10 household objects.
• An LSTM-based neural network which can predict both the velocity and angle of an object using
only tactile information.
• A grip controller, which can adjust the width of the gripper to allow an object to pivot in-hand to
achieve a required relative angle.
2 Related Works
Slip measurement. Slip detection is often framed as a binary problem, with machine learning
models predicting either slip or no slip. This is achieved with the use of visual sensors [7], force-
based sensors [8], or optical sensors [9]. Various machine learning techniques have been used,
including Support Vector Machines [10, 9], MLPs [11], and LSTM models [12]. Alternatively,
Convolutional Neural Networks (CNN) have been used to both detect and classify the type of slip
as either translational or rotational [13]. LSTM models have been used to determine the direction
of rotational slip [14], or the overall direction of the combination of rotational and translational
slip [15].
The domain of quantitative slip measurement is comparatively underexplored. Previous works mea-
sure the amount of translational slip using image-based tactile sensors [16]. Alternatively, visual
gel-based tactile sensors have been used to measure the rotation angle using a model-based ap-
proach [17]. However, to our knowledge, the use of force-based tactile sensing has not been ex-
plored, which is the focus of this paper.
Induced rotation. To induce rotation in a held object, previous work has made use of the external
environment to apply a torque or force on the held object [18,19,20]. However, rotation can also
be induced without any interaction with external objects. For example, the robot can perform a
swinging motion using the end-effector, where the velocity of the swing aims to bring an object to a
desired angle [21,22,23].
Alternatively, by loosening the grip on the object, gravity can be used to induce a rotation in the
object [24,25,26,27,28,29]. These approaches are model-based and rely on prior knowledge of
important parameters of the system, such as shape, mass and friction of the held object. Our work
assumes no prior knowledge about such parameters to allow for generalization to unseen objects.
Figure 2: Gravitational pivoting occurs when a parallel-jaw gripper loosens its grip on an object
such that gravity induces a torque on the object, causing it to rotate.
Induced translational slip. Shi et al. [30] design a model-based approach to induce translational
slip by accelerating the gripper, causing the held object to slide in-hand in a desired way.
Chen et al. [4] train an MLP to predict the velocity of an object which is undergoing translational
slip. They feed the MLP the previous one second of observations to predict the sliding velocity of
the object. A controller is then designed to achieve a target sliding velocity for the object. Our work
extends this to the rotational case and makes use of an LSTM rather than providing a fixed-length
history. This allows the network to learn an encoding of the history, which may be more informative.
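To make the contrast concrete, here is a minimal sketch of the fixed-window input such a sliding-window MLP consumes; the one-second window at 60 Hz mirrors the description above, but the function and shapes are our own illustration, not the implementation of [4]:

import numpy as np

def sliding_windows(tactile_seq: np.ndarray, rate_hz: int = 60) -> np.ndarray:
    """Stack the previous one second of tactile frames into flat MLP inputs.

    tactile_seq: (T, D) array of per-timestep tactile features, with T >= rate_hz.
    Returns a (T - rate_hz + 1, rate_hz * D) array; each row is one window.
    """
    T, D = tactile_seq.shape
    return np.stack([tactile_seq[t - rate_hz:t].reshape(-1)
                     for t in range(rate_hz, T + 1)])

An LSTM instead carries its history in a hidden state, so each inference step consumes only the current D-dimensional frame.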
3 Problem Definition
Gravitational pivoting is a form of in-hand manipulation, where an object is rotated in hand by
loosening the grip on the object. Consider a static gripper which holds an object away from its
center-of-mass. If the gripper is closed tightly enough, the object should remain stationary inside
the gripper. However, if the grip on the object is loosened slightly, gravity will induce a torque on
the object, causing it to pivot inside the gripper.
The coordinate system for this problem is defined in Figure 2. Gravity is defined to work in the
negative y-direction in the global frame. We also define a rotating object-centric coordinate frame,
where the y-axis is aligned with the long axis of the object.
Figure 3: The hardware setup for both
data collection and experiments.
Specifically, we consider pivoting tasks as follows. The
robot starts with an object in hand, grasped away from
the center-of-mass. The task is to rotate the object by
a relative angle around the object-centric z-axis. We
consider relative angles (α) in the range [0, 180] degrees,
since we only allow the object to fall and rotate under
gravity. We assume that the robot has no a priori
knowledge of the object and only has access to tactile
information from the fingertips, which aims to allow
generalization to unseen objects. We constrain the type of
objects to ones that have a prism-like shape, where one
dimension, length, is much larger than the other two,
width and depth. When these objects are held away
from the center-of-mass, the torque induced on the ob-
ject by gravity will be much larger than the downwards
force at the contact points. Therefore, the objects are
more likely to undergo rotational slip when the grip is loosened, and the translational slip is assumed
to be negligible. Prism-like objects include common household objects, such as bottles, boxes, and
tools (such as hammers).
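As a rough sanity check (a simplified friction model in our own notation, not taken from the paper): let $m$ be the object mass, $d$ the horizontal distance from the grip axis to the center of mass, $N$ the normal force per finger, $\mu$ the friction coefficient, and $r$ the effective radius of each contact patch. The object pivots while remaining in the grasp when
$$ m g d > 2\mu N r \quad \text{(gravity out-torques contact friction)}, \qquad 2\mu N \gtrsim m g \quad \text{(translational slip negligible)}. $$
Grasping a long object far from its center of mass makes $d$ large while $r$ stays small, which is why such objects tend to rotate rather than slide.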
4 Data Collection
4.1 Hardware Setup
The hardware consists of a Robotiq 2F-85 parallel-jaw gripper mounted on a table-mounted UR5 robot
arm. On each jaw of the gripper is a PapillArray tactile sensor [31]. The gripper has a maximum
width of 85 mm and can be controlled in 256 increments, for a resolution of 0.33 mm per increment.
The PapillArray sensors consist of 9 pillars, arranged in a 3 by 3 square, where each pillar provides
a force and displacement measurement in each direction, and whether the pillar is in contact with
the object. The sensor also provides global forces and torques in each direction, for a total of 142
measurements over the two sensors. In addition, an OAK-D camera is positioned to provide a side-
on view of the robotic arm and held object, which is used to record the ground-truth angle
(more details of the ground-truth collection are provided in Section 4.4). The hardware setup for both
data collection and experiments is shown in Figure 3.
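A minimal sketch of how the per-timestep tactile feature vector could be assembled; the field names and channel ordering are assumptions (the paper only states the 142-channel total across both sensors), not the vendor API:

import numpy as np

def tactile_features(sensor_msgs) -> np.ndarray:
    """Flatten a pair of PapillArray readings into one feature vector.

    Each hypothetical per-sensor message carries 9 pillars (3D force,
    3D displacement, contact flag) plus a global force/torque wrench.
    """
    feats = []
    for msg in sensor_msgs:                     # one message per jaw
        for pillar in msg.pillars:              # 3x3 pillar array
            feats.extend(pillar.force)          # fx, fy, fz
            feats.extend(pillar.displacement)   # dx, dy, dz
            feats.append(float(pillar.in_contact))
        feats.extend(msg.global_force)          # Fx, Fy, Fz
        feats.extend(msg.global_torque)         # Tx, Ty, Tz
    return np.asarray(feats, dtype=np.float32)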
4.2 Object set
We consider the pivoting task for a set of 10 different household objects. Similar to the object
set used by Chen et al. [4], we consider two classes of objects: box-like objects and cylinder-like
objects, with five objects in each class. The set of objects used is shown in Figure 4.
4.3 Methodology
Figure 4: Objects used in this dataset. We distinguish between two classes of objects: box-like
objects and cylinder-like objects. We refer to the names of these objects throughout the paper. The
names are, from left to right (back row): Toothpaste, Earbud, Breadboard, Magnet, Deodorant,
Spray2, Shampoo, Spray1, Pill. The object at the front is labeled Toothbrush.
A systematic methodology, outlined in the supplementary material, is used to collect a dataset of
gravitational pivoting. We use two different methodologies for controlling the gripper during
rotation of the object:
• Rotate To Stop: The gripper is opened a fixed amount and the object is allowed to rotate until it
comes to rest.
• Angle Goal: The controller (described in Section 5.2) is tasked to stop the object at an angle
(α_stop) drawn from a set of angles (A_stop). The controller receives ground-truth angle readings
for the purposes of data collection (a simplified sketch of this loop is given at the end of this
subsection).
To create a larger variation of friction properties, we collect data both with and without a layer of
masking tape added to each object's surface. In total, 595 'Rotate To Stop' and 971 'Angle Goal'
sequences were collected after filtering invalid datapoints. The dataset is attached in the
supplementary material.
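A simplified stand-in for the 'Angle Goal' collection loop, for illustration only: the real controller is described in Section 5.2, and the gripper interface, tolerance, and loop rate used here are hypothetical.

import time

def collect_angle_goal(gripper, get_gt_angle, alpha_stop_deg, tol_deg=2.0, rate_hz=60):
    """Loosen the grip until the ground-truth angle nears alpha_stop, then re-grip.

    `gripper` exposes hypothetical open(increments)/close() commands, and
    `get_gt_angle` returns the camera-based ground-truth angle in degrees.
    No timeout handling, for brevity.
    """
    angles = []
    gripper.open(1)                      # loosen by one width increment to start the pivot
    while get_gt_angle() < alpha_stop_deg - tol_deg:
        angles.append(get_gt_angle())    # record the angle trajectory for the dataset
        time.sleep(1.0 / rate_hz)
    gripper.close()                      # re-grip to stop the rotation at the goal
    return angles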
4.4 Ground Truth Annotation
To measure the ground-truth rotation of the grasped object, two distinctly colored blobs are
attached to each object. An external camera observes these blobs, and we define the object-centric
y-axis as the line between the centroids of the two blobs. The orientation change between the initial
and current object-centric y-axis is used as the ground-truth angle. The angular position and
velocity are filtered to ensure they follow a smooth signal. The details of the filter are in the
supplementary material.
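A sketch of this annotation step, assuming two distinctly colored markers and fixed HSV threshold bands; all threshold names and values are placeholders:

import cv2
import numpy as np

def blob_centroid(bgr, hsv_lo, hsv_hi):
    """Centroid (x, y) of the pixels inside an HSV threshold band."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_lo, hsv_hi)
    m = cv2.moments(mask)                  # assumes the blob is visible (m00 > 0)
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def object_axis_angle(bgr, band_a, band_b):
    """Angle (degrees) of the object-centric y-axis, blob A -> blob B.

    Note: image y points down, so the sign convention is fixed downstream.
    """
    v = blob_centroid(bgr, *band_b) - blob_centroid(bgr, *band_a)
    return np.degrees(np.arctan2(v[1], v[0]))

# Relative ground-truth angle: current frame angle minus initial frame angle,
# then filtered (filter details are in the supplementary material).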
5 Proposed Approach
Our approach consists of two main components. A Rotational Slip Estimator LSTM (RSE-LSTM)
and a Grip Controller. The system diagram is shown in Figure 1. From purely tactile information,
the RSE-LSTM estimates the relative angle change between the initial and current object-centric
y-axis. The Grip Controller uses this estimate, exploiting gravitational pivoting to reach a desired
rotation relative to the initial angle.
5.1 Rotational Slip Estimator
The RSE-LSTM uses measurements from the tactile sensors and predicts both the current angular
velocity (ω) and relative angle of the object (α) as the outputs. We found that calculating both α and
ω improved the results of the model (Section 6.3).
The RSE-LSTM model consists of an LSTM whose outputs are passed to an MLP. Instead of using
a sliding window (as in [4]), the hidden state of the LSTM could allow the model to use a longer
history more effectively by learning a more feature-rich representation. The model runs at 60 Hz
due to hardware limitations in training data collection.
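A minimal PyTorch sketch of this architecture; the hidden sizes and head depth are assumptions, not the paper's hyperparameters:

import torch
import torch.nn as nn

class RSELSTM(nn.Module):
    """LSTM over tactile frames with an MLP head predicting (α, ω) per step."""

    def __init__(self, n_tactile: int = 142, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(n_tactile, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, 64), nn.ReLU(),
            nn.Linear(64, 2),                # [relative angle α, angular velocity ω]
        )

    def forward(self, x: torch.Tensor, state=None):
        # x: (batch, time, n_tactile); `state` carries (h, c) between calls
        out, state = self.lstm(x, state)
        return self.head(out), state         # (batch, time, 2), updated state

# At 60 Hz inference, feed one frame at a time and keep the recurrent state:
# pred, state = model(frame.view(1, 1, -1), state)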