Force/Torque Sensing for Soft Grippers using an External Camera
Jeremy A. Collins1, Patrick Grady1, Charles C. Kemp1
[Fig. 1 image: the data capture setup pairs an eye-in-hand camera with a force/torque sensor; VFTS-Net maps the input image to a force and torque estimate (FX, FY, FZ, TX, TY, TZ).]
Fig. 1. We modify a soft robotic gripper by adding an eye-in-hand camera and a force/torque sensor. Data is collected by teleoperating the robot in a
variety of home and office settings. We train a network, VFTS-Net, to take images from the camera as input and output 3-axis forces and 3-axis torques.
Estimates from VFTS-Net are visualized as lightly shaded arrows, and ground truth measurements from the force/torque sensor are darkly shaded arrows.
Abstract: Robotic manipulation can benefit from wrist-mounted force/torque (F/T) sensors, but conventional F/T sensors can be expensive, difficult to install, and damaged by high loads. We present Visual Force/Torque Sensing (VFTS), a method that visually estimates the 6-axis F/T measurement that would be reported by a conventional F/T sensor. In contrast to approaches that sense loads using internal cameras placed behind soft exterior surfaces, our approach uses an external camera with a fisheye lens that observes a soft gripper. VFTS includes a deep learning model that takes a single RGB image as input and outputs a 6-axis F/T estimate. We trained the model with sensor data collected while teleoperating a robot (Stretch RE1 from Hello Robot Inc.) to perform manipulation tasks. VFTS outperformed F/T estimates based on motor currents, generalized to a novel home environment, and supported three autonomous tasks relevant to healthcare: grasping a blanket, pulling a blanket over a manikin, and cleaning a manikin's limbs. VFTS also performed well with a manually operated pneumatic gripper. Overall, our results suggest that an external camera observing a soft gripper can perform useful visual force/torque sensing for a variety of manipulation tasks.
I. INTRODUCTION
During robotic manipulation, grippers often apply forces
and torques to the environment. Sensing the force and torque
applied by the gripper has been useful for autonomous
manipulation, but sensors that provide this information have
limitations. Notably, conventional F/T sensors can be expensive, difficult to mount, and damaged by high loads.
For example, a common approach to measuring the load applied to a gripper is to mount an F/T sensor between the gripper and the robot's wrist. F/T sensors often use strain gauges to sense tiny deformations in an elastic element of the sensor. This approach requires that the strain gauges be resilient to the external load applied to the gripper as well as gravitational and inertial forces from the gripper itself. For many applications, the strain gauges need to be both stiff and sensitive, and protective coverings could reduce performance by interfering with the load on the strain gauges. Together, these design objectives are difficult to achieve.
1Jeremy A. Collins, Patrick Grady, and Charles C. Kemp are with the Institute for Robotics and Intelligent Machines at the Georgia Institute of Technology (GT). This work was supported by NSF Award # 2024444. Code, data, and models are available at https://github.com/Healthcare-Robotics/visual-force-torque. Charles C. Kemp is an associate professor at GT. He also owns equity in and works part-time for Hello Robot Inc., which sells the Stretch RE1. He receives royalties from GT for sales of the Stretch RE1.
We present an alternative to conventional F/T sensors.
Instead of relying on the deformation of internal components, VFTS directly observes the deformation of a soft
gripper using an external camera. The high compliance of
soft grippers results in deformations that can be visually
observed using a commodity camera. By observing this load-
dependent phenomenon, the causative forces and torques can
be estimated. We rigidly mount a camera with a fisheye lens
to the gripper (i.e., an eye-in-hand camera). We then train
a convolutional neural network, VFTS-Net, to estimate the
applied force and torque based on a single RGB image from
this camera (Figure 1).
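For concreteness, the network's output can be thought of as a 6-vector (FX, FY, FZ, TX, TY, TZ). A minimal container for such an estimate, with the force and torque magnitudes one might threshold on, could look like the following. The class and method names are our own illustration, not part of the released code:

```python
from dataclasses import dataclass
import math

@dataclass
class Wrench:
    """A 6-axis force/torque estimate, in the axis order of Fig. 1."""
    fx: float  # force along X (N)
    fy: float  # force along Y (N)
    fz: float  # force along Z (N)
    tx: float  # torque about X (N*m)
    ty: float  # torque about Y (N*m)
    tz: float  # torque about Z (N*m)

    def force_magnitude(self) -> float:
        """Euclidean norm of the force components."""
        return math.sqrt(self.fx**2 + self.fy**2 + self.fz**2)

    def torque_magnitude(self) -> float:
        """Euclidean norm of the torque components."""
        return math.sqrt(self.tx**2 + self.ty**2 + self.tz**2)
```

For example, `Wrench(3.0, 4.0, 0.0, 0.0, 0.0, 0.5).force_magnitude()` evaluates to 5.0.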
In contrast with conventional F/T sensors, our approach
relies on a low-cost USB camera ($60). Our method eases
installation by allowing the camera to be mounted to the
exterior of the gripper rather than between the gripper and
the wrist. Since the camera visually senses the loads from a
distance, it is also less likely to be damaged by high loads.
Researchers have investigated related methods that involve
placing a camera inside a gripper behind a compliant surface.
Loads applied can be estimated by observing deformation in
the surface. In contrast, our approach uses an external camera
to observe a soft gripper. Our approach does not require
modification of the gripper’s contact surfaces or interior. The
global view of the external camera facilitates estimation of the total force and torque applied to the soft gripper.
arXiv:2210.00051v3 [cs.RO] 8 May 2023
Fig. 2. a) The right-handed coordinate frame used in this paper is shown. Torques are drawn as curved arrows, while forces are drawn as straight arrows. We use the colors R/G/B to denote the X/Y/Z axes, respectively. b) Under an applied force, the tendon-actuated gripper's flexures and fingertip deform against the surface. c) The fingers of the pneumatic gripper are highly compliant and deform uniformly under an applied force. d) We collected data for the tendon-actuated gripper by teleoperating the robot in a variety of settings, including a real home.
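The right-handed convention in Fig. 2a can be sanity-checked numerically: in a right-handed frame, the cross product of the X and Y unit vectors equals the Z unit vector. A small self-contained check:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

X = (1.0, 0.0, 0.0)
Y = (0.0, 1.0, 0.0)
Z = (0.0, 0.0, 1.0)
assert cross(X, Y) == Z  # right-handed frame: X x Y = Z
```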
We provide evidence for the feasibility of our approach
by collecting a dataset of in-the-wild robotic manipulation
in multiple environments and testing on held-out data from
novel environments with manipulation of unseen objects. We
also provide an analysis indicating that lower performance
corresponds with types of gripper deformation that are more
difficult to visually observe.
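The paper's exact evaluation metrics are not reproduced in this excerpt, but a common way to score 6-axis F/T estimates against a ground-truth sensor on held-out frames is a per-axis root-mean-square error; a sketch:

```python
import math

def per_axis_rmse(preds, truths):
    """RMSE for each of the 6 F/T axes.

    preds, truths: equal-length lists of 6-element sequences,
    each ordered (Fx, Fy, Fz, Tx, Ty, Tz).
    """
    n = len(preds)
    return [
        math.sqrt(sum((p[k] - t[k]) ** 2 for p, t in zip(preds, truths)) / n)
        for k in range(6)
    ]
```

A perfect estimator yields zero on every axis; axes whose deformations are hard to see would show larger per-axis error under this kind of metric.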
VFTS outperformed a baseline method that uses motor
currents to estimate 6-axis F/T measurements. We also
provide evidence that the estimates from VFTS can support
autonomous manipulation by enabling a mobile manipulator
to perform three autonomous tasks. We additionally show
that VFTS-Net can be trained to work with two distinct soft
grippers: a tendon-actuated gripper and a pneumatic gripper.
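The motor-current baseline is not specified in detail in this excerpt, but such baselines typically convert a measured current to joint torque via a motor torque constant and then to an applied force through the mechanism's geometry. A sketch with made-up illustrative constants, not Stretch RE1 parameters:

```python
def force_from_current(current_a, torque_constant=0.05, lever_arm_m=0.1):
    """Crude force estimate from a single motor current reading.

    torque = K_t * i, then force ~= torque / lever arm.
    Both default constants are hypothetical, for illustration only.
    """
    torque_nm = torque_constant * current_a  # N*m
    return torque_nm / lever_arm_m           # N
```

A key limitation of this kind of baseline is that friction, gravity, and gripper dynamics also draw current, which is one reason a visual estimator can outperform it.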
In summary, our paper includes the following contributions:
- We present Visual Force/Torque Sensing (VFTS), a method that uses a convolutional neural network to estimate the forces and torques exerted on a soft gripper given an RGB image from an eye-in-hand camera.
- We demonstrate the utility of VFTS for real-world applications with a series of robotic tasks conducted with a mobile manipulator.
- We will release our code, data, and trained models.
II. RELATED WORK
A. Tactile Sensing using Physical Sensors
A wide range of methods have been proposed for robotic
tactile sensing. Sensors have been developed to measure
vibration [1], temperature [2], or pressure inside a fluid-filled
cavity [3], [4]. To enable safe operation around humans,
some collaborative robots (cobots) have been constructed
with actuator torque sensors [5], [6]. Other research has used
the compliant joints of a robot for force estimation [7].
Six-axis force/torque sensors are ubiquitous in both research and industrial applications. These sensors function
by measuring elastic deformation in the structure of the
sensor [8]. Force/torque sensors are often placed between
the robot arm and end-effector, and they have been used
for a wide array of applications such as robotic surgery
[9], [10], humanoid robot locomotion [11], cloth manipulation [12], and robot-assisted dressing and feeding [13],
[14]. Force/torque sensors are also widely used in industrial
applications, including sanding [15], deburring [16], part
alignment [17], and robot teaching [18].
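As a concrete illustration of how such sensors work, the raw strain-gauge signals s are typically mapped to a wrench w through a factory-supplied 6x6 calibration matrix, w = C s. A plain-Python sketch (a real sensor's matrix comes from calibration; the identity below is only a stand-in):

```python
def wrench_from_gauges(C, s):
    """Map 6 raw strain-gauge signals s to a 6-axis wrench via w = C s.

    C: 6x6 calibration matrix (list of rows); s: length-6 signal vector.
    """
    return [sum(C[i][j] * s[j] for j in range(6)) for i in range(6)]

# With an identity stand-in calibration matrix, the wrench equals the signals.
I6 = [[1.0 if i == j else 0.0 for j in range(6)] for i in range(6)]
```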
B. Tactile Sensing using Vision
We distinguish between two types of vision-based approaches to understanding tactile signals: those using internal sensors versus those using external sensors.
Researchers have developed tactile sensors that use a
camera mounted inside a robot to perceive a soft exterior.
Prior techniques have used dot patterns [19], [20], [21],
photometric stereo [22], or fiducial markers [23] to track
the motion of the exterior and infer deflection at a high
resolution. A variety of other optical techniques have been
proposed [24] to sense deformation from internal sensors.
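A common primitive in these internal-camera sensors is tracking the displacement of markers between a reference image and a deformed one; the mean displacement gives a coarse deflection signal. A sketch, assuming marker positions have already been detected and matched (detection itself is outside this snippet):

```python
def mean_displacement(ref_pts, cur_pts):
    """Average 2D displacement between matched marker positions.

    ref_pts, cur_pts: equal-length lists of (x, y) tuples in pixels.
    """
    n = len(ref_pts)
    dx = sum(c[0] - r[0] for r, c in zip(ref_pts, cur_pts)) / n
    dy = sum(c[1] - r[1] for r, c in zip(ref_pts, cur_pts)) / n
    return dx, dy
```

Per-marker displacement fields (rather than the mean) are what give these sensors their high spatial resolution.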
Other work uses cameras mounted external to the robot
to estimate forces. A common technique is to leverage the
deformation caused by a rigid gripper making contact with
soft objects. This approach has seen success for estimating
forces on soft tissue during surgery [25], [26] and manipulation of soft household objects [27]. Other work has used
the trajectory of grasped objects to infer the net forces and
moments necessary to cause this motion [28], [29].
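The trajectory-based approach amounts to Newton's second law: differentiate the tracked positions twice to get acceleration, then multiply by the object's mass. A 1-D sketch using central differences (the sampling period and mass are hypothetical inputs):

```python
def net_force(positions, dt, mass):
    """Net 1-D force at interior samples via central-difference acceleration.

    positions: uniformly sampled 1-D positions (m)
    dt: sample period (s); mass: object mass (kg)
    """
    forces = []
    for k in range(1, len(positions) - 1):
        accel = (positions[k + 1] - 2 * positions[k] + positions[k - 1]) / dt**2
        forces.append(mass * accel)
    return forces
```

For a quadratic trajectory x = t^2 sampled at dt = 1 s, the recovered acceleration is the constant 2 m/s^2, so a 3 kg object yields 6 N at every interior sample.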
The deformation caused by a soft body interacting with
a rigid environment can also be used to perceive force for
human hands and bodies [30], [31]. Urban et al. [32] use a
camera mounted to a human fingernail to observe changes
in appearance to estimate force and torque.
This paper builds upon work for estimating the contact
pressure applied by a soft gripper to a planar surface [33].
They use external static cameras in a controlled environment
to observe deformations in the part of the gripper contacting
the surface. Our work estimates forces and torques measured
at the base of the gripper in uncontrolled environments
and only uses sensors mounted to the robot.