Learning Feasibility of Factored Nonlinear Programs in Robotic
Manipulation Planning
Joaquim Ortiz-Haro1, Jung-Su Ha1, Danny Driess1, Erez Karpas2, Marc Toussaint1
Abstract: A factored Nonlinear Program (Factored-NLP) explicitly models the dependencies between a set of continuous variables and nonlinear constraints, providing an expressive formulation for relevant robotics problems such as manipulation planning or simultaneous localization and mapping. When the problem is over-constrained or infeasible, a fundamental issue is to detect a minimal subset of variables and constraints that are infeasible. Previous approaches require solving several nonlinear programs, incrementally adding and removing constraints, and are thus computationally expensive. In this paper, we propose a graph neural architecture that predicts which variables and constraints are jointly infeasible. The model is trained with a dataset of labeled subgraphs of Factored-NLPs and, importantly, can make useful predictions on larger factored nonlinear programs than the ones seen during training. We evaluate our approach in robotic manipulation planning, where our model is able to generalize to longer manipulation sequences involving more objects and robots, and to different geometric environments. The experiments show that the learned model accelerates general algorithms for conflict extraction (by a factor of 50) and heuristic algorithms that exploit expert knowledge (by a factor of 4).
I. INTRODUCTION
Computing values for a set of variables that fulfil all the constraints is a key problem in several applications, such as robotics, planning, and scheduling. In discrete domains, these problems are generally known as Constraint Satisfaction Problems (CSPs), which also include classical combinatorial optimization problems like k-coloring, maximum cut, or Boolean satisfiability (SAT). In continuous domains, the dependencies between a set of continuous variables and nonlinear constraints can be modelled with a factored nonlinear program (without a cost term or with a small regularization), which has applications in manipulation planning [1] or simultaneous localization and mapping [2].
When a problem is over-constrained or infeasible, a fundamental challenge is to extract a minimal conflict: a minimal subset of variables and constraints that are jointly infeasible. These conflicts usually provide an explanation of the failure that can be incorporated back into iterative solvers, for example in the conflict-driven clause learning algorithm for SAT problems [3], [4], or in conflict-based solvers for Task and Motion Planning (TAMP) in robotics [5], [6], [7], [8], [9].
In continuous domains, extracting conflicts requires solving multiple nonlinear programs (NLPs), adding and removing constraints. While the number of NLPs required grows logarithmically with the number of nonlinear constraints, each call to a nonlinear solver is usually expensive.

1 TU Berlin, Germany. 2 Technion, Israel.
This research has been supported by the German-Israeli Foundation for Scientific Research (GIF) grant I-1491-407.6/2019. Joaquim Ortiz-Haro and Danny Driess thank the International Max-Planck Research School for Intelligent Systems (IMPRS-IS) for the support.

Fig. 1: Overview of our approach to detect minimal infeasible subgraphs in a Factored-NLP. (a) The input of the model is a Factored-NLP; circles represent variables and squares represent constraints. (b) We perform several iterations of neural message passing using the structure of the NLP. (c) The network outputs the probability that a variable belongs to a minimal infeasible subgraph. (d) We extract several minimal infeasible subgraphs with a connected component analysis.
In this paper, we propose a neural model to predict
minimal infeasible subsets of variables and constraints from
a factored nonlinear program. The input to our model is
directly the factored nonlinear program, including semantic
information on variables and constraints (e.g. a class label)
and a continuous feature for each variable (which, for instance, can be used to encode the geometry of a scene
in robotics). The graph structure of the factored nonlinear
program is exploited for performing message passing in the
neural network. Finding the minimal infeasible subgraph (i.e.
a subset of variables and constraints of the Factored-NLP)
is cast as a variable classification problem, and the predicted
infeasible subsets are extracted with a connected components
analysis. An overview of our approach is shown in Fig. 1.
The prediction of the graph neural network can be naturally integrated into an algorithm to detect minimal infeasible subgraphs, providing a significant speedup with respect to previous methods, both general conflict extraction algorithms and expert algorithms that use domain knowledge.
Our approach follows a promising trend in robotics to
combine optimization and learning [10], [11], [12], [13],
[14]. In this paradigm, a dataset of solutions to similar
problems is used to accelerate optimization methods, making
expensive computations tractable and enabling real-time solutions to combinatorial and large-scale optimization. Therefore, we assume that a dataset of labeled factored nonlinear
programs (generated offline with an algorithm for conflict
extraction) is available. Each example consists of a factored
nonlinear program and a set of minimal infeasible subgraphs.
As an application, we evaluate our method in robotic
sequential manipulation. Finding minimal conflicts is a fundamental step in conflict-based solvers, usually accounting
for most of the computational time.
From a robotics perspective, our novel contribution is to
use the structure of the nonlinear program formulation of
manipulation planning for message passing with a graph
neural network. To this end, we first formulate the motion
planning problem that arises from a high-level manipulation
sequence (symbolic actions such as pick or place) as a
factored nonlinear program [1]. Variables correspond to the
configuration of objects and robots at each step of the motion
and the nonlinear constraints model kinematics, collision
avoidance and grasping constraints. When combined with our
learned model, we get strong generalization capabilities: the model predicts minimal infeasible subgraphs for manipulation sequences of different lengths in different scenes, involving a different number of objects and robots.
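For concreteness, the sketch below shows how such a Factored-NLP graph could be assembled for a short pick-and-place sequence. The variable and constraint names (q_robot_t, p_obj_t, kin_t, coll_t, grasp_1, place_2) are hypothetical placeholders, not the exact factorization used in the paper.

```python
# Illustrative sketch of a Factored-NLP for a two-step sequence
# "pick(obj), place(obj)": one configuration variable per robot/object and
# time step, and one constraint node per kinematic, grasp, placement, or
# collision condition, connected to the variables it evaluates.
import networkx as nx

def build_factored_nlp(num_steps: int = 2) -> nx.Graph:
    g = nx.Graph()
    for t in range(num_steps + 1):                 # configurations at each step
        g.add_node(f"q_robot_{t}", kind="var")     # robot joint configuration
        g.add_node(f"p_obj_{t}", kind="var")       # object pose
    for t in range(num_steps):
        # robot kinematics / trajectory feasibility between consecutive steps
        g.add_node(f"kin_{t}", kind="con")
        g.add_edges_from([(f"kin_{t}", f"q_robot_{t}"), (f"kin_{t}", f"q_robot_{t+1}")])
        # collision avoidance couples robot and object at the same step
        g.add_node(f"coll_{t}", kind="con")
        g.add_edges_from([(f"coll_{t}", f"q_robot_{t}"), (f"coll_{t}", f"p_obj_{t}")])
    # grasp constraint at the pick step, stable placement at the place step
    g.add_node("grasp_1", kind="con")
    g.add_edges_from([("grasp_1", "q_robot_1"), ("grasp_1", "p_obj_1")])
    g.add_node("place_2", kind="con")
    g.add_edge("place_2", "p_obj_2")
    return g
```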
Our contributions are:
• A neural model to predict minimal conflicts in factored nonlinear programs. We formulate the detection of minimal infeasible subgraphs as a variable classification problem followed by a connected components analysis.
• An algorithm that integrates the prediction of our neural model with non-learning conflict extraction methods, providing a significant acceleration.
• An empirical demonstration that the formulation of manipulation planning as a factored nonlinear program, together with our neural model, enables scalability and generalization.
II. RELATED WORK
A. Minimal Infeasible Subsets of Constraints
In the discrete SAT and CSP literature, a minimal infeasible subset of constraints (also called a Minimal Unsatisfiable Subset of Constraints or a Minimal Unsatisfiable Core) is usually computed by solving a sequence of SAT and MAX-SAT problems [15], [16], [17].
In a general continuous domain, a minimal infeasible
subset can be found by solving a linear number of problems [18]. This search can be accelerated with a divide-and-conquer strategy, with logarithmic complexity [19]. In
convex and nonlinear optimization, we can find approximate
minimal subsets by solving one optimization program with
slack variables [20]. In contrast, our method uses learning
to directly predict minimal infeasible subsets of variables
and constraints, and can be combined with these previous
approaches to reduce the computational time.
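For reference, the sketch below shows a linear-scan "deletion filter" strategy in the spirit of [18], assuming a hypothetical oracle is_feasible that solves one NLP per call (the expensive step this paper aims to reduce). A learned prediction can accelerate it simply by starting from a much smaller candidate set of constraints.

```python
# Sketch of a generic deletion filter for conflict extraction: starting from
# an infeasible set, each constraint is tentatively removed and kept out only
# if the remainder stays infeasible; the result is a minimal infeasible subset.
from typing import Callable, Iterable, Set

def deletion_filter(constraints: Iterable[str],
                    is_feasible: Callable[[Set[str]], bool]) -> Set[str]:
    core = set(constraints)
    assert not is_feasible(core), "the input set must already be infeasible"
    for c in list(core):
        if not is_feasible(core - {c}):
            core.remove(c)           # c was not needed for infeasibility
    return core                      # minimal infeasible subset
```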
B. Graph Neural Networks in Combinatorial Optimization
We use Graph Neural Networks (GNN) [21], [22], [23] for
learning on graph-structured data. Different message-passing and convolution operators have been proposed, e.g., [24], [25]. Our
architecture, targeted towards inference in factored nonlinear
programs, is inspired by previous works that approximate
belief propagation in factor graphs [26], [27], [28].
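The sketch below illustrates this kind of bipartite, factor-graph-style message passing in PyTorch. It is a generic layer shown for intuition; the layer name, embedding update rules, and mean aggregation are assumptions, not the exact architecture used in this work.

```python
# One round of message passing on the bipartite Factored-NLP graph:
# constraint embeddings are updated from their adjacent variables, then
# variable embeddings from their adjacent constraints, each with a small MLP.
import torch
import torch.nn as nn

class BipartiteMessagePassing(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.var_to_con = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.con_to_var = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, var_emb, con_emb, edges):
        # edges: LongTensor of shape (num_edges, 2) with (variable, constraint) pairs
        v_idx, c_idx = edges[:, 0], edges[:, 1]
        # aggregate variable embeddings into each constraint (mean over neighbors)
        msg_vc = torch.zeros_like(con_emb).index_add_(0, c_idx, var_emb[v_idx])
        deg_c = torch.zeros(con_emb.shape[0]).index_add_(
            0, c_idx, torch.ones_like(c_idx, dtype=torch.float)).clamp(min=1)
        con_emb = self.var_to_con(torch.cat([con_emb, msg_vc / deg_c[:, None]], dim=-1))
        # aggregate updated constraint embeddings back into each variable
        msg_cv = torch.zeros_like(var_emb).index_add_(0, v_idx, con_emb[c_idx])
        deg_v = torch.zeros(var_emb.shape[0]).index_add_(
            0, v_idx, torch.ones_like(v_idx, dtype=torch.float)).clamp(min=1)
        var_emb = self.con_to_var(torch.cat([var_emb, msg_cv / deg_v[:, None]], dim=-1))
        return var_emb, con_emb
```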
Recently, GNN models have been applied to solve NP-hard problems [29], Boolean satisfiability [30], maximum cut [31],
constraint satisfaction [32], and discrete planning [33], [34],
[35]. Compared to state-of-the-art solvers, learned models
achieve competitive solution times and scalability but are
outperformed in reliability and accuracy. To our knowledge,
this is the first work to use a GNN model to predict
the minimal infeasible subgraphs of a factored nonlinear
program in a continuous domain.
C. Graph Neural Networks in Manipulation Planning
In robotic manipulation planning, GNNs are a popular architecture to represent the relations between movable objects, because they provide a strong relational bias and generalize naturally to additional objects in the scene. For example, they have been used as a problem encoding to learn policies for robotic assembly [36], [37] and manipulation planning [38], to learn object importance and guide task and motion planning [39], and to learn dynamical models and interactions between objects [40], [41]. Previous works often use object-centric representations: the vertices of the graph represent the objects, and the task is encoded in the initial feature vector of each vertex. In contrast, our model performs message passing using the structure of the nonlinear program of the manipulation sequence, achieving generalization to different manipulation sequences that fulfil different goals.
III. FORMULATION
A. Factored Nonlinear Program (Factored-NLP)
A Factored-NLP G is a bipartite graph G = (X ∪ H, E) that models the dependencies between a set of variables X = {x_i ∈ R^{n_i}} and a set of constraints H = {h_a : R^{n_a} → R^{m_a}}. Each constraint h_a(x_a) is a piecewise differentiable function evaluated on a (typically small) subset of variables x_a ⊆ X (e.g. x_a = {x_1, x_4}). The edges model the dependencies between variables and constraints, E = {(x_i, h_a) : constraint h_a depends on variable x_i}. Throughout this document, we will use the indices i, j to denote variables and a, b to denote constraints.

The underlying/associated nonlinear program is

    find x_i   s.t.   h_a(x_a) ≤ 0   ∀ x_i ∈ X, ∀ h_a ∈ H.    (1)

The constraints h_a(x_a) ≤ 0 also include equality constraints (that can be written as h_a(x_a) ≤ 0 and -h_a(x_a) ≤ 0).

A Factored-NLP G is feasible (F(G) = 1) iff (1) has a solution, that is, there exists a value assignment x̄_i for each variable x_i such that all constraints h_a(x̄_a) are fulfilled. Otherwise, it is infeasible (F(G) = 0).
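To make the feasibility definition concrete, the following sketch tests (1) numerically by minimizing the total squared violation of the inequality constraints with a local solver. The two constraint functions are hypothetical placeholders, and this is only one common way to implement a feasibility check, not necessarily the procedure used by the authors.

```python
# Declare the Factored-NLP feasible if the total constraint violation can be
# driven to (near) zero by a local optimizer.
import numpy as np
from scipy.optimize import minimize

def h1(x):  # couples x[0], x[1]
    return np.array([x[0] + x[1] - 1.0])           # encodes x0 + x1 <= 1

def h2(x):  # couples x[1] only
    return np.array([0.5 - x[1]])                   # encodes x1 >= 0.5

def violation(x):
    residuals = np.concatenate([h1(x), h2(x)])
    return np.sum(np.maximum(residuals, 0.0) ** 2)  # only positive parts violate

res = minimize(violation, x0=np.zeros(2), method="L-BFGS-B")
feasible = res.fun < 1e-8      # F(G) = 1 if all constraints can be satisfied
print(feasible, res.x)
```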