combine optimization and learning [10], [11], [12], [13],
[14]. In this paradigm, a dataset of solutions to similar
problems is used to accelerate optimization methods, making
expensive computations tractable and enabling real-time solutions to combinatorial and large-scale optimization problems. Therefore, we assume that a dataset of labeled factored nonlinear
programs (generated offline with an algorithm for conflict
extraction) is available. Each example consists of a factored
nonlinear program and a set of minimal infeasible subgraphs.
As an application, we evaluate our method in robotic
sequential manipulation. Finding minimal conflicts is a fun-
damental step in conflict-based solvers, usually accounting
for most of the computational time.
From a robotics perspective, our novel contribution is to
use the structure of the nonlinear program formulation of
manipulation planning for message passing with a graph
neural network. To this end, we first formulate the motion
planning problem that arises from a high-level manipulation
sequence (symbolic actions such as pick or place) as a
factored nonlinear program [1]. Variables correspond to the
configuration of objects and robots at each step of the motion
and the nonlinear constraints model kinematics, collision
avoidance and grasping constraints. When combined with our
learned model, we get strong generalization capabilities to
predict the minimal infeasibility of manipulation sequences
of different lengths in different scenes, involving a different
number of objects and robots.
Our contributions are:
• A neural model to predict minimal conflicts in factored nonlinear programs. We formulate the detection of minimal infeasible subgraphs as a variable classification problem followed by a connected-components analysis.
• An algorithm that integrates the predictions of our neural model with non-learning conflict extraction methods, providing a significant acceleration.
• An empirical demonstration that the formulation of manipulation planning as a factored nonlinear program, together with our neural model, enables scalability and generalization.
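To make the second step of the first contribution concrete, the following is an illustrative Python sketch (not the paper's implementation): assuming the neural model has already classified a set of variables as belonging to some infeasible subgraph, a connected-components analysis over the flagged variables groups them into candidate minimal conflicts.

```python
from collections import defaultdict

def infeasible_subgraphs(edges, predicted):
    """Group classifier-flagged variables into candidate conflicts.

    edges: iterable of (variable, variable) pairs, i.e. variables that
           share at least one constraint in the Factored-NLP.
    predicted: set of variables the neural model classified as part of
               some minimal infeasible subgraph.
    Returns the connected components of the graph restricted to `predicted`.
    """
    adj = defaultdict(set)
    for u, v in edges:
        if u in predicted and v in predicted:
            adj[u].add(v)
            adj[v].add(u)

    seen, components = set(), []
    for start in predicted:
        if start in seen:
            continue
        # depth-first search over flagged variables only
        comp, stack = set(), [start]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        components.append(comp)
    return components
```

Each returned component is then a candidate subgraph to verify (or refine) with a non-learning conflict extraction method.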
II. RELATED WORK
A. Minimal Infeasible Subsets of Constraints
In the discrete SAT and CSP literature, a minimal infeasi-
ble subset of constraints (also called Minimal Unsatisfiable
Subset of Constraints or Minimal Unsatisfiable Core) is
usually computed by solving a sequence of SAT and MAX-
SAT problems [15], [16], [17].
In a general continuous domain, a minimal infeasible
subset can be found by solving a linear number of prob-
lems [18]. This search can be accelerated with a divide
and conquer strategy, with logarithmic complexity [19]. In
convex and nonlinear optimization, we can find approximate
minimal subsets by solving one optimization program with
slack variables [20]. In contrast, our method uses learning
to directly predict minimal infeasible subsets of variables
and constraints, and can be combined with these previous
approaches to reduce the computational time.
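For concreteness, the search that removes constraints one at a time (a linear number of subproblems, in the spirit of [18]) can be sketched as a deletion filter. This is a hedged sketch, not the cited implementation; `feasible` is a hypothetical black-box oracle that solves the NLP restricted to a subset of constraints.

```python
def deletion_filter(constraints, feasible):
    """Shrink an infeasible constraint set to a minimal infeasible subset.

    constraints: list of constraint identifiers whose conjunction is infeasible.
    feasible: hypothetical oracle; feasible(S) solves the NLP restricted to
              the subset S and returns True iff that subproblem has a solution.
    Invariant: `core` is always infeasible. Each constraint is tentatively
    dropped; if the remainder is still infeasible the constraint is not part
    of the conflict, otherwise it is kept. One oracle call per constraint.
    """
    core = list(constraints)
    for c in constraints:
        if c not in core:
            continue
        trial = [d for d in core if d != c]
        if not feasible(trial):  # still infeasible without c: discard c
            core = trial
    return core
```

A learned prediction of the conflict can seed this filter with a much smaller initial set, which is how our method combines with such non-learning approaches.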
B. Graph Neural Networks in Combinatorial Optimization
We use Graph Neural Networks (GNNs) [21], [22], [23] for learning on graph-structured data. Different message-passing and convolution schemes have been proposed, e.g., [24], [25]. Our
architecture, targeted towards inference in factored nonlinear
programs, is inspired by previous works that approximate
belief propagation in factor graphs [26], [27], [28].
Recently, GNN models have been applied to solve NP-hard problems [29], Boolean satisfiability [30], Max-Cut [31],
constraint satisfaction [32], and discrete planning [33], [34],
[35]. Compared to state-of-the-art solvers, learned models
achieve competitive solution times and scalability but are
outperformed in reliability and accuracy. To our knowledge,
this is the first work to use a GNN model to predict
the minimal infeasible subgraphs of a factored nonlinear
program in a continuous domain.
C. Graph Neural Networks in Manipulation Planning
In robotic manipulation planning, GNNs are a popular architecture for representing relations between movable objects, because they provide a strong relational bias and generalize naturally to scenes with additional objects.
For example, they have been used as problem encoding
to learn policies for robotic assembly [36], [37] and ma-
nipulation planning [38], to learn object importance and
guide task and motion planning [39], and to learn dynamical
models and interactions between objects [40], [41]. Previous
works often use object-centric representations: the vertices of the graph represent the objects, and the task is encoded in the initial feature vector of each vertex. In contrast, our
model performs message passing using the structure of the
nonlinear program of the manipulation sequence, achieving
generalization to different manipulation sequences that fulfil
different goals.
III. FORMULATION
A. Factored Nonlinear Program (Factored-NLP)
A Factored-NLP $G$ is a bipartite graph $G = (X \cup H, E)$ that models the dependencies between a set of variables $X = \{x_i \in \mathbb{R}^{n_i}\}$ and a set of constraints $H = \{h_a : \mathbb{R}^{m_a} \to \mathbb{R}^{m'_a}\}$. Each constraint $h_a(x_a)$ is a piecewise differentiable function evaluated on a (typically small) subset of variables $x_a \subseteq X$ (e.g., $x_a = \{x_1, x_4\}$). The edges model the dependencies between variables and constraints: $E = \{(x_i, h_a) : \text{constraint } h_a \text{ depends on variable } x_i\}$. Throughout this document, we will use the indices $i, j$ to denote variables and $a, b$ to denote constraints.
The underlying nonlinear program is
$$\text{find } x_i \quad \text{s.t.} \quad h_a(x_a) \le 0 \quad \forall\, x_i \in X,\ h_a \in H. \tag{1}$$
The constraints $h_a(x_a) \le 0$ also include equality constraints (which can be written as $h_a(x_a) \le 0$ and $h_a(x_a) \ge 0$).
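As an illustrative sketch (not the paper's implementation, and all names are hypothetical), a Factored-NLP and the test of whether a value assignment satisfies all constraints $h_a(x_a) \le 0$ could be represented as:

```python
class FactoredNLP:
    """Bipartite graph of variables X and factored constraints H."""

    def __init__(self):
        self.dims = {}         # variable name -> dimension n_i
        self.constraints = {}  # constraint name -> (function h_a, scope x_a)

    def add_variable(self, name, dim):
        self.dims[name] = dim

    def add_constraint(self, name, fn, scope):
        # fn evaluates h_a on its scope x_a; feasibility means h_a(x_a) <= 0.
        # An equality h_a = 0 is encoded as the pair h_a <= 0 and -h_a <= 0.
        self.constraints[name] = (fn, scope)

    def satisfied(self, assignment, tol=1e-6):
        """Certificate check for F(G) = 1: all h_a(x_a) <= 0 at `assignment`."""
        for fn, scope in self.constraints.values():
            vals = fn(*(assignment[i] for i in scope))
            if isinstance(vals, (int, float)):
                vals = (vals,)  # scalar constraint -> one-component output
            if any(v > tol for v in vals):
                return False
        return True
```

The edge set $E$ is implicit here: each constraint stores its scope, which is exactly the set of variables it is connected to in the bipartite graph.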
A Factored-NLP $G$ is feasible ($F(G) = 1$) iff (1) has a solution, that is, there exists a value assignment $\bar{x}_i$ for each variable $x_i$ such that all constraints $h_a(\bar{x}_a)$ are fulfilled. Otherwise, it is infeasible ($F(G) = 0$). This assignment