ProDMPs: A Unified Perspective on Dynamic and Probabilistic
Movement Primitives
Ge Li1, Zeqi Jin1, Michael Volpp1, Fabian Otto2,3, Rudolf Lioutikov1, and Gerhard Neumann1
1Karlsruhe Institute of Technology, Germany. ge.li@kit.edu
2Bosch Center for Artificial Intelligence, Germany.
3University of Tübingen, Germany.
Abstract— Movement Primitives (MPs) are a well-known
concept to represent and generate modular trajectories. MPs
can be broadly categorized into two types: (a) dynamics-based
approaches that generate smooth trajectories from any ini-
tial state, e. g., Dynamic Movement Primitives (DMPs), and
(b) probabilistic approaches that capture higher-order statis-
tics of the motion, e. g., Probabilistic Movement Primitives
(ProMPs). To date, however, there is no method that unifies
both, i. e., one that can generate smooth trajectories from an arbi-
trary initial state while capturing higher-order statistics. In this
paper, we introduce a unified perspective of both approaches by
solving the ODE underlying the DMPs. We convert expensive
online numerical integration of DMPs into basis functions that
can be computed offline. These basis functions can be used
to represent trajectories or trajectory distributions similar to
ProMPs while maintaining all the properties of dynamical
systems. Since we inherit the properties of both methodologies,
we call our proposed model Probabilistic Dynamic Movement
Primitives (ProDMPs). Additionally, we embed ProDMPs in
a deep neural network architecture and propose a new cost
function for efficient end-to-end learning of higher-order trajec-
tory statistics. To this end, we leverage Bayesian Aggregation
for non-linear iterative conditioning on sensory inputs. Our
proposed model achieves smooth trajectory generation, goal-
attractor convergence, correlation analysis, non-linear condi-
tioning, and online replanning in one framework.
I. INTRODUCTION
Movement Primitives (MPs) are a prominent tool for
motion representation and synthesis in robotics. They serve
as basic movement elements, modulate the motion behavior,
and form more complex movements through combination
or concatenation. This work focuses on trajectory-based
movement representations [1, 2]. Given a parameter vector,
such representations generate desired trajectories for the
robot to follow. These methods have gained much popularity
in imitation and reinforcement learning (IL, RL) [3–7] due
to their concise parametrization and flexibility to modulate
movement. Current methods can be roughly classified into
approaches based on dynamical systems [1, 8–11] and prob-
abilistic approaches [2, 12, 13], with both types offering their
own advantages. The dynamical systems-based approaches,
such as Dynamic Movement Primitives (DMPs), guarantee
that the generated trajectories start precisely at the current
position and velocity of the robot, which allows for smooth
trajectory replanning, i. e., changing the parameters of the
MPs during motion execution [11, 14]. However, since DMPs
encode the trajectory via the forcing term rather than
representing the position directly, numerical
integration from acceleration to position has to be applied
to obtain the trajectory, which adds computational overhead
and makes the estimation of the trajectory statistics
difficult [15]. Probabilistic methods, such as Probabilistic
Movement Primitives (ProMPs), are able to acquire such
statistics, making them key enablers for learning variable-
stiffness controllers and for capturing the trajectory's temporal
and cross-DoF correlations. These methods further act as gen-
erative models, facilitating the sampling of new trajectories.
However, due to their lack of internal dynamics, these
approaches suffer from discontinuities in position and velocity
between the old and new trajectories when replanning.
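For concreteness, the two families can be sketched in simplified, common notation (constants and exact definitions vary across the cited formulations). A DMP drives the trajectory through a second-order dynamical system with a goal attractor g and a learned forcing term f,
\[
\tau^2\,\ddot{y} = \alpha\big(\beta\,(g - y) - \tau\,\dot{y}\big) + f(x),
\qquad
f(x) = x\,\frac{\sum_i \varphi_i(x)\,w_i}{\sum_j \varphi_j(x)},
\]
where x is the phase of a canonical system and the position y is only available after numerically integrating the acceleration online. A ProMP instead represents the position directly as a linear basis function model with a distribution over weights,
\[
y(t) = \Phi(t)^\top w, \qquad w \sim \mathcal{N}(\mu_w, \Sigma_w),
\]
which yields trajectory means and covariances in closed form but does not enforce the robot's current position and velocity.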
In this work, we propose Probabilistic Dynamic Movement
Primitives (ProDMPs) which unify both methodologies. We
show that the trajectory of a DMP, obtained by integrating
its second-order dynamical system, can be expressed by a
linear basis function model that depends on the parameters
of the DMP, i. e., the weights of the forcing function and the
goal attractor. The linear basis functions can be obtained by
integrating the original basis functions used in the DMP, an
operation that only needs to be performed once offline in the
ProDMPs.
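As a sketch of the resulting representation (again in simplified notation; the exact constants follow from the chosen DMP variant), solving this linear ODE yields the position directly as
\[
y(t) = c_1\,y_1(t) + c_2\,y_2(t) + \Phi(t)^\top w_g,
\qquad
w_g = [\,w^\top,\; g\,]^\top,
\]
where y_1, y_2 are the two homogeneous solutions of the ODE, the coefficients c_1, c_2 are fixed by the initial (or current) position and velocity, and the position basis \Phi(t) results from the offline integration of the original forcing basis together with the goal term.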
Recently, MP research has been extended to deep
neural network (NN) architectures [10, 11, 13] that enable
conditioning the trajectory generation on high-dimensional
context variables, such as images. Following these ideas, we
integrate our representation into a deep neural architecture
that allows non-linear conditioning on a varying number
of conditioning events. These events are aggregated using
Bayesian Aggregation (BA) into a latent probabilistic repre-
sentation [16] which is mapped to a Gaussian distribution
in the parameter space of the ProDMPs.
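To illustrate this pipeline, the following minimal sketch aggregates a variable number of encoded observations via Bayesian Aggregation (closed-form Gaussian conditioning) and decodes a Gaussian over the ProDMP parameters. Module names, layer sizes, and the diagonal output variance are our own simplifying assumptions, not the exact architecture used in our experiments; a full covariance over the parameters would be needed to capture the correlations discussed above.

import torch
import torch.nn as nn

class BAEncoderDecoder(nn.Module):
    """Sketch: Bayesian Aggregation of a set of observations into a Gaussian
    latent variable, decoded into a Gaussian over the ProDMP parameters w_g."""

    def __init__(self, obs_dim: int, latent_dim: int, param_dim: int):
        super().__init__()
        # Per-observation encoders: latent observation r_n and its variance.
        self.enc_mu = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.enc_var = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim),
            nn.Softplus())
        # Learnable Gaussian prior over the latent variable z.
        self.mu_0 = nn.Parameter(torch.zeros(latent_dim))
        self.log_var_0 = nn.Parameter(torch.zeros(latent_dim))
        # Decoder: latent statistics -> Gaussian over the ProDMP parameters
        # (diagonal variance here, for brevity).
        self.dec_mu = nn.Linear(2 * latent_dim, param_dim)
        self.dec_var = nn.Sequential(
            nn.Linear(2 * latent_dim, param_dim), nn.Softplus())

    def forward(self, obs: torch.Tensor):
        # obs: [N, obs_dim]; the number of conditioning events N may vary.
        r, var_r = self.enc_mu(obs), self.enc_var(obs) + 1e-6
        var_0 = self.log_var_0.exp()
        # Closed-form Gaussian conditioning on all N latent observations.
        var_z = 1.0 / (1.0 / var_0 + (1.0 / var_r).sum(dim=0))
        mu_z = self.mu_0 + var_z * ((r - self.mu_0) / var_r).sum(dim=0)
        stats = torch.cat([mu_z, var_z], dim=-1)
        return self.dec_mu(stats), self.dec_var(stats)

# Example: condition on three image embeddings and obtain a distribution
# over the ProDMP parameters w_g.
model = BAEncoderDecoder(obs_dim=128, latent_dim=32, param_dim=25)
mu_wg, var_wg = model(torch.randn(3, 128))

Because the aggregation is a sum over per-observation terms, the same network handles any number of conditioning events, including iterative conditioning as new observations arrive.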
We summarize the contributions of this paper as: (a) We unify ProMPs
and DMPs into one consistent framework that inherits the
benefits of both formulations. (b) We enable computing
distributions and capturing correlations of DMP trajectories,
while (c) the robot's current state can be incorporated into the
trajectory distribution through boundary conditions, allowing
for smooth replanning. (d) Moreover, the offline integration
of the basis functions greatly simplifies the embedding of ProDMPs
into neural network architectures, reducing the computation
time by a factor of 10. (e) Hence, we embed ProDMPs in
a deep encoder-decoder architecture that allows non-linear
conditioning on a set of high-dimensional observations with
varying information levels. We evaluate our method on three
digit-writing tasks using images as inputs, a simulated robot
pushing task with a complex physical interaction, and a real
robot picking task with shifting object positions. We compare
our model with state-of-the-art NN-based DMPs [9–11] and