Spectral Sparsification for Communication-Efficient Collaborative Rotation and Translation Estimation

Yulun Tian and Jonathan P. How
Abstract—We propose fast and communication-efficient optimization algorithms for multi-robot rotation averaging and translation estimation problems that arise in collaborative simultaneous localization and mapping (SLAM), structure-from-motion (SfM), and camera network localization applications. Our methods are based on theoretical relations between the Hessians of the underlying Riemannian optimization problems and the Laplacians of suitably weighted graphs. We leverage these results to design a collaborative solver in which robots coordinate with a central server to perform approximate second-order optimization by solving a Laplacian system at each iteration. Crucially, our algorithms permit robots to employ spectral sparsification to sparsify intermediate dense matrices before communication, and hence provide a mechanism to trade off accuracy with communication efficiency with provable guarantees. We perform rigorous theoretical analysis of our methods and prove that they enjoy a (local) linear rate of convergence. Furthermore, we show that our methods can be combined with graduated non-convexity to achieve outlier-robust estimation. Extensive experiments on real-world SLAM and SfM scenarios demonstrate the superior convergence rate and communication efficiency of our methods.
Index Terms—Simultaneous localization and mapping, optimization, multi-robot systems.
I. INTRODUCTION
Collaborative spatial perception is a fundamental capability
for multi-robot systems to operate in unknown, GPS-denied
environments. State-of-the-art systems (e.g., [1]–[6]) rely on
optimization-based back-ends to achieve accurate multi-robot
simultaneous localization and mapping (SLAM). Often, a
central server receives data from all robots (e.g. in the form
of factor graphs [7]) and solves the underlying large-scale
optimization for the entire team. In comparison, collaborative
optimization frameworks leverage robots’ local computation
and iterative communication (either peer-to-peer or coordi-
nated by a server), and thus have the potential to scale to
larger scenes and support more robots.
Recent works focus on developing fully distributed algo-
rithms in which robots carry out iterative optimization via
peer-to-peer message passing [8]–[13]. While these methods
are flexible in terms of the required communication architecture, they often suffer from slow convergence due to their first-order nature and the inherent poor conditioning of typical SLAM problems. To resolve the slow convergence issue, an alternative is to pursue a second-order optimization framework.

The authors are with the Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA. {yulun, jhow}@mit.edu. This work was supported in part by ARL DCIST under Cooperative Agreement Number W911NF-17-2-0181, and in part by ONR under BRC Award N000141712072.
The authors gratefully acknowledge Dr. Kaveh Fathian, Parker Lusk, Dr. Kasra Khosoussi, Dr. David M. Rosen, and anonymous reviewers for helpful comments. The authors would also like to thank Prof. Pierre-Antoine Absil for insightful discussions on the linear convergence of approximate Newton methods on Riemannian manifolds.
A prominent example is DDF-SAM [14]–[16], in which robots
marginalize out internal variables (i.e., those without inter-
robot measurements) in their local factor graphs before com-
munication. From an optimization perspective, robots partially
eliminate their local Hessians and communicate the resulting
matrices. However, a shortcoming of this approach is that the
transmitted matrices are usually dense (even if the original
problem is sparse), and hence could result in long transmission
times that prevent the team from obtaining a timely solution.
To address the aforementioned technical gaps, this work
presents results towards collaborative optimization that
achieves both fast convergence and efficient communication.
Specifically, we develop new algorithms for solving multi-
robot rotation averaging and translation estimation. These
problems are fundamental and have applications ranging from initialization for pose graph SLAM [17] to structure-from-motion (SfM) [18] and camera network localization [8]. Our
approach is based on a server-client architecture (Fig. 1a),
in which multiple robots (clients) coordinate with a server
to collaboratively solve the optimization problem leveraging
local computation. The crux of our method lies in exploiting
theoretical relations between the Hessians of the optimization
problems and the Laplacians of the underlying graphs. We
leverage these theoretical insights to develop a fast collabo-
rative optimization method in which each iteration computes
an approximate second-order update by replacing the Hessian
with a constant Laplacian matrix, which improves efficiency
in both computation and communication. Furthermore, during communication, each robot uses spectral sparsification [19], [20] to sparsify the intermediate dense matrices resulting from the elimination of its internal variables. Figs. 1b to 1d show a
high-level illustration of our approach. By varying the degree
of sparsification, our method thus provides a principled way
for trading off accuracy with communication efficiency. The
theoretical properties of spectral sparsification allow us to
perform rigorous convergence analysis, and establish linear
rates of convergence for our methods. Lastly, we also present
an extension to outlier-robust estimation by combining our
approach with graduated non-convexity (GNC) [21], [22].
Contributions. The key contributions of this work are
summarized as follows:
• We present collaborative optimization algorithms for multi-robot rotation averaging and translation estimation under the server-client architecture, which enjoy fast convergence (in terms of the number of iterations) and efficient communication through the use of spectral sparsification.
• In contrast to the typical sublinear convergence of prior methods, we prove (local) linear convergence for our methods and show that the rate of convergence depends on the user-defined sparsification parameter.

arXiv:2210.05020v4 [cs.RO] 16 Aug 2023

Fig. 1: (a) Information flow in the server-client architecture. Each communication round consists of an upload stage (clients to server) and a download stage (server to clients). (b) Example 3-robot problem visualized as a graph. For each robot $\alpha \in \{1, 2, 3\}$, its vertices (variables) $\mathcal{V}_\alpha$ are shown in a distinct color. Each edge corresponds to a relative measurement between two variables. Separators (solid line) correspond to variables with inter-robot measurements, and the remaining variables form the interior vertices (dashed line). (c) For robot 2, elimination of its interior vertices creates a dense matrix $S_2$, which corresponds to a dense graph over its separators. (d) In our approach, robot 2 achieves communication efficiency by transmitting a sparse approximation $\widetilde{S}_2$ of the original dense matrix $S_2$, which also corresponds to a sparsified graph over its separators.
• We present an extension to outlier-robust estimation by combining the proposed algorithms with GNC.
• We perform extensive evaluations of our methods and demonstrate their value on real-world SLAM and SfM scenarios with outlier measurements.
Lastly, while our algorithms and theoretical guarantees cover
separate rotation averaging and translation estimation, we
demonstrate through our experiments that their combination
can be used to achieve robust initialization for pose graph
optimization (PGO), which is another fundamental problem
commonly used in collaborative SLAM.
Paper Organization. The rest of this paper is organized
as follows. The remainder of this section introduces necessary
notation and mathematical preliminaries, and in Sec. II, we
review related works. Sec. III formally introduces the problem
formulation, communication architecture, and relevant appli-
cations. In Sec. IV, we establish theoretical relations between
the Hessians and the underlying graph Laplacians. Then, in
Sec. V, we leverage these theoretical results to design fast and
communication-efficient solvers for the problems of interest
and establish convergence guarantees. Finally, Sec. VI presents
numerical evaluations of the proposed algorithms.
Notations and Preliminaries
Table V in the appendix summarizes detailed notations used
in this work. Unless stated otherwise, lowercase and uppercase
letters denote vectors and matrices, respectively. We define
$[n] \triangleq \{1, 2, \ldots, n\}$ as the set of positive integers from 1 to $n$.
Linear Algebra and Spectral Approximation. $\mathbb{S}^n$ and $\mathbb{S}^n_+$ denote the sets of $n \times n$ symmetric and symmetric positive semidefinite matrices, respectively. We use $\otimes$ to denote the Kronecker product. For a positive integer $n$, $1_n \in \mathbb{R}^n$ and $I_n \in \mathbb{R}^{n \times n}$ denote the vector of all ones and the identity matrix. For any matrix $A$, $\ker(A)$ and $\operatorname{image}(A)$ denote the kernel (nullspace) and image (span of column vectors) of $A$, respectively. $A^\dagger$ denotes the Moore-Penrose inverse of $A$, which coincides with the inverse $A^{-1}$ when $A$ is invertible. When $A \in \mathbb{S}^n$, $\lambda_1(A) \leq \cdots \leq \lambda_n(A)$ denote the real eigenvalues of $A$ sorted in increasing order. When $A \in \mathbb{S}^n_+$, we also define $\|X\|_A \triangleq \sqrt{\operatorname{tr}(X A X^\top)}$, where $X$ is of compatible dimensions.
Following [23], [24], for $A, B \in \mathbb{S}^n$ and $\epsilon > 0$, we say that $B$ is an $\epsilon$-approximation of $A$, denoted $A \approx_\epsilon B$, if the following holds:
$$e^{-\epsilon} B \preceq A \preceq e^{\epsilon} B, \qquad (1)$$
where $B \preceq A$ means $A - B \in \mathbb{S}^n_+$. Note that (1) is symmetric and holds under composition: if $A \approx_\epsilon B$ and $B \approx_\delta C$, then $A \approx_{\epsilon + \delta} C$. Furthermore, if $A$ is singular, relation (1) implies that $B$ is necessarily singular and $\ker(A) = \ker(B)$.
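For small dense matrices, relation (1) can be checked directly by testing positive semidefiniteness of the two difference matrices. Below is a minimal numerical sketch; the helper name and the tolerance are our own choices, not from the paper.

```python
import numpy as np

def is_eps_approximation(A, B, eps):
    """Check e^{-eps} B <= A <= e^{eps} B in the Loewner order (1),
    i.e., that both difference matrices are positive semidefinite."""
    lo = A - np.exp(-eps) * B   # A - e^{-eps} B must be PSD
    hi = np.exp(eps) * B - A    # e^{eps} B - A must be PSD
    tol = -1e-9                 # numerical slack for zero eigenvalues
    return (np.linalg.eigvalsh(lo).min() >= tol and
            np.linalg.eigvalsh(hi).min() >= tol)
```

For example, $A = 2B$ satisfies $A \approx_\epsilon B$ exactly when $\epsilon \geq \ln 2$, matching the check above.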
Graph Theory. A weighted undirected graph is denoted $G = (\mathcal{V}, \mathcal{E}, w)$, where $\mathcal{V}$ and $\mathcal{E}$ denote the vertex and edge sets, and $w : \mathcal{E} \to \mathbb{R}_{>0}$ is the edge weight function that assigns each edge $(i, j) \in \mathcal{E}$ a positive weight $w_{ij}$. For a graph $G$ with $n$ vertices, its graph Laplacian $L(G; w) \in \mathbb{S}^n_+$ is defined as
$$L(G; w)_{ij} = \begin{cases} \sum_{k \in \operatorname{Nbr}(i)} w_{ik}, & \text{if } i = j, \\ -w_{ij}, & \text{if } i \neq j,\ (i, j) \in \mathcal{E}, \\ 0, & \text{otherwise}. \end{cases} \qquad (2)$$
In (2), $\operatorname{Nbr}(i) \subseteq \mathcal{V}$ denotes the neighbors of vertex $i$ in the graph. Our notation $L(G; w)$ serves to emphasize that the Laplacian of $G$ depends on the edge weight $w$. When the edge weight $w$ is irrelevant or clear from context, we will write the graph as $G = (\mathcal{V}, \mathcal{E})$ and its Laplacian as $L(G)$ or simply $L$. The graph Laplacian $L$ always has a zero eigenvalue, i.e., $\lambda_1(L) = 0$. The second smallest eigenvalue $\lambda_2(L)$ is known as the algebraic connectivity, which is always positive for connected graphs.
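Definition (2) translates directly into code. The sketch below (our own helper, not from the paper) builds $L(G; w)$ for a small weighted graph and verifies the spectral facts just stated: $\lambda_1(L) = 0$ always, and $\lambda_2(L) > 0$ when the graph is connected.

```python
import numpy as np

def graph_laplacian(n, edges):
    """Build the weighted graph Laplacian from (2): each diagonal
    entry sums the incident edge weights; each off-diagonal entry
    for an edge (i, j) is -w_ij."""
    L = np.zeros((n, n))
    for i, j, w in edges:
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    return L

# Weighted triangle on 3 vertices (connected).
L = graph_laplacian(3, [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 0.5)])
evals = np.linalg.eigvalsh(L)   # sorted: lambda_1 <= ... <= lambda_n
```

Here `evals[0]` is numerically zero (the all-ones vector $1_n$ is always in the kernel) and `evals[1]` is the algebraic connectivity.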
Riemannian Manifolds. The reader is referred to [25], [26] for a comprehensive review of optimization on matrix manifolds. In general, we use $\mathcal{M}$ to denote a smooth matrix manifold. For an integer $n > 1$, $\mathcal{M}^n$ denotes the product manifold formed by $n$ copies of $\mathcal{M}$. $T_x\mathcal{M}$ denotes the tangent space at $x \in \mathcal{M}$. For tangent vectors $\eta, \xi \in T_x\mathcal{M}$, their inner product is denoted $\langle \eta, \xi \rangle_x$, and the corresponding norm is $\|\eta\|_x = \sqrt{\langle \eta, \eta \rangle_x}$. In the rest of the paper, we drop the subscript $x$ as it will be clear from context. At $x \in \mathcal{M}$, the injectivity radius $\operatorname{inj}(x)$ is a positive constant such that the exponential map $\operatorname{Exp}_x : T_x\mathcal{M} \to \mathcal{M}$ is a diffeomorphism when restricted to the domain $U = \{\eta \in T_x\mathcal{M} : \|\eta\| < \operatorname{inj}(x)\}$. In this case, we define the logarithm map to be $\operatorname{Log}_x \triangleq \operatorname{Exp}_x^{-1}$. Unless otherwise mentioned, we use $d(x, y)$ to denote the geodesic distance between two points $x, y \in \mathcal{M}$ induced by the Riemannian metric. In addition, it holds that $d(x, y) = \|v\|$ where $v = \operatorname{Log}_x(y)$; see [26, Proposition 10.22].
The Rotation Group SO(d). The rotation group is denoted $\mathrm{SO}(d) = \{R \in \mathbb{R}^{d \times d} : R^\top R = I,\ \det(R) = 1\}$. The tangent space at $R$ is given by $T_R\mathrm{SO}(d) = \{RV : V \in \mathfrak{so}(d)\}$, where $\mathfrak{so}(d)$ is the space of $d \times d$ skew-symmetric matrices. In this work, we exclusively work with 2D and 3D rotations. We define a basis for $T_R\mathrm{SO}(3)$ such that each tangent vector $\eta \in T_R\mathrm{SO}(3)$ is identified with a vector $v \in \mathbb{R}^3$,
$$\eta = R [v]_\times = R \begin{bmatrix} 0 & -v_3 & v_2 \\ v_3 & 0 & -v_1 \\ -v_2 & v_1 & 0 \end{bmatrix}. \qquad (3)$$
Note that (3) defines a bijection between $\eta \in T_R\mathrm{SO}(3)$ and $v \in \mathbb{R}^3$. For $d = 2$, we can define a similar basis for the 1-dimensional tangent space $T_R\mathrm{SO}(2)$, where each tangent vector $\eta \in T_R\mathrm{SO}(2)$ is identified by a scalar $v \in \mathbb{R}$ as
$$\eta = R [v]_\times = R \begin{bmatrix} 0 & -v \\ v & 0 \end{bmatrix}. \qquad (4)$$
We have overloaded the notation $[\cdot]_\times$ to map the input scalar or vector to the corresponding skew-symmetric matrix in $\mathfrak{so}(2)$ or $\mathfrak{so}(3)$. Under the bases given in (3) and (4), the inner product on the tangent space is defined by the corresponding vector dot product, i.e., $\langle \eta_1, \eta_2 \rangle = v_1^\top v_2$, where $v_1, v_2 \in \mathbb{R}^p$ are the vector representations of $\eta_1$ and $\eta_2$, and $p = \dim \mathrm{SO}(d) = d(d-1)/2$. We define the function $\operatorname{Exp} : \mathbb{R}^p \to \mathrm{SO}(d)$ as
$$\operatorname{Exp}(v) = \exp([v]_\times), \qquad (5)$$
where $\exp(\cdot)$ denotes the conventional matrix exponential. Note that $\operatorname{Exp} : \mathbb{R}^p \to \mathrm{SO}(d)$ should not be confused with the exponential map on Riemannian manifolds $\operatorname{Exp}_x : T_x\mathcal{M} \to \mathcal{M}$, although the two are closely related in the case of rotations. Specifically, at a point $R \in \mathrm{SO}(d)$ where $d \in \{2, 3\}$, the exponential map can be written as $\operatorname{Exp}_R(\eta) = R \operatorname{Exp}(v)$. Lastly, we also denote $\operatorname{Log}$ as the inverse of $\operatorname{Exp}$ in (5).
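For $\mathrm{SO}(3)$, the maps (3) and (5) and the inverse $\operatorname{Log}$ admit simple closed forms (Rodrigues' formula). The following self-contained numerical sketch uses our own helper names and is valid for rotation angles below $\pi$:

```python
import numpy as np

def hat(v):
    """[v]_x from (3): the skew-symmetric matrix of v in R^3."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def Exp(v):
    """Exp: R^3 -> SO(3) from (5), via Rodrigues' formula."""
    th = np.linalg.norm(v)
    K = hat(v)
    if th < 1e-12:
        return np.eye(3) + K    # first-order approximation near zero
    return (np.eye(3) + np.sin(th) / th * K
            + (1.0 - np.cos(th)) / th**2 * (K @ K))

def Log(R):
    """Log: SO(3) -> R^3, inverse of Exp for angles in [0, pi)."""
    th = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if th < 1e-12:
        return np.zeros(3)
    W = (R - R.T) * (th / (2.0 * np.sin(th)))   # equals [v]_x
    return np.array([W[2, 1], W[0, 2], W[1, 0]])
```

Consistent with the preliminaries above, $\operatorname{Exp}_R(\eta) = R \operatorname{Exp}(v)$, and the geodesic distance satisfies $d(R, R') = \|\operatorname{Log}(R^\top R')\|$.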
II. RELATED WORKS
In this section, we review related work in collaborative
SLAM (Sec. II-A), graph structure on rotation averaging and
PGO (Sec. II-B), and the applications of spectral sparsification
and Laplacian linear solvers (Sec. II-C).
A. Collaborative SLAM
Systems. State-of-the-art collaborative SLAM systems rely
on optimization-based back-ends to accurately estimate robots’
trajectories and maps in a global reference frame. In fully centralized systems (e.g., [1]–[3]), robots upload their processed measurements (in practice, e.g., odometry factors, visual keyframes, and/or lidar keyed scans) to a central server. Using this information, the server is responsible for
managing the multi-robot maps and solving the entire back-
end optimization problem. In contrast, in systems leveraging
distributed computation (e.g. [4]–[6], [18], [27]), robots col-
laborate to solve back-end optimization by coordinating with
a server or among themselves. The resulting communication
usually involves exchanging intermediate iterates needed by
distributed optimization to attain convergence.
Optimization Algorithms. To solve factor graph optimiza-
tion in a multi-robot setting, Cunningham et al. develop DDF-
SAM [14], [16] where each agent communicates a “condensed
graph” produced by marginalizing out internal variables (those
without inter-robot measurements) in its local Gaussian factor
graph. Researchers have also developed information-based
sparsification methods to sparsify the dense information matrix
after marginalization using Chow-Liu tree (e.g., [28], [29]) or
convex optimization (e.g., [30], [31]). In these works, sparsifi-
cation is guided by an information-theoretic objective such as
the Kullback-Leibler divergence, and requires linearization to
compute the information matrix. In comparison, our approach
sparsifies the graph Laplacian, which does not depend on linearization; furthermore, the sparsified results are used by collaborative optimization to achieve fast convergence.
From an optimization perspective, marginalization corre-
sponds to a domain decomposition approach (e.g., see [32,
Chapter 14]) where one eliminates a subset of variables
in the Hessian using the Schur complement. Related works
use sparse approximations of the resulting matrix (e.g., with
tree-based sparsity patterns) to precondition the optimization
[33]–[36]. Recent work [27] combines domain decomposition
with event-triggered transmission to improve communication
efficiency during collaborative estimation. Zhang et al. [37] de-
velop a centralized incremental solver for multi-robot SLAM.
Fully decentralized solvers for SLAM have also gained in-
creasing attention; see [8]–[13], [38]. In the broader field of
optimization, related works include decentralized consensus
optimization methods such as [39]–[42]. Compared to these
fully decentralized methods, the proposed approach assumes
a central server but achieves significantly faster convergence
by implementing approximate second-order optimization.
B. Graph Structure in Rotation Averaging and PGO
Prior works have investigated the impact of graph structure
on rotation averaging and PGO problems from different per-
spectives. One line of research [43]–[45] adopts an estimation-
theoretic approach and shows that the Fisher information
matrix is closely related to the underlying graph Laplacian
matrix. Eriksson et al. [46] establish sufficient conditions for
strong duality to hold in rotation averaging, where the derived
analytical error bound depends on the algebraic connectivity
of the graph. Recently, Bernreiter et al. [47] use tools from
graph signal processing to correct onboard estimation errors in
multi-robot mapping. Doherty et al. [48] propose a measure-
ment selection approach for pose graph SLAM that seeks to
maximize the algebraic connectivity of the underlying graph.
This paper differs from the aforementioned works by analyzing
the impact of graph structure on the underlying optimization
problems, and exploiting the theoretical analysis to design
novel optimization algorithms in the multi-robot setting.
Among related works in this area, the ones most related to
this paper are [49]–[53]. Carlone [49] analyzes the influence of graph connectivity and noise level on the convergence of Gauss-Newton methods when solving PGO. Tron [50]
derives the Riemannian Hessian of rotation averaging un-
der the geodesic distance, and uses the results to prove
convergence of Riemannian gradient descent. In a pair of
papers [51], [52], Wilson et al. study the local convexity of
rotation averaging under the geodesic distance, by bounding
the Riemannian Hessian using the Laplacian of a suitably
weighted graph. Recently, Nasiri et al. [53] develop a Gauss-
Newton method for rotation averaging under the chordal
distance, and show that its convergence basin is influenced by
the norm of the inverse reduced Laplacian matrix. Our work
differs from [49]–[53] by focusing on the development of fast
and communication-efficient solvers in multi-robot teams with
provable performance guarantees. During this process, we also
prove new results on the connections between the Riemannian
Hessian and graph Laplacian, and show that they hold under
both geodesic and chordal distance.
C. Spectral Sparsification and Laplacian Solvers
A remarkable property of graph Laplacians is that they
admit sparse approximations; see [19] for a survey. Spielman
and Srivastava [20] show that every graph with $n$ vertices can be approximated using a sparse graph with $O(n \log n)$ edges.
This is achieved using a random sampling procedure that
selects each edge with probability proportional to its effective
resistance, which intuitively measures the importance of each
edge to the whole graph. Batson et al. [54] develop a procedure
based on the so-called barrier functions for constructing linear-
sized sparsifiers. Another line of work [55], [56] employs spar-
sification during approximate Gaussian elimination. Spectral
sparsification is one of the main tools that enables recent
progress in fast Laplacian solvers (i.e., for solving linear
systems of the form $Lx = b$, where $L$ is a graph Laplacian);
see [57] for a survey. Peng and Spielman [58] develop a
parallel solver that invokes sparsification as a subroutine,
which is improved and extended in following works [23], [24].
Recently, Tutunov [59] extends the approach in [58] to solve
decentralized consensus optimization problems. In this work,
we leverage spectral sparsification to design communication-
efficient collaborative optimization methods for rotation aver-
aging with provable convergence guarantees.
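To make the effective-resistance sampling of [20] concrete, here is a toy dense-matrix sketch. It is our own simplification: practical sparsifiers estimate effective resistances in near-linear time, whereas this illustration computes them exactly via the Laplacian pseudoinverse.

```python
import numpy as np

def sparsify(n, edges, q, seed=0):
    """Toy spectral sparsification via effective-resistance sampling:
    draw q i.i.d. edge samples with probability p_e proportional to
    w_e * R_eff(e), and give each sampled edge weight c_e * w_e/(q p_e)
    so the sparsifier's Laplacian is unbiased."""
    L = np.zeros((n, n))
    for i, j, w in edges:
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    Lp = np.linalg.pinv(L)
    # Effective resistance R_eff(i, j) = (e_i - e_j)^T L^+ (e_i - e_j).
    r = np.array([Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j]
                  for i, j, _ in edges])
    w_arr = np.array([w for _, _, w in edges])
    p = w_arr * r
    p = p / p.sum()
    counts = np.random.default_rng(seed).multinomial(q, p)
    return [(i, j, c * w / (q * pe))
            for (i, j, w), c, pe in zip(edges, counts, p) if c > 0]
```

With enough samples the output Laplacian is spectrally close to the original; [20] shows that $O(n \log n)$ edges suffice for an $\epsilon$-approximation in the sense of (1).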
III. PROBLEM FORMULATION
This section formally defines the rotation averaging and
translation estimation problems in the multi-robot context. For
clarity, here we introduce the problems without considering
outlier measurements, and present extensions to outlier-robust
optimization in Sec. V-D. We review the communication and
computation architectures used by our algorithms. Finally, we
discuss relevant applications in multi-robot SLAM and SfM.
A. Rotation Averaging
We model rotation averaging using an undirected measurement graph $G = (\mathcal{V}, \mathcal{E})$. Each vertex $i \in \mathcal{V} = [n]$ corresponds to a rotation variable $R_i \in \mathrm{SO}(d)$ to be estimated. Each edge $(i, j) \in \mathcal{E}$ corresponds to a noisy relative measurement of the form
$$\widetilde{R}_{ij} = R_i^\top R_j R_{ij}^{\mathrm{err}}, \qquad (6)$$
where $R_i, R_j \in \mathrm{SO}(d)$ are the latent (ground-truth) rotations and $R_{ij}^{\mathrm{err}} \in \mathrm{SO}(d)$ is the measurement noise. In standard rotation averaging, we aim to estimate the rotations by minimizing the sum of squared measurement residuals, which corresponds to the formulation in Problem 1.

Problem 1 (Rotation Averaging).
$$\underset{R = (R_1, \ldots, R_n) \in \mathrm{SO}(d)^n}{\text{minimize}} \ \sum_{(i,j) \in \mathcal{E}} \kappa_{ij}\, \varphi(R_i \widetilde{R}_{ij}, R_j). \qquad (7)$$

For each edge $(i, j) \in \mathcal{E}$, $\kappa_{ij} > 0$ is the corresponding measurement weight. The function $\varphi$ is defined as either the squared geodesic distance (8a) or the squared chordal distance (8b):
$$\varphi(R_i \widetilde{R}_{ij}, R_j) \triangleq \frac{1}{2} \left\| \operatorname{Log}\big( \widetilde{R}_{ij}^\top R_i^\top R_j \big) \right\|^2, \qquad (8a)$$
$$\varphi(R_i \widetilde{R}_{ij}, R_j) \triangleq \frac{1}{2} \left\| R_i \widetilde{R}_{ij} - R_j \right\|_F^2. \qquad (8b)$$
In the multi-robot setting, each robot owns a subset of
all rotation variables and only knows about measurements
involving its own variables; see Fig. 1b for an illustration.
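As a concrete check of the measurement model (6) and objective (7), the sketch below evaluates the chordal cost (8b) on a small noiseless instance; the helper names are ours, and this is an illustration rather than the paper's solver.

```python
import numpy as np

def rot_z(a):
    """3D rotation by angle a about the z-axis."""
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def chordal_cost(R, meas):
    """Objective (7) with phi the squared chordal distance (8b):
    sum of kappa_ij/2 * ||R_i Rtil_ij - R_j||_F^2 over edges."""
    return sum(0.5 * kap * np.linalg.norm(R[i] @ Rij - R[j], 'fro')**2
               for i, j, kap, Rij in meas)

# Noiseless instance: measurements follow (6) with R_err = I.
R_true = [rot_z(0.0), rot_z(0.4), rot_z(1.1)]
meas = [(i, j, 1.0, R_true[i].T @ R_true[j])
        for i, j in [(0, 1), (1, 2), (0, 2)]]
```

On noiseless data the cost at the ground truth is zero, and left-multiplying every rotation by a common $S \in \mathrm{SO}(3)$ leaves the cost unchanged, previewing the gauge symmetry analyzed in Sec. IV.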
B. Translation Estimation
Similar to rotation averaging, we also consider the problem
of estimating multiple translation vectors given noisy relative
translation measurements.
Problem 2 (Translation Estimation).
$$\underset{t = (t_1, \ldots, t_n) \in \mathbb{R}^{d \times n}}{\text{minimize}} \ \sum_{(i,j) \in \mathcal{E}} \frac{\tau_{ij}}{2} \left\| t_j - t_i - \widehat{t}_{ij} \right\|_2^2. \qquad (9)$$

Note that (9) is a linear least-squares problem. Similar to rotation averaging, (9) can be modeled using the undirected measurement graph $G = (\mathcal{V}, \mathcal{E})$, where vertex $i$ represents the translation variable $t_i \in \mathbb{R}^d$ to be estimated, and edge $(i, j) \in \mathcal{E}$ represents the relative translation measurement $\widehat{t}_{ij} \in \mathbb{R}^d$. Lastly, $\tau_{ij} > 0$ is the positive weight associated with measurement $(i, j) \in \mathcal{E}$.
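Since (9) is linear least squares, its normal equations form a Laplacian system, anticipating the relations developed in Sec. IV: with incidence matrix $A$, weights $W = \operatorname{diag}(\tau)$, and stacked measurements $B$, stationarity gives $L\,T = A W B$ with $L = A W A^\top$ the $\tau$-weighted graph Laplacian. The dense solver below is our own illustration (not the paper's collaborative algorithm), with the global-translation ambiguity resolved by the pseudoinverse.

```python
import numpy as np

def solve_translations(n, edges):
    """Solve Problem 2 in closed form via its normal equations
    L T = A W B, where L = A W A^T is the tau-weighted Laplacian.
    The solution is unique only up to a common translation (the
    kernel of L); pinv returns the zero-mean representative."""
    m = len(edges)
    d = len(edges[0][3])
    A = np.zeros((n, m))            # incidence matrix
    W = np.zeros(m)                 # edge weights tau_ij
    B = np.zeros((m, d))            # measurements t_hat_ij ~ t_j - t_i
    for e, (i, j, tau, bt) in enumerate(edges):
        A[i, e], A[j, e] = -1.0, 1.0
        W[e] = tau
        B[e] = bt
    L = A @ np.diag(W) @ A.T
    return np.linalg.pinv(L) @ (A @ np.diag(W) @ B)   # rows are t_i
```

On noiseless data this recovers the ground-truth translations exactly, up to the unobservable global shift.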
C. Communication and Computation Architecture
In this work, we consider solving Problems 1 and 2 under
the server-client architecture. As shown in Fig. 1a, a central
server coordinates with all robots (clients) to solve the overall
problem by distributing the underlying computation to the
entire team. In practice, the server could itself be a robot (e.g.,
in multi-robot exploration scenarios) or a remote machine
(e.g., in cloud-based AR/VR applications). Each iteration
(communication round) consists of an upload stage in which
robots perform parallel local computations and transmit their
intermediate information to the server, and a download stage
in which the server aggregates information from all robots and
broadcasts back the result. When a direct communication link
to the server does not exist, a robot can still participate in this
framework by relaying its information through other robots.
By leveraging local computations, the server-client architec-
ture can scale better compared to a fully centralized approach
in which the server solves the entire optimization problem.
At the same time, by implementing second-order optimization
algorithms, this architecture also produces significantly faster
and more accurate solutions compared to fully distributed
approaches that rely on first-order optimization. In the experi-
ments, we demonstrate the scalability and fast convergence of
our approach on large SLAM and SfM problems.
D. Applications
Rotation averaging (Problem 1) is a fundamental problem in
robotics and computer vision. In distributed camera networks
(e.g., [8]), rotation averaging is used to estimate the orienta-
tions of spatially distributed cameras with overlapping fields
of view. In distributed SfM (e.g., [18]), rotation averaging is
a key step to build large-scale 3D reconstructions from many
images. Furthermore, in the context of collaborative SLAM,
rotation averaging and translation estimation (Problem 2) can
be combined to provide accurate initialization for PGO [17].
In state-of-the-art PGO solvers, the cost function often has a separable structure between rotation and translation measurements. For example, SE-Sync [60] uses the formulation
$$\underset{\substack{R = (R_1, \ldots, R_n) \in \mathrm{SO}(d)^n, \\ t = (t_1, \ldots, t_n) \in \mathbb{R}^{d \times n}}}{\text{minimize}} \ \sum_{(i,j) \in \mathcal{E}} \kappa_{ij} \left\| R_i \widetilde{R}_{ij} - R_j \right\|_F^2 + \sum_{(i,j) \in \mathcal{E}} \tau_{ij} \left\| t_j - t_i - R_i \widetilde{t}_{ij} \right\|_2^2. \qquad (10)$$
In (10), $R_i \in \mathrm{SO}(d)$ and $t_i \in \mathbb{R}^d$ are the rotation matrices and translation vectors to be estimated, $\widetilde{R}_{ij} \in \mathrm{SO}(d)$ and $\widetilde{t}_{ij} \in \mathbb{R}^d$ are noisy relative rotation and translation measurements, and $\kappa_{ij}, \tau_{ij} > 0$ are constant measurement weights. Notice that in (10), the first sum is equivalent to rotation averaging (Problem 1) under the chordal distance. Furthermore, given fixed rotation estimates $\widehat{R} \in \mathrm{SO}(d)^n$, the second sum is equivalent to translation estimation (Problem 2), where each $\widehat{t}_{ij}$ in (9) is given by $\widehat{t}_{ij} = \widehat{R}_i \widetilde{t}_{ij}$. In both cases, the equivalence holds up to a multiplicative factor of $1/2$, which is inconsequential since it does not change the solutions of the optimization problems. Following Carlone et al. [17], we use these
observations to initialize PGO in a two-stage process. The
first stage initializes the rotation variables using the proposed
rotation averaging solver (Sec. V-B). Given the initial rotation
estimates, the second stage initializes the translations using
the proposed translation estimation solver (Sec. V-C). We
note that this initialization scheme does not have theoretical
guarantees with respect to the full PGO problem. However, we
still demonstrate its practical value through our experiments.
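The reduction from (10) to Problem 2 given fixed rotations can be seen in a few lines: rotating each raw relative translation $\widetilde{t}_{ij}$ by the stage-1 estimate $\widehat{R}_i$ yields exactly the measurements $\widehat{t}_{ij}$ of (9). A noiseless 2D sketch, using the ground-truth rotations as a stand-in for the stage-1 output (all names are ours):

```python
import numpy as np

def rot2(a):
    """2D rotation by angle a."""
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

# Noiseless 3-pose instance of (10).
R_true = [rot2(0.0), rot2(0.4), rot2(1.0)]
t_true = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
pairs = [(0, 1), (1, 2), (0, 2)]
# Relative translations expressed in frame i, as in (10).
t_til = {(i, j): R_true[i].T @ (t_true[j] - t_true[i]) for i, j in pairs}

# Stage 1 (rotation averaging) is assumed solved; here the ground
# truth stands in for its output.
R_hat = R_true
# Stage 2 input: rotate each t_til into the global frame, which
# produces the Problem 2 measurements t_hat_ij = R_hat_i @ t_til_ij.
t_hat = {(i, j): R_hat[i] @ t_til[(i, j)] for i, j in pairs}
```

With exact rotations, $\widehat{t}_{ij}$ equals $t_j - t_i$, so stage 2 is precisely the translation problem (9).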
IV. LAPLACIAN SYSTEMS ARISING FROM ROTATION
AVERAGING AND TRANSLATION ESTIMATION
In this section, we show that for rotation averaging (Prob-
lem 1) and translation estimation (Problem 2), their Hessian
matrices are closely related to the Laplacians of suitably
weighted graphs. The theoretical relations we establish in this
section pave the way for designing fast and communication-
efficient solvers in Sec. V.
A. Rotation Averaging
To solve rotation averaging (Problem 1), we resort to an
iterative Riemannian optimization framework. Before pro-
ceeding, however, one needs to be careful of the inherent
gauge symmetry of rotation averaging: in (7), note that left-multiplying each rotation $R_i \in \mathrm{SO}(d)$, $i \in [n]$, by a common rotation $S \in \mathrm{SO}(d)$ does not change the cost function. As a result, each solution $R = (R_1, \ldots, R_n) \in \mathrm{SO}(d)^n$ actually corresponds to an equivalence class of solutions of the form
$$[R] = \{(S R_1, \ldots, S R_n),\ S \in \mathrm{SO}(d)\}. \qquad (11)$$
The equivalence relation (11) shows that rotation averaging is actually an optimization problem defined over a quotient manifold $\mathcal{M} = \overline{\mathcal{M}} / \sim$, where $\overline{\mathcal{M}} = \mathrm{SO}(d)^n$ is called the total space and $\sim$ denotes the equivalence relation underlying (11); see [26, Chapter 9] for more details. Accounting for the quotient structure is critical for establishing the relation between the Hessian and the graph Laplacian.
In this work, we are interested in applying Newton's method on the quotient manifold $\mathcal{M}$ due to its superior convergence rate. The Newton update can be derived by considering a local perturbation of the cost function. Specifically, let $R = (R_1, \ldots, R_n) \in \mathrm{SO}(d)^n$ be our current rotation estimates. For each rotation matrix $R_i$, we seek a local correction to it in the form of $\operatorname{Exp}(v_i) R_i$, where $v_i \in \mathbb{R}^p$ is some vector to be determined and $\operatorname{Exp}(\cdot)$ is defined in (5). In (7), replacing each $R_i$ with its correction $\operatorname{Exp}(v_i) R_i$ leads to the following local approximation¹ of the optimization problem:
$$\min_{v \in \mathbb{R}^{pn}} \ h(v; R) \triangleq \sum_{(i,j) \in \mathcal{E}} \kappa_{ij}\, \varphi\big(\operatorname{Exp}(v_i) R_i \widetilde{R}_{ij},\ \operatorname{Exp}(v_j) R_j\big). \qquad (12)$$
In (12), the overall vector $v \in \mathbb{R}^{pn}$ is formed by concatenating all $v_i$'s. Compared to (7), the optimization variable in (12) becomes the vector $v$, and the rotations $R$ are treated as fixed. Furthermore, we note that the quotient structure of Problem 1 gives rise to the following vertical space [26, Chapter 9.4] that summarizes all directions of change along which (12) is invariant:
$$\mathcal{N} = \operatorname{image}(1_n \otimes I_p) \subseteq \mathbb{R}^{pn}. \qquad (13)$$
Intuitively, $\mathcal{N}$ captures the action of any global left rotation. Indeed, for any $v \in \mathcal{N}$, we have $\operatorname{Exp}(v_i) = \operatorname{Exp}(v_j)$ for all $i, j \in [n]$, and thus the cost function (12) remains constant.
Let us denote the gradient and Hessian of (12) as follows:
$$g(R) \triangleq \nabla h(v; R)\big|_{v=0}, \qquad H(R) \triangleq \nabla^2 h(v; R)\big|_{v=0}. \qquad (14)$$
Our notations $g(R)$ and $H(R)$ serve to emphasize that the gradient and Hessian are defined in the total space $\overline{\mathcal{M}}$ and depend on the current rotation estimates $R$. Let $\mathcal{H} \triangleq \mathcal{N}^\perp$ denote the horizontal space, which is the orthogonal complement of the vertical space $\mathcal{N}$. In [26, Chapter 9.12], it is shown that executing the Newton update on the quotient manifold amounts to finding the solution $v^\star \in \mathcal{H}$ of the linear system
$$\underbrace{P_{\mathcal{H}} H(R) P_{\mathcal{H}}}_{\overline{H}(R)}\, v = -g(R), \qquad (15)$$
where $P_{\mathcal{H}}$ denotes the orthogonal projection onto $\mathcal{H}$.

¹The approximation defined in (12) is closely related to the standard pullback function in Riemannian optimization; see Appendix II-D. In this work, we use (12) since the resulting Hessian has a particularly interesting relationship with the graph Laplacian matrix, as shown in Theorem 1.
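To illustrate the projected system (15), the sketch below (a generic dense illustration with our own names, not the paper's communication-efficient solver) forms $P_{\mathcal{H}} = I - \frac{1}{n}(1_n 1_n^\top) \otimes I_p$, the orthogonal projection annihilating the vertical space (13), and solves (15) with a pseudoinverse so that the update stays in the horizontal space.

```python
import numpy as np

def quotient_newton_step(H, g, n, p):
    """Solve (P_H H(R) P_H) v = -g for v in the horizontal space,
    where P_H projects onto the orthogonal complement of the
    vertical space N = image(1_n kron I_p) from (13). The gradient
    is projected as well, which is a no-op when g is horizontal."""
    P = np.eye(n * p) - np.kron(np.ones((n, n)) / n, np.eye(p))
    return -np.linalg.pinv(P @ H @ P) @ (P @ g)
```

As a sanity check, taking $H = L \otimes I_p$ for a connected graph's Laplacian $L$ (a stand-in consistent with the Laplacian surrogate this paper develops in Sec. V) yields an update that is horizontal and satisfies the projected system.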