the Riemannian metric. In addition, it holds that $d(x, y) = \|v\|$ where $v = \mathrm{Log}_x(y)$; see [26, Proposition 10.22].
The Rotation Group $SO(d)$. The rotation group is denoted as $SO(d) = \{R \in \mathbb{R}^{d \times d} : R^\top R = I, \det(R) = 1\}$. The tangent space at $R$ is given by $T_R SO(d) = \{RV : V \in \mathfrak{so}(d)\}$, where $\mathfrak{so}(d)$ is the space of $d \times d$ skew-symmetric matrices. In this work, we consider 2D and 3D rotations exclusively.
We define a basis for $T_R SO(3)$ such that each tangent vector $\eta \in T_R SO(3)$ is identified with a vector $v \in \mathbb{R}^3$,
\[
\eta = R[v]_\times = R \begin{bmatrix} 0 & -v_3 & v_2 \\ v_3 & 0 & -v_1 \\ -v_2 & v_1 & 0 \end{bmatrix}. \tag{3}
\]
Note that (3) defines a bijection between $\eta \in T_R SO(3)$ and $v \in \mathbb{R}^3$. For $d = 2$, we can define a similar basis for the 1-dimensional tangent space $T_R SO(2)$, where each tangent vector $\eta \in T_R SO(2)$ is identified by a scalar $v \in \mathbb{R}$ as,
\[
\eta = R[v]_\times = R \begin{bmatrix} 0 & -v \\ v & 0 \end{bmatrix}. \tag{4}
\]
We have overloaded the notation $[\cdot]_\times$ to map the input scalar or vector to the corresponding skew-symmetric matrix in $\mathfrak{so}(2)$ or $\mathfrak{so}(3)$. Under the basis given in (3) and (4), the inner product on the tangent space is defined by the corresponding vector dot product, i.e., $\langle \eta_1, \eta_2 \rangle = v_1^\top v_2$, where $v_1, v_2 \in \mathbb{R}^p$ are vector representations of $\eta_1$ and $\eta_2$, and $p = \dim SO(d) = d(d-1)/2$.
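For illustration, the following is a minimal NumPy sketch of the overloaded $[\cdot]_\times$ map in (3) and (4); the helper name `hat` is our own choice and this is not the implementation used in this work. It also checks that, under this basis, $\langle \eta_1, \eta_2 \rangle = v_1^\top v_2$ coincides with half the ambient Frobenius inner product of the tangent matrices.

```python
import numpy as np

def hat(v):
    """Overloaded [.]_x map: a scalar -> so(2), a 3-vector -> so(3); cf. (3)-(4)."""
    v = np.atleast_1d(np.asarray(v, dtype=float))
    if v.size == 1:
        w = v.item()
        return np.array([[0.0, -w],
                         [w,   0.0]])
    if v.size == 3:
        v1, v2, v3 = v
        return np.array([[0.0, -v3,  v2],
                         [v3,  0.0, -v1],
                         [-v2,  v1,  0.0]])
    raise ValueError("expected a scalar or a 3-vector")

# Tangent vectors at R have the form eta = R @ hat(v).  Under this basis,
# <eta1, eta2> = v1 . v2, which numerically equals trace(eta1^T eta2) / 2.
rng = np.random.default_rng(0)
v1, v2 = rng.standard_normal(3), rng.standard_normal(3)
Q = np.linalg.qr(rng.standard_normal((3, 3)))[0]
R = Q if np.linalg.det(Q) > 0 else -Q          # an element of SO(3)
eta1, eta2 = R @ hat(v1), R @ hat(v2)
assert np.isclose(0.5 * np.trace(eta1.T @ eta2), v1 @ v2)
```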
We define the function $\mathrm{Exp} : \mathbb{R}^p \to SO(d)$ as,
\[
\mathrm{Exp}(v) = \exp([v]_\times), \tag{5}
\]
where $\exp(\cdot)$ denotes the conventional matrix exponential.
Note that $\mathrm{Exp} : \mathbb{R}^p \to SO(d)$ should not be confused with the exponential mapping on Riemannian manifolds $\mathrm{Exp}_x : T_x\mathcal{M} \to \mathcal{M}$, although the two are closely related in the case of rotations. Specifically, at a point $R \in SO(d)$ where $d \in \{2, 3\}$, the exponential map can be written as $\mathrm{Exp}_R(\eta) = R\,\mathrm{Exp}(v)$. Lastly, we also denote $\mathrm{Log}$ as the inverse of $\mathrm{Exp}$ in (5).
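As a sketch of (5) and of the relation $\mathrm{Exp}_R(\eta) = R\,\mathrm{Exp}(v)$ on $SO(3)$, the snippet below uses SciPy's dense matrix exponential and logarithm; the helpers `hat3`, `Exp`, and `Log` are illustrative stand-ins (valid away from rotation angle $\pi$) rather than the numerical routines used in this work. It also checks the induced geodesic distance identity $d(R_0, R_1) = \|\mathrm{Log}(R_0^\top R_1)\|$.

```python
import numpy as np
from scipy.linalg import expm, logm   # dense matrix exponential / logarithm

def hat3(v):
    """[v]_x for v in R^3, as in (3)."""
    v1, v2, v3 = v
    return np.array([[0.0, -v3,  v2],
                     [v3,  0.0, -v1],
                     [-v2,  v1,  0.0]])

def Exp(v):
    """Exp: R^3 -> SO(3), Exp(v) = exp([v]_x); see (5)."""
    return expm(hat3(v))

def Log(R):
    """Inverse of Exp in (5), valid when the rotation angle is below pi."""
    S = np.real(logm(R))              # principal log is skew-symmetric here
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

rng = np.random.default_rng(1)
v = 0.5 * rng.standard_normal(3)
assert np.allclose(Log(Exp(v)), v)

# Riemannian exponential at R0: Exp_R0(eta) = R0 Exp(w) with eta = R0 [w]_x,
# and the geodesic distance satisfies d(R0, R1) = ||Log(R0^T R1)|| = ||w||.
R0, w = Exp(rng.standard_normal(3)), 0.3 * rng.standard_normal(3)
R1 = R0 @ Exp(w)
assert np.isclose(np.linalg.norm(Log(R0.T @ R1)), np.linalg.norm(w))
```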
II. RELATED WORKS
In this section, we review related work in collaborative
SLAM (Sec. II-A), graph structure in rotation averaging and
PGO (Sec. II-B), and the applications of spectral sparsification
and Laplacian linear solvers (Sec. II-C).
A. Collaborative SLAM
Systems. State-of-the-art collaborative SLAM systems rely
on optimization-based back-ends to accurately estimate robots’
trajectories and maps in a global reference frame. In fully centralized systems (e.g., [1]–[3]), robots upload their processed measurements to a central server; in practice, these measurements could include, e.g., odometry factors, visual keyframes, and/or lidar keyed scans. Using this information, the server is responsible for
managing the multi-robot maps and solving the entire back-
end optimization problem. In contrast, in systems leveraging distributed computation (e.g., [4]–[6], [18], [27]), robots collaborate to solve back-end optimization by coordinating with a server or among themselves. The resulting communication
usually involves exchanging intermediate iterates needed by
distributed optimization to attain convergence.
Optimization Algorithms. To solve factor graph optimiza-
tion in a multi-robot setting, Cunningham et al. develop DDF-
SAM [14], [16] where each agent communicates a “condensed
graph” produced by marginalizing out internal variables (those
without inter-robot measurements) in its local Gaussian factor
graph. Researchers have also developed information-based
sparsification methods to sparsify the dense information matrix
after marginalization using a Chow-Liu tree (e.g., [28], [29]) or
convex optimization (e.g., [30], [31]). In these works, sparsifi-
cation is guided by an information-theoretic objective such as
the Kullback-Leibler divergence, and requires linearization to
compute the information matrix. In comparison, our approach
sparsifies the graph Laplacian, which does not depend on linearization; furthermore, the sparsified results are used by collaborative optimization to achieve fast convergence.
From an optimization perspective, marginalization corre-
sponds to a domain decomposition approach (e.g., see [32,
Chapter 14]) where one eliminates a subset of variables
in the Hessian using the Schur complement. Related works
use sparse approximations of the resulting matrix (e.g., with
tree-based sparsity patterns) to precondition the optimization
[33]–[36]. Recent work [27] combines domain decomposition
with event-triggered transmission to improve communication
efficiency during collaborative estimation. Zhang et al. [37] de-
velop a centralized incremental solver for multi-robot SLAM.
Fully decentralized solvers for SLAM have also gained in-
creasing attention; see [8]–[13], [38]. In the broader field of
optimization, related works include decentralized consensus
optimization methods such as [39]–[42]. Compared to these
fully decentralized methods, the proposed approach assumes
a central server but achieves significantly faster convergence
by implementing approximate second-order optimization.
B. Graph Structure in Rotation Averaging and PGO
Prior works have investigated the impact of graph structure
on rotation averaging and PGO problems from different per-
spectives. One line of research [43]–[45] adopts an estimation-
theoretic approach and shows that the Fisher information
matrix is closely related to the underlying graph Laplacian
matrix. Eriksson et al. [46] establish sufficient conditions for
strong duality to hold in rotation averaging, where the derived
analytical error bound depends on the algebraic connectivity
of the graph. Recently, Bernreiter et al. [47] use tools from
graph signal processing to correct onboard estimation errors in
multi-robot mapping. Doherty et al. [48] propose a measure-
ment selection approach for pose graph SLAM that seeks to
maximize the algebraic connectivity of the underlying graph.
This paper differs from the aforementioned works by analyzing
the impact of graph structure on the underlying optimization
problems, and exploiting the theoretical analysis to design
novel optimization algorithms in the multi-robot setting.
Among related works in this area, the ones most closely related to this paper are [49]–[53]. Carlone [49] analyzes the influence of graph connectivity and noise level on the convergence
of Gauss-Newton methods when solving PGO. Tron [50]
derives the Riemannian Hessian of rotation averaging un-
der the geodesic distance, and uses the results to prove
convergence of Riemannian gradient descent. In a pair of