
Within Gaussian graphical models (GGMs), graph learning boils down to the problem of
estimating the precision (inverse covariance) matrix of a Gaussian Markov random field
(Friedman et al., 2008; Lake and Tenenbaum, 2010). In practice, achieving an accurate
covariance matrix estimation is often a difficult task due to low sample support. Thus,
it is common to introduce prior assumptions on the structure of this matrix that
guarantee a correct estimation with fewer samples. A popular approach relies on low-rank
factorizations, which assume that the data is driven by an underlying low-dimensional
linear model, corrupted by an independent perturbation. The resulting covariance matrix
decomposition then involves a core that is a low-rank positive semi-definite matrix.
Such a model is ubiquitous in statistics and is, for example, at the heart of
probabilistic principal component analysis (Tipping and Bishop, 1999), low-rank factor
analysis (Robertson and Symons, 2007; Khamaru and Mazumder, 2019; Rubin and Thayer,
1982), and their many generalizations.
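Concretely, such a decomposition can be written (in our own notation, as a standard
formulation rather than one quoted from a specific reference) as
\[
\Sigma = H H^{\top} + \sigma^{2} I_{p},
\qquad H \in \mathbb{R}^{p \times k}, \quad k \ll p,
\]
where the core $H H^{\top}$ is the rank-$k$ positive semi-definite part capturing the
low-dimensional linear model, and $\sigma^{2} I_{p}$ accounts for the independent
perturbation; probabilistic principal component analysis corresponds to Gaussian data
with exactly this covariance structure.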
Since GGMs and low-rank factorizations share a common root in structured covariance (or
precision) matrix estimation, it appears desirable to leverage both approaches in a
unified graph learning formulation. On the one hand, linear dimension reduction
approaches rely on particular spectral structures that can be beneficial for graph
learning (Kumar et al., 2020). On the other hand, such a unification also opens the way
to a graph-oriented view of sparse principal component analysis (Yoshida and West, 2010;
Meng et al., 2014). Though theoretically appealing, this unification is challenging
because it leads to optimization problems whose objective functions and constraints
apply to both the covariance matrix and its inverse. Thus, deriving single-step learning
algorithms for these models has only recently been addressed (Chandra et al., 2021).
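Schematically (again in our own notation, not the exact formulation studied below), such
a unified problem takes the form
\[
\underset{\Sigma \succ 0}{\text{minimize}} \;\;
-\log\det(\Sigma^{-1}) + \operatorname{tr}(S \Sigma^{-1}) + \lambda\, h(\Sigma^{-1})
\quad \text{subject to} \quad
\Sigma = H H^{\top} + \sigma^{2} I_{p},
\]
where $S$ is the sample covariance matrix and $h$ is a sparsity-promoting penalty on the
precision matrix, whose support encodes the graph. The difficulty announced above is
visible here: the penalty acts on the inverse $\Sigma^{-1}$, while the low-rank
constraint acts on $\Sigma$ itself.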
In this paper, we propose a new family of methods for graph learning with low-rank
constraints on the covariance matrix, hereafter referred to as graphical factor models
(GFM). First, we reformulate graph learning as a problem that encompasses both
elliptical distributions and low-rank factor models. The main interest of generalizing
Gaussian graphical models to elliptical ones is to ensure robustness to underlying
heavy-tailed distributions (Vogel and Fried, 2011; Finegold and Drton, 2014; Zhang et
al., 2013; de Miranda Cardoso et al., 2021). Moreover, additionally considering low-rank
factor models allows for an effective dimensionality reduction. The main novelty of our
approach is to tackle the resulting class of constrained and penalized maximum
likelihood estimation problems in a unified way with Riemannian optimization (Absil et
al., 2008; Boumal, 2020). To do so, we leverage geometries of both the positive definite
matrices (Bhatia, 2009) and the positive semi-definite matrices of fixed rank (Bonnabel
and Sepulchre, 2009; Bouchard et al., 2021) that are well suited to the considered
models. The corresponding tools allow us to develop optimization methods that ensure the
desired structures for the covariance matrix, thus providing a flexible and numerically
efficient framework for learning graphical factor models.
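As a minimal, self-contained sketch of this Riemannian approach (assuming the pymanopt
library; this illustrates the simplest Gaussian, full-rank setting and is not the
authors' implementation; the smooth quadratic off-diagonal penalty stands in for a
sparsity-promoting regularizer):

    import numpy as np
    import autograd.numpy as anp
    import pymanopt
    from pymanopt.manifolds import SymmetricPositiveDefinite
    from pymanopt.optimizers import SteepestDescent

    p, n, lam = 10, 200, 0.1  # dimension, sample size, penalty weight (illustrative)

    # Synthetic observations standing in for real data; S is the sample covariance.
    X = np.random.default_rng(0).standard_normal((n, p))
    S = X.T @ X / n

    manifold = SymmetricPositiveDefinite(p)

    @pymanopt.function.autograd(manifold)
    def cost(theta):
        # Penalized Gaussian negative log-likelihood in the precision matrix theta:
        # -log det(theta) + tr(S theta), plus a smooth off-diagonal penalty used
        # here in place of a sparsity-promoting (e.g., l1) regularizer.
        _, logdet = anp.linalg.slogdet(theta)
        off_diag = theta - anp.diag(anp.diag(theta))
        return -logdet + anp.trace(anp.dot(S, theta)) + lam * anp.sum(off_diag**2)

    problem = pymanopt.Problem(manifold, cost)
    result = SteepestDescent().run(problem)
    theta_hat = result.point  # estimated precision matrix; its off-diagonal
                              # support encodes the learned graph

In the actual GFM setting, the manifold would instead encode the fixed-rank factor
structure (e.g., the geometry of Bonnabel and Sepulchre, 2009), and the Gaussian
likelihood would be replaced by an elliptical one.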
Finally, experiments¹ conducted on synthetic and real-world data sets demonstrate the
interest of considering both elliptical distributions and factor model structures in
graph learning.
¹ The code is available at: https://github.com/ahippert/graphfactormodel.