
Demystifying the characterization of SDP matrices in mathematical
programming
Daniel Porumbel
October 25, 2022
Argument
This manuscript was written because I found no other introduction to SDP programming that targets the same
audience. This work is intended to be accessible to anybody who does not hate maths, who knows what a derivative
is and accepts (or has a proof of) results like det(AB) = det(A) det(B). If you know this, I think you can understand
most of this text without buying other books; my goal is not to remind/enumerate a list of results but to (try to)
make the reader examine (the proofs of) these results to get full insight into them.
A first difference compared to other existing introductions to SDP is that this work comes out of a mind that was
itself struggling to understand. This may seem to be only a weakness, but, paradoxically, it is both a weakness and a
strength. First, I did not try to overpower the reader, but I tried to minimize the distance between the author and
the reader as much as possible, even hoping to achieve a small level of mutual empathy. This enabled me to avoid a
quite common pitfall: many long-acknowledged experts tend to forget the difficulties of beginners. Other experts try
to make all proofs as short as possible and to dismiss as unimportant certain key results they have seen thousands of
times in their career. I also avoided this, even if I did shorten a few proofs when I revised this manuscript two years
after it was first written. However, I also kept certain proofs that seem longer than necessary because I feel they
offer more insight; an important goal is to capture the “spirit” of each proven result instead of reducing it to a flow
of formulae.
The very first key step towards mastering SDP programming is to get full insight into the eigen-decomposition
of real symmetric matrices. It is enough to see the way many other introductions to SDP programming address
this eigen-decomposition to understand that their target audience is different from mine. They often list the eigen-
decomposition without proof, while I give two proofs to really familiarize the reader with it.1
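For reference, the result in question, also known as the spectral theorem, states (in notation that may differ slightly from the one used later in the manuscript) that any real symmetric matrix A can be written as A = QΛQ⊤, where Q is an orthogonal matrix (Q⊤Q = I) whose columns are eigenvectors of A and Λ = diag(λ1, . . . , λn) is a diagonal matrix containing the (real) eigenvalues of A; in particular, A is SDP if and only if all these eigenvalues are non-negative.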
If you can say “Ok, I still vaguely remember the eigen-decomposition (and other key SDP properties) from my
(under-)graduate studies some n≥5 years ago; I don’t need a proof”, then you do not belong to my target audience.
I am highly skeptical that such an approach can lead to anything but superficial learning. Anyhow, my brain functions
in exactly the opposite manner. I like to learn by checking all proofs by myself and I can't stand taking things for
granted. The only unproven facts from this manuscript are the fundamental theorem of algebra and two results from
Section 5.3.2.3. But I do provide complete proofs, for instance, for the Cholesky decomposition of SDP matrices, the
strong duality theorem for linear conic programming (including SDP programming), six equivalent formulations of the
Lovász theta number, the copositive formulation of the maximum stable set problem, a few convexification results for quadratic
programs, and many others. I tried to prove everything by myself, so certain proofs are original, although this
introduction was not intended to be research work; of course I got help multiple times from the internet, articles and
books, the references being indicated as footnote citations.
The most essential building blocks are presented in the first part. One should really master this first part before
jumping to the second one; the essentials from the first part may even be generally useful for reading other SDP works.
1 In “Handbook of Semidefinite Programming: Theory, Algorithms, and Applications” by H. Wolkowicz, R. Saigal and
L. Vandenberghe, the eigen-decomposition (there called the spectral theorem) is listed with no proof in Chapter 2, “Convex Analysis on
Symmetric Matrices”. The introduction of the “Handbook on Semidefinite, Conic and Polynomial Optimization” by M. Anjos
and J.B. Lasserre refers the reader to the (700-page-long) book “Matrix Analysis” by Horn and Johnson. As a side
remark, the introductions of both these handbooks are rather short (14 and 22 pages, respectively) and they mainly remind/enumerate
key results, pointing to other books for proofs. In “Semidefinite Programming for Combinatorial Optimization” by
C. Helmberg, the eigen-decomposition is presented in an appendix that redirects the reader to the same “Matrix Analysis” book.
The slides of the course “Programmation linéaire et optimisation combinatoire” by Frédéric Roupin for the “Master Parisien
de Recherche Opérationnelle” (lipn.univ-paris13.fr/~roupin/docs/MPROSDPRoupin2018-partie1.pdf) provide many results
from my manuscript, but no proofs are given. The MIT course “Introduction to Semidefinite Programming” by R. Freund does
not even provide the definition of an SDP matrix or the eigen-decomposition. The book “Convex Optimization” by S. Boyd and L. Vandenberghe
starts using SDP matrices from the beginning (e.g., to define ellipsoids in Section 2.2.2) without defining the concept of
an SDP matrix, not even in an appendix. The argument could extend to other non-trivial concepts that are taken as prerequisites in
the above works. For instance, the above “Convex Optimization” book introduces the square root of an SDP matrix (in five lines
in Appendix A.5.2) without showing its uniqueness; the proof takes half a page in Appendix B.4 of this manuscript.