
Most density matrices of interest, of size d × d, have some underlying structure. Such structure, for example a low rank r [15,16,17,18,19,20] or a permutation property [21,22], can be exploited for efficient reconstruction. Specifically, low rank r matrices are well suited to compressed sensing frameworks, since a number of Pauli observable measurements m ∼ O(rd) (ignoring some log d dependence) is sufficient to recover the density matrix ρ, instead of having to determine the full set of d^2 parameters [16,17,23,24,25,26]. Guaranteed reconstruction relies on the restricted isometry property (RIP), which has been proven to hold for ordinary Pauli observable measurements [27].
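As a rough illustration of this measurement-count saving (not of the reconstruction algorithm itself), the following sketch builds a random rank-r state, samples m ∼ r d log d random Pauli observables, and records their expectation values. The qubit number, rank, and the prefactor in m are illustrative choices, not values taken from the cited works.

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices.
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_observable(label):
    """Tensor product of single-qubit Paulis, e.g. 'XZIZXY' for 6 qubits."""
    op = np.array([[1.0 + 0j]])
    for ch in label:
        op = np.kron(op, PAULIS[ch])
    return op

def random_low_rank_state(d, r, rng):
    """Random rank-r density matrix of dimension d."""
    a = rng.standard_normal((d, r)) + 1j * rng.standard_normal((d, r))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

k, r = 6, 2                          # illustrative qubit number and rank
d = 2 ** k
rng = np.random.default_rng(0)
rho = random_low_rank_state(d, r, rng)

# m ~ O(r d log d) randomly chosen Pauli settings instead of all d^2 parameters;
# the prefactor 2 below is purely illustrative.
m = int(np.ceil(2 * r * d * np.log(d)))
all_labels = ["".join(p) for p in itertools.product("IXYZ", repeat=k)]
chosen = rng.choice(all_labels, size=m, replace=False)
expectations = {lab: np.trace(pauli_observable(lab) @ rho).real for lab in chosen}
print(f"{m} Pauli expectation values sampled out of d^2 = {d**2} parameters")
```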
Since the dimension of the matrix, d = 2^k, grows exponentially with the qubit number k, the complexity of the reconstruction increases very quickly. The fact that, up to now, experimental demonstrations of tomography have only been performed for small qubit numbers [28,29] illustrates the difficulty. When the matrix dimension d is large, two aspects are particularly relevant in deciding the quality of the tomography: one is the sample complexity and the other is the time complexity. In addition, the algorithm used to recover the density matrix must also guarantee accuracy.
Sample complexity relates to the fundamental question of how many copies of ρ are necessary and sufficient to determine a state [30]. Theoretically, for positive operator-valued measure (POVM) schemes, Ref. [30] showed that O(dr log(d/ε)/ε) copies of ρ are sufficient for tomography to obtain a description with fidelity 1−ε, and that Ω(rd/(ε log(d/rε))) copies are necessary. In another study, [31] improved the lower bound to Ω(rd/ε) copies. For Pauli measurements, [32] showed that O(10^k/δ^2) copies are sufficient to accomplish the tomography of a k-qubit system down to a trace distance error δ.
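To give a concrete feel for these scalings, the short calculation below evaluates the quoted expressions for an illustrative 6-qubit, rank-2 case; the constants hidden in the O(·) and Ω(·) notation are simply set to one, which is an assumption made only for illustration.

```python
import numpy as np

def copies_fidelity_upper(d, r, eps):
    """O(d r log(d/eps) / eps) upper bound for fidelity 1 - eps [30]; constant set to 1."""
    return d * r * np.log(d / eps) / eps

def copies_fidelity_lower(d, r, eps):
    """Omega(r d / eps) lower bound of [31]; constant set to 1."""
    return r * d / eps

def copies_pauli(k, delta):
    """O(10^k / delta^2) copies for Pauli measurements, trace distance delta [32]; constant set to 1."""
    return 10 ** k / delta ** 2

k, r, eps, delta = 6, 2, 0.05, 0.05
d = 2 ** k
print(f"fidelity upper bound : {copies_fidelity_upper(d, r, eps):.2e} copies")
print(f"fidelity lower bound : {copies_fidelity_lower(d, r, eps):.2e} copies")
print(f"Pauli, trace distance: {copies_pauli(k, delta):.2e} copies")
```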
Time complexity determines the efficiency of an algorithm, which is crucial for matrix recovery in practical applications. Computations involving the entire matrix can be slow, especially when the system size is large. Many standard and state-of-the-art algorithms require solving eigenvalue problems or computing a singular value decomposition (SVD), in particular when they involve a projection related to the eigenspectrum [10,33,34], a singular value contraction operator [35,36,37], or a unitary transformation of the eigenbasis [38,14]. Both SVD and eigenvalue decomposition have time complexity O(d^3) and can therefore be slow. Other time-consuming operations involving the full d × d matrix include Hessian calculations [39] and matrix inversion [40,41,42]. Although the efficiency of [39] can be improved by switching from the initial costly rapid descent to a less costly proxy for the Hessian, the method is still heuristic and does not yet provide theoretical guarantees of performance and convergence. An extension of the matrix inversion approach [40] can also improve efficiency [43]; however, it relies on graphics processing units (GPUs), and linear inversion of the full matrix is not an efficient approach from an algorithmic point of view. Without exploiting the structure behind the matrix, these algorithms tend to be slow in recovering matrices when the system is large.
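The cubic cost of a dense eigendecomposition can be seen directly in a minimal timing sketch such as the one below, which applies NumPy's eigh to random full-rank density matrices of growing dimension; the exact timings depend on hardware and the underlying BLAS, so only the growth trend is meaningful.

```python
import time
import numpy as np

def random_density_matrix(d, rng):
    """Full-rank random density matrix of dimension d."""
    a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(0)
for k in (8, 9, 10, 11):               # 8 to 11 qubits, d = 2^k
    d = 2 ** k
    rho = random_density_matrix(d, rng)
    t0 = time.perf_counter()
    np.linalg.eigh(rho)                 # full eigendecomposition, O(d^3)
    print(f"k = {k:2d}, d = {d:5d}, eigh time = {time.perf_counter() - t0:.3f} s")
```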
A good algorithm should guarantee both accuracy and efficiency in finding the answer. The difference between the final reconstructed matrix ρ̂ and the underlying true density matrix ρ ultimately contains both the error due to the algorithm itself and the error intrinsic to the input measurement data. The analysis and control of the bound on this estimation error is important for the correctness of the algorithm. The error metric can vary from algorithm to algorithm: the error bound is stated in the nuclear norm in the projected least squares approach [44], while in the convex optimization approach within the compressed sensing framework, error bounds are given in both the nuclear norm [45,46] and the Frobenius norm [45].
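The two norms in which these bounds are typically stated are straightforward to evaluate numerically; the sketch below computes both for a toy pair (ρ̂, ρ), where the perturbation standing in for the reconstruction error is a placeholder chosen only for illustration.

```python
import numpy as np

def nuclear_norm(m):
    """Nuclear (trace) norm: sum of singular values."""
    return np.linalg.svd(m, compute_uv=False).sum()

def frobenius_norm(m):
    """Frobenius norm: square root of the sum of squared entries."""
    return np.linalg.norm(m, "fro")

# Toy true state rho and hypothetical estimate rho_hat (placeholders for illustration).
rng = np.random.default_rng(1)
d = 8
a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = a @ a.conj().T
rho /= np.trace(rho).real
rho_hat = rho + 1e-2 * np.eye(d) / d    # toy perturbation standing in for estimation error
rho_hat /= np.trace(rho_hat).real

err = rho_hat - rho
print("nuclear norm error:  ", nuclear_norm(err))
print("Frobenius norm error:", frobenius_norm(err))
```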
Since the difficulty in tomography is largely caused by the limitations of the algorithms, it is
important to find more efficient algorithms. Time complexity can be reduced if the underlying