Matrix measure concentration inequalities and bounds

November 25, 2014 — March 8, 2021

dynamical systems
functional analysis
high d
linear algebra
model selection
probability
stochastic processes

Concentration inequalities for matrix-valued random variables: loosely speaking, guarantees that some random matrix is close to some known value, measured in some metric, with some probability.

Recommended overviews are J. A. Tropp (2015); van Handel (2017); Vershynin (2018).

1 Matrix Chernoff

J. A. Tropp (2015) summarises:

In recent years, random matrices have come to play a major role in computational mathematics, but most of the classical areas of random matrix theory remain the province of experts. Over the last decade, with the advent of matrix concentration inequalities, research has advanced to the point where we can conquer many (formerly) challenging problems with a page or two of arithmetic.

Are these related?

Nikhil Srivastava’s Discrepancy, Graphs, and the Kadison-Singer Problem has an interesting example of bounds via discrepancy theory (and only indirectly probability). D. Gross (2011) is also readable and gives results for matrices over the complex field.
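To have something concrete on the page: the basic matrix Chernoff bound, as I remember it from J. A. Tropp (2015) (see Theorem 5.1.1 there for the precise hypotheses and constants), goes roughly as follows. For independent random Hermitian matrices \(\mathbf{X}_{k} \in \mathbb{H}^{d}\) satisfying \(0 \leq \lambda_{\min }\left(\mathbf{X}_{k}\right)\) and \(\lambda_{\max }\left(\mathbf{X}_{k}\right) \leq L\) a.s., write \(\mu_{\max }:=\lambda_{\max }\left(\sum_{k} \mathbb{E} \mathbf{X}_{k}\right)\). Then, for \(\delta \geq 0\), \[ \mathbb{P}\left\{\lambda_{\max }\left(\sum_{k} \mathbf{X}_{k}\right) \geq(1+\delta) \mu_{\max }\right\} \leq d\left[\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right]^{\mu_{\max } / L}, \] with a companion bound controlling \(\lambda_{\min}\) from below.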

2 Matrix Chebychev

As discussed in, e.g. Paulin, Mackey, and Tropp (2016).

Let \(\mathbf{X} \in \mathbb{H}^{d}\) be a random matrix. For all \(t>0\) \[ \mathbb{P}\{\|\mathbf{X}\| \geq t\} \leq \inf _{p \geq 1} t^{-p} \cdot \mathbb{E}\|\mathbf{X}\|_{S_{p}}^{p} \] Furthermore, \[ \mathbb{E}\|\mathbf{X}\| \leq \inf _{p \geq 1}\left(\mathbb{E}\|\mathbf{X}\|_{S_{p}}^{p}\right)^{1 / p}. \]
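A quick numerical sanity check of the first bound, using a small grid of \(p\) values rather than a true infimum, and a symmetrised Gaussian ensemble purely as a stand-in for \(\mathbf{X}\). The dimension, threshold, and ensemble here are illustrative choices of mine, not anything from the sources.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_samples, t = 10, 5000, 2.5

def schatten_p(A, p):
    """Schatten p-norm: the l^p norm of the singular values of A."""
    s = np.linalg.svd(A, compute_uv=False)
    return (s ** p).sum() ** (1.0 / p)

# A stand-in random Hermitian ensemble: symmetrised Gaussian, scaled so the
# spectrum is O(1).
G = rng.standard_normal((n_samples, d, d))
X = (G + np.swapaxes(G, 1, 2)) / np.sqrt(2 * d)

spec = np.abs(np.linalg.eigvalsh(X)).max(axis=1)   # spectral norms ||X||
lhs = (spec >= t).mean()                           # empirical P{||X|| >= t}
rhs = min(                                         # inf over a small grid of p
    np.mean([schatten_p(Xi, p) ** p for Xi in X]) / t ** p
    for p in (1, 2, 4, 8)
)
print(f"P(||X|| >= {t}) ~ {lhs:.4f}  <=  matrix-Chebyshev bound ~ {rhs:.4f}")
```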

3 Matrix Bernstein

TBC. Bounds the spectral norm of a sum of independent, centred, bounded random matrices.
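Pending that, a placeholder statement from memory rather than from the source (J. A. Tropp (2015), Theorem 6.1.1, has the authoritative version): for independent, centred random Hermitian matrices \(\mathbf{X}_{k} \in \mathbb{H}^{d}\) with \(\left\|\mathbf{X}_{k}\right\| \leq L\) a.s., and matrix variance proxy \(v:=\left\|\sum_{k} \mathbb{E} \mathbf{X}_{k}^{2}\right\|\), \[ \mathbb{P}\left\{\left\|\sum_{k} \mathbf{X}_{k}\right\| \geq t\right\} \leq 2 d \exp \left(\frac{-t^{2} / 2}{v+L t / 3}\right) \quad \text{for all } t \geq 0. \]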

4 Matrix Efron-Stein


The “classical” Efron–Stein inequalities are simple; the matrix ones, not so much.

See, e.g., Paulin, Mackey, and Tropp (2016).
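For comparison, the classical scalar statement: if \(Z=\left(Z_{1}, \ldots, Z_{n}\right)\) has independent coordinates, \(Z^{(i)}\) denotes \(Z\) with the \(i\)-th coordinate resampled independently, and \(f(Z)\) is square-integrable, then \[ \operatorname{Var} f(Z) \leq \frac{1}{2} \sum_{i=1}^{n} \mathbb{E}\left[\left(f(Z)-f\left(Z^{(i)}\right)\right)^{2}\right]. \] Roughly speaking, the matrix versions in Paulin, Mackey, and Tropp (2016) bound trace moments of \(\mathbf{X}-\mathbb{E} \mathbf{X}\) by analogous exchangeable-pair variance proxies, at the cost of considerably more bookkeeping.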

5 Gaussian

Handy results from Vershynin (2018):

Take \(X \sim N\left(0, I_{n}\right).\)

For any fixed vectors \(u, v \in \mathbb{R}^{n},\) we have \[ \mathbb{E}\langle X, u\rangle\langle X, v\rangle=\langle u, v\rangle. \]

Given a vector \(u \in \mathbb{R}^{n}\), consider the random variable \(X_{u}:=\langle X, u\rangle .\)

Further, we know that \(X_{u} \sim N\left(0,\|u\|_{2}^{2}\right).\) It follows that \[ \mathbb{E}\left[(X_{u}-X_{v})^2\right]^{1/2}=\|u-v\|_{2} \] for any fixed vectors \(u, v \in \mathbb{R}^{n}.\)

Grothendieck’s identity: For any fixed vectors \(u, v \in S^{n-1},\) we have \[ \mathbb{E}\left[\operatorname{sign}(X_{u}) \operatorname{sign}(X_{v})\right]=\frac{2}{\pi} \arcsin \langle u, v\rangle. \]
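All three identities above are easy to check numerically. A throwaway sketch; the particular \(u, v\), dimension, and sample size are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_samples = 5, 200_000

# Two arbitrary fixed unit vectors (any u, v would do; these are illustrative).
u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)

X = rng.standard_normal((n_samples, n))   # each row ~ N(0, I_n)
Xu, Xv = X @ u, X @ v                     # X_u = <X, u>, X_v = <X, v>

# E <X,u><X,v> = <u,v>
print(np.mean(Xu * Xv), u @ v)
# (E (X_u - X_v)^2)^{1/2} = ||u - v||_2
print(np.sqrt(np.mean((Xu - Xv) ** 2)), np.linalg.norm(u - v))
# Grothendieck's identity: E sign(X_u) sign(X_v) = (2/pi) arcsin <u,v>
print(np.mean(np.sign(Xu) * np.sign(Xv)), 2 / np.pi * np.arcsin(u @ v))
```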

6 References

Ahlswede, and Winter. 2002. “Strong Converse for Identification via Quantum Channels.” IEEE Transactions on Information Theory.
Ando. 1995. “Matrix Young Inequalities.” In Operator Theory in Function Spaces and Banach Lattices: Essays Dedicated to A.C. Zaanen on the Occasion of His 80th Birthday. Operator Theory Advances and Applications.
Bach. 2013. “Sharp Analysis of Low-Rank Kernel Matrix Approximations.” In COLT.
Bakherad, Krnic, and Moslehian. 2016. “Reverses of the Young Inequality for Matrices and Operators.” Rocky Mountain Journal of Mathematics.
Bhatia. 1997. Matrix Analysis. Graduate Texts in Mathematics.
Bhatia, and Parthasarathy. 2000. “Positive Definite Functions and Operator Inequalities.” Bulletin of the London Mathematical Society.
Bishop, Del Moral, and Niclas. 2018. “An Introduction to Wishart Matrix Moments.” Foundations and Trends® in Machine Learning.
Boucheron, Lugosi, and Massart. 2013. Concentration Inequalities: A Nonasymptotic Theory of Independence.
Bousquet, Luxburg, and Rätsch. 2004. Advanced Lectures on Machine Learning: ML Summer Schools 2003, Canberra, Australia, February 2-14, 2003, Tübingen, Germany, August 4-16, 2003, Revised Lectures.
Brault, d’Alché-Buc, and Heinonen. 2016. “Random Fourier Features for Operator-Valued Kernels.” In Proceedings of The 8th Asian Conference on Machine Learning.
Camino, Helton, Skelton, et al. 2003. “Matrix Inequalities: A Symbolic Procedure to Determine Convexity Automatically.” Integral Equation and Operator Theory.
Candès, and Recht. 2009. “Exact Matrix Completion via Convex Optimization.” Foundations of Computational Mathematics.
Conde. 2013. “Young Type Inequalities for Positive Operators.” Annals of Functional Analysis.
Gross, D. 2011. “Recovering Low-Rank Matrices From Few Coefficients in Any Basis.” IEEE Transactions on Information Theory.
Gross, David, Liu, Flammia, et al. 2010. “Quantum State Tomography via Compressed Sensing.” Physical Review Letters.
Hirzallah, and Kittaneh. 2000. “Matrix Young Inequalities for the Hilbert–Schmidt Norm.” Linear Algebra and Its Applications.
Huijsmans, Kaashoek, Luxemburg, et al., eds. 1995. Operator Theory in Function Spaces and Banach Lattices: Essays Dedicated to A.C. Zaanen on the Occasion of His 80th Birthday.
Liao, and Wu. 2015. “Reverse Arithmetic-Harmonic Mean and Mixed Mean Operator Inequalities.” Journal of Inequalities and Applications.
Lodhia, Levin, and Levina. 2021. “Matrix Means and a Novel High-Dimensional Shrinkage Phenomenon.”
Magnus, and Neudecker. 2019. Matrix Differential Calculus with Applications in Statistics and Econometrics. Wiley Series in Probability and Statistics.
Mahoney. 2016. “Lecture Notes on Spectral Graph Methods.” arXiv Preprint arXiv:1608.04845.
Marton. 2015. “Logarithmic Sobolev Inequalities in Discrete Product Spaces: A Proof by a Transportation Cost Distance.” arXiv:1507.02803 [Math].
Massart. 2007. Concentration Inequalities and Model Selection: Ecole d’Eté de Probabilités de Saint-Flour XXXIII - 2003. Lecture Notes in Mathematics 1896.
Mercer. 2000. “Bounds for A–G, A–H, G–H, and a Family of Inequalities of Ky Fan’s Type, Using a General Method.” Journal of Mathematical Analysis and Applications.
Mohri, Rostamizadeh, and Talwalkar. 2018. Foundations of Machine Learning. Adaptive Computation and Machine Learning.
Oertel. 2020. “Grothendieck’s Inequality and Completely Correlation Preserving Functions – a Summary of Recent Results and an Indication of Related Research Problems.”
Oliveira. 2010. “The Spectrum of Random \(k\)-Lifts of Large Graphs (with Possibly Large \(k\)).” Journal of Combinatorics.
Paulin, Mackey, and Tropp. 2016. “Efron–Stein Inequalities for Random Matrices.” The Annals of Probability.
Sababheh. 2018. “On the Matrix Harmonic Mean.”
Sharma. 2008. “Some More Inequalities for Arithmetic Mean, Harmonic Mean and Variance.” Journal of Mathematical Inequalities.
Stam. 1982. “Limit Theorems for Uniform Distributions on Spheres in High-Dimensional Euclidean Spaces.” Journal of Applied Probability.
Tropp, Joel A. 2015. An Introduction to Matrix Concentration Inequalities.
Tropp, Joel. 2019. Matrix Concentration & Computational Linear Algebra / ENS Short Course.
Uchiyama. 2020. “Some Results on Matrix Means.” Advances in Operator Theory.
van de Geer. 2014. “Statistical Theory for High-Dimensional Models.” arXiv:1409.8557 [Math, Stat].
van Handel. 2017. “Structured Random Matrices.” In Convexity and Concentration. The IMA Volumes in Mathematics and Its Applications.
Vershynin. 2011. “Introduction to the Non-Asymptotic Analysis of Random Matrices.” arXiv:1011.3027 [Cs, Math].
———. 2015. “Estimation in High Dimensions: A Geometric Perspective.” In Sampling Theory, a Renaissance: Compressive Sensing and Other Developments. Applied and Numerical Harmonic Analysis.
———. 2016. “Four Lectures on Probabilistic Methods for Data Science.”
———. 2018. High-Dimensional Probability: An Introduction with Applications in Data Science.
Wainwright. 2019. High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge Series in Statistical and Probabilistic Mathematics 48.
Woodruff. 2014. Sketching as a Tool for Numerical Linear Algebra. Foundations and Trends in Theoretical Computer Science 1.0.