Compressed sensing and sampling
A fancy way of counting zeros
August 18, 2014 — June 14, 2017
Higgledy-piggledy notes on the theme of exploiting sparsity to recover signals from few non-local measurements, given that we know they are nearly sparse, in a sense that will be made clear soon.
See also matrix factorisations, restricted isometry properties, Riesz bases…
1 Basic Compressed Sensing
I’ll follow the intro of (E. J. Candès et al. 2011), which tries to unify many variants.
We attempt to recover a signal \(x\in \mathbb{R}^d\) from \(m\ll d\) measurements \(y_k\) of the form
\[y_k =\langle a_k, x\rangle + z_k,\, 1\leq k \leq m,\]
or, as a matrix equation,
\[ y = Ax + z \]
where \(A\) is the \(m \times d\) matrix of stacked measurement vectors \(a_k\), and the \(z_k\) terms denote i.i.d. measurement noise.
Now, if \(x\) is a sparse vector and \(A\) satisfies a restricted isometry property (defined below), then we can construct an estimate \(\hat{x}\) with small error by solving
\[ \hat{x}=\argmin_{\dot{x}} \|\dot{x}\|_1 \text{ subject to } \|A\dot{x}-y\|_2 \leq \varepsilon, \]
where \(\varepsilon \geq \|z\|_2\) bounds the norm of the noise.
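For concreteness, here is a minimal numerical sketch of that constrained \(\ell_1\) recovery, assuming numpy and cvxpy are available; the dimensions, noise level, and the choice \(\varepsilon \approx \|z\|_2\) are illustrative rather than canonical.

```python
# Minimal sketch of noisy ell-1 recovery (basis pursuit denoising).
# Assumes cvxpy is installed; all dimensions below are arbitrary.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
d, m, s, sigma = 200, 60, 5, 0.01

# s-sparse ground truth
x_true = np.zeros(d)
x_true[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)

# Gaussian measurement matrix plus noise
A = rng.standard_normal((m, d)) / np.sqrt(m)
z = sigma * rng.standard_normal(m)
y = A @ x_true + z

# min ||x||_1  subject to  ||A x - y||_2 <= eps
eps = 1.1 * np.linalg.norm(z)   # in reality the noise level must be estimated
x_var = cp.Variable(d)
cp.Problem(cp.Minimize(cp.norm(x_var, 1)),
           [cp.norm(A @ x_var - y, 2) <= eps]).solve()

print("recovery error:", np.linalg.norm(x_var.value - x_true))
```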
In the lecture notes on restricted isometry properties, Candès and Tao consider not vectors \(x\in \mathbb{R}^d\) but functions \(f:G \to \mathbb{C}\) on Abelian groups such as \(G=\mathbb{Z}/d\mathbb{Z}\), which is convenient for some phrasing, since then when I say my signal is \(s\)-sparse, I mean that its support \(\operatorname{supp} \tilde{f}=S\subset G\) satisfies \(|S|=s\).
In the finite-dimensional vector framing, we can talk about the best \(s\)-sparse approximation \(x_s\) to a non-sparse vector \(x\),
\[ x_s = \argmin_{\|\dot{x}\|_0\leq s} \|x-\dot{x}\|_2 \]
where all the coefficients apart from the \(s\) largest are zeroed.
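In code, \(x_s\) is just hard thresholding: keep the \(s\) largest-magnitude coefficients and zero the rest. A tiny numpy sketch:

```python
import numpy as np

def best_s_sparse(x, s):
    """Best s-sparse approximation of x in the ell-2 sense (hard thresholding)."""
    x_s = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]   # indices of the s largest |x_i|
    x_s[keep] = x[keep]
    return x_s
```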
The basic results show that, with high probability, this attractive convex problem recovers (nearly) the solution of the nasty combinatorial \(\ell_0\) problem it relaxes. There are also greedy optimization versions, formulated as above but no longer solved by convex optimization; instead we talk about Orthogonal Matching Pursuit, Iterative Thresholding and some other methods whose details I do not yet know, and which I think pop up in wavelets and sparse coding.
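Of the greedy methods, Orthogonal Matching Pursuit is the simplest to write down. Here is a naive sketch (my own paraphrase, not anyone's reference implementation), which assumes the sparsity \(s\) is known; scikit-learn's `OrthogonalMatchingPursuit` does the same job more carefully.

```python
import numpy as np

def omp(A, y, s):
    """Naive Orthogonal Matching Pursuit: greedily select s columns of A."""
    m, d = A.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(d)
    for _ in range(s):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # least-squares refit on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat
```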
For all of these methods, the results tend to read something like
with data \(y\), the difference between my estimate \(\hat{x}\) and the oracle estimate \(\hat{x}_\text{oracle}\) is bounded by something-or-other, where the oracle estimate is the one you would compute if you knew the support \(S=\operatorname{supp}(x)\) ahead of time.
Candès gives an example result:
\[ \|\hat{x}-x\|_2 \leq C_0\frac{\|x-x_s\|_1}{\sqrt{s}} + C_1\varepsilon \]
conditional upon
\[ \delta_{2s}(A) < \sqrt{2} - 1 \]
where this \(\delta_s(\cdot)\) gives the restricted isometry constant of a matrix, defined as the smallest constant such that \((1-\delta_s(A))\|x\|_2^2\leq \|Ax\|_2^2\leq (1+\delta_s(A))\|x\|_2^2\) for all \(s\)-sparse \(x\). That is, the measurement matrix does not change the norm of sparse signals “much”, and in particular, does not null them when \(\delta_s < 1.\)
Apparently this is not the strongest bound out there, but for any bound of that form, those constants look frustrating.
Measuring the restricted isometry constant of a given measurement matrix is presumably hard, although I haven’t tried yet. But generating random matrices that have a certain RIC with high probability is easy; that’s a neat trick in this area.
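To illustrate the second point, here is a quick and entirely unrigorous empirical check that a scaled Gaussian matrix roughly preserves the norms of random \(s\)-sparse vectors. Note that the true RIC is a worst-case quantity over all \(s\)-sparse vectors, so sampling like this only gives a lower bound on \(\delta_s(A)\); the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, s, trials = 400, 120, 8, 2000

# i.i.d. Gaussian entries scaled by 1/sqrt(m); RIP holds w.h.p. once m ~ s log(d/s)
A = rng.standard_normal((m, d)) / np.sqrt(m)

worst = 0.0
for _ in range(trials):
    x = np.zeros(d)
    x[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)
    ratio = np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2
    worst = max(worst, abs(ratio - 1.0))

print(f"empirical lower bound on delta_s(A): {worst:.3f}")
```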
2 Redundant compressed sensing
🏗 For now see Frame theory.
3 Introductory texts
Aside: see the rather good practical python notebook in numerical tours.
Terry Tao’s exposition is great as usual. See, e.g.
[…] we can at least give an informal geometric argument as to why \(\ell^1\) minimization is more likely to recover a sparse solution than \(\ell^2\) minimization. The set of all \(f\) whose Fourier coefficients match the observed data \(c_\xi\) forms an affine subspace of the space of all functions. The \(\ell^2\) minimiser can then be viewed geometrically by taking \(\ell^2\) balls (i.e. Euclidean balls) centred at the origin, and gradually increasing the radius of the ball until the first point of contact with the affine subspace. In general, there is no reason to expect this point of contact to be sparse (i.e. to lie on a high-codimension coordinate subspace). If however we replace \(\ell^2\) with \(\ell^1\), then the Euclidean balls are replaced by octahedra, which are much “pointier” (especially in high dimensions) and whose corners lie on coordinate subspaces. So the point of first contact is now much more likely to be sparse. The idea of using \(\ell^1\) as a “convex relaxation” of \(\ell^0\) is a powerful one in applied mathematics; see for instance (J. A. Tropp 2006) on the topic.
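Tao's picture is easy to reproduce numerically: for a small underdetermined system, compare the minimum-\(\ell^2\) solution (via the pseudoinverse) with the minimum-\(\ell^1\) solution (a small linear program), and count nonzeros. This is only a sketch, assuming scipy is available; the LP encoding with auxiliary variables \(|x_i|\leq t_i\) is the standard one.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
d, m, s = 40, 20, 3

x_true = np.zeros(d)
x_true[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, d))
y = A @ x_true

# minimum-ell-2 solution: least-norm solution via the pseudoinverse -- typically dense
x_l2 = np.linalg.pinv(A) @ y

# minimum-ell-1 solution as a linear program in (x, t) with |x_i| <= t_i:
#   min sum(t)  s.t.  A x = y,  x - t <= 0,  -x - t <= 0
I = np.eye(d)
res = linprog(
    c=np.concatenate([np.zeros(d), np.ones(d)]),
    A_ub=np.block([[I, -I], [-I, -I]]),
    b_ub=np.zeros(2 * d),
    A_eq=np.hstack([A, np.zeros((m, d))]),
    b_eq=y,
    bounds=[(None, None)] * d + [(0, None)] * d,
)
x_l1 = res.x[:d]

print("nonzeros above 1e-6:  ell-2 solution:", int(np.sum(np.abs(x_l2) > 1e-6)),
      " ell-1 solution:", int(np.sum(np.abs(x_l1) > 1e-6)))
```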
Hegde, Baraniuk, Davenport and Duarte have an open source textbook
Gabriel Peyre’s Compressed Sensing of Images
Igor Carron’s Sunday Morning Insight: A Quick Panorama of Sensing from Direct Imaging to Machine Learning
4 …Using random projections
Classic. Notes under low dimensional projections
5 …Using deterministic projections
Surely this is close to quasi-Monte Carlo?
Dustin G. Mixon Achieving the Welch bound with difference sets
I blogged about constructing harmonic frames using difference sets. We proved that such harmonic frames are equiangular tight frames, thereby having minimal coherence between columns. I concluded the entry by conjecturing that incoherent harmonic frames are as good for compressed sensing as harmonic frames whose rows were randomly drawn from the discrete Fourier transform (DFT) matrix
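A quick numerical illustration of Mixon's point, under my possibly faulty reading of the construction: take the rows of the \(p\times p\) DFT indexed by the quadratic residues mod \(p\) (a classical Paley difference set when \(p \equiv 3 \pmod 4\)) and compare the worst-case column coherence with that of a random row selection. The prime \(p=31\) is arbitrary.

```python
import numpy as np

def coherence(F):
    """Largest |inner product| between distinct unit-normalised columns of F."""
    G = F / np.linalg.norm(F, axis=0, keepdims=True)
    gram = np.abs(G.conj().T @ G)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

p = 31                                # prime with p % 4 == 3
dft = np.exp(-2j * np.pi * np.outer(np.arange(p), np.arange(p)) / p)

# quadratic residues mod p: a Paley difference set when p ≡ 3 (mod 4)
qr_rows = sorted({(k * k) % p for k in range(1, p)})

rng = np.random.default_rng(3)
random_rows = rng.choice(p, size=len(qr_rows), replace=False)

print("coherence, difference-set rows:", round(coherence(dft[qr_rows, :]), 4))
print("coherence, random rows:        ", round(coherence(dft[random_rows, :]), 4))
```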
A variant on the compressed sensing of Yves Meyer
Recent work of Yves Meyer might be relevant:
A variant on the compressed sensing of Emmanuel Candes, Basarab Matei and Yves Meyer
Simple quasicrystals are sets of stable sampling, Basarab Matei and Yves Meyer
These papers are interesting because their approach to compressed sensing is very different. Specifically, their sparse vectors are actually functions of compact support with sufficiently small Lebesgue measure. As such, concepts like conditioning are replaced with that of stable sampling, and the results must be interpreted in the context of functional analysis. The papers demonstrate that sampling frequencies according to a (deterministic) simple quasicrystal will uniquely determine sufficiently sparse functions, and furthermore, the sparsest function in the preimage can be recovered by L1-minimization provided it’s nonnegative.
6 Bayesian
Sparse Bayes can be tricky. See, perhaps, Bayesian Compressive Sensing.
7 Phase transitions
How well can you recover a sparse signal (or a low-rank matrix) from a given number of measurements? In the obvious error metrics there is a sudden jump in how well you do as the number of measurements increases past a threshold that depends on the sparsity (or rank). This looks a lot like a physical phase transition, which is a known phenomenon in ML. Hmm.
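The usual way to see this empirically is a Monte Carlo sweep: fix the dimension and sparsity, vary the number of measurements \(m\), and record how often noiseless \(\ell_1\) recovery succeeds; the success rate jumps from near 0 to near 1 over a narrow band of \(m\). A crude sketch of one slice of such an experiment, again assuming cvxpy, with arbitrary sizes and success tolerance:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
d, s, trials = 100, 5, 20

for m in (10, 15, 20, 25, 30, 40):
    successes = 0
    for _ in range(trials):
        x_true = np.zeros(d)
        x_true[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)
        A = rng.standard_normal((m, d)) / np.sqrt(m)
        y = A @ x_true

        # noiseless basis pursuit: min ||x||_1 subject to A x = y
        x_var = cp.Variable(d)
        cp.Problem(cp.Minimize(cp.norm(x_var, 1)), [A @ x_var == y]).solve()
        successes += np.linalg.norm(x_var.value - x_true) < 1e-3
    print(f"m = {m:3d}: empirical success rate {successes / trials:.2f}")
```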
8 Weird things to be classified
csgm (Bora et al. 2017), compressed sensing using generative models, replaces the sparsity assumption with the assumption that the signal lies near the range of a pretrained generative model \(G\): instead of seeking a sparse \(\hat{x}\), it seeks a latent code \(\hat{z}\) minimizing \(\|AG(\hat{z})-y\|_2\) and reports \(\hat{x}=G(\hat{z})\).