Density estimation
Especially non- or semiparametrically
June 6, 2016 — October 16, 2019
A statistical estimation problem where you are not trying to estimate some functional of the distribution of random observations, but the distribution itself. In a sense, all of statistics implicitly does density estimation, but usually it is instrumental in the course of discovering some actual parameter of interest. (Although maybe you’re interested in Bayesian statistics and you care a lot about the shape of the posterior density in particular.)
So, estimating distributions nonparametrically is not too weird a problem; it is function approximation under constraints. We wish to find a density function \(f:\mathcal{X}\to\mathbb{R}\) such that \(\int_{\mathcal{X}}f(x)\,dx=1\) and \(f(x)\geq 0\) for all \(x \in \mathcal{X}\).
We might set ourselves different loss functions than are usual in statistical regression problems; instead of, say, expected \(L_p\) prediction error, we might use a traditional function-approximation \(L_p\) loss, or a probability divergence measure.
The most common density estimate, which we use implicitly all the time, is not to work with densities as such but with distributions: we take the empirical distribution as the distribution estimate; that is, we take the data as a model for itself. This has various unhelpful features, such as being rough and rather hard to visualise as a density.
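For concreteness, a minimal sketch of this default estimate (a step-function empirical CDF built directly from the sample, in plain numpy):

```python
import numpy as np

def ecdf(data):
    """Step-function empirical CDF: the data as a model for itself."""
    x = np.sort(np.asarray(data))
    n = x.size

    def F(t):
        # proportion of observations <= t
        return np.searchsorted(x, t, side="right") / n

    return F

rng = np.random.default_rng(0)
sample = rng.normal(size=100)
F = ecdf(sample)
print(F(0.0))  # roughly 0.5 for a centred Gaussian sample
```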
Question: When would I actually want to estimate, specifically, a density?
Visualization, sure. Nonparametric regression without any better ideas. As latent parameters in a deep probabilistic model.
What about non-parametric conditional density estimation? Are there any general ways to do this?
1 Divergence measures/contrasts
There are many choices for loss functions between densities here; any of the probability metrics will do. For reasons of tradition or convenience, when the object of interest is the density itself, certain choices dominate:
- \(L_2\) distance between densities with respect to Lebesgue measure on the state space, whose expectation gives the MISE (mean integrated squared error); this works out nicely for convolution kernels.
- KL-divergence. (May not do what you want if you care about performance near 0. See (Hall 1987).)
- Hellinger distance
- Wasserstein divergences.
- …
But having chosen the divergence you wish to minimise, you now have to choose the criterion with respect to which you minimise it. Minimax? In probability? In expectation? …? Every combination is a different publication. Hmf.
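To make a few of these concrete, here is a rough sketch that numerically compares a kernel estimate against a known reference density under the \(L_2\), KL and Hellinger contrasts; the grid, sample size and clipping constant are arbitrary choices for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true = stats.norm()                       # reference density
sample = true.rvs(size=500, random_state=rng)
kde = stats.gaussian_kde(sample)          # convolution-kernel estimate

# Evaluate both densities on a grid and integrate by a simple Riemann sum.
x = np.linspace(-6, 6, 2001)
dx = x[1] - x[0]
f = true.pdf(x)
g = np.clip(kde(x), 1e-300, None)         # keep the log in the KL term finite

ise = np.sum((g - f) ** 2) * dx                           # L2 / ISE
kl = np.sum(f * np.log(f / g)) * dx                       # KL(true || estimate)
hellinger = np.sqrt(0.5 * np.sum((np.sqrt(f) - np.sqrt(g)) ** 2) * dx)

print(f"ISE={ise:.4g}  KL={kl:.4g}  Hellinger={hellinger:.4g}")
```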
2 Minimising Expected (or whatever) MISE
This works fine for kernel density estimators, where it turns out to reduce to a Wiener-filter-type problem in which you must choose a bandwidth. How do you do this for other estimators, though?
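For the KDE case specifically, least-squares (a.k.a. unbiased) cross-validation targets the ISE directly, and with a Gaussian kernel the criterion has a closed form in the pairwise differences. A rough one-dimensional sketch (the candidate bandwidth grid is an arbitrary choice):

```python
import numpy as np
from scipy.stats import norm

def ucv(h, x):
    """Least-squares / unbiased CV criterion for a 1-d Gaussian-kernel KDE:
    UCV(h) = int f_h^2 - (2/n) sum_i f_{h,-i}(x_i); minimising it targets ISE."""
    n = x.size
    d = x[:, None] - x[None, :]                        # pairwise differences
    # int f_h^2 dx = (1/n^2) sum_{ij} N(d_ij; 0, 2 h^2)
    int_f2 = norm.pdf(d, scale=np.sqrt(2) * h).sum() / n**2
    # leave-one-out term: off-diagonal kernel evaluations only
    k = norm.pdf(d, scale=h)
    loo = (k.sum() - np.trace(k)) / (n * (n - 1))
    return int_f2 - 2 * loo

rng = np.random.default_rng(2)
x = rng.normal(size=300)
grid = np.linspace(0.05, 1.0, 40)
h_star = grid[np.argmin([ucv(h, x) for h in grid])]
print("UCV-selected bandwidth:", h_star)
```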
3 Connection to point processes
There is a connection between spatial point process intensity estimation and density estimation. See Densities and intensities.
4 Spline/wavelet estimations
🏗
5 Mixture models
See mixture models.
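For reference, a minimal sketch of a finite-Gaussian-mixture density estimate via scikit-learn; the component count is an arbitrary choice here:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# a bimodal sample: mixture of two Gaussians
sample = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)])

gm = GaussianMixture(n_components=2, random_state=0).fit(sample.reshape(-1, 1))
x = np.array([[-2.0], [0.0], [1.0]])
print(np.exp(gm.score_samples(x)))   # score_samples returns the log-density
```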
6 Gaussian processes
Gaussian processes can provide posterior densities over densities somehow? (Tokdar 2007; Lenk 2003)
7 Normalizing flow models
A.k.a. measure transport etc., where one uses reparameterization: push a simple base density through an invertible map and account for the change of variables. 🏗
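The core mechanic, stripped of the neural network, is the change-of-variables formula: push a base density through an invertible map and correct by the Jacobian. A toy sketch, transporting a standard normal through \(x=\exp(z)\) (which just recovers the lognormal density):

```python
import numpy as np
from scipy import stats

# Base density p_Z: standard normal. Transport map: X = exp(Z).
# Change of variables: p_X(x) = p_Z(log x) * |d(log x)/dx| = p_Z(log x) / x.
def pushforward_density(x):
    return stats.norm.pdf(np.log(x)) / x

x = np.linspace(0.1, 5.0, 5)
print(pushforward_density(x))
print(stats.lognorm(s=1.0).pdf(x))   # agrees: this is the lognormal density
```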
8 k-NN estimates
Filed here because too small to do elsewhere.
To use nearest-neighbour methods, the integer k must be selected. This is similar to bandwidth selection, although here k is discrete rather than continuous. K.C. Li (Annals of Statistics, 1987) showed that for the k-NN regression estimator under conditional homoskedasticity, it is asymptotically optimal to pick k by Mallows’ criterion, generalised CV, or CV. Andrews (Journal of Econometrics, 1991) generalised this result to the heteroskedastic case and showed that CV is asymptotically optimal there.
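For density estimation proper, the basic k-NN estimate is \(\hat f(x) = k/(n V_k(x))\), where \(V_k(x)\) is the volume of the smallest ball around \(x\) containing \(k\) sample points. A quick one-dimensional sketch (the choice \(k=20\) is arbitrary):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_density(x_query, sample, k=20):
    """Basic 1-d k-NN density estimate: f(x) = k / (n * 2 * r_k(x)),
    where r_k(x) is the distance to the k-th nearest sample point,
    so 2 * r_k(x) is the length of the enclosing ball."""
    sample = np.asarray(sample).reshape(-1, 1)
    x_query = np.asarray(x_query).reshape(-1, 1)
    n = sample.shape[0]
    nn = NearestNeighbors(n_neighbors=k).fit(sample)
    r_k = nn.kneighbors(x_query)[0][:, -1]   # distance to the k-th neighbour
    return k / (n * 2 * r_k)

rng = np.random.default_rng(3)
sample = rng.normal(size=1000)
print(knn_density([0.0, 1.0, 2.0], sample))  # roughly the N(0, 1) density
```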
9 Kernel density estimators
a.k.a. kernel smoothing.
9.1 Fancy ones
HT Gery Geenens for a lecture he gave on convolution kernel density estimation, in which he drew a parallel between additive noise in KDE and multiplicative noise for non-negative-valued variables.
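One simple way to get a multiplicative-noise flavour (not necessarily Geenens’ construction) is to smooth additively on the log scale and transform back with the Jacobian correction, so the kernel acts multiplicatively on the original scale:

```python
import numpy as np
from scipy import stats

def log_kde(sample):
    """KDE for a positive-valued variable: smooth additively on the log
    scale (multiplicatively on the original scale), then back-transform
    with the Jacobian factor 1/x."""
    kde_log = stats.gaussian_kde(np.log(sample))
    return lambda x: kde_log(np.log(x)) / x

rng = np.random.default_rng(4)
sample = rng.lognormal(mean=0.0, sigma=0.5, size=500)
f_hat = log_kde(sample)
x = np.array([0.5, 1.0, 2.0])
print(f_hat(x))
print(stats.lognorm(s=0.5).pdf(x))   # target density, for comparison
```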