Generically approximating probability distributions
March 12, 2021 — March 22, 2021
approximation
functional analysis
metrics
model selection
optimization
probability
statistics
There are various approximations we might use for a probability distribution: empirical CDFs, kernel density estimates, variational approximations, Edgeworth expansions, Laplace approximations…
With each of these, we might get close, in some metric, to the desired target distribution.
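For concreteness, here is a minimal sketch (assuming NumPy and SciPy, with a Gamma(3, 1) target chosen purely for illustration) that fits two of these approximations to a sample and measures how close each gets to the target CDF in the Kolmogorov (sup-norm) metric:

```python
# A minimal sketch: approximate a Gamma(3, 1) target from a sample, then
# measure closeness to the true CDF in the Kolmogorov (sup-norm) metric.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
target = stats.gamma(a=3.0)
x = target.rvs(size=500, random_state=rng)   # observed sample
grid = np.linspace(0.0, 15.0, 2001)

# Approximation 1: empirical CDF of the sample.
ecdf = np.searchsorted(np.sort(x), grid, side="right") / x.size

# Approximation 2: CDF of a Gaussian kernel density estimate,
# obtained by numerically integrating the KDE over the grid.
kde = stats.gaussian_kde(x)
kde_cdf = np.cumsum(kde(grid)) * (grid[1] - grid[0])

true_cdf = target.cdf(grid)
print("Kolmogorov distance, ECDF:", np.max(np.abs(ecdf - true_cdf)))
print("Kolmogorov distance, KDE: ", np.max(np.abs(kde_cdf - true_cdf)))
```

The same recipe works for any approximation that yields a CDF on the grid; only the construction of the approximating CDF changes.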
This is a broad topic that I cannot hope to cover in full generality. Special cases of interest include:
- Statements about where the probability mass is with high probability (concentration theorems)
- Statements about variables converging in distribution to some limit as a parameter (typically a sample size) goes to infinity (limit theorems). Most famously, a great many things converge to normal distributions, but there are limit theorems for many other limits too.
There are other kinds of results in this domain besides these two. I am interested in collecting results that tell me how various combinations of variables approach a limiting distribution in some probability metric.
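As a concrete illustration of the second kind of statement, here is a minimal sketch (assuming NumPy and SciPy; the Exponential(1) summands are an arbitrary choice) of standardised sums approaching their normal limit in the Kolmogorov-Smirnov metric:

```python
# A minimal sketch of a limit theorem in action: standardised sums of
# i.i.d. Exponential(1) variables approach N(0, 1), and the
# Kolmogorov-Smirnov distance to the limit shrinks roughly like the
# Berry-Esseen rate O(1/sqrt(n)).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_reps = 20_000  # Monte Carlo replications per sample size

for n in (1, 4, 16, 64, 256):
    samples = rng.exponential(scale=1.0, size=(n_reps, n))
    # Centre and scale the row sums so the limiting distribution is N(0, 1).
    z = (samples.sum(axis=1) - n) / np.sqrt(n)
    ks = stats.kstest(z, "norm").statistic
    print(f"n = {n:4d}   KS distance to N(0,1) = {ks:.4f}")
```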
1 Stein’s method
See Stein’s method.
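To give a flavour of what that page is about: Stein's method starts from the fact that E[f'(Z) - Z f(Z)] = 0 for all nice test functions f exactly when Z is standard normal, and turns violations of this identity into bounds on probability metrics. The sketch below (assuming NumPy; the test function f = sin and the exponential comparison are arbitrary illustrative choices) checks the identity by Monte Carlo:

```python
# A minimal sketch of the identity underlying Stein's method:
# E[f'(Z) - Z f(Z)] = 0 for every nice test function f iff Z ~ N(0, 1).
# For a distribution far from normal, the same expectation is far from zero;
# Stein's method turns such discrepancies into distance bounds.
import numpy as np

rng = np.random.default_rng(0)
f = np.sin        # an arbitrary smooth, bounded test function
f_prime = np.cos

def stein_expectation(x):
    """Monte Carlo estimate of E[f'(X) - X f(X)] for a sample x."""
    return np.mean(f_prime(x) - x * f(x))

z = rng.standard_normal(100_000)              # N(0, 1) sample
w = rng.exponential(size=100_000) - 1.0       # centred, unit-variance, skewed

print("Normal sample:     ", stein_expectation(z))   # approximately 0
print("Exponential sample:", stein_expectation(w))   # noticeably nonzero
```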
2 References
Chatterjee, and Meckes. 2008. “Multivariate Normal Approximation Using Exchangeable Pairs.” arXiv:math/0701464.
Meckes. 2006. “An Infinitesimal Version of Stein’s Method of Exchangeable Pairs.”
———. 2009. “On Stein’s Method for Multivariate Normal Approximation.” In High Dimensional Probability V: The Luminy Volume.
Reinert, and Röllin. 2007. “Multivariate Normal Approximation with Stein’s Method of Exchangeable Pairs Under a General Linearity Condition.”
Stein. 1972. “A Bound for the Error in the Normal Approximation to the Distribution of a Sum of Dependent Random Variables.” Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume 2: Probability Theory.
———. 1986. Approximate Computation of Expectations.