Bayesian inference for misspecified models
May 30, 2016 — September 20, 2023
Bayes
how do science
statistics
Placeholder.
See also M-open.
For now, consider Christian Robert’s brief intro:
A few thoughts (and many links to my blog entries!) about that meme that all models are wrong:
- While the hypothetical model is indeed almost invariably and irremediably wrong, it still makes sense to act in an efficient or coherent manner with respect to this model if this is the best one can do. The resulting inference produces an evaluation of the formal model that is the “closest” to the actual data-generating model (if any);
- There exist Bayesian approaches that can do without the model, recent examples being the papers by Bissiri et al. (with my comments) and by Watson and Holmes (which I discussed with Judith Rousseau);
- In a connected way, there exists a whole branch of Bayesian statistics dealing with M-open inference;
- And yet another direction I like a lot is the SafeBayes approach of Peter Grünwald, who takes into account model misspecification to replace the likelihood with a downgraded version expressed as a power of the original likelihood.
- The very recent Read Paper by Gelman and Hennig addresses this issue, albeit in a convoluted manner (and I added some comments on my blog). I presume you could gather material for a discussion from the entries about your question.
- In a sense, Bayesians should be the least concerned among statisticians and modellers about this aspect since the sampling model is to be taken as one of several prior assumptions and the outcome is conditional or relative to all those prior assumptions.
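Several of the threads Robert points to (the general belief updates of Bissiri, Holmes, and Walker; Grünwald’s SafeBayes) share a common template. A sketch in notation of my own choosing, with \(\theta\) the parameter, \(\ell\) a loss function, and \(\eta\) a learning rate:

\[
\pi_n(\theta \mid x_{1:n}) \;\propto\; \pi(\theta)\,\exp\!\Bigl(-\eta \sum_{i=1}^{n} \ell(\theta, x_i)\Bigr).
\]

Taking \(\ell(\theta, x) = -\log p(x \mid \theta)\) gives the tempered (power-likelihood) posterior of the SafeBayes line of work, and additionally setting \(\eta = 1\) recovers ordinary Bayes.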
1 Gibbs posteriors
Gibbs posteriors seem to be one way of addressing the M-open problem: the likelihood in Bayes’ rule is replaced by an exponentiated loss function, so inference no longer depends on a correctly specified (or even explicitly specified) sampling model.
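As a toy illustration, here is a grid-based sketch of a Gibbs posterior for a location parameter under the absolute-error loss, which targets the median without positing any sampling model. The data-generating choice, the prior, and the learning rate below are illustrative assumptions, not recommendations:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_cauchy(200)  # heavy-tailed data; no convenient likelihood assumed

# Total absolute-error loss over the data, evaluated on a grid of locations.
# This targets the median directly, with no sampling model in sight.
def total_loss(theta_grid, x):
    return np.abs(x[:, None] - theta_grid[None, :]).sum(axis=0)

theta = np.linspace(-5.0, 5.0, 2001)
dtheta = theta[1] - theta[0]
eta = 0.5                             # learning rate; choosing it well is its own literature
log_prior = -0.5 * theta**2 / 10.0    # vague Gaussian prior, purely illustrative

log_post = log_prior - eta * total_loss(theta, x)
log_post -= log_post.max()            # stabilise before exponentiating
post = np.exp(log_post)
post /= post.sum() * dtheta           # normalise on the grid

post_mean = (theta * post).sum() * dtheta
print(f"Gibbs posterior mean for the median: {post_mean:.3f}")
```

The delicate part in practice is choosing the learning rate \(\eta\); calibrating it under misspecification is what the SafeBayes work of Grünwald and van Ommen (2017) addresses.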
2 References
Baek, Aquino, and Mukherjee. 2023. “Generalized Bayes Approach to Inverse Problems with Model Misspecification.” Inverse Problems.
Bissiri, Holmes, and Walker. 2016. “A General Framework for Updating Belief Distributions.” Journal of the Royal Statistical Society: Series B (Statistical Methodology).
Bochkina. 2023. “Bernstein–von Mises Theorem and Misspecified Models: A Review.” In Foundations of Modern Statistics. Springer Proceedings in Mathematics & Statistics.
Dellaporta, Knoblauch, Damoulas, et al. 2022. “Robust Bayesian Inference for Simulator-Based Models via the MMD Posterior Bootstrap.” arXiv:2202.04744 [Cs, Stat].
Farmer, Nakamura, and Steinsson. 2021. “Learning About the Long Run.” Working Paper. Working Paper Series.
Gill, and King. 2004. “What to Do When Your Hessian Is Not Invertible: Alternatives to Model Respecification in Nonlinear Estimation.” Sociological Methods & Research.
Grendár, and Judge. 2012. “Not All Empirical Divergence Minimizing Statistical Methods Are Created Equal?” AIP Conference Proceedings.
Grünwald, and van Ommen. 2017. “Inconsistency of Bayesian Inference for Misspecified Linear Models, and a Proposal for Repairing It.” Bayesian Analysis.
Kleijn, and van der Vaart. 2006. “Misspecification in Infinite-Dimensional Bayesian Statistics.” The Annals of Statistics.
Kleijn, and van der Vaart. 2012. “The Bernstein-Von-Mises Theorem Under Misspecification.” Electronic Journal of Statistics.
Knoblauch, Jewson, and Damoulas. 2019. “Generalized Variational Inference: Three Arguments for Deriving New Posteriors.”
———. 2022. “An Optimization-Centric View on Bayes’ Rule: Reviewing and Generalizing Variational Inference.” Journal of Machine Learning Research.
Loecher. 2021. “The Perils of Misspecified Priors and Optional Stopping in Multi-Armed Bandits.” Frontiers in Artificial Intelligence.
Lv, and Liu. 2014. “Model Selection Principles in Misspecified Models.” Journal of the Royal Statistical Society Series B: Statistical Methodology.
Masegosa. 2020. “Learning Under Model Misspecification: Applications to Variational and Ensemble Methods.” In Proceedings of the 34th International Conference on Neural Information Processing Systems. NIPS’20.
Matsubara, Knoblauch, Briol, et al. 2022. “Robust Generalised Bayesian Inference for Intractable Likelihoods.” Journal of the Royal Statistical Society Series B: Statistical Methodology.
Medina, Olea, Rush, et al. 2021. “On the Robustness to Misspecification of \(\alpha\)-Posteriors and Their Variational Approximations.”
Müller. 2013. “Risk of Bayesian Inference in Misspecified Models, and the Sandwich Covariance Matrix.” Econometrica.
Nott, Drovandi, and Frazier. 2023. “Bayesian Inference for Misspecified Generative Models.”
Pati, Bhattacharya, Pillai, et al. 2014. “Posterior Contraction in Sparse Bayesian Factor Models for Massive Covariance Matrices.” The Annals of Statistics.
Schmon, Cannon, and Knoblauch. 2021. “Generalized Posteriors in Approximate Bayesian Computation.” arXiv:2011.08644 [Stat].
Schwartz. 1965. “On Bayes Procedures.” Zeitschrift Für Wahrscheinlichkeitstheorie Und Verwandte Gebiete.
Shalizi. 2009. “Dynamics of Bayesian Updating with Dependent Data and Misspecified Models.” Electronic Journal of Statistics.
Vansteelandt, Bekaert, and Claeskens. 2012. “On Model Selection and Model Misspecification in Causal Inference.” Statistical Methods in Medical Research.
Walker. 2013. “Bayesian Inference with Misspecified Models.” Journal of Statistical Planning and Inference.
Wang, and Blei. 2019. “Variational Bayes Under Model Misspecification.” In Advances in Neural Information Processing Systems.