M-open, M-complete, M-closed

May 30, 2016 — July 23, 2023

Bayes
how do science
statistics

Placeholder.

Yuling Yao, “The likelihood principle in model check and model evaluation”:

“We are (only) interested in estimating an unknown parameter \(\theta\), and there are two data-generating experiments, both involving \(\theta\), with observable outcomes \(y_1\) and \(y_2\) and likelihoods \(p_1\left(y_1 \mid \theta\right)\) and \(p_2\left(y_2 \mid \theta\right)\). If the outcome-experiment pair satisfies \(p_1\left(y_1 \mid \theta\right) \propto p_2\left(y_2 \mid \theta\right)\) (viewed as a function of \(\theta\)), then these two experiments and two observations will provide the same amount of information about \(\theta\).”

This idea seems useful in thinking about M-open, M-complete, and M-closed problems.
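A minimal numerical sketch of Yao's statement (my example, not from the paper) uses the textbook binomial versus negative-binomial pair: stopping after a fixed number of trials, or stopping after a fixed number of successes, gives likelihoods that are proportional as functions of \(\theta\), so under a common prior the two experiments yield the same posterior.

```python
import numpy as np
from scipy import stats

# Two experiments sharing the same unknown success probability theta:
#   experiment 1: run n = 12 trials, observe y = 3 successes (binomial)
#   experiment 2: keep sampling until y = 3 successes, which takes n = 12 trials
#                 (negative binomial)
# Both likelihoods are proportional to theta^3 (1 - theta)^9.
n, y = 12, 3
theta = np.linspace(1e-4, 1 - 1e-4, 2001)   # grid over the parameter
prior = stats.beta.pdf(theta, 1.0, 1.0)     # flat prior, assumed for the demo

lik_binom = stats.binom.pmf(y, n, theta)            # p1(y1 | theta)
lik_negbinom = stats.nbinom.pmf(n - y, y, theta)    # p2(y2 | theta): failures before the y-th success

def grid_posterior(lik, prior):
    """Posterior on a uniform grid, normalised to sum to one."""
    post = lik * prior
    return post / post.sum()

post1 = grid_posterior(lik_binom, prior)
post2 = grid_posterior(lik_negbinom, prior)

# The posteriors coincide up to floating-point error, even though the
# sampling distributions (and hence, e.g., frequentist p-values) differ.
print(np.max(np.abs(post1 - post2)))
```

Nothing here distinguishes M-closed from M-complete or M-open; it only illustrates the likelihood-principle premise that Yao's paper starts from, which is where the model-check question gets interesting.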

References

Berger, Wolpert, Bayarri, et al. 1988. “The Likelihood Principle.” Lecture Notes-Monograph Series.
Bernardo, and Smith. 2000. Bayesian Theory.
Clarke. 2003. “Comparing Bayes Model Averaging and Stacking When Model Approximation Error Cannot Be Ignored.” The Journal of Machine Learning Research.
Clyde, and Iversen. 2013. “Bayesian Model Averaging in the M-Open Framework.” In Bayesian Theory and Applications.
Dellaporta, Knoblauch, Damoulas, et al. 2022. “Robust Bayesian Inference for Simulator-Based Models via the MMD Posterior Bootstrap.” arXiv:2202.04744 [Cs, Stat].
Jansen. n.d. “Robust Bayesian Inference Under Model Misspecification.”
Knoblauch, Jewson, and Damoulas. 2019. “Generalized Variational Inference: Three Arguments for Deriving New Posteriors.”
———. 2022. “An Optimization-Centric View on Bayes’ Rule: Reviewing and Generalizing Variational Inference.” Journal of Machine Learning Research.
Le, and Clarke. 2017. “A Bayes Interpretation of Stacking for M-Complete and M-Open Settings.” Bayesian Analysis.
Lyddon, Walker, and Holmes. 2018. “Nonparametric Learning from Bayesian Models with Randomized Objective Functions.” In Proceedings of the 32nd International Conference on Neural Information Processing Systems. NIPS’18.
Masegosa. 2020. “Learning Under Model Misspecification: Applications to Variational and Ensemble Methods.” In Proceedings of the 34th International Conference on Neural Information Processing Systems. NIPS’20.
Matsubara, Knoblauch, Briol, et al. 2022. “Robust Generalised Bayesian Inference for Intractable Likelihoods.” Journal of the Royal Statistical Society Series B: Statistical Methodology.
Minka. 2002. “Bayesian Model Averaging Is Not Model Combination.”
Pacchiardi, and Dutta. 2022. “Generalized Bayesian Likelihood-Free Inference Using Scoring Rules Estimators.” arXiv:2104.03889 [Stat].
Schmon, Cannon, and Knoblauch. 2021. “Generalized Posteriors in Approximate Bayesian Computation.” arXiv:2011.08644 [Stat].
Yao, Vehtari, Simpson, et al. 2018. “Using Stacking to Average Bayesian Predictive Distributions.” Bayesian Analysis.