Bayesian inverse problems

March 30, 2016 — January 13, 2022

functional analysis
linear algebra
probability
sparser than thou
statistics

Solving inverse problems with a probabilistic approach. In Bayesian terms, say we have a model which gives us the density of a certain output observation \(y\) for a given input \(x\), which we write as \(p(y\mid x)\). By Bayes’ rule we can find the density of inputs for a given observed output, \[p(x \mid y)=\frac{p(x)\, p(y \mid x)}{p(y)}.\] Computing \(p(x \mid y)\) is the most basic step of Bayesian inference; nothing special to see here.
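For concreteness: in the linear-Gaussian special case, where the prior is \(p(x)=\mathcal{N}(x;\mu_0,\Sigma_0)\) and the observation model is \(p(y\mid x)=\mathcal{N}(y;Ax,\Gamma)\) for a known forward operator \(A\), the posterior is itself Gaussian and available in closed form, \[p(x\mid y)=\mathcal{N}\left(x;\mu_{\text{post}},\Sigma_{\text{post}}\right),\qquad \Sigma_{\text{post}}=\left(\Sigma_0^{-1}+A^{\top}\Gamma^{-1}A\right)^{-1},\qquad \mu_{\text{post}}=\Sigma_{\text{post}}\left(\Sigma_0^{-1}\mu_0+A^{\top}\Gamma^{-1}y\right).\] Outside this conjugate setting we have no such closed form and must approximate.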

In the world I live in, \(p(y \mid x)\) is not completely specified, but is a regression density with unknown parameters \(\theta\) that we must also learn, and which may have prior densities of their own. Maybe I also wish to parameterise the prior density on \(x\), \(p(x \mid \lambda),\) where \(\lambda\) is typically independent of \(\theta.\) Now the model is a hierarchical Bayes model, leading to a directed factorisation \[p(x,y,\theta,\lambda)=p(\theta)\,p(\lambda)\,p(x\mid \lambda)\, p(y\mid x,\theta).\] Applying Bayes’ rule again, the density of interest is \[p(x, \theta, \lambda \mid y) \propto p(y \mid x, \theta)\,p(x \mid\lambda)\,p(\lambda)\,p(\theta).\] Solving this is also, I believe, sometimes called joint inversion. For my applications, we usually want to do this in two phases. In the first, we have a data set of \(N\) input-output pairs indexed by \(i,\) \(\mathcal{D}=\{(x_i, y_i):i=1,\dots,N\},\) which we use to estimate the posterior density \(p(\theta,\lambda \mid \mathcal{D})\) in a learning phase. Thereafter we only ever wish to find \(p(x, \theta, \lambda \mid y,\mathcal{D}),\) or possibly even just \(p(x \mid y,\mathcal{D}),\) but either way we do not thereafter update \(\theta, \lambda\mid\mathcal{D}\).
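Written out, the second phase amounts to marginalising over the already-learned parameter posterior, \[p(x \mid y, \mathcal{D}) \approx \int\!\!\int p(x \mid y, \theta, \lambda)\, p(\theta, \lambda \mid \mathcal{D})\, \mathrm{d}\theta\, \mathrm{d}\lambda,\] where the approximation is precisely the convention just stated: the new observation \(y\) is not allowed to update \(\theta,\lambda.\)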

If the problem is high dimensional, in the sense that \(x\in \mathbb{R}^n\) for large \(n\), and ill-posed, in the sense that, e.g., \(y\in\mathbb{R}^m\) with \(m<n\), we have a particular set of challenges which it is useful to group under the heading of functional inverse problems.1 A classic example of this class of problem is “What was the true image that was blurred to create this corrupted version?”
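Here is a minimal numerical sketch of that deblurring idea in the linear-Gaussian setting, using the closed-form posterior above. The toy blur-and-subsample operator, dimensions, and noise scales are all invented for illustration.

```python
import numpy as np

# Toy 1-D deblurring: y = A x + noise, with m < n so the problem is ill-posed.
n, m = 100, 40                      # latent signal longer than the observation
rng = np.random.default_rng(0)

# Forward operator: local blur followed by subsampling (m rows, n columns).
kernel = np.array([0.25, 0.5, 0.25])
A = np.zeros((m, n))
for i in range(m):
    j = int(i * (n - len(kernel)) / (m - 1))
    A[i, j:j + len(kernel)] = kernel

# Gaussian prior on x (zero mean) and Gaussian observation noise.
sigma0, gamma = 1.0, 0.05
Sigma0_inv = np.eye(n) / sigma0**2
Gamma_inv = np.eye(m) / gamma**2

# Simulate a "true" smooth signal and a blurred, noisy observation.
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y = A @ x_true + gamma * rng.standard_normal(m)

# Closed-form Gaussian posterior p(x | y).
Sigma_post = np.linalg.inv(Sigma0_inv + A.T @ Gamma_inv @ A)
mu_post = Sigma_post @ (A.T @ Gamma_inv @ y)   # prior mean is zero
```

The posterior mean `mu_post` is the deblurred reconstruction; the diagonal of `Sigma_post` quantifies how much the missing information (here, the subsampling) leaves the latent signal uncertain.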

1 Laplace method

We can use the Laplace approximation to approximate the latent posterior density.

Laplace approximations seem like they might have an attractive feature: providing estimates also for inverse problems (Breslow and Clayton 1993; Wacker 2017; Alexanderian et al. 2016; Alexanderian 2021) by leveraging the delta method. I think this should come out nicely in network-linearisation approaches such as Foong et al. (2019) and Immer, Korzepa, and Bauer (2021).

Suppose we have a regression network that outputs (perhaps approximately) a Gaussian distribution for outputs given inputs.
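Sketching how that might look: a minimal Laplace approximation to \(p(x\mid y)\) for a generic nonlinear forward model standing in for the network. The forward map `g`, the dimensions, and the Gaussian prior and noise scales are invented for illustration; in a linearised-network approach the curvature would come from the network Jacobian rather than the crude finite differences used here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, m = 5, 3
W = rng.standard_normal((m, n))
sigma0, gamma = 1.0, 0.1

# Hypothetical nonlinear forward model g: R^n -> R^m standing in for the network.
def g(x):
    return np.tanh(W @ x)

x_true = rng.standard_normal(n)
y = g(x_true) + gamma * rng.standard_normal(m)

# Negative log posterior (up to a constant) for Gaussian prior and likelihood.
def neg_log_post(x):
    return (np.sum((y - g(x))**2) / (2 * gamma**2)
            + np.sum(x**2) / (2 * sigma0**2))

# Laplace approximation: Gaussian centred at the MAP, with covariance equal to
# the inverse Hessian of the negative log posterior at the MAP.
res = minimize(neg_log_post, np.zeros(n), method="BFGS")
x_map = res.x

def numerical_hessian(f, x, eps=1e-4):
    k = len(x)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            e_i, e_j = np.eye(k)[i] * eps, np.eye(k)[j] * eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps**2)
    return H

Sigma_laplace = np.linalg.inv(numerical_hessian(neg_log_post, x_map))
# p(x | y) is approximated by N(x_map, Sigma_laplace).
```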

TBC.


2 References

Alexanderian. 2021. Optimal Experimental Design for Infinite-Dimensional Bayesian Inverse Problems Governed by PDEs: A Review.” arXiv:2005.12998 [Math].
Alexanderian, Petra, Stadler, et al. 2016. A Fast and Scalable Method for A-Optimal Design of Experiments for Infinite-Dimensional Bayesian Nonlinear Inverse Problems.” SIAM Journal on Scientific Computing.
Borgonovo, Castaings, and Tarantola. 2012. Model Emulation and Moment-Independent Sensitivity Analysis: An Application to Environmental Modelling.” Environmental Modelling & Software, Emulation techniques for the reduction and sensitivity analysis of complex environmental models.
Breslow, and Clayton. 1993. Approximate Inference in Generalized Linear Mixed Models.” Journal of the American Statistical Association.
Bui-Thanh. 2012. A Gentle Tutorial on Statistical Inversion Using the Bayesian Paradigm.”
Chung, and Chung. 2014. An Efficient Approach for Computing Optimal Low-Rank Regularized Inverse Matrices.” Inverse Problems.
Cordero, Soto-Quiros, and Torregrosa. 2021. A General Class of Arbitrary Order Iterative Methods for Computing Generalized Inverses.” Applied Mathematics and Computation.
Cunningham, Zabounidis, Agrawal, et al. 2020. Normalizing Flows Across Dimensions.”
Dashti, and Stuart. 2015. The Bayesian Approach To Inverse Problems.” arXiv:1302.6989 [Math].
Foong, Li, Hernández-Lobato, et al. 2019. ‘In-Between’ Uncertainty in Bayesian Neural Networks.” arXiv:1906.11537 [Cs, Stat].
Giordano, and Nickl. 2020. Consistency of Bayesian Inference with Gaussian Process Priors in an Elliptic Inverse Problem.” Inverse Problems.
Grumitt, Karamanis, and Seljak. 2023. Flow Annealed Kalman Inversion for Gradient-Free Inference in Bayesian Inverse Problems.”
Immer, Korzepa, and Bauer. 2021. Improving Predictions of Bayesian Neural Nets via Local Linearization.” In International Conference on Artificial Intelligence and Statistics.
Kaipio, and Somersalo. 2005. Statistical and Computational Inverse Problems. Applied Mathematical Sciences.
Kaipio, and Somersalo. 2007. Statistical Inverse Problems: Discretization, Model Reduction and Inverse Crimes.” Journal of Computational and Applied Mathematics.
Kennedy, and O’Hagan. 2001. Bayesian Calibration of Computer Models.” Journal of the Royal Statistical Society: Series B (Statistical Methodology).
Knapik, van der Vaart, and van Zanten. 2011. Bayesian Inverse Problems with Gaussian Priors.” The Annals of Statistics.
Matthies, Zander, Rosić, et al. 2016. Parameter Estimation via Conditional Expectation: A Bayesian Inversion.” Advanced Modeling and Simulation in Engineering Sciences.
Mosegaard, and Tarantola. 1995. Monte Carlo Sampling of Solutions to Inverse Problems.” Journal of Geophysical Research: Solid Earth.
———. 2002. Probabilistic Approach to Inverse Problems.” In International Geophysics. International Handbook of Earthquake and Engineering Seismology, Part A.
O’Hagan. 2006. Bayesian Analysis of Computer Code Outputs: A Tutorial.” Reliability Engineering & System Safety, The Fourth International Conference on Sensitivity Analysis of Model Output (SAMO 2004).
Plumlee. 2017. Bayesian Calibration of Inexact Computer Models.” Journal of the American Statistical Association.
Sainsbury-Dale, Zammit-Mangion, and Huser. 2022. Fast Optimal Estimation with Intractable Models Using Permutation-Invariant Neural Networks.”
Sambridge, and Mosegaard. 2002. Monte Carlo Methods in Geophysical Inverse Problems.” Reviews of Geophysics.
Särkkä, Solin, and Hartikainen. 2013. Spatiotemporal Learning via Infinite-Dimensional Bayesian Filtering and Smoothing: A Look at Gaussian Process Regression Through Kalman Filtering.” IEEE Signal Processing Magazine.
Schillings, and Stuart. 2017. Analysis of the Ensemble Kalman Filter for Inverse Problems.” SIAM Journal on Numerical Analysis.
Schwab, and Stuart. 2012. Sparse Deterministic Approximation of Bayesian Inverse Problems.” Inverse Problems.
Spantini, Cui, Willcox, et al. 2017. Goal-Oriented Optimal Approximations of Bayesian Linear Inverse Problems.” SIAM Journal on Scientific Computing.
Spantini, Solonen, Cui, et al. 2015. Optimal Low-Rank Approximations of Bayesian Linear Inverse Problems.” SIAM Journal on Scientific Computing.
Stuart. 2010. Inverse Problems: A Bayesian Perspective.” Acta Numerica.
Tait, and Damoulas. 2020. Variational Autoencoding of PDE Inverse Problems.” arXiv:2006.15641 [Cs, Stat].
Tarantola. 2005. Inverse Problem Theory and Methods for Model Parameter Estimation.
———. 2007. Mapping Of Probabilities.
Tenorio. 2017. An Introduction to Data Analysis and Uncertainty Quantification for Inverse Problems. Mathematics in Industry.
Tonolini, Radford, Turpin, et al. 2020. Variational Inference for Computational Imaging Inverse Problems.” Journal of Machine Learning Research.
Wacker. 2017. Laplace’s Method in Bayesian Inverse Problems.” arXiv:1701.07989 [Math].
Wei, Fan, Carin, et al. 2017. An Inner-Loop Free Solution to Inverse Problems Using Deep Neural Networks.” arXiv:1709.01841 [Cs].
Yang, Meng, and Karniadakis. 2021. B-PINNs: Bayesian Physics-Informed Neural Networks for Forward and Inverse PDE Problems with Noisy Data.” Journal of Computational Physics.
Zammit-Mangion, Bertolacci, Fisher, et al. 2021. WOMBAT v1.0: A fully Bayesian global flux-inversion framework.” Geoscientific Model Development Discussions.
Zhao, and Cui. 2023. Tensor-Based Methods for Sequential State and Parameter Estimation in State Space Models.”

Footnotes

  1. There is also a strand of the literature which refers to any form of Bayesian inference as an inverse problem, but this usage does not draw a helpful distinction for me so I avoid it.↩︎