
Released

Paper

Low-rank Computation of Posterior Covariance Matrices in Bayesian Inverse Problems

MPS-Authors
/persons/resource/persons86253

Benner, Peter
Computational Methods in Systems and Control Theory, Max Planck Institute for Dynamics of Complex Technical Systems, Max Planck Society;

/persons/resource/persons206849

Qiu, Yue
Computational Methods in Systems and Control Theory, Max Planck Institute for Dynamics of Complex Technical Systems, Max Planck Society;

/persons/resource/persons86493

Stoll, Martin
Numerical Linear Algebra for Dynamical Systems, Max Planck Institute for Dynamics of Complex Technical Systems, Max Planck Society;

External Resource
No external resources are shared
Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)

1703.05638.zip
(Preprint), 2MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Benner, P., Qiu, Y., & Stoll, M. (in preparation). Low-rank Computation of Posterior Covariance Matrices in Bayesian Inverse Problems.


Cite as: https://hdl.handle.net/21.11116/0000-0000-2E2A-F
Abstract
We consider the problem of estimating the uncertainty in statistical inverse problems using Bayesian inference. When the probability densities of the noise and the prior are Gaussian, the solution of such a statistical inverse problem is also Gaussian and is therefore characterized by the mean and covariance matrix of the posterior probability density. However, the posterior covariance matrix is dense and large, so computing it explicitly is infeasible for high-dimensional parameter spaces. It is shown that for many ill-posed problems the Hessian matrix of the data-misfit part has low numerical rank, which makes a low-rank approximation of the posterior covariance matrix possible. Such a low-rank approximation requires solving a forward partial differential equation (PDE) and the adjoint PDE in both space and time. This in turn gives $\mathcal{O}(n_x n_t)$ complexity for both computation and storage, where $n_x$ is the dimension of the spatial discretization and $n_t$ the dimension of the temporal discretization. Such computational and storage demands are infeasible for large-scale problems. To overcome this obstacle, we develop a new approach that combines a recently developed low-rank-in-time algorithm with the low-rank Hessian method. We reduce both the computational complexity and the storage requirement from $\mathcal{O}(n_x n_t)$ to $\mathcal{O}(n_x + n_t)$. We use numerical experiments to illustrate the advantages of our approach.
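
For illustration, the following is a minimal NumPy sketch (not the authors' code) of the standard low-rank posterior covariance update that the abstract alludes to: given a factor L of the prior covariance, G_pr = L L^T, and the r dominant eigenpairs (V, lam) of the prior-preconditioned data-misfit Hessian Htilde = L^T H L, the Gaussian posterior covariance G_post = (H + G_pr^{-1})^{-1} = L (I + Htilde)^{-1} L^T can be applied to a vector without ever forming it as a dense matrix. All names here (apply_posterior_cov, L, V, lam) are hypothetical, and the paper's additional low-rank-in-time compression of the space-time PDE solves is not reproduced.

import numpy as np

def apply_posterior_cov(x, L, V, lam):
    # Apply the approximate posterior covariance
    #   G_post ~= L @ (I - V @ diag(lam/(1+lam)) @ V.T) @ L.T
    # to a vector x, where G_pr = L @ L.T is the prior covariance and
    # (V, lam) are the r dominant eigenpairs of Htilde = L.T @ H @ L.
    d = lam / (1.0 + lam)            # eigenvalue filter from (I + Htilde)^{-1}
    y = L.T @ x
    y = y - V @ (d * (V.T @ y))      # rank-r correction, O(n*r) work
    return L @ y

# Small dense sanity check (illustration only; everything below is synthetic).
rng = np.random.default_rng(0)
n, r = 60, 6

A = rng.standard_normal((n, n))
L = np.linalg.cholesky(A @ A.T + n * np.eye(n))    # factor of an SPD prior covariance

B = rng.standard_normal((n, r))
H = B @ B.T                                        # data-misfit Hessian of numerical rank r

lam_all, V_all = np.linalg.eigh(L.T @ H @ L)       # eigenvalues in ascending order
lam, V = lam_all[-r:], V_all[:, -r:]               # keep the r dominant eigenpairs

G_post = np.linalg.inv(H + np.linalg.inv(L @ L.T)) # dense reference, small n only
x = rng.standard_normal(n)
err = np.linalg.norm(apply_posterior_cov(x, L, V, lam) - G_post @ x)
print(err)                                         # round-off level, since H has exact rank r

In practice L would be applied matrix-free (e.g. via a PDE-based prior) and the eigenpairs obtained with a Lanczos or randomized method, which is what keeps the cost low-rank rather than dense.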