
Troy Butler, University of Colorado Denver, Data Consistent Inversion: An Interactive Talk Using Jupyter Notebooks

March 26, 2019 | 3:00 pm - 4:00 pm EDT

(Brief Note: In this talk, we use Jupyter notebooks to recreate some of our published results in real time and to build a “computational intuition” for the ideas presented. In this way, we are (mostly) transparent about all the computations involved in our work. I will email these materials to anyone interested after the presentation.)
Models are useful for simulating key processes and generating significant amounts of (simulated) data on quantities of interest (QoI), computed as a set of functionals of the model solution. This simulated data can be compared directly to observable data to address many important questions in scientific modeling. However, many key characteristics governing system behavior, described as input parameters in the model, remain hidden from direct observation. Thus, scientific inference fundamentally depends on the formulation and solution of a stochastic inverse problem (SIP) to describe sets of probable model parameters.
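
In symbols (a sketch; the notation below is ours, not taken from the abstract): writing $\Lambda$ for the parameter space, $\mathcal{D}$ for the space of QoI values, and $Q : \Lambda \to \mathcal{D}$ for the QoI map, the SIP asks for a probability density on the parameters whose push-forward under $Q$ reproduces the observed density on the data:

    \[
      \text{find a density } \pi_\Lambda \text{ on } \Lambda
      \quad \text{such that} \quad
      Q_{\#}\,\pi_\Lambda = \pi_{\mathcal{D}}^{\mathrm{obs}},
    \]

where $Q_{\#}\,\pi_\Lambda$ denotes the push-forward of $\pi_\Lambda$ through $Q$.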

Statistical Bayesian inference (see e.g., [1, 2]) is the most common approach solving the SIP using both data and an assumed error model on the QoI to construct posterior distributions of model inputs and model discrepancies. We have recently developed an alternative “consistent” Bayesian solution to the SIP based on the measure-theoretic principles developed in [3]. We refer to this approach as “Data Consistent Inversion” and prove that this approach produces a distribution that is consistent in the sense that its push-forward through the QoI map will match the distribution on the observable data, i.e., we say that this distribution is consistent with the model and the data [4].
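
As a sketch, in notation adapted from [4] (with $\pi_\Lambda^{\mathrm{prior}}$ the prior density on $\Lambda$, $\pi_{\mathcal{D}}^{\mathrm{obs}}$ the observed density on the QoI, and $\pi_{\mathcal{D}}^{Q(\mathrm{prior})}$ the push-forward of the prior through $Q$), the updated density takes the form

    \[
      \pi_\Lambda^{\mathrm{up}}(\lambda)
      = \pi_\Lambda^{\mathrm{prior}}(\lambda)\,
        \frac{\pi_{\mathcal{D}}^{\mathrm{obs}}\big(Q(\lambda)\big)}
             {\pi_{\mathcal{D}}^{Q(\mathrm{prior})}\big(Q(\lambda)\big)},
    \]

and consistency means precisely that the push-forward of $\pi_\Lambda^{\mathrm{up}}$ through $Q$ equals $\pi_{\mathcal{D}}^{\mathrm{obs}}$.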

Our approach requires only approximating the push-forward probability density of the prior, which is fundamentally a forward propagation of uncertainty. We briefly summarize this approach, including existence, uniqueness, and stability of solutions. A comparison to statistical Bayesian inference is also provided.
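
To make the forward-propagation step concrete, here is a minimal Python sketch of one common sampling recipe (our illustration, not the speaker's notebooks): push prior samples through a QoI map, estimate the push-forward density with a kernel density estimate, and then reweight and accept samples so the accepted set approximately follows the updated density. The map Q, the prior, and the observed density below are hypothetical placeholders.

    # Sketch: approximate the push-forward of the prior and reweight samples.
    # Q, the prior, and the observed density are toy placeholders, not the
    # examples from the talk or the cited papers.
    import numpy as np
    from scipy.stats import gaussian_kde, norm

    rng = np.random.default_rng(0)

    def Q(lam):
        # Hypothetical scalar QoI map on a 2-D parameter space.
        return lam[:, 0] ** 2 + 0.5 * lam[:, 1]

    # 1. Sample the prior and push the samples through the QoI map.
    prior_samples = rng.uniform(-1.0, 1.0, size=(50_000, 2))
    q_samples = Q(prior_samples)

    # 2. Approximate the push-forward density of the prior with a Gaussian KDE.
    pushforward_pdf = gaussian_kde(q_samples)

    # 3. Assumed observed density on the QoI (placeholder: a normal density).
    obs_pdf = norm(loc=0.3, scale=0.1).pdf

    # 4. Reweight prior samples by obs / push-forward and accept/reject, so the
    #    accepted samples approximately follow the updated density.
    ratios = obs_pdf(q_samples) / pushforward_pdf(q_samples)
    accept = rng.uniform(size=ratios.size) < ratios / ratios.max()
    updated_samples = prior_samples[accept]

    # Diagnostic: the push-forward of the accepted samples should roughly
    # match the observed density.
    print("accepted:", accept.sum(), "mean QoI:", Q(updated_samples).mean())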

Motivated by computationally expensive models, we discuss the impact that using approximate models for the QoI has on the construction of the push-forward of the prior density. We then outline the basic theoretical argument for convergence of the push-forward of the prior density, using a generalized version of the Arzelà-Ascoli theorem to prove a converse of Scheffé’s theorem, and discuss rates of convergence [5].
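
For context (a reminder we add here, not part of the abstract): the classical Scheffé theorem passes from pointwise convergence of densities to convergence in L¹; the converse proved in [5] runs in the other direction, under hypotheses we do not restate here. The classical statement is:

    \[
      f_n, f \ \text{probability densities with } f_n \to f \ \text{a.e.}
      \;\Longrightarrow\;
      \int |f_n - f| \, d\mu \to 0,
    \]

i.e., $f_n \to f$ in $L^1$, and hence the corresponding measures converge in total variation.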

References
[1] M. Kennedy and A. O’Hagan, “Bayesian calibration of computer models”, Journal of the Royal Statistical Society: Series B (Statistical Methodology), Vol. 63, pp. 425–464, (2001).

[2] A. M. Stuart, “Inverse problems: A Bayesian perspective”, Acta Numerica, Vol. 19, pp. 451–559, (2010).

[3] J. Breidt, T. Butler, and D. Estep, “A measure-theoretic computational method to inverse sensitivity problems I: method and analysis”, SIAM J. Numer. Anal., Vol. 49, pp. 1836–1859, (2012).

[4] T. Butler, J. Jakeman, and T. Wildey, “Combining Push-Forward Measures and Bayes’ Rule to Construct Consistent Solutions to Stochastic Inverse Problems”, SIAM J. Sci. Comput., Vol. 40(2), pp. A984–A1011, (2018).

[5] T. Butler, J. Jakeman, and T. Wildey, “Convergence of Probability Densities using Approximate Models for Forward and Inverse Problems in Uncertainty Quantification”, SIAM J. Sci. Comput., Vol. 40(5), pp. A3523–A3548, (2018).
