Sullivan, T.J. 2015. Introduction to Uncertainty Quantification. Vol. 63. Texts in Applied Mathematics. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-23395-6.
Let $f : \mathbb{X} \to \mathbb{Y}$ be our model (technically a measurable function). Consider a random variable $X$ with a known probability measure $\mu : 2^{\mathbb{X}} \to [0, 1]$. The goal of forward UQ is to determine the push-forward of $\mu$ through $f$. That is, $f_{\#}\mu : 2^{\mathbb{Y}} \to [0, 1]$, but more specifically, $$(f_{\#}\mu)(E) = \mu(f^{-1}(E)) = \mu(\{ x \in \mathbb{X} : f(x) \in E \}).$$
(To be rigorous, the probability measures could be defined on more general sigma-algebras, rather than the power set.)
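As a concrete illustration (not from Sullivan): take $\mathbb{X} = \mathbb{Y} = \mathbb{R}$, $f(x) = x^2$, and $\mu$ the standard normal measure. Then for $t \ge 0$,
$$(f_{\#}\mu)([0, t]) = \mu(\{ x : x^2 \le t \}) = \mu([-\sqrt{t}, \sqrt{t}\,]) = \operatorname{erf}\!\left(\sqrt{t/2}\right),$$
which identifies $f_{\#}\mu$ as the chi-squared distribution with one degree of freedom.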
Determining $f_{\#}\mu$ in full is the gold standard of forward UQ, but this is often quite difficult, so several simplified cases are of interest.
Reliability or Certification
As a starting point, for some event of failure $Y_{\text{fail}} \subseteq \mathbb{Y}$, determine the failure probability $$(f_{\#}\mu)(Y_{\text{fail}}) = \mathbb{P}_{\mu}(\{ x \in \mathbb{X} : f(x) \in Y_{\text{fail}} \}).$$
This is a specific class of forward UQ problems.
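A minimal sketch of estimating such a failure probability by direct sampling; the model `f`, the threshold, and the input distribution here are hypothetical stand-ins, not from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Hypothetical stand-in for an expensive forward model.
    return np.sin(x) + 0.1 * x**2

y_crit = 1.5  # failure if f(x) exceeds this threshold (assumed)

x = rng.normal(loc=0.0, scale=1.0, size=100_000)  # samples from mu (assumed standard normal)
p_fail = np.mean(f(x) > y_crit)                   # Monte Carlo estimate of (f#mu)(Y_fail)
print(f"estimated failure probability: {p_fail:.4f}")
```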
We often do not need the above formalism, so we will continue by taking our forward model as $y = f(x)$ and a random variable $X \sim \mu$, in which case the goal of forward UQ is to predict the distribution of $Y = f(X)$.
Monte Carlo
The baseline, most common-sense method is to draw samples $x_i \sim \mu$ for $i = 1, \dots, N$ and compute $$y_i = f(x_i),$$
and use the empirical distribution $\hat{\mu}_N = \frac{1}{N} \sum_{i=1}^{N} \delta_{y_i}$ as an approximation of $f_{\#}\mu$. The drawbacks are the computational expense of evaluating $f$ and the slow convergence to the distribution ($O(N^{-1/2})$ for moment estimates).
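A minimal Monte Carlo sketch; the model and input distribution are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000

def f(x):
    # Hypothetical nonlinear forward model.
    return np.exp(0.5 * x) - 1.0

x = rng.normal(size=N)  # x_i ~ mu (assumed standard normal)
y = f(x)                # the empirical distribution of y approximates f#mu

mean = y.mean()
stderr = y.std(ddof=1) / np.sqrt(N)  # O(N^{-1/2}) Monte Carlo error for the mean
print(f"E[Y] ~ {mean:.4f} +/- {stderr:.4f}")
```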
This can be paired with Generative Modeling, where we aim to train a model that generates samples from $f_{\#}\mu$.
Other Propagation Methods
Some approaches linearize $f$ and then propagate the distribution through the resulting linear function (which may itself be nontrivial for some distributions). Other approaches approximate the distribution with a tractable one (such as a normal) and then propagate this through the nonlinear function $f$. One such approach is the Unscented Transform, which is a specific Sigma Point Method; a sketch is given below.
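A minimal sketch of the Unscented Transform in its standard scaled form (Julier & Uhlmann); the parameter defaults are conventional choices, not prescriptions from the source:

```python
import numpy as np
from scipy.linalg import cholesky

def unscented_transform(f, mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear f via sigma points."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    # 2n+1 sigma points: the mean plus symmetric offsets along the columns
    # of a matrix square root S with S @ S.T = (n + lam) * cov.
    S = cholesky((n + lam) * cov, lower=True)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])   # shape (2n+1, n)
    # Weights for the mean and covariance estimates.
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    # Propagate each sigma point through the nonlinearity.
    Y = np.array([np.atleast_1d(f(s)) for s in sigma])  # shape (2n+1, m)
    y_mean = wm @ Y
    d = Y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# Example usage with a hypothetical nonlinear map.
m = np.array([0.5, 1.0])
P = np.diag([0.1, 0.2])
g = lambda s: np.array([np.sin(s[0]), s[0] * s[1]])
y_mean, y_cov = unscented_transform(g, m, P)
print(y_mean, y_cov, sep="\n")
```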
When $f$ involves the solution to a differential equation, this propagation can be described through the general Fokker-Planck Equation (see also PINNs for Evolution of pdfs), which adds a dimension to the differential equation to give an equation for the pdf.
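For reference, the standard scalar form: for an SDE $\mathrm{d}X_t = a(X_t, t)\,\mathrm{d}t + b(X_t, t)\,\mathrm{d}W_t$, the pdf $p(x, t)$ of $X_t$ satisfies
$$\frac{\partial p}{\partial t} = -\frac{\partial}{\partial x}\left[ a(x, t)\, p(x, t) \right] + \frac{1}{2} \frac{\partial^2}{\partial x^2}\left[ b(x, t)^2\, p(x, t) \right],$$
where the deterministic case $b \equiv 0$ reduces to the Liouville (continuity) equation.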
Importance Sampling
The main idea is to use a biasing distribution $\nu$ that is different from $\mu$ such that estimates of moments (i.e., evaluations of expected values) have lower variance; samples drawn from $\nu$ are reweighted by the likelihood ratio $w(x) = \frac{\mathrm{d}\mu}{\mathrm{d}\nu}(x)$, since $\mathbb{E}_{\mu}[g(X)] = \mathbb{E}_{\nu}[g(X)\, w(X)]$. Then fewer samples need to be drawn. The difficulty is constructing the biasing distribution $\nu$.
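A minimal sketch for a rare-event probability, using a shifted normal as a hand-picked (assumed) biasing distribution:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
N = 10_000
t = 4.0  # estimate P(X > 4) for X ~ N(0, 1), a rare event

# Biasing distribution nu = N(4, 1), centered on the rare region (a hand-picked choice).
x = rng.normal(loc=t, scale=1.0, size=N)
w = norm.pdf(x) / norm.pdf(x, loc=t)  # likelihood ratio d(mu)/d(nu)
est = np.mean((x > t) * w)            # E_nu[1{x > t} w] = P_mu(X > t)
print(f"IS estimate: {est:.3e}  (exact: {norm.sf(t):.3e})")
```

Plain Monte Carlo would need on the order of millions of samples to see even a handful of exceedances here; the biased estimator concentrates samples where they matter and corrects with the weights.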
Surrogate Modeling
The approach is to replace $f$ with a surrogate $\tilde{f}$ that is cheaper to evaluate (linearization being one example). The difficulty is that these functions may disagree. This ties into the idea of Computational Irreducibility: if there exists a function $\tilde{f}$ that approximates $f$ and is cheaper to evaluate, then $f$ is computationally reducible.
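A minimal sketch: fit a cheap polynomial surrogate on a small number of evaluations of an (assumed, illustrative) expensive model, then push many samples through the surrogate instead:

```python
import numpy as np

rng = np.random.default_rng(3)

def f_expensive(x):
    # Stand-in for an expensive forward model.
    return np.sin(2 * x) + 0.3 * x**2

# Fit a degree-5 polynomial surrogate from a small design of expensive runs.
x_train = np.linspace(-3, 3, 20)
coeffs = np.polynomial.polynomial.polyfit(x_train, f_expensive(x_train), deg=5)
f_tilde = lambda x: np.polynomial.polynomial.polyval(x, coeffs)

# Propagate many samples through the cheap surrogate rather than f itself.
x = rng.normal(size=200_000)
y = f_tilde(x)
print(f"surrogate mean: {y.mean():.4f}, "
      f"expensive-model MC mean: {f_expensive(x[:2000]).mean():.4f}")
```

The surrogate is only trustworthy where it was trained; samples of $X$ falling outside $[-3, 3]$ here are exactly where $\tilde{f}$ and $f$ may disagree.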