glossary



Overview

VVUQ stands for Verification, Validation and Uncertainty Quantification.

Validation is the process of analyzing the accuracy with which the model represents the real world process (Oberkampf, 2010).

Verification is the process of identifying whether the computational model accurately simulates the mathematical model and its solution (Oberkampf, 2010). Verification can be divided into code verification (finding and fixing errors in the numerical algorithms or in the source code, ensuring good programming practices) and solution verification (estimation of the numerical error).
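A common code-verification technique is the order-of-accuracy test: compute the numerical error at two resolutions and check that the observed convergence rate matches the theoretical one. A minimal sketch in Python (the central-difference scheme and test function are illustrative, not part of VECMAtk):

```python
import math

def f(x):
    return x ** 3          # test function with a known exact derivative

def fprime_exact(x):
    return 3 * x ** 2

def central_diff(func, x, h):
    # second-order central finite-difference approximation of func'(x)
    return (func(x + h) - func(x - h)) / (2 * h)

x0 = 1.0
e_coarse = abs(central_diff(f, x0, 0.1) - fprime_exact(x0))
e_fine = abs(central_diff(f, x0, 0.05) - fprime_exact(x0))

# observed order of accuracy; central differences should give ~2
observed_order = math.log(e_coarse / e_fine) / math.log(2)
```

If the observed order deviates from the theoretical one, that points to a coding error in the numerical algorithm; the same error measurements also serve solution verification, since they estimate the numerical error itself.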

Uncertainty Quantification (UQ) is the discipline that seeks to estimate uncertainty in the model input and output parameters, to analyse the sources of these uncertainties, and to reduce their magnitudes.

VECMA Specific Terms

Patterns within VECMA are abstractions that describe, in a non-application and non-domain specific manner, a workflow or algorithm for conducting validation, verification, uncertainty quantification or sensitivity analysis. The set of patterns described in the proposal is detailed here.

Uncertainty Quantification Pattern (UQP) is the term for workflows and algorithms focussed on uncertainty quantification and propagation or sensitivity analysis.

Validation & Verification Pattern (VVP) is our description for workflows and algorithms concerned with validation and verification.

Primitives is the term used in the proposal for what we now call Patterns.

VECMA toolkit (VECMAtk) is the software, and associated workflow descriptions, designed or curated within the project with the goal of implementing UQPs and VVPs.

Elements are general components designed to perform common tasks within UQPs or VVPs. We have identified four categories of element that will need to be implemented in or supported by VECMAtk: sampling, analysis, comparison and distribution description.

Development Tracks

Fast track developments are prioritised in the first year and finalised by M18. They serve two main purposes: to rapidly establish the first working, usable UQ techniques on existing multiscale computing applications, and to rapidly gain practical experience that supports the deep track design and implementation processes.

Deep track developments take place over the full duration of the project, but are especially prioritised after the first year. The deep track developments serve to deliver the most innovative project outputs.

Uncertainty Quantification

Uncertainty is the absence or imperfection of information; it is frequently measured by variance, but is fully characterised by a distribution function (Hammitt, 1999; Rothschild, 1970).

Aleatory uncertainty is the natural variability of a model parameter, which cannot be reduced (Der Kiureghian, 2009).

Epistemic uncertainty is imprecision that can be reduced by obtaining more knowledge or by improving models (Der Kiureghian, 2009).

Error is a recognisable inaccuracy that, in contrast to uncertainty, does not arise from a lack of knowledge (Oberkampf, 2002).

Classes of UQ Analysis and Methodology

Forward UQ (uncertainty propagation) is the process of estimating uncertainty in the model output, given uncertainty in the model input parameters, by propagating the input uncertainty through the model function (Johnstone, 2016).
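The simplest non-intrusive realisation of forward UQ is Monte Carlo propagation: sample the uncertain input from its distribution, run the model on each sample, and summarise the resulting output distribution. A minimal sketch, with a hypothetical model and input distribution (not taken from any VECMA application):

```python
import random
import statistics

def model(x):
    # hypothetical deterministic model, treated as a black box
    return x ** 2

random.seed(1)

# uncertain input: standard normal; propagate by sampling
samples = [model(random.gauss(0.0, 1.0)) for _ in range(50000)]

mean_out = statistics.fmean(samples)     # analytic value: E[X^2] = 1
var_out = statistics.variance(samples)   # analytic value: Var(X^2) = 2
```

The sample mean and variance converge to the analytic moments as the number of samples grows, at the usual Monte Carlo rate of O(1/sqrt(N)).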

Inverse UQ estimates uncertainty in the model input parameters given observations of the model output (Nagel, 2017; Wu 2017).

Non-intrusive UQ methods explore uncertainty in a computational model without requiring any modification of the existing model code, using the model as a black box (van den Bos, 2017; Eldred, 2009).

Intrusive UQ methods are techniques based on analysis of the model function, which usually require modifications to be made to the model code (Soize, 2017).

Semi-intrusive UQ is a family of methods that reduce the cost of UQ in multi-component models by exploiting the structure of the overall model without modifying the individual components (Nikishova, 2018).

Sensitivity Analysis

Sensitivity Analysis apportions the uncertainty in the model output among the uncertain model parameters or groups of parameters (Saltelli, 2001).

Global sensitivity methods analyse the sensitivity of a model's output to global variations of its inputs, i.e. across the whole variation range of the input parameters (Sobol, 2001).
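As a small illustration of the variance-based (Sobol) view of global sensitivity: the first-order index of an input is the fraction of output variance attributable to that input alone. For the hypothetical additive model below the index is known in closed form, which lets us check a Monte Carlo estimate of the total variance against it:

```python
import random
import statistics

def model(x1, x2):
    # hypothetical additive model with independent Uniform(0,1) inputs
    return 3.0 * x1 + x2

random.seed(0)
n = 50000
ys = [model(random.random(), random.random()) for _ in range(n)]

var_y = statistics.variance(ys)   # analytic: (9 + 1) * (1/12) = 10/12

# For an additive model the first-order Sobol index of x1 is
# Var(3*X1) / Var(Y); here Var(Uniform(0,1)) = 1/12, so S1 = 9/10.
s1 = (9.0 / 12.0) / var_y
```

For non-additive models the conditional-variance decomposition must be estimated by dedicated sampling designs (e.g. the estimators discussed in Saltelli, 2010), but the interpretation of the index is the same.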

Local sensitivity methods analyse the sensitivity of a model's output to local variations of its inputs, i.e. in the neighbourhood of a particular input vector (Saltelli, 2010).
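Local sensitivities are typically approximated by finite differences of the model output around a nominal input vector. A minimal sketch, with a hypothetical model function:

```python
def model(x):
    # hypothetical scalar model of two inputs
    return x[0] ** 2 + 3.0 * x[1]

def local_sensitivity(f, x0, h=1e-6):
    # forward finite-difference approximation of df/dx_i at x0
    base = f(x0)
    sens = []
    for i in range(len(x0)):
        x_pert = list(x0)
        x_pert[i] += h
        sens.append((f(x_pert) - base) / h)
    return sens

# at x0 = [2, 5] the exact partial derivatives are 2*x0[0] = 4 and 3
sens = local_sensitivity(model, [2.0, 5.0])
```

The step size h trades off truncation error (too large) against floating-point cancellation (too small); a central difference halves the truncation error at twice the cost.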

Surrogate models, or metamodels, are alternatives to the original code that produce approximately the same output in a shorter period of time (Owen, 2017).

Multiscale Modelling

Multiscale modeling is the field of solving problems which have important features at multiple scales of time and/or space. In the scope of VECMA this typically means that separate constituent models (or submodels) are run which describe phenomena at a particular scale with information being passed between them.

Coupling is the process in which output from one model or submodel is passed to another for further computation.

Scale bridging is the process in which outputs from a submodel at one scale are converted to become appropriate inputs for a submodel at a different scale.

Depending on the detail of the model, the interaction between two submodels may or may not involve feedback, signified by a two- or one-way coupling respectively. In general, the coupling topology of the submodels may be cyclic or acyclic (Chopard, 2014):

  • acyclic coupling topologies initiate each submodel only once and thus have a single synchronisation point.
  • cyclic coupling topologies may provide new inputs to submodels a number of times, equating to multiple synchronisation points.
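The cyclic case can be sketched as a loop in which each iteration is one synchronisation point: the macro state is bridged down to the micro scale, the micro submodel responds, and the result is bridged back up with feedback into the macro update. All function names and scale-bridging rules below are illustrative, not part of MMSF or VECMAtk:

```python
def macro_step(state, micro_flux):
    return state + 0.1 * micro_flux      # coarse-scale update

def micro_step(macro_state):
    return 0.5 * macro_state             # fine-scale response

def bridge_down(macro_state):
    return macro_state / 10.0            # convert macro units to micro units

def bridge_up(micro_out):
    return micro_out * 10.0              # convert micro units back to macro

state = 1.0
for step in range(5):                    # five synchronisation points
    micro_in = bridge_down(state)
    micro_out = micro_step(micro_in)
    state = macro_step(state, bridge_up(micro_out))
```

In an acyclic topology the loop body would run once per submodel; here the two-way coupling feeds the micro response back into the macro state at every synchronisation point.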

Multiscale Modelling and Simulation Framework (MMSF) is a framework designed to facilitate the design and execution of multiscale, multi-science applications. Its main emphasis is on pressing multiscale modellers to clearly separate single-scale models from the scale-bridging methods needed for them to interact (Chopard, 2014).

References

B. Chopard, J. Borgdorff and A. G. Hoekstra. “A framework for multi-scale modelling.”, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 372 (2014), doi: 10.1098/rsta.2013.0378

A. Der Kiureghian and O. Ditlevsen, “Aleatory or epistemic? Does it matter?.” Structural Safety 31.2 (2009): 105-112, https://doi.org/10.1016/j.strusafe.2008.06.020

M. S. Eldred and J. Burkardt, “Comparison of non-intrusive polynomial chaos and stochastic collocation methods for uncertainty quantification.” 47th AIAA aerospace sciences meeting including the new horizons forum and aerospace exposition (2009), https://doi.org/10.2514/6.2009-976

J. K. Hammitt and A. I. Shlyakhter. “The expected value of information and the probability of surprise.” Risk Analysis 19.1 (1999): 135-152, https://doi.org/10.1023/A:1006966613058

R. H. Johnstone, et al., “Uncertainty and variability in models of the cardiac action potential: Can we build trustworthy models?.” Journal of molecular and cellular cardiology 96 (2016): 49-62, https://doi.org/10.1016/j.yjmcc.2015.11.018

J. B. Nagel, “Bayesian techniques for inverse uncertainty quantification”. Diss. ETH Zurich, (2017), https://doi.org/10.3929/ethz-a-010835772

A. Nikishova and A. G. Hoekstra. “Semi-intrusive uncertainty quantification for multiscale models.” (2018), preprint: https://arxiv.org/abs/1806.09341

W. L. Oberkampf, et al., “Error and uncertainty in modeling and simulation.” Reliability Engineering & System Safety 75.3 (2002): 333-357, https://doi.org/10.1016/S0951-8320(01)00120-X

W. L. Oberkampf and C. J. Roy, “Verification and validation in scientific computing”. Cambridge University Press, 2010, ISBN: 9780511760396, https://doi.org/10.1017/CBO9780511760396

N. E. Owen, et al., “Comparison of surrogate-based uncertainty quantification methods for computationally expensive simulators.” SIAM/ASA Journal on Uncertainty Quantification 5.1 (2017): 403-435, https://epubs.siam.org/doi/10.1137/15M1046812

M. Rothschild and J. E. Stiglitz, “Increasing risk: I. A definition.” Journal of Economic Theory 2.3 (1970): 225-243, https://doi.org/10.1016/0022-0531(70)90038-4

A. Saltelli, et al., “Variance based sensitivity analysis of model output. Design and estimator for the total sensitivity index.” Computer Physics Communications 181.2 (2010): 259-270, https://doi.org/10.1016/j.cpc.2009.09.018

A. Saltelli and P. Annoni, “How to avoid a perfunctory sensitivity analysis.” Environmental Modelling & Software 25.12 (2010): 1508-1517, https://doi.org/10.1016/j.envsoft.2010.04.012

I. M. Sobol, “Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates.” Mathematics and Computers in Simulation 55.1-3 (2001): 271-280, https://doi.org/10.1016/S0378-4754(00)00270-6

C. Soize, “Uncertainty Quantification”. Springer International Publishing AG, 2017, https://link.springer.com/book/10.1007/978-3-319-54339-0

L. M. M. van den Bos, B. Koren and R. P. Dwight, “Non-intrusive uncertainty quantification using reduced cubature rules.” Journal of Computational Physics 332 (2017): 418-445, https://doi.org/10.1016/j.jcp.2016.12.011

X. Wu and T. Kozlowski, “Inverse uncertainty quantification of reactor simulations under the Bayesian framework using surrogate models constructed by polynomial chaos expansion.” Nuclear Engineering and Design 313 (2017): 29-52, https://doi.org/10.1016/j.nucengdes.2016.11.032

glossary.1552642393.txt.gz · Last modified: 2019/03/15 09:33 by dave.wright@ucl.ac.uk