The importance of uncertainty has been recognized in various modeling, simulation, and analysis applications, where inherent assumptions and simplifications affect the accuracy of model predictions for physical phenomena. Because model predictions are now heavily relied upon for simulation-based system design, including new materials, vehicles, mechanical and civil structures, and even new drugs, erroneous predictions can have catastrophic consequences. Therefore, uncertainty and the associated risks due to model errors should be quantified to support robust systems engineering.

Different uncertainty quantification (UQ) methods, such as adaptive sampling, sensitivity analysis, stochastic expansions, reliability methods, surrogate modeling, and Bayesian approaches, have been applied to assess uncertainty associated with mathematical and computational models. Yet developing UQ methods that are both accurate and efficient for multiscale simulation still faces challenges. For instance, quantifying model-form uncertainty in quantum mechanics simulation requires in-depth knowledge of the simulation mechanisms, since nonintrusive UQ methods and sensitivity analysis would be very costly. Molecular dynamics simulation output is very sensitive to the choice of empirical interatomic potential functions, calibration processes rely on empirical and ad hoc methods, and verifiable experimental data that match simulation conditions at atomistic scales are commonly lacking. Continuum-scale simulations employ various simplifications for complex and nonlinear phenomena such as fracture in heterogeneous materials and fluid flow in random media. Moreover, existing methods lack scalability for large-system prediction when the input parameter space is very high-dimensional and the parameters can be both continuous and discrete.

This Special Issue focuses on the state of the art of methods to quantify model-form and parameter uncertainties for simulations at multiple scales. The goal is to have a wide representation of approaches to assess and improve the confidence of simulation-based predictions. These include how to quantify uncertainty propagated through simulations at multiple length and time scales, how to identify sources of uncertainty and devise uncertainty reduction and management strategies, how to improve the efficiency and scalability of uncertainty quantification approaches, and how to calibrate models effectively with information from experimental observations and other simulation results. A total of eight papers contribute to these topics in this Special Issue.

Model-form and parameter uncertainty associated with atomistic-scale materials modeling and simulation was not widely recognized until recently. The major sources of model uncertainty in molecular dynamics simulation include the interatomic potentials, the cut-off distance, numerical integrators, and simulation acceleration mechanisms. The paper by Tschopp et al., entitled “Quantifying Parameter Sensitivity and Uncertainty of a Reactive Interatomic Potential for Saturated Hydrocarbons,” provided an excellent example of sensitivity analysis methods applied to interatomic potentials. Historically, the development of interatomic potential models for molecular dynamics has relied on ad hoc procedures and the intuition of the potential developer, with no clear criteria for the success or reliability of the resulting potentials. This paper presented a case study of methods that advance the goal of making the process more robust and systematic; in particular, it showed how to efficiently quantify parameter sensitivities and how to construct optimal training sets. The authors explored the parameters of the modified embedded-atom method through a design-of-experiments and Latin hypercube sampling approach to better understand how individual modified embedded-atom method parameters affect several properties of molecules (energy, bond distances, bond angles, and dihedral angles) and how various molecules are correlated in terms of these properties. They showed that a fractional factorial design over a large input space of 38 parameters required about one-tenth of the runs needed by a Latin hypercube sampling study to achieve similar results. Extensive sensitivity tables were provided.
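
As a rough illustration of this kind of sampling-based parameter screening (a minimal sketch, not the authors' code; the toy_property function, the four-parameter space, and the bounds are hypothetical stand-ins), the following draws a Latin hypercube sample over a small parameter space and ranks the parameters by the magnitude of their correlation with a predicted property.

```python
# Latin hypercube screening of a toy "potential" model: sample the parameter
# space and rank parameters by how strongly they correlate with an output property.
import numpy as np
from scipy.stats import qmc

def toy_property(theta):
    """Hypothetical stand-in for a property (e.g., an energy) predicted by a potential."""
    a, b, c, d = theta.T
    return a**2 + 0.5 * a * b + 0.1 * np.sin(c) + 0.01 * d

n_params, n_samples = 4, 256
sampler = qmc.LatinHypercube(d=n_params, seed=0)
# Scale unit-cube samples to assumed parameter bounds [-1, 1].
theta = qmc.scale(sampler.random(n_samples), [-1] * n_params, [1] * n_params)
y = toy_property(theta)

# Simple correlation-based sensitivity ranking of the parameters.
sens = [abs(np.corrcoef(theta[:, j], y)[0, 1]) for j in range(n_params)]
for j in np.argsort(sens)[::-1]:
    print(f"parameter {j}: |corr| = {sens[j]:.3f}")
```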

Model-form errors also exist in quantum-level first-principles calculations. For instance, in density-functional theory, errors arise in the exchange-correlation functionals because of the local spin-density and related Jacob's-ladder approximations, as well as from the Born–Oppenheimer and mean-field approximations, among others. In the paper by Tran et al. entitled “An Efficient First-Principles Saddle Point Searching Method Based on Distributed Kriging Metamodels,” a Gaussian process regression, or Kriging, metamodel is used to construct the potential energy surface from density-functional theory calculations and simultaneously estimate model-form uncertainty. To address the high-dimensionality challenge in Kriging, the symmetry of the materials system is exploited to reduce the dimension. Furthermore, a distributed Kriging strategy is developed so that metamodels are constructed from clustered data with reduced dimensions and faster convergence. The scalability of the algorithm is demonstrated with a speed-up of two orders of magnitude. Local minima and saddle points on the potential energy surface can then be searched efficiently on the metamodels.
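
The core idea of a Kriging metamodel with a built-in uncertainty estimate can be sketched as follows (a minimal illustration under assumed names; toy_energy is a hypothetical one-dimensional surrogate for an expensive first-principles energy evaluation, and the kernel choice is arbitrary).

```python
# Fit a Gaussian-process (Kriging) metamodel to sparse "energy" evaluations and
# read off both the prediction and its model-form uncertainty (predictive std. dev.).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def toy_energy(x):
    """Hypothetical 1-D energy profile standing in for a DFT calculation."""
    return np.sin(3 * x) + 0.5 * x**2

x_train = np.linspace(-2, 2, 8).reshape(-1, 1)        # sparse "first-principles" runs
y_train = toy_energy(x_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                              normalize_y=True)
gp.fit(x_train, y_train)

x_query = np.linspace(-2, 2, 200).reshape(-1, 1)
mean, std = gp.predict(x_query, return_std=True)      # surface prediction + uncertainty
print("largest predictive std on the surface:", std.max())
```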

Obtaining the probability distributions of the quantities of interest is an essential task in UQ. How to improve sampling efficiency and obtain the maximum amount of information with the minimum level of effort is an important issue. The paper authored by Walsh et al., entitled “Optimal Experimental Design Using a Consistent Bayesian Approach,” outlined a method for the optimal acquisition of experimental data to inform the stochastic description of model input parameters. The proposed experimental design method is based on a new idea called “consistent Bayesian” estimation, in which the goal is to identify a posterior probability density of the parameters such that when the posterior is “pushed forward” through the computational model, the resulting model observations match the observed density of the experimental results. Note that this is related to but different from a typical Bayesian approach, in which the posterior density is proportional to the likelihood times the prior density. Building on the consistent Bayesian approach, the authors presented an optimal experimental design method that identifies the observation(s) that maximize the expected information gain over the prior probability density of the model parameters. The authors used the Kullback–Leibler divergence within the calculation of expected information gain, and they used a discrete optimization approach to identify the best experimental design point from a set of discrete candidates. They demonstrated the approach on four computational models (convection diffusion, transient transport, computational mechanics, and incompressible flow in porous media) to highlight its utility and properties.
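
A simplified sketch of ranking discrete candidate designs by expected information gain is given below. This is not the authors' implementation: the observable function, the candidate design values, the synthetic observed density, and the use of self-normalized weights to approximate the Kullback–Leibler divergence are all illustrative assumptions.

```python
# Rank candidate observation "designs" by an expected-information-gain score,
# estimated as an average KL divergence between a reweighted (posterior-like)
# sample and the prior sample of a scalar parameter.
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(1)
prior_samples = rng.normal(0.0, 1.0, size=5000)       # prior on a scalar parameter

def observable(theta, design):
    """Hypothetical observation operator; its informativeness depends on the design."""
    return np.sin(design * theta)

def expected_information_gain(design, obs_std=0.05, n_rep=20):
    """Average KL(posterior || prior), approximated with self-normalized weights."""
    q = observable(prior_samples, design)
    pushforward = gaussian_kde(q)                      # prior push-forward density
    gains = []
    for _ in range(n_rep):
        observed = norm(loc=rng.choice(q), scale=obs_std)   # synthetic observed density
        w = observed.pdf(q) / pushforward(q)           # reweight the prior samples
        w /= w.sum()
        gains.append(np.sum(w * np.log(np.maximum(len(w) * w, 1e-300))))
    return np.mean(gains)

candidates = [0.5, 1.0, 2.0, 4.0]                      # discrete candidate designs
best = max(candidates, key=expected_information_gain)
print("design with the largest expected information gain:", best)
```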

Bayesian inference typically relies on Markov chain Monte Carlo sampling to estimate posterior probabilities, especially when the distributions do not follow special analytical forms. By keeping track of the states visited during Markov chain transitions, the posterior distributions can be obtained. However, the sampling process is not efficient. Sampling strategies such as the Metropolis–Hastings and hybrid Monte Carlo algorithms have been introduced to improve efficiency, yet they still face challenges when posteriors are multimodal or have narrow peaks and/or when the sampling space is high-dimensional. In the paper by Wu et al. entitled “Bayesian Annealed Sequential Importance Sampling (BASIS): An Unbiased Version of Transitional Markov Chain Monte Carlo,” a sequential importance sampling strategy is presented in which the bias introduced in transitional Markov chain Monte Carlo is reduced by limiting the lengths of the Markov chains. In transitional Markov chain Monte Carlo sampling, multiple intermediate, easy-to-sample target distributions are constructed instead of directly sampling the final, difficult one, so that resampling of the intermediate distributions enables better convergence to the final distribution. Computational experiments demonstrated the error reduction.
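
The general flavor of such annealed/transitional sampling can be illustrated as below (a minimal sketch, not the BASIS algorithm itself; the bimodal log_likelihood, the linear tempering schedule, and the single random-walk move per stage are all simplifying assumptions).

```python
# Tempered sequential importance sampling: bridge from the prior to a difficult
# posterior through intermediate targets prior * likelihood**beta, beta: 0 -> 1,
# reweighting, resampling, and briefly moving the particles at each stage.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def log_prior(theta):
    return norm.logpdf(theta, 0.0, 5.0)                # broad Gaussian prior

def log_likelihood(theta):
    """Hypothetical bimodal likelihood that a single plain Markov chain would struggle with."""
    return np.logaddexp(norm.logpdf(theta, -3.0, 0.2), norm.logpdf(theta, 3.0, 0.2))

n = 2000
theta = rng.normal(0.0, 5.0, size=n)                   # start from prior samples
betas = np.linspace(0.0, 1.0, 11)                      # annealing schedule

for b_prev, b in zip(betas[:-1], betas[1:]):
    # Importance weights for moving from exponent b_prev to b (prior factor cancels).
    logw = (b - b_prev) * log_likelihood(theta)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    theta = rng.choice(theta, size=n, p=w)             # resample
    # One Metropolis random-walk move targeting prior * likelihood**b.
    prop = theta + rng.normal(0.0, 0.3, size=n)
    log_acc = (log_prior(prop) + b * log_likelihood(prop)
               - log_prior(theta) - b * log_likelihood(theta))
    accept = np.log(rng.random(n)) < log_acc
    theta = np.where(accept, prop, theta)

print("posterior mean of |theta| (both modes are near 3):", np.abs(theta).mean())
```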

The presence of model-form uncertainty in simulation also brings challenging issues to model calibration. The paper by Ray et al., entitled “Learning an Eddy Viscosity Model Using Shrinkage and Bayesian Calibration: A Jet-In-Crossflow Case Study,” provided a typical case. Existing computational fluid dynamics (CFD) models have difficulty predicting complex turbulent interactions, such as a jet in crossflow at high Reynolds and Mach numbers, because of various simplifications made during modeling. A natural way of reducing model-form errors is to revise the models to include higher-order approximations. In order to improve the accuracy of Reynolds-averaged Navier–Stokes simulation, a nonlinear eddy viscosity model is introduced. To calibrate the model against experimental observations of turbulent stress while keeping it representative, the least absolute shrinkage and selection operator is applied, and the number of parameters is reduced from seven to three. Bayesian calibration is then applied to the remaining three parameters based on the vorticity field.
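
The shrinkage step can be illustrated with a small sketch (not the authors' setup; the synthetic sensitivity matrix, noise level, and regularization strength are arbitrary assumptions): LASSO drives uninformative coefficients to zero, and only the surviving parameters are passed to the more expensive Bayesian calibration.

```python
# Use LASSO shrinkage to decide which model parameters to retain before Bayesian
# calibration: coefficients driven to zero are dropped from the calibration set.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n_obs, n_params = 200, 7
X = rng.normal(size=(n_obs, n_params))                 # sensitivities of observables to parameters
true_coef = np.array([1.5, 0.0, 0.0, -2.0, 0.0, 0.8, 0.0])
y = X @ true_coef + rng.normal(0.0, 0.1, size=n_obs)   # synthetic observations

lasso = Lasso(alpha=0.1).fit(X, y)
keep = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
print("parameters retained for Bayesian calibration:", keep)
```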

In engineering design processes, multiple simulation schemes with different fidelities may be available to simulate the same quantity of interest. For instance, in finite element analysis or CFD simulation, models can be constructed with different mesh sizes (coarse versus fine), dimensions (two-dimensional versus three-dimensional), or even physical models. Low-fidelity models require much less computational time than high-fidelity ones but, in general, involve larger model-form errors. Multifidelity modeling is a recent approach to strike a balance between efficiency and accuracy. In the paper authored by Wang et al. entitled “Propagation of Input Uncertainty in Presence of Model-Form Uncertainty: A Multi-Fidelity Approach for CFD Applications,” multifidelity modeling is demonstrated in the CFD domain. To estimate the lift coefficient of an airfoil, information from a low-fidelity thin-foil model and a high-fidelity Reynolds-averaged Navier–Stokes solver is combined, with the discrepancy modeled as a Gaussian process. It is shown that the multifidelity model can provide more detail than either the low- or high-fidelity model alone. More importantly, model-form uncertainty can potentially be reduced with the multifidelity approach.
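
A bare-bones version of this discrepancy-correction idea is sketched below (assumed, not the paper's CFD workflow; both the low_fidelity and high_fidelity functions are hypothetical analytic stand-ins): a cheap low-fidelity prediction is corrected by a Gaussian process trained on the discrepancy at a handful of expensive high-fidelity points, and the GP's predictive standard deviation provides a model-form uncertainty estimate.

```python
# Two-fidelity model: low-fidelity prediction + Gaussian-process discrepancy
# trained on a few high-fidelity evaluations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def low_fidelity(alpha):
    """Hypothetical thin-foil-like estimate of a lift-type coefficient."""
    return 2 * np.pi * alpha

def high_fidelity(alpha):
    """Hypothetical expensive solver with nonlinear effects at larger angles."""
    return 2 * np.pi * alpha - 8.0 * alpha**3

alpha_hf = np.linspace(0.0, 0.3, 5).reshape(-1, 1)     # few high-fidelity runs
delta = high_fidelity(alpha_hf).ravel() - low_fidelity(alpha_hf).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1),
                              normalize_y=True).fit(alpha_hf, delta)

alpha_query = np.linspace(0.0, 0.3, 50).reshape(-1, 1)
corr, std = gp.predict(alpha_query, return_std=True)
multifidelity = low_fidelity(alpha_query).ravel() + corr   # corrected prediction
print("corrected estimate at alpha = 0.3:", multifidelity[-1])
print("max predictive std of the discrepancy term:", std.max())
```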

Among methods for evaluating the impacts of parameter uncertainty, global sensitivity analysis is a powerful tool to assess the contribution of each input parameter to the output variance. In the paper by Li and Mahadevan entitled “Sensitivity Analysis of a Bayesian Network,” global sensitivity analysis is applied to evaluate the correlation between variables in Bayesian networks. By sampling the priors and propagating them through the networks, first-order Sobol' indices can be computed as indicators of the sensitivity of observation variables or posteriors with respect to priors (and vice versa). When only a single observation is available, the prior variance can also serve as a rough estimate of the posterior variance if the complete posterior distribution is not needed. The approach is demonstrated with system dynamics examples.
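
For readers less familiar with first-order Sobol' indices, a minimal Monte Carlo estimator is sketched below (a generic pick-and-freeze illustration, not the paper's Bayesian-network formulation; the three-input toy model is hypothetical).

```python
# Estimate first-order Sobol' indices by Monte Carlo "pick-and-freeze" sampling
# for a toy three-input model.
import numpy as np

rng = np.random.default_rng(4)

def model(x):
    """Hypothetical response: input 0 dominates, input 2 is nearly inert."""
    return x[:, 0] ** 2 + 0.3 * x[:, 1] + 0.01 * x[:, 2]

n, d = 20000, 3
A = rng.uniform(-1, 1, size=(n, d))
B = rng.uniform(-1, 1, size=(n, d))
fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]                                 # vary only the i-th input
    s1 = np.mean(fB * (model(AB) - fA)) / var_y        # Saltelli-type estimator
    print(f"first-order Sobol' index S_{i}: {s1:.3f}")
```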

In dynamical systems, where differential equations are typically used to model system behaviors, the effects of parameter uncertainty can be conveniently estimated through spectral analysis. The paper by Remigius and Sarkar, entitled “Stochastic Bifurcations of a Nonlinear Acousto-Elastic System,” presented an analysis of an acousto-elastic system consisting of a spinning disk in a fluid-filled enclosure. The nonlinear rotating plate dynamics governing the large-amplitude oscillations of the disk are modeled with the von Kármán field equation, which is coupled with an acoustic wave equation. The coupled field equations are discretized and solved at various rotation speeds. The authors demonstrated that there is a flutter instability with a Hopf bifurcation at a particular disk rotation speed. They then examined the effects of uncertainty in the damping parameters of the model and used polynomial chaos expansions to propagate the damping parameters' uncertainty. The variation in the damping coefficients causes phase shifts and large amplitude variations, which contribute to the degeneracy of the polynomial chaos expansion and hence the need for high-order expansions. From the marginal probability density functions of the output, the authors demonstrated that the system exhibits a stochastic bifurcation, called a P-bifurcation, with reference to the Hopf bifurcation of the deterministic system.
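
To illustrate the non-intrusive polynomial chaos idea in its simplest form (a sketch under assumptions, not the paper's acousto-elastic formulation; the peak_response function, the uniform damping range, and the expansion order are hypothetical), one can fit Legendre polynomials in a uniformly distributed damping ratio and then sample the resulting surrogate for output statistics.

```python
# Regression-based polynomial chaos: expand the peak response of a toy damped
# oscillator in Legendre polynomials of a uniformly distributed damping ratio.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(5)

def peak_response(zeta):
    """Hypothetical peak amplitude of a damped oscillator near resonance."""
    return 1.0 / (2.0 * zeta * np.sqrt(1.0 - zeta**2))

# Damping ratio uncertain on [0.02, 0.10]; map to the standard interval [-1, 1].
xi = rng.uniform(-1.0, 1.0, size=200)
zeta = 0.06 + 0.04 * xi
coeffs = legendre.legfit(xi, peak_response(zeta), deg=6)   # fit the expansion

xi_new = rng.uniform(-1.0, 1.0, size=100000)
surrogate = legendre.legval(xi_new, coeffs)                # cheap surrogate samples
print("surrogate mean and std of the peak response:", surrogate.mean(), surrogate.std())
```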

In summary, this collection of papers provides a glimpse of the current research trends in quantifying model-form and parameter uncertainty associated with complex models, which are often high-dimensional, dynamically evolving, and computationally demanding. We thank the authors for their contributions and their diligence in presenting their work in its best form. We are also indebted to our reviewers, who rigorously examined the submissions and provided helpful feedback to improve the quality of the included papers.