Abstract

Uncertainty quantification (UQ) is an important step in the verification and validation of scientific computing. Validation can be inconclusive when uncertainties are larger than acceptable ranges for both simulation and experiment. Therefore, uncertainty reduction (UR) is important for achieving meaningful validation. A unique approach in this paper is to separate model error from uncertainty such that UR can reveal the model error. This paper aims to share lessons learned from UQ and UR of a horizontal shock tube simulation, whose goal is to validate the particle drag force model for compressible multiphase flow. First, simulation UQ revealed an inconsistency in the simulation predictions caused by the numerical flux scheme, which was clearly shown using a parametric design of experiments. Improving the numerical flux scheme removed the uncertainty due to this inconsistency but increased the overall prediction error. Second, the mismatch between the geometry of the experiments and the simplified 1D simulation model was identified as a lack of knowledge. After modifying the simulation conditions and the experiments, it turned out that the error due to the mismatch was small, which was unexpected based on expert opinion. Last, the uncertainty in the initial volume fraction of particles was reduced based on rigorous UQ. All these UR measures worked together to reveal the hidden modeling error in the simulation predictions, which can lead to a model improvement in the future. We summarize the lessons learned from this exercise in terms of empty success, useful failure, and deceptive success.

1 Introduction

According to the ASME verification, validation, and uncertainty quantification committee standards and AIAA verification, validation, and uncertainty quantification guides [1,2], model validation is defined as the process of determining the degree to which a model is an accurate representation of the real phenomenon, from the perspective of the model's intended uses. The purpose of model validation is not only to assess the accuracy of a computational model but also to improve the model based on the validation results. Uncertainty quantification (UQ) has been recognized as a key component in verification and validation [3], whose aim is to build predictive computational models. Validation experiments may include measurement variability and processing uncertainty [4] while simulation predictions may include propagated uncertainty and numerical and model form errors [5]. The validation assessment is often performed using validation metrics that compare the uncertainties between simulation and experiment in the form of probability distributions [6].

If uncertainties in the simulation and the experiment are larger than the errors, validation may not be useful even if the two distributions overlap. Therefore, to achieve meaningful validation, the model error must be separated from the uncertainty, and the uncertainty must be reduced below the model error; this is a unique approach in this paper compared to other approaches in the literature [7]. There are many ways of reducing uncertainty, such as using more samples to reduce statistical uncertainty [8], bias correction [9], and model conditioning [10]. The objective of this paper is to share lessons learned from UQ and uncertainty reduction (UR) of a horizontal multiphase shock tube simulation, whose goal is to validate the particle drag force model for compressible multiphase flow. The initial UQ for the horizontal shock tube experiment was presented by Park et al. [4]. The primary goal of this paper is to expose the model errors so that the model can be improved to ensure the required prediction capability of the simulation. This paper is an extension of our previous work on validation and uncertainty quantification of the shock tube simulation [11].

The rest of the paper is organized as follows: Section 2 explains the multiphase shock tube simulation and experiment, which is an intermediate stage of a compressible multiphase turbulence project at the University of Florida. Section 3 summarizes the validation procedure and UQ, where the quantities of interest are defined, and uncertain variables are quantified. Section 4 presents three UR activities to reveal the model error in particle drag force, followed by conclusions and a summary of lessons learned from this work in Sec. 5.

2 Shock Tube Experiment and Simulation

The physics of shock interaction with a cloud of particles has many interesting applications, such as explosive volcanic eruptions, dust explosions in coal mines, and supernovae [12]. In addition, understanding this physics plays a key role in accurately predicting the explosive dispersal of particles and in controlling and designing for the consequences of the explosion. The Center for Compressible Multiphase Turbulence at the University of Florida is developing software that can simulate the high-speed dispersal of an annular, dry particle bed driven by a core of reacting explosive. Figure 1 shows a blastpad experiment of explosive dispersal of solid particles conducted at Eglin Air Force Base under the guidance of the Air Force Research Laboratory [13]. This experiment serves as a testbed for exploring the rich physics of compressible multiphase instabilities and turbulence. The quantities of interest (QoI) are the shock location and the particle front location as a function of time, which are compared with simulation predictions for validation.

Fig. 1: Explosive dispersal of a solid particle bed using composition B as an explosive and steel particles

Many interesting and complex physical phenomena are present in the experiment, including detonation chemistry, turbulence, particle collisions, drag forces, real gas effects, shock–particle interactions, and particle–gas interactions. Figure 2 shows eight physics models for simulating the behavior of explosives, gases, and particles. Among them, four key physics models were selected by the authors based on their importance to achieving the prediction capability of the simulation: (1) particle force, (2) particle collision, (3) compaction, and (4) multiphase turbulence.

Fig. 2: Physics models involved in the blastpad simulation

Since the blastpad experiment in Fig. 1 is expensive, only a small number of experiments are affordable. Therefore, simplified experiments and simulations are planned to focus on validating individual physics models while the effects of the other models are either controlled or negligible. Among the eight physics models in Fig. 2, the multiphase shock tube simulation and experiment are used in this paper to validate the particle drag force model (T6) and the collision model (T4), which describe shock–particle and particle–particle interactions. Since the QoIs strongly depend on these two models, their accuracy is critically important to ensure the prediction capability of the blastpad simulation. The shock tube experiment is effective for these two models because the initial experimental conditions can be easily controlled such that shock–particle and particle–particle interactions contribute most of the QoI [14]. The effect of the compaction model is minimized by using a relatively low volume fraction. In addition, the effect of turbulence is negligible when focusing on the motion of particles at early times.

The apparatus of the shock tube experiment conducted by Sandia National Laboratories [14] is shown in Fig. 3(a). The 5.2-m-long shock tube is composed of the driver, driven, and test sections. The particle curtain is located in the test section, where a glass window on the side makes it possible to observe the particle motion. The lime-glass particles in the reservoir fall through a slit to form the particle curtain. A shock wave is generated by the pressure difference between the driver and driven sections when the diaphragm between them bursts. The planar shock wave travels through the driven section and is stabilized before arriving at the test section.

Fig. 3: Schematic illustration of the shock tube experiment and particle curtain: (a) shock tube apparatus, (b) particle curtain edge location curve, (c) particle curtain before impact, (d) particle curtain after impact, and (e) Schlieren image compared to the size of the test section

When the shock moves in the flow direction, its interaction with the particles is observed through the side window in the test section. The motion of the particle curtain is represented by the upstream front position (UFP) and the downstream front position (DFP) as a function of time, as shown in Fig. 3(b). In addition, the thickness of the curtain is calculated as the distance between the two fronts. Initially, the curtain thickness is about 2 mm before being hit by the shock (Fig. 3(c)), while the curtain moves and expands afterward (Fig. 3(d)).

Figures 3(c) and 3(d) show the schematic view of the glass window in the test section, where the black vertical bar in the middle represents the particle curtain and the dashed lines represent the observation window. The motion of the particle curtain is measured using imaging techniques. More specifically, a Schlieren imaging system [15] is used to capture the motion of the particle curtain and the shock. Figure 3(e) is a Schlieren image taken through the observation window. The dashed rectangles in Figs. 3(c) and 3(d) show the coverage of the Schlieren image. In addition, an X-ray radiography imaging system [16] is used to measure the volume fraction of the particle curtain. The Schlieren imaging system can take an image every 24.4 μs, while the X-ray imaging system can take only a single image per experiment. Since the Schlieren and X-ray systems cannot be used simultaneously, either the curtain location or the volume fraction can be measured, not both.

An additional challenge in measurement is due to the gap between the particle curtain and the sidewall. Figure 4 shows the schematic top view of the test section with the particle curtain, where the curtain occupies about 80% of the width of the test section. The gap allows air to bypass the particle curtain, which makes the particles on both sides move faster than those at the center. The two-dimensional simulation results in the xy-plane in Fig. 4(b) support this behavior, where the particle motion at the edge (y = 0.04 m) is faster than that at the center (y = 0 m). The two vertical solid lines are the UFP (red) on the left and the DFP (blue) on the right when all particles are considered. On the other hand, the two dashed lines are the UFP (green) on the left and the DFP (cyan) on the right when only particles near the center (0 ≤ y ≤ 0.01 m) are considered. These particle front lines are determined as the positions that cut off 2.5% of the total particle volume on the left and right, respectively. Since the curtain movement is observed through the window on the side, it is difficult to observe the central portion of the particle curtain. Due to the placement of the particle reservoir and collector, it is also difficult to observe the particle movement in the vertical direction. Since the one-dimensional simulation assumes that there is no variation in the y- and z-directions, the gap effect can be a major model error. This fact initially provoked serious doubt about the comparison between the one-dimensional prediction and the experiment with gaps because the former cannot capture the gap effect.
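To make the front-tracking convention concrete, the following is a minimal sketch of how the 2.5% volume-cutoff front positions could be computed from particle data. The synthetic particle cloud and all numerical values are illustrative placeholders, not the simulation's output.

```python
import numpy as np

def front_positions(x, volumes, tail=0.025):
    """Volume-weighted front positions that cut off `tail` of the total
    particle volume on each side (UFP on the left, DFP on the right)."""
    order = np.argsort(x)
    x_sorted = x[order]
    cdf = np.cumsum(volumes[order]) / np.sum(volumes)  # volume CDF along x
    ufp = np.interp(tail, cdf, x_sorted)          # upstream front (left)
    dfp = np.interp(1.0 - tail, cdf, x_sorted)    # downstream front (right)
    return ufp, dfp

# Synthetic example: 10,000 particles spread over a ~2 mm curtain
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=0.5e-3, size=10_000)      # x-positions (m)
vol = np.full_like(x, (np.pi / 6) * (110e-6) ** 3)      # ~110 um spheres
print(front_positions(x, vol))
```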

Fig. 4: The gaps between the particle curtain and the walls and the predicted effect of the gap from the 2D simulation: (a) top view of the gaps between the particle curtain and the sidewalls in the test section and (b) predicted effect of the gap: the particles close to the wall (y = 0.04 m) move faster than the particles at the center (y = 0)

3 Uncertainty Quantification and Validation Process

Both the shock tube simulation and the experiment include many sources of uncertainty. To have meaningful validation, one must quantify them carefully. Figure 5 shows the validation and UQ framework for the simulation, which is based on reducing error and uncertainty in both simulation and experiment. The major sources of uncertainty in the experiment are measurement variability, sampling uncertainty, and measurement processing uncertainty. The first is aleatory, while the other two are epistemic. The QoI can differ between tests due to measurement variability, while the measurement may have a bias due to calibration error, which is measurement processing uncertainty. These errors and uncertainties are quantified by repeating experiments or by investigating the measurement process. In addition, the input conditions and their uncertainties must be quantified so that they can be used in the simulation. The major source of uncertainty in the simulation is the propagated uncertainty, which is the effect of input uncertainty on the QoIs. In addition, the stochastic variability and the discretization error in the numerical computation process should be included. These errors and uncertainties are quantified by running the simulation multiple times with different realizations of the input uncertainty and by performing convergence analysis. When the uncertainties in the experiment and the simulation are larger than the difference in the mean values (the model error), it is difficult to identify the model error. In such a case, UR is initiated to reduce the uncertainties in both the experiment and the simulation until they are less than the model error. If the model error is then larger than a threshold, model improvement is initiated to reduce it. This process is repeated until both the uncertainty and the model error are less than a user-defined threshold.
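This two-loop process can be summarized as control flow. The sketch below is only a schematic rendering of the cycle; the `quantify`, `reduce_uncertainty`, and `improve_model` operations are hypothetical placeholders, not the project's actual code.

```python
def validation_cycle(experiment, simulation, error_tol, max_iters=10):
    """Schematic sketch of the uncertainty/error reduction cycle (Fig. 5)."""
    for _ in range(max_iters):
        y_meas, u_exp = experiment.quantify()    # sampling + measurement uncertainty
        y_model, u_sim = simulation.quantify()   # propagated + numerical uncertainty
        err_mean = y_meas - y_model              # estimated model error
        err_std = (u_exp**2 + u_sim**2) ** 0.5   # its combined uncertainty
        if err_std > abs(err_mean):              # "large uncertainty?" branch
            experiment.reduce_uncertainty()
            simulation.reduce_uncertainty()
        elif abs(err_mean) > error_tol:          # "large error?" branch
            simulation.improve_model()
        else:                                    # both small enough: stop
            return err_mean, err_std
    raise RuntimeError("cycle did not converge within max_iters")
```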

Fig. 5: Sources of uncertainty and errors, and uncertainty and error reduction cycles
A unique approach in this paper is the separate treatment of model error from other sources of uncertainty. In the literature [7,17], the model form error is generally considered part of the epistemic uncertainty, which is appropriate for validating the prediction capability of a simulation. However, to identify the model error, which is the main goal of this paper, it is necessary to separate the model error from the other uncertainties. With the given uncertainties in Fig. 5, the model error can be expressed as
$$e_{\text{model}} = \left(\bar{y}_{\text{meas}} - \bar{y}_{\text{model}}\right) + e_{\text{samp}} + e_{\text{meas}} - e_{\text{prop}} - e_{\text{num}} \tag{1}$$

where $\bar{y}_{\text{meas}}$ and $\bar{y}_{\text{model}}$ are, respectively, the means of the measurement and the model prediction. In addition, $e_{\text{samp}}$, $e_{\text{meas}}$, $e_{\text{prop}}$, and $e_{\text{num}}$ are, respectively, the sampling, measurement, propagated, and numerical errors. All four errors are modeled as uncertain variables with mean values of zero. For example, measurement processing uncertainty is a bias error; however, since we do not know its exact value, it is modeled as a distribution with zero mean. Then, the discrepancy between the means of the experimental measurement and the simulation calculation represents the model error, while its uncertainty is the sum of all four uncertainties:

$$E(e_{\text{model}}) = \bar{y}_{\text{meas}} - \bar{y}_{\text{model}} \tag{2}$$

$$V(e_{\text{model}}) = V(e_{\text{samp}}) + V(e_{\text{meas}}) + V(e_{\text{prop}}) + V(e_{\text{num}}) \tag{3}$$

where $E(\cdot)$ and $V(\cdot)$ represent, respectively, the expected value and the variance of an uncertain variable. Figure 6(a) illustrates the proposed method of defining the model error. The model prediction becomes a distribution whose mean is at $\bar{y}_{\text{model}}$ and whose variance is the combination of the propagated uncertainty ($e_{\text{prop}}$) and the numerical errors ($e_{\text{num}}$). The experimental measurement is also a distribution whose mean is at $\bar{y}_{\text{meas}}$, and its variance comes from the sampling uncertainty ($e_{\text{samp}}$) and the measurement processing uncertainty ($e_{\text{meas}}$). It is noted that the uncertainty in Fig. 6(a) is schematic and does not represent any specific distribution type. Traditionally, the model error is defined based on the model's accuracy; however, this is only possible when the level of model error is known. Since the goal of this paper is to quantify the model error, Eq. (1) is a proper way to estimate it under the given uncertainties.

Fig. 6: Model error and associated uncertainty, calculated from the difference between measurement and prediction: (a) model error and its uncertainty, (b) uncertainty larger than the mean, and (c) uncertainty smaller than the mean

The model error is identified by comparing the mean of the model error in Eq. (2) with its uncertainty in Eq. (3), where the uncertainty is measured in terms of the standard deviation. When the uncertainty is larger than the mean, it is impossible to identify the model error because the uncertainty covers the mean of the model error, as shown in Fig. 6(b). Therefore, to reveal the model error, the uncertainty in Eq. (3) must be reduced such that the mean of the model error can be estimated, as shown in Fig. 6(c). Accordingly, the "large uncertainty?" loop in Fig. 5 is repeated until both uncertainties from experiments and simulations are less than the mean of the model error in Eq. (2). Once the uncertainty is reduced enough, the "large error?" loop in Fig. 5 is repeated until the level of model error is acceptable.
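As an illustration of Eqs. (2) and (3) and the identifiability check of Fig. 6, the following sketch estimates the model error and its uncertainty from samples. All sample values and the assumed measurement-processing and numerical-error standard deviations are made up for illustration; they are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical QoI samples (DFP in mm at one time instant)
y_meas_samples = rng.normal(65.0, 1.5, size=4)         # four repeated experiments
y_model_samples = rng.normal(60.0, 2.0, size=10_000)   # MC propagation samples

# Eq. (2): mean model error = difference of the means
err_mean = y_meas_samples.mean() - y_model_samples.mean()

# Eq. (3): variance of the error estimate = sum of component variances.
# The sampling uncertainty of the experimental mean shrinks with the
# number of repeats; the propagated variance comes from the MC samples.
var_samp = y_meas_samples.var(ddof=1) / len(y_meas_samples)
var_prop = y_model_samples.var(ddof=1)
var_meas = 0.5 ** 2    # assumed measurement-processing std (mm)
var_num = 0.3 ** 2     # assumed numerical-error std (mm)
err_std = np.sqrt(var_samp + var_meas + var_prop + var_num)

# Fig. 6 criterion: the model error is revealed only when its
# uncertainty is smaller than the mean error itself.
print(f"model error {err_mean:.1f} mm +/- {err_std:.1f} mm")
print("identifiable" if err_std < abs(err_mean) else "uncertainty too large")
```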

Table 1 summarizes important sources of uncertainty and error in the shock tube experiment. Additional measurements were performed to estimate the uncertainty in the particle diameter [21]. Schlieren image analysis is used to estimate the uncertainty in the curtain thickness [14], and X-ray image analysis is used to estimate the distribution and uncertainty of the initial particle volume fraction [16]. All uncertainties are considered uniformly distributed, as we were able to identify their lower and upper bounds through measurements. In the case of the initial volume fraction, however, it was possible to identify its distribution based on an X-ray image, which is detailed in Sec. 4.3. All the uncertainties except for the measurement bias are used as simulation inputs to calculate the propagated uncertainty ($e_{\text{prop}}$), while the measurement bias is included in the measurement uncertainty ($e_{\text{meas}}$). First, a Kriging surrogate model of the curtain location is constructed using the volume fraction, particle diameter, and curtain thickness as input variables. Then, Monte Carlo simulation with 10,000 samples is used to generate samples of curtain locations. To obtain the sampling uncertainty ($e_{\text{samp}}$), the experiments were repeated four times. To understand the influence of the unknown particle locations, we randomly placed particles and made ten simulation runs with different random particle locations. The difference between the simulation results was small enough to conclude that the effect of the randomness in the initial particle locations on the QoI is negligible. The discretization error has been studied by Nili et al. [22], and the error from the numerical scheme ($e_{\text{num}}$) is discussed in Sec. 4.1. The error due to the gap effect is quantified by comparing one- and two-dimensional simulations.
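A minimal sketch of this surrogate-based propagation is shown below, using scikit-learn's Gaussian process (Kriging) regressor. The placeholder response function, the training design size, and the volume fraction bounds are assumptions for illustration, not the actual flow solver or its data; the diameter and thickness bounds follow Table 1.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(2)

# Input bounds: (volume fraction [-], diameter [m], curtain thickness [m]);
# the volume fraction range is an assumed placeholder.
lo = np.array([0.18, 100e-6, 1.6e-3])
hi = np.array([0.23, 130e-6, 2.4e-3])

def expensive_simulation(u):
    """Placeholder smooth response standing in for the flow solver (DFP, mm)."""
    x = lo + (hi - lo) * u
    return 30.0 + 40.0 * x[:, 0] - 1e5 * x[:, 1] + 2e3 * x[:, 2]

# Training design in normalized [0, 1]^3 coordinates
U_train = rng.random((30, 3))
y_train = expensive_simulation(U_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.5, 0.5, 0.5]),
                              normalize_y=True)
gp.fit(U_train, y_train)

# Monte Carlo propagation: 10,000 uniform input samples through the surrogate
U_mc = rng.random((10_000, 3))
y_mc = gp.predict(U_mc)
print(f"propagated DFP: mean {y_mc.mean():.1f} mm, std {y_mc.std():.2f} mm")
```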

Table 1

Key uncertainty and error sources in the validation of the one-dimensional simulation

Uncertainty source: Description

Measurement bias in particle front positions: Systematic bias uncertainty because of the gap between the particle curtain and the walls, [−10, 0]%
Initial particle volume fraction: Uncertainty in the initial volume fraction measurement process and local variation in the particle curtain, [18, 19]%
Initial particle positions: Variability in initial particle positions
Particle diameters: Variability in particle diameters, [100, 130] μm
Initial curtain thickness: Variation in the curtain thickness, [1.6, 2.4] mm
Pressure at driver section: Negligible measurement noise
Discretization: Error due to temporal and spatial discretization
Model error: Error in the drag force model [20]
Gap effect: Error from not being able to measure the DFP due to the gap
Numerical flux scheme: Error due to the numerical flux scheme

4 Error and Uncertainty Reduction

4.1 Uncertainty Due to Bias From a Numerical Flux Scheme.

To calculate the propagated uncertainty, it is necessary to run multiple simulations with input parameters varied according to their uncertainty distributions. Since the simulations are computationally expensive, surrogate models are often used to replace them. During the design of experiments used to build the surrogate model, we observed inconsistencies in the simulation results. Since these inconsistencies were difficult to explain physically, we conducted a parametric study to investigate this behavior systematically. The parametric design of experiments was conducted for the three major sources of input uncertainty: particle volume fraction, curtain thickness, and particle diameter. Figure 7 shows the parametric design of experiments along four lines in the three-dimensional parameter space, where a simulation is conducted at each circle. Point A, where the surrogate model showed a significant deviation from the actual simulation, was selected as the common end point (volume fraction of 23%, particle diameter of 110 μm, and curtain thickness of 2.4 mm). The purpose is to confirm whether the simulation results along each line show a physically meaningful trend; each line is used to identify anomalies in the simulation for the corresponding parameter [23]. The QoIs (particle front positions) are calculated as the mean value over random initial particle positions. Since the purpose was to investigate the inconsistency of the simulation results, the ranges of the parameters are not the same as the variability of the input parameters reported in Table 1.
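The sketch below generates design points along such parametric lines. Point A follows the values above; the starting points of the four lines are placeholders, since the exact start values are not specified here.

```python
import numpy as np

# Point A from Sec. 4.1: volume fraction 23%, diameter 110 um, thickness 2.4 mm
point_a = np.array([0.23, 110e-6, 2.4e-3])

# Assumed starting points for the four lines (placeholders)
starts = {
    "volume fraction": np.array([0.18, 110e-6, 2.4e-3]),
    "diameter":        np.array([0.23, 130e-6, 2.4e-3]),
    "thickness":       np.array([0.23, 110e-6, 1.6e-3]),
    "all three":       np.array([0.18, 130e-6, 1.6e-3]),
}

def line_doe(start, end, n=11):
    """Return n design points; t = 1 at the start and t = 0 at point A."""
    t = np.linspace(1.0, 0.0, n)
    return t, start + (1.0 - t)[:, None] * (end - start)

for name, s in starts.items():
    t, pts = line_doe(s, point_a)
    assert np.allclose(pts[-1], point_a)  # all four lines meet at t = 0
    # each row of pts is one simulation case for the line `name`
```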

Fig. 7: Design of experiments along four parametric lines (blue: particle diameter change, red: particle curtain thickness change, gray: particle volume fraction change, and magenta: all three parameters change simultaneously)

Figure 8 shows the QoI at 500 μs along the four lines in Fig. 7. All four lines are normalized in the input variables, where 1 denotes each line's starting point and 0 denotes the common ending point (point A). The values of the different lines should therefore coincide at 0, which corresponds to {23%, 110 μm, 2.4 mm}. Figure 8(a) shows that the DFP along the four lines is significantly different; it varies between 25 and 60 mm. The behavior of the diameter line departs significantly from the other lines, and the discontinuity along the diameter line near the value 0.45 cannot be explained physically. Since the trend is not consistent with the physics, it was concluded that it was caused by numerical error. The behavior was most likely associated with the advection upstream splitting method plus (AUSM+) scheme, a numerical flux calculation scheme [24]. The flux of a cell is calculated considering the Lagrangian particles traveling in the numerical cells, and the particles are not always distributed evenly in a cell as AUSM+ assumes. This assumption provoked numerical instability when the particles in a cell were highly concentrated. Thus, the AUSM+ scheme was upgraded to the AUSM+up scheme, which accounts for the particle concentration in a cell [19]. After upgrading to the AUSM+up scheme, the prediction along the four lines is shown in Fig. 8(b), where the prediction uncertainty is significantly reduced and the behavior along the different lines is consistent: the range of the DFP prediction is reduced to between 25 and 30 mm. However, the prediction at point A itself changed significantly, from 40 mm using AUSM+ to 27 mm using AUSM+up. Therefore, changing the numerical scheme significantly reduced the numerical uncertainty while increasing the model error itself. The initial simulation with AUSM+ had such a large numerical uncertainty that the model error was unclear (see Fig. 9(a)), as the simulation results overlapped the experimental results. As Fig. 9(b) shows, the discrepancy between the simulation and experimental results for the AUSM+up scheme is slightly larger than for the AUSM+ scheme. The lesson is that reducing uncertainty (here, from the numerical flux scheme) can reveal hidden model error; in this case, the model error and the numerical scheme error were compensating for each other. It is noted that the results in Figs. 8 and 9 are slightly different because Fig. 8 shows the result at point A, while Fig. 9 shows the result under the three sources of uncertainty.
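A crude screen for such non-physical jumps along a parametric line might look like the following; the factor-of-three threshold and the synthetic response are assumptions, and Ref. [23] describes the actual anomaly detection approach used.

```python
import numpy as np

def flag_jumps(t, qoi, factor=3.0):
    """Flag steps along a parametric line whose jump in the QoI is much
    larger than the typical step-to-step change (an assumed heuristic)."""
    dq = np.abs(np.diff(qoi))
    typical = np.median(dq)
    return [(t[i], t[i + 1]) for i in np.where(dq > factor * typical)[0]]

# Illustrative diameter-line response with a non-physical jump near 0.45
t = np.linspace(1.0, 0.0, 21)
qoi = 30.0 + 5.0 * t + 20.0 * (t < 0.45)  # synthetic DFP in mm
print(flag_jumps(t, qoi))  # -> the interval straddling t = 0.45
```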

Fig. 8: Variation of downstream particle front location along different lines of the 3D parametric design of experiments: (a) using AUSM+ and (b) using AUSM+up
Fig. 9: Comparisons between the simulation and experiment in terms of DFP and UFP with propagated uncertainties for different flux schemes: (a) AUSM+ and (b) AUSM+up. Uncertainties due to the use of the surrogate are not included because they were negligible.

After the implementation of AUSM+up, the model error and the uncertainty in the QoI were quantified. Figure 9(b) shows a comparison between the calculated and measured QoIs with uncertainties. The uncertainties are accumulated from the uncertainty sources in Table 1 by propagating them through the simulation. Figure 10 shows the model error in the DFP with its uncertainty, where the contributions from the different sources of uncertainty are indicated with bands of different colors. The simulation sampling uncertainty (red) represents the uncertainty in obtaining the mean front position from the repeated simulation runs. The measurement uncertainty (orange) is the uncertainty in the process of measuring the front positions. Both uncertainties are too small to be visible in the figure [14]. The largest contribution is the uncertainty due to the inconsistency between the simulation and the experiment caused by the gap. The second largest is the measurement processing uncertainty in the initial volume fraction based on the X-ray images. The third largest is due to the limited number of experiments and the high variability between them. Figure 10 shows that the error of the simulation has a wide distribution because of the large uncertainty. For example, the range of the model error at t = 700 μs is [−2, 8] mm, which represents the uncertainty. Therefore, it is inconclusive whether the model error is small or large. Sections 4.2 and 4.3 discuss how to reduce the uncertainties so that the model error can be revealed clearly.

Fig. 10: Error in the DFP prediction with uncertainties from the uncertainty sources in Table 1. Uncertainties due to the use of the surrogate are not included because they were negligible.

4.2 Uncertainty in the Gap Effect.

Based on the ordering of uncertainties in Sec. 4.1, it was concluded that the gap effect is the largest source of uncertainty in the one-dimensional simulation. It is emphasized that the uncertainty due to the gap was estimated based on experts' opinions. To assess the model adequacy properly, it is necessary to reduce the uncertainty associated with the gap effect. Two options are possible for UR: (a) use a two-dimensional simulation that can model the gap, or (b) conduct a new shock tube experiment without the gap [16]. The former reduces the uncertainty by including the gap in the simulation, while the latter reduces it by removing the gap from the experiment. Both options were explored in this paper to quantify and reduce the uncertainty due to the gap effect. The two-dimensional simulation was performed with the x-axis in the flow direction and the y-axis in the depth direction of the test section. DeMauro et al. [16] performed an additional shock tube test by extending the particle curtain to the wall such that no gap exists between the particle curtain and the wall of the test section.

Figure 11 shows the comparisons and the corresponding error estimates with uncertainty for the two options. Figure 11(a) shows the 95% confidence intervals of the QoIs when the gap effect is included in the two-dimensional simulation. Both the experiment and the simulation used a particle curtain covering about 80% of the width of the test section. Based on these results, Fig. 11(b) shows the distribution of the model error estimate. At 700 μs, the range of the model error was [13, 17] mm. Due to UR, the uncertainty in the model error is reduced by a factor of 2.5, while the median of the model error is increased almost fivefold compared with that of Fig. 9(b). The large error in the 2D simulation results in Fig. 11(b) is due to an incorrect volume fraction model (top-hat distribution) and the AUSM+ numerical scheme (AUSM+up was not available for the 2D simulation). Figure 11(c) shows the 95% confidence intervals of the QoIs when the gap effect is removed; that is, the particle curtain fills the depth of the test section during the experiment, which is then compared with the one-dimensional simulation. Figure 11(d) shows the corresponding distribution of the model error estimate. At 700 μs, the range of the model error is [4, 8] mm; that is, the uncertainty is reduced by a factor of 2.5 and the median is doubled. Both options significantly reduce the uncertainty while revealing the model error.

Fig. 11: Model error estimate with uncertainty after the uncertainty reduction: (a) comparisons between the 2D simulation modeling the gap and the experiment with the gap, (b) error in the 2D simulation and the corresponding uncertainty, (c) comparisons between the 1D simulation and the experiment without the gap, and (d) error in the 1D simulation and the corresponding uncertainty

An interesting observation from the new experiment is that the influence of the gap was minimal, contrary to the experts' comments and the simulation study. Figure 12 shows a comparison between the mean front positions of the experiments with and without the gap; it clearly shows that the influence of the gap is negligible. This study provided the lesson that some epistemic uncertainties are unintentionally exaggerated, and if they are not quantified, they can inflate the prediction uncertainty. This corresponds to the case in which an erroneous estimate of uncertainty misleads the UQ process.

Fig. 12: Particle front locations from experiments with and without the gap

4.3 Reducing the Uncertainty in the Initial Volume Fraction.

Since the largest uncertainty source, the gap effect, was removed, the second-largest source, the initial volume fraction, became the next target for uncertainty reduction. The particle volume fraction can be estimated using X-ray radiography, where the beam intensity is attenuated as the X-rays pass through the particles. DeMauro et al. [16] used the Beer–Lambert law to estimate the volume fraction from the intensity measurements.
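A minimal sketch of a Beer–Lambert inversion is given below, assuming a simplified single-material attenuation model; the attenuation coefficient and path length are illustrative values, not the calibration of Ref. [16].

```python
import numpy as np

def volume_fraction_from_xray(I, I0, mu_particle, path_length):
    """Invert the Beer-Lambert law I = I0 * exp(-mu * phi * L) for the
    particle volume fraction phi (simplified single-material model)."""
    return -np.log(I / I0) / (mu_particle * path_length)

I0 = 1.0            # reference (no-curtain) intensity
mu_particle = 60.0  # effective attenuation coefficient (1/m), assumed
L = 0.08            # beam path through the test section (m), assumed
I = np.array([0.40, 0.36, 0.42])          # measured intensities, illustrative
print(volume_fraction_from_xray(I, I0, mu_particle, L))  # phi ~ 0.18-0.21
```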

In all the simulations so far, a constant volume fraction of 21% was used through the curtain thickness, which was the maximum volume fraction obtained from the X-ray image processing [25]. However, the X-ray images showed that the particle volume fraction is not uniform through the curtain thickness; rather, it follows a bell-shaped distribution, which was also observed by Wagner et al. [25]. Therefore, the uniformity assumption with the maximum volume fraction uses many more particles than the experiment. Such a conservative estimate of the volume fraction makes the number of particles in the simulation different from that in the experiment. To make the simulation condition consistent with the experiment, it is necessary to use a variable volume fraction through the curtain thickness. In this paper, a rigorous UQ study was carried out to reduce this inconsistency.

An important issue is that the particle volume fraction cannot be measured directly; instead, it is estimated from the attenuation of the image intensity. The particle volume fraction was obtained from the X-ray images through a calibration and fitting process [4], which introduces another source of uncertainty, called measurement processing uncertainty. The uncertainty in this process was propagated to the uncertainty in the measured volume fraction. Finally, the bell-shaped initial particle volume fraction shown in Fig. 13 was identified along with its measurement processing uncertainty. This is a significant uncertainty reduction compared with the constant volume fraction of 21% used in the previous simulations.
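As a sketch of how such a bell-shaped profile and its processing uncertainty could be extracted, the following fits a Gaussian profile to synthetic X-ray-derived data; the profile parameters and the noise level are assumed for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def bell(x, phi_max, x0, w):
    """Assumed bell-shaped (Gaussian) volume fraction profile."""
    return phi_max * np.exp(-0.5 * ((x - x0) / w) ** 2)

# Synthetic X-ray-derived profile across the ~2 mm curtain (illustrative)
rng = np.random.default_rng(3)
x = np.linspace(-1.5e-3, 1.5e-3, 40)
phi_obs = bell(x, 0.21, 0.0, 0.5e-3) + rng.normal(0.0, 0.01, x.size)

popt, pcov = curve_fit(bell, x, phi_obs, p0=[0.2, 0.0, 1e-3])
perr = np.sqrt(np.diag(pcov))  # a proxy for measurement-processing uncertainty
print(f"phi_max = {popt[0]:.3f} +/- {perr[0]:.3f}")
```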

Fig. 13: Initial particle volume fraction profile with measurement processing uncertainty

After the error and uncertainty in the initial volume fraction were reduced with the bell-shaped profile, Fig. 14 shows the comparison between the experiment and the simulation, where the uncertainty in the error estimate is reduced compared with Fig. 11(d). After this uncertainty reduction, the error estimate at 700 μs becomes [4, 6] mm. Compared with the initial error estimate of [−2, 8] mm, the reduced error estimate provides much more accurate information about the prediction error of the simulation. Now the uncertainty in the model error is less than the mean of the model error; therefore, the uncertainty reduction revealed the model error.

Fig. 14: One-dimensional simulation with the reduced uncertainty in the volume fraction: (a) comparison plot and (b) error in the 1D simulation and the corresponding uncertainty

After UR for the initial volume fraction, the largest remaining uncertainty is the sampling uncertainty, which can only be reduced by increasing the number of experiments. Since all the experiments had already been completed, it was impractical to conduct more. In addition, the uncertainty in the prediction error in Fig. 14(b) is small enough to reveal the model error. Therefore, it was decided to stop the UR process.

5 Concluding Remarks and Lessons Learned

In this paper, the importance of UR is emphasized as a tool to expose the model error in the validation process of the multiphase shock tube simulation. The model error is separated from the epistemic uncertainty such that the UQ process yields the error estimate together with its uncertainty distribution. Initially, the error estimate was not informative due to the large uncertainty in it. Therefore, a series of uncertainty reductions was conducted until the uncertainty in the model error became much smaller than the error itself. The sources of error and uncertainty were (a) inconsistency between simulation and experiment, (b) lack of knowledge of the physics, and (c) inaccurate information in the simulation inputs. It was shown that removing error and uncertainty does not always improve the prediction accuracy, because different errors can cancel each other. Initially, the error estimate for the DFP at t = 700 μs was [−2, 8] mm while the measured mean DFP was 65 mm. After rigorous uncertainty reduction, it was [4, 6] mm, which revealed the model error.

The next step will be to improve the particle force model of the simulation so that the discrepancy between the simulation and the experiment can be reduced. The particle force model is composed of five submodels: the quasi-steady force model, the pressure gradient force model, the added mass force model, the inviscid viscous force model, and the particle collision model. The improvement of the individual models can be planned based on their importance in achieving the desired prediction accuracy; a systematic approach based on global sensitivity analysis [26] is being considered, as sketched below.
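As a sketch of how variance-based global sensitivity analysis could rank the submodels, the following computes first-order Sobol indices with a pick-freeze estimator on a toy additive response; the weights standing in for each submodel's effect are invented for illustration and are not the results of Ref. [26].

```python
import numpy as np

rng = np.random.default_rng(4)
w = np.array([5.0, 2.0, 1.0, 0.5, 0.2])  # assumed relative effect of each submodel

def qoi(F):
    """Toy stand-in for the DFP response to errors in the five submodels
    (rows of F are samples, columns are submodel error inputs)."""
    return F @ w

n, d = 50_000, 5
A = rng.standard_normal((n, d))
B = rng.standard_normal((n, d))
yA, yB = qoi(A), qoi(B)
var_y = np.var(np.concatenate([yA, yB]))

# First-order Sobol indices via a pick-freeze (Saltelli-type) estimator
S1 = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # replace column i with the B sample
    S1[i] = np.mean(yB * (qoi(ABi) - yA)) / var_y
print(S1.round(3))  # largest index -> improve that submodel first
```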

The lessons learned through UR in this paper are illustrated in Fig. 15. Here, success is defined as the case in which the experimental results do not contradict the simulations because the uncertainty is larger than the difference between the two means; failure is defined as the case in which the experimental results clearly differ from the simulations because the uncertainty is smaller than the difference between the two means. Figure 15(a) illustrates a scenario where the uncertainties in both simulation and experiment are so large that validation is not useful; this was the case when the initial shock tube simulation was finished. Uncertainty is considered too large when the trend of the prediction/measurement cannot be determined as the parameters change. For both simulation and experiment, the dashed line represents the mean prediction or measurement, while the band represents the associated uncertainty. Even if the two distributions almost overlap, this cannot confirm that the model has prediction capability. This case is called "empty success" since it does not yield a meaningful conclusion. Once the uncertainties in both simulation and experiment are reduced enough, the validation may reveal the model error, as shown in Fig. 15(b); this was the case after the shock tube simulation and experiment went through the rigorous UR process. Uncertainty needs to be reduced until the trend of the QoI as the parameters change can be clearly seen. Even if the validation fails, it provides useful information for improving the model; therefore, this case is referred to as "useful failure." The definition of validation in the ASME standards and AIAA guides includes the model improvement process, which is only possible after rigorous UR. When multiple models are involved in a simulation, it is possible that errors cancel each other. When the error from one model (e1) is compensated by that of other models (e2), the final prediction looks accurate; however, improving the second model may increase the magnitude of the prediction error. We call this "deceptive success," as shown in Fig. 15(c). This happened before the additional experiment without the gap was conducted. A series of uncertainty reductions is necessary to achieve meaningful validation.

Fig. 15: Possible validation scenarios of simulation based on the error and uncertainty: (a) empty success due to large uncertainty in both simulation and experiment, (b) useful failure after uncertainty reduction, and (c) deceptive success due to compensating multiple errors

Acknowledgment

This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.

Funding Data

  • National Nuclear Security Administration (Grant No. DE-NA0002378; Funder ID: 10.13039/100006168).

References

1. ASME, 2009, "Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer," ASME, New York, Standard No. VV20.
2. AIAA, 2002, "Guide for the Verification and Validation of Computational Fluid Dynamics Simulations," AIAA Paper No. G-077-1998. 10.1115/G-077-1998
3. Sankararaman, S., and Mahadevan, S., 2015, "Integration of Model Verification, Validation, and Calibration for Uncertainty Quantification in Engineering Systems," Reliab. Eng. Syst. Saf., 138, pp. 194–209. 10.1016/j.ress.2015.01.023
4. Park, C., Matthew, J., Kim, N. H., and Haftka, R. T., 2019, "Epistemic Uncertainty Stemming From Measurement Processing - A Case Study of Multiphase Shock Tube Experiments," ASME J. Verif. Valid. Uncertainty Quantif., 3(4), p. 041001. 10.1115/1.4042814
5. Kim, H.-S., Jang, S.-G., Kim, N. H., and Choi, J.-H., 2016, "Statistical Calibration and Validation of Elasto-Plastic Insertion Analysis in Pyrotechnically Actuated Devices," Struct. Multidiscip. Optim., 54(6), pp. 1573–1585. 10.1007/s00158-016-1545-8
6. Thacker, B. H., and Paez, T. L., 2014, "A Simple Probabilistic Validation Metric for the Comparison of Uncertain Model and Test Results," AIAA Paper No. 2014-0121. 10.2514/6.2014-0121
7. Sankararaman, S., Ling, Y., and Mahadevan, S., 2011, "Uncertainty Quantification and Model Validation of Fatigue Crack Growth Prediction," Eng. Fract. Mech., 78(7), pp. 1487–1504. 10.1016/j.engfracmech.2011.02.017
8. Bae, S., Kim, N. H., and Jang, S.-G., 2018, "Reliability-Based Design Optimization Under Sampling Uncertainty: Shifting Design Versus Shaping Uncertainty," Struct. Multidiscip. Optim., 57(5), pp. 1845–1855. 10.1007/s00158-018-1936-0
9. Xi, Z., Fu, Y., and Yang, R.-J., 2013, "Model Bias Characterization in the Design Space Under Uncertainty," Int. J. Perform. Eng., 9(4), pp. 433–444. 10.23940/ijpe.13.4.p433.mag
10. Romero, V. J., 2007, "Validated Model? Not So Fast—The Need for Model "Conditioning" as an Essential Addendum to Model Validation," AIAA Paper No. 2007-1953. 10.2514/6.2007-1953
11. Park, C., Fernández-Godino, M. G., Kim, N. H., and Haftka, R. T., 2016, "Validation, Uncertainty Quantification and Uncertainty Reduction for a Shock Tube Simulation," AIAA Paper No. 2016-1192. 10.2514/6.2016-1192
12. Zhang, F., Frost, D. L., Thibault, P. A., and Murray, S. B., 2001, "Explosive Dispersal of Solid Particles," Shock Waves, 10(6), pp. 431–443. 10.1007/PL00004050
13. Hughes, K. T., Balachandar, S., Diggs, A., Haftka, R. T., Kim, N. H., and Littrell, D., 2020, "Simulation-Driven Design of Experiments Examining the Large-Scale, Explosive Dispersal of Particles," Shock Waves, 30(4), pp. 325–347. 10.1007/s00193-019-00927-x
14. Wagner, J. L., Beresh, S. J., Kearney, S. P., Trott, W. M., Castaneda, J. N., Pruett, B. O., and Baer, M. R., 2012, "A Multiphase Shock Tube for Shock Wave Interactions With Dense Particle Fields," Exp. Fluids, 52(6), pp. 1507–1517. 10.1007/s00348-012-1272-x
15. Linne, M., 2013, "Imaging in the Optically Dense Regions of a Spray: A Review of Developing Techniques," Prog. Energy Combust. Sci., 39(5), pp. 403–440. 10.1016/j.pecs.2013.06.001
16. DeMauro, E. P., Wagner, J. L., DeChant, L. J., Beresh, S., Farias, J., Turpin, P., Sealy, A., Albert, W. S., and Sanderson, P., 2017, "Measurements of the Initial Transient of a Dense Particle Curtain Following Shock Wave Impingement," AIAA Paper No. 2017-1466. 10.2514/6.2017-1466
17. Coleman, H. W., and Steele, W. G., 2009, Experimentation, Validation, and Uncertainty Analysis for Engineers, 3rd ed., Wiley, Hoboken, NJ.
18. Wagner, J. L., Beresh, S. J., Kearney, S. P., Pruett, B. O., and Wright, E. K., 2012, "Shock Tube Investigation of Quasi-Steady Drag in Shock-Particle Interactions," Phys. Fluids, 24(12), p. 123301. 10.1063/1.4768816
19. Liou, M. S., Chang, C. H., Nguyen, L., and Theofanous, T. G., 2008, "How to Solve Compressible Multifluid Equations: A Simple, Robust, and Accurate Method," AIAA J., 46(9), pp. 2345–2356. 10.2514/1.34793
20. Nili, S., Park, C., Haftka, R. T., Kim, N. H., and Balachandar, S., 2017, "Sensitivity Analysis of Unsteady Force Models for Two-Way Coupled Dispersed Multiphase Flow," AIAA Paper No. 2017-3800. 10.2514/6.2017-3800
21. Hughes, K. T., Balachandar, S., Kim, N. H., Park, C., Haftka, R. T., Diggs, A., Littrell, D., and Darr, J., 2018, "Forensic Uncertainty Quantification for Experiments on the Explosively Driven Motion of Particles," ASME J. Verif. Valid. Uncertainty Quantif., 3(4), p. 041004. 10.1115/1.4043478
22. Nili, S., 2019, Error Analysis of Particle Force Model of an Euler-Lagrange Multiphase Dispersed Flow, University of Florida, Gainesville, FL.
23. Fernandez-Godino, M. G., Diggs, A., Park, C., Kim, N. H., and Haftka, R. T., 2016, "Anomaly Detection Via Groups of Simulations," AIAA Paper No. 2016-1195. 10.2514/6.2016-1195
24. Liou, M. S., 1996, "A Sequel to AUSM: AUSM+," J. Comput. Phys., 129(2), pp. 364–382. 10.1006/jcph.1996.0256
25. Wagner, J. L., Kearney, S. P., Beresh, S. J., Demauro, E. P., and Pruett, B. O., 2015, "Flash X-Ray Measurements on the Shock-Induced Dispersal of a Dense Particle Curtain," Exp. Fluids, 56(12), p. 213. 10.1007/s00348-015-2087-3
26. Nili, S., Park, C., Kim, N. H., Haftka, R. T., and Balachandar, S., 2021, "Prioritizing Possible Force Models Error in Multiphase Flow Using Global Sensitivity Analysis," AIAA J., 59(5), pp. 1749–1759. 10.2514/1.J058657