Abstract
A robust and complete uncertainty estimation method is developed to quantify the uncertainty of turbulence quantities measured by hot-wire anemometry (HWA) at the inlet of a short-duration turbine test rig. The uncertainty is categorized into two macro-uncertainty sources: the measurement-related uncertainty (the uncertainty of each instantaneous velocity sample) and the uncertainty stemming from the statistical treatment of the time series. The former is addressed by the implementation of a Monte Carlo (MC) method. The latter, which is directly related to the duration of the acquired signal, is estimated using the moving block bootstrap (MBB) method, a nonparametric resampling algorithm suitable for correlated time series. This methodology allows computing the confidence intervals of the spanwise distributions of mean velocity, turbulence intensity, length scales, and other statistical moments at the inlet of the turbine test section.
Introduction
In order to support modern high-fidelity computational fluid dynamics, experiments should be able to provide reliable, detailed turbulence parameters in the flows of interest. On the one hand, accurate turbulent boundary conditions are necessary for the accuracy of the predictions. On the other hand, the ability to simulate the turbulent flow field with turbulence-resolving large eddy simulation and direct numerical simulations implies that higher-order turbulence statistics should be available for validation [1]. The validation process thus requires experimental data accompanied by uncertainty bounds. This creates the need for a reliable estimation of the uncertainty of such statistics.
As suggested by Favier [2], two macro-uncertainty sources can be identified for experimental data: the measurement-related uncertainty and the uncertainty associated with the statistical data reduction. The first item addresses the bias and random errors that result from the application of a particular instrument and measurement methodology (e.g., sensor, calibration, installation, flow conditions, data reduction, etc.). The second item refers to the significance of the sample size when computing statistical quantities. For the case of turbulence parameters, it is equivalent to the observation time (i.e., the sampling duration) with respect to the period of the phenomenon. In other words, in the case of turbulence research, the first item refers to the uncertainty of each single sample, while the second to the uncertainty due to the statistical treatment of the finite time series. For simplicity, the first item will be referred to as measurement uncertainty and the second as statistical uncertainty for the remainder of this paper.
The measurement uncertainty budget is normally evaluated following two widely accepted standards, the ASME Measurement Uncertainty [3] and the ISO Guide to the Expression of Uncertainty in Measurement (GUM) [4]. The two methods differ in terms of classification of uncertainty sources, but are consistent in terms of results and computation methodology: the final uncertainty is derived from the propagation of all the uncertainty terms through a first-order approximation of a Taylor series expansion. Such a procedure implicitly assumes that the measurement model can be linearized and that the input and output quantities follow a normal or a scaled and shifted t-distribution. According to the GUM [4], when the model features significant nonlinearity, the inclusion of higher-order terms in the Taylor series expansion is required. This procedure can be algebraically complex, and the assumption of symmetric distributions must still be met. When this assumption is not valid, a Monte Carlo (MC) method is suggested to correctly propagate the probability distributions of the variables. The GUM supplement [5] provides guidelines for the application of MC in measurement uncertainty computations.
The aforementioned standards refer to single-sample experiments. Only the statistical uncertainty of the population mean is included, evaluated through the variance of a "large" number N of independent samples. Concerning the uncertainty of higher statistical moments, general formulas can be found in the literature, always based on large-sample theory and independent samples [6]. For turbulent time series, independent samples are ensured when the time between samples is at least twice the integral time scale Tint [7,8]. While this might be the optimum for the statistical convergence of time-averaged quantities, a higher sampling rate is required for spectral analysis, especially when small scales need to be resolved. Moreover, a large number of independent samples requires that the duration of the time series be considerably larger than the integral time scale, which can be a significant limitation for short-duration tests.
In these conditions, we propose the use of a nonparametric resampling algorithm, the moving block bootstrap (MBB) method, to process short time series. This method maintains the correlation of the time series and is thus suitable for turbulent flows sampled at a high rate. The MBB technique also allows the computation of the uncertainty of all statistics without requiring the derivation of complex formulas (see, e.g., the expressions for the variance of rth-order moments in Ref. [6]). Finally, the MBB does not require any assumption about the underlying probability distribution of the population. This is also an essential requirement for the treatment of velocity distributions in turbulent flows, which are often non-normal.
This work aims at developing an uncertainty estimation methodology for turbulence statistics which considers both the measurement and the statistical uncertainty. The former is assessed through the implementation of an MC method in order to take into account the nonlinearity resulting from the data reduction process, while the latter is quantified by means of the MBB algorithm. The uncertainty computation method is applied to hot-wire anemometry (HWA) measurements performed at the inlet section of a transonic turbine stage tested in a blowdown facility. The limited measurement time (of the order of 100–300 ms) in this type of test rig increases the statistical uncertainty of turbulence parameters, as statistical convergence is not ensured. Therefore, it is necessary that the statistical uncertainty source be included in the total uncertainty budget. With the present methodology, confidence intervals can be computed for all the turbulence parameters of interest.
Uncertainty Estimation Methods
The Monte Carlo Method.
The MC uncertainty computation aims at computing the probability distribution of a certain output quantity by propagating the probability distributions of the inputs through a measurement model. In the present case, it is used to estimate the uncertainty of the velocity time series, taking into account all the uncertainty sources occurring during the measurements and the hot-wire (HW) calibration. As will be explained in the section The Moving Block Bootstrap Method, the MBB method is expected to account for the random uncertainty of velocity samples of the same test. Therefore, such uncertainty sources (e.g., random noise of the HW output voltage) should not be included in the MC to avoid considering them twice.
The ASME standard [3] is used for the categorization of uncertainty sources. Uncertainty sources that cause a constant bias to the measured quantity are defined as systematic, while those that cause a variability of the measured quantity are defined as random. The procedure to define the input quantities, following Coleman and Steele [9], is illustrated in Fig. 1 for two variables, X1 and X2, affected by the uncertainty sources s1, b1, b2 and b2, b3, respectively, where s represents a random uncertainty source and b a systematic uncertainty source. At each ith iteration, the input probability density functions (PDFs) are randomly sampled so that the input variables to the measurement model are computed as X1i = X̄1 + s1i + b1i + b2i and X2i = X̄2 + b2i + b3i, X̄1 and X̄2 being the expected values. Error sources common to multiple variables (such as b2 in this example) are automatically taken into account, leading to a straightforward implementation of correlated uncertainty terms. The values X1i and X2i constitute the inputs to the measurement model Y = f(X1, X2) for this iteration. By collecting M realizations (i.e., the number of MC iterations) of the output variable Y, the probability distribution is constructed by sorting the output values in nondecreasing order, which allows computing its expected value, any other statistical moment, and the coverage interval at the desired confidence level p. The algorithm is schematically presented in Fig. 2. It should be noted that when the output distribution is asymmetric, different coverage intervals can be defined, including the probabilistically symmetric and the shortest coverage interval [5]. In this study, the shortest coverage interval is used.
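To make the loop concrete, the following minimal Python sketch reproduces the sampling scheme of Fig. 1 for the two-variable example. The measurement model, the uncertainty magnitudes, and the variable names are illustrative placeholders, not values from the present experiment.

```python
# Minimal sketch of the MC propagation for the two-variable example of Fig. 1.
# The model f and the uncertainty magnitudes below are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def measurement_model(x1, x2):
    # Hypothetical nonlinear measurement model Y = f(X1, X2)
    return x1 * np.sqrt(np.abs(x2))

X1_bar, X2_bar = 10.0, 4.0   # expected values of the inputs
M = 100_000                  # number of MC iterations

s1 = rng.normal(0.0, 0.05, M)        # random source acting on X1
b1 = rng.uniform(-0.10, 0.10, M)     # systematic source acting on X1
b2 = rng.uniform(-0.08, 0.08, M)     # systematic source shared by X1 and X2
b3 = rng.uniform(-0.05, 0.05, M)     # systematic source acting on X2

X1 = X1_bar + s1 + b1 + b2
X2 = X2_bar + b2 + b3                # reusing b2 handles the correlation automatically

Y = np.sort(measurement_model(X1, X2))   # empirical output distribution, nondecreasing

# Shortest 95% coverage interval: the narrowest window containing 95% of the M values.
k = int(np.ceil(0.95 * M))
widths = Y[k - 1:] - Y[:M - k + 1]
j = int(np.argmin(widths))
print(Y.mean(), Y[j], Y[j + k - 1])
```

Because b2 is drawn once per iteration and added to both inputs, the correlation between X1 and X2 is propagated without any explicit covariance bookkeeping.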
The quality of the results of an MC uncertainty computation directly depends on the representativeness of the PDFs of the uncertainty sources. When the latter are not known a priori, a common practice is to assume a normal distribution for the random uncertainty sources, while the systematic uncertainty items are defined on the basis of the available data and engineering judgment.
The Moving Block Bootstrap Method.
The bootstrap method is a resampling algorithm that allows computing the statistical inference of a group of samples when its size is statistically too small (i.e., nonrepresentative) or when the probability distribution is unknown. The method, originally developed by Efron [10] in a statistical framework, consists of generating B sample series by randomly resampling the original data series x. The statistic of interest θi is then computed for each sample series, leading to a population θB for which a mean value μB and a variance σB can be computed.
Although the bootstrap technique provides a good estimation of the sampling distribution [11], its direct application to turbulence measurements would be inconsistent since, by randomly resampling the original data series, the intrinsic correlation of turbulent phenomena is automatically degraded. In order to overcome this issue, the MBB methodology by Kunsch [12] is adopted. The procedure first splits the original data series x, consisting of N samples, into N − c + 1 overlapping blocks of length c. N/c blocks are then extracted and randomly concatenated, generating a bootstrapped data series of roughly the same size as the original one. The process is repeated B times in order to create a population of the statistical parameter of interest. A schematic of the moving block bootstrap method is shown in Fig. 3.
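The resampling step itself can be sketched in a few lines of Python; the block length, the number of series, the synthetic AR(1) signal, and the choice of the statistic below are illustrative only.

```python
# Minimal sketch of the moving block bootstrap (MBB) for a correlated time series.
import numpy as np

rng = np.random.default_rng(1)

def moving_block_bootstrap(x, c, B, statistic):
    """Resample x in overlapping blocks of length c, B times, and return the statistic."""
    N = len(x)
    # All N - c + 1 overlapping blocks, as a (N - c + 1, c) view of the signal.
    blocks = np.lib.stride_tricks.sliding_window_view(x, c)
    n_blocks = int(np.ceil(N / c))            # blocks per bootstrapped series
    theta = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, blocks.shape[0], n_blocks)
        xb = np.concatenate(blocks[idx])[:N]  # concatenate and trim to the original length
        theta[b] = statistic(xb)
    return theta

# Example with a synthetic correlated signal (AR(1)); u stands in for the velocity series.
u = np.zeros(18_000)
for i in range(1, len(u)):
    u[i] = 0.95 * u[i - 1] + rng.normal()

theta_B = moving_block_bootstrap(u, c=1800, B=2000, statistic=np.std)
print(theta_B.mean(), np.percentile(theta_B, [2.5, 97.5]))
```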
As suggested by Politis and White [13], the optimal block length c can be determined by first retrieving an estimate of the integral time scale through the computation of the autocorrelation function of the original dataset. Nevertheless, when dealing with limited observation times, the autocorrelation function can present high-amplitude oscillations leading to an incorrect evaluation [14]. In the latter case, Theunissen et al. [15] proposed a conservative selection of the block length, increasing it by at least a factor of 2. In the present work, the optimal block length was selected through a sensitivity analysis: the optimum block length is the minimum number of samples per block that provides convergence of the mean value of the bootstrapped integral length scale. Consistently, the number of bootstrap repetitions B was defined by monitoring the convergence of the mean and of the confidence interval limits of all the turbulence statistics of interest.
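One possible form of this sensitivity analysis is sketched below, reusing moving_block_bootstrap() and the synthetic signal u from the previous sketch. The sampling frequency fs, the candidate block lengths, and the first-zero-crossing estimate of the integral time scale are assumptions for illustration.

```python
# Sketch of the block-length sensitivity analysis: the mean of the bootstrapped
# integral time scale is monitored while the block length is increased.
import numpy as np
from scipy.signal import correlate

fs = 60_000.0   # Hz, assumed sampling frequency of the HW signal

def integral_time_scale(x, fs):
    """Integral of the autocorrelation coefficient up to its first zero crossing."""
    xf = x - x.mean()
    rho = correlate(xf, xf, mode="full", method="fft")[len(x) - 1:]
    rho /= rho[0]
    zero = np.argmax(rho <= 0.0) or len(rho)   # index of the first zero crossing
    return np.sum(rho[:zero]) / fs

for c in (300, 600, 1200, 1800, 3600):
    T_b = moving_block_bootstrap(u, c, B=300,
                                 statistic=lambda s: integral_time_scale(s, fs))
    print(c, T_b.mean())   # convergence of the mean indicates a long-enough block
```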
Garcia et al. [16] validated the MBB method both analytically and experimentally using 80 repeated tests. The authors reported a good agreement on the computed uncertainty of the turbulence parameters and concluded that the MBB method is capable of quantifying the statistical errors that produce scatter in repeated measurements. Based on this study, we consider that the statistical uncertainty computed by the MBB method also accounts for random errors that produce sample-to-sample scatter, e.g., noise in the HW voltage. Such uncertainties are therefore not included in the measurement uncertainty budget.
Experimental Methods and Setup
The CT3 Compression Tube Facility.
The tests were conducted in the rotating turbine rig of the von Karman Institute (VKI). This short-duration facility operates on the principle of a compression tube and allows testing a high-pressure turbine stage at a larger scale, while preserving the actual engine Mach and Reynolds numbers and gas-to-wall temperature ratio [17]. The layout of the facility is presented in Fig. 4.
The duration of the test is around 0.3 s. In the present experimental campaign, one operating point has been investigated, whose operating conditions are given in Table 1.
Table 1: Experimental test conditions at midspan

| Quantity | Value | Units |
|---|---|---|
| Turbine inlet Mach number | 0.10 | |
| Inlet total pressure P01 | 1040 | mbar |
| Inlet total temperature T01 | 463 | K |
| Wall-to-gas temperature ratio | 0.68 | |
| Nondimensional rotation speed | 281.3 | |
| Total-to-static turbine pressure ratio | 2.21 | |
| Chord-based Reynolds number at plane 2 | 6.93 × 10⁵ | |
The Measurement Plane.
The test section in this experimental campaign hosts a single high-pressure stage consisting of a stator with 38 blades and a rotor with 48 blades. The measurements are performed at the inlet of the stage, half a stator axial chord upstream of the stator leading edge. The span at this position is 74.71 mm. The instrumentation used at the measurement plane and its setup are presented in Fig. 5. A rake of four Kiel-type pressure probes is used to measure the total pressure P0,1 along the span, and a rake of four thermocouples is used to measure the total temperature T0,1. Two reference probes, a thermocouple and a Kiel probe, are used to measure T0,1 and P0,1 at midspan. All probes are placed at different circumferential positions. The static pressure is measured at the tip and hub endwalls with five static pressure taps distributed over one vane pitch.
Fig. 5: (a) Instrumentation and setup at the measurement plane, (b) total pressure and thermocouple rakes and reference probes, and (c) HW probe
Figure 5(a) shows the employed HW probe as mounted in the test section. It features two HW heads, placed half a span apart. The double-head configuration allows sweeping the channel span (by traversing the probe radially) with a reduced number of tests. The probe is also equipped with a thermocouple in order to measure the total temperature between the HWs. Due to the high inlet total temperature (∼440 K), the operating temperature of the HWs had to be increased to reduce the sensitivity to the flow temperature. For this reason, a platinum-nickel alloy (90% Pt/10% Ni) was selected as wire material. This allowed using an operating temperature Tw of ∼540 K and ∼570 K for heads 1 and 2, respectively. The Pt-Ni alloy was chosen as it offers high resistance to oxidation compared to conventional platinum-plated tungsten wires, whose operating temperature limit is ∼550 K. The wires are 9 μm in diameter and 2 mm long. The actual sensing length of the HW is 1 mm, considering that copper-plated stubs, used to eliminate the interference with the prongs, cover 0.5 mm of the wire at each end.
Hot-Wire Calibration Method.
In the present investigation, a nondimensional HW calibration based on Nusselt (Nu) and Reynolds (Re) numbers is employed in order to eliminate the effect of the local flow conditions on the HW output [18,19]. The methodology uses an effective wire temperature and empirical correlations to establish a unique Nu–Re calibration curve. The procedure is only briefly described hereafter, as the detailed analysis and evaluation of the performance of the calibration method are provided in Ref. [20].
where Rtop and Rs are constant resistances which depend on the circuit.
The viscosity μ and the thermal conductivity k of the fluid are evaluated at the total temperature of the flow. Tw is defined as the temperature which collapses a set of Nu–Re data obtained at different flow temperatures onto a single curve. The R2 coefficient of a fourth-order polynomial fit is used as the selection criterion. This is the most suitable definition of the wire temperature to represent the true convective heat transfer process from the wire to the flow. In Fig. 6, a set of data obtained at different flow total temperatures is presented. When plotted in terms of voltage and velocity, a different curve is created for each temperature level. All the datasets collapse onto a single curve when plotted as a function of Nu and Re using the effective wire temperature, according to Eq. (2). A single Nu–Re calibration polynomial is obtained that can be employed to retrieve the flow velocity from nonisothermal flows using HWA.
Fig. 6: Effect of temperature on the absolute HW calibration (left) and on the nondimensional HW calibration (right)
This calibration methodology, based on the fact that the Nusselt number is a unique function of the Reynolds number within the flow conditions of the experiment, eliminates the need for an in situ calibration. Therefore, a simple heater-equipped open-jet facility can be employed for the calibration of the HWs. The calibration is performed at a different temperature and velocity compared to the measurement conditions, but at the same Reynolds number as the turbine experiment. The resulting calibration curve for HW head 2 is presented in Fig. 6, with the effective operating wire temperature computed at 572 K.
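The following sketch illustrates the construction of a single Nu–Re calibration curve from data taken at several jet temperatures. The simplified wire heat balance, the air-property correlations, the assumed hot resistance R_w, and the synthetic calibration data (generated from a Kramers-type law so that the points collapse by construction) are placeholders; the paper's own relations, e.g., Eqs. (2), (4), and (5), are not reproduced here.

```python
# Sketch of the nondimensional calibration: points (Re, Nu) gathered at several flow
# temperatures are fitted with a single fourth-order polynomial.
import numpy as np

d_w, l_w = 9e-6, 1e-3        # wire diameter and sensing length (m)
T_w = 572.0                  # effective wire temperature (K), HW head 2
R_w = 8.0                    # assumed hot resistance of the wire (ohm)
R_GAS = 287.05               # specific gas constant of air, J/(kg K)

def air_properties(T):
    """Rough estimates of air viscosity and conductivity (placeholder correlations)."""
    mu = 1.716e-5 * (T / 273.15) ** 1.5 * (273.15 + 110.4) / (T + 110.4)
    k = 0.0241 * (T / 273.15) ** 0.9
    return mu, k

def to_nu_re(E_w, U, T0, P0):
    """Convert a calibration measurement (voltage, velocity, flow state) to (Re, Nu)."""
    mu, k = air_properties(T0)
    rho = P0 / (R_GAS * T0)
    Nu = E_w ** 2 / (R_w * np.pi * l_w * k * (T_w - T0))   # simplified wire heat balance
    Re = rho * U * d_w / mu
    return Re, Nu

# Synthetic calibration sets at three jet temperatures: the wire voltage is generated
# from a Kramers-type law, so the (Re, Nu) points collapse onto one curve by construction.
U = np.tile(np.linspace(10.0, 80.0, 8), 3)
T0 = np.repeat([300.0, 340.0, 380.0], 8)
P0 = np.full_like(U, 101325.0)
mu, k = air_properties(T0)
Re_true = (P0 / (R_GAS * T0)) * U * d_w / mu
Nu_true = 0.42 + 0.57 * np.sqrt(Re_true)
E_w = np.sqrt(Nu_true * R_w * np.pi * l_w * k * (T_w - T0))

Re, Nu = to_nu_re(E_w, U, T0, P0)
p = np.polynomial.Polynomial.fit(Re, Nu, deg=4)   # single nondimensional calibration curve
```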
Processing Methodology.
The first step in the measurement methodology is to detect the interval for which the flow is stabilized and extract the corresponding signals. In Fig. 7, the extracted signal parts are indicated for the inlet total temperature T0,1, the inlet total pressure P0,1, the static pressure at the tip Ps,tip, and for the HW signal. For this time interval, which lasts approximately 100–150 ms, the P0,1 and Ps,tip signals are stabilized. The T0,1 signal presents some low-frequency fluctuations. The flow temperature variations with time are reflected in the HW raw voltage signal, because of the HW dependency on temperature.
Based on the calibration methodology, the total temperature and the total and static pressure values should be known at the HW location in order to retrieve the instantaneous velocity from the HW. Therefore, the data obtained from all the tests at this operating point are used to build radial profiles of P0, Ps, and T0. The measurements of T0 and P0 are normalized by the reference value at midspan for each test. Considering the uncertainty in the positioning of the probe, the values are averaged over 2% of the span and fitted with a Fourier model. The measurements of Ps at the endwalls are normalized by the P0 value at midspan for each test, and the mean values are fitted with a linear curve to estimate the static pressure values along the span. The nondimensional profiles and the midspan values of T0 and P0 are used to determine the local values at each HW measurement position. A mean value of the pressure measurements is considered for each test. This operation is valid since the flow pressure values remain very stable (±0.1% of the mean) throughout the entire duration of the turbine run. The fluctuations in total temperature registered during the test duration are taken into account using the time-resolved signal from the thermocouple of the HW probe.
The HW instantaneous voltage is transformed into Nusselt number applying Eq. (4) and the instantaneous T0 signal. The instantaneous wire Reynolds number is then retrieved from the Nu–Re calibration law. The instantaneous velocity is computed by Eq. (5), where the density and the dynamic viscosity of the fluid are estimated using the local temperature and pressure values. The velocity time series is further processed to compute turbulence statistics.
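A minimal sketch of this per-sample reduction is given below, reusing the constants, air_properties(), and the fitted polynomial p from the calibration sketch. The numerical inversion by root bracketing and the sample values are illustrative choices, not the paper's implementation of Eqs. (4) and (5).

```python
# Sketch of the per-sample data reduction: the instantaneous voltage and the
# time-resolved T0 give Nu, the Nu-Re calibration polynomial is inverted, and the
# velocity follows from the wire Reynolds number.
import numpy as np
from scipy.optimize import brentq

def velocity_sample(E_w, T0, Ps, p, re_bounds):
    mu, k = air_properties(T0)
    rho = Ps / (R_GAS * T0)                                   # local density
    Nu = E_w ** 2 / (R_w * np.pi * l_w * k * (T_w - T0))      # instantaneous Nusselt number
    Re = brentq(lambda r: p(r) - Nu, *re_bounds)              # invert the calibration curve
    return Re * mu / (rho * d_w)                              # U = Re * mu / (rho * d_w)

# Example: one instantaneous sample (placeholder values).
U_i = velocity_sample(E_w=0.55, T0=440.0, Ps=103300.0, p=p,
                      re_bounds=(Re.min(), Re.max()))
```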
Turbulence Statistics.
The skewness is a measure of the lack of symmetry of the distribution and is zero for a symmetric distribution.
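For reference, the sketch below computes the main statistics from a velocity time series using standard definitions (turbulence intensity as the RMS-to-mean ratio, skewness of the fluctuations, and integral scales from the autocorrelation with Taylor's hypothesis); the paper's own expressions, e.g., Eqs. (9), (12), and (13), are not reproduced here. It reuses integral_time_scale(), u, and fs from the earlier sketches.

```python
# Sketch of the turbulence statistics of a velocity time series (standard definitions).
import numpy as np

def turbulence_statistics(U, fs):
    """Turbulence intensity, skewness, and integral scales of a velocity series."""
    U_mean = U.mean()
    u_fl = U - U_mean                       # fluctuating component
    u_rms = u_fl.std()
    Tu = u_rms / U_mean                     # turbulence intensity
    Su = np.mean(u_fl ** 3) / u_rms ** 3    # skewness of the fluctuations
    T_int = integral_time_scale(u_fl, fs)   # from the block-length sketch above
    L_int = U_mean * T_int                  # Taylor's hypothesis: L = U * T
    return Tu, Su, T_int, L_int

# Example on the synthetic signal from the MBB sketch, shifted to a nonzero mean.
print(turbulence_statistics(40.0 + u, fs))
```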
Uncertainty Analysis
In this section, the uncertainty analysis methodology and the uncertainty sources are presented in detail.
Measurement Uncertainty Sources.
The measurement uncertainty consists of two major sources: (a) the uncertainty of the HW calibration polynomial and (b) the uncertainty of the local flow properties at the HW measurement position. The effect of the temporal and spatial resolution of the HW is briefly discussed as well.
Hot-Wire Calibration Uncertainty.
The uncertainty stemming from the HW calibration can be categorized into two parts: the uncertainty of the calibration points and the uncertainty due to the fitting of the calibration curve. The HW voltage, the total temperature, and the total and static pressure measurements are used to compute the calibration points (Nu, Re). The points are then fitted with a fourth-order polynomial. For each Monte Carlo iteration i, the distributions of the input variables Eb, T0, P0, and Pamb are randomly sampled, the ith set of calibration points (Nu, Re)i is computed, and finally this set is fitted with a fourth-order polynomial pi. At the end of M iterations, a distribution of M values is obtained for each coefficient of the polynomial p. If the model used for the fitting correctly represents the physical phenomenon, any fitting errors should be attributed to random errors in the measurements that cause scatter in the calibration points. Therefore, if the random uncertainty in the input variables (in this case Eb, T0, P0, and Pamb) is correctly estimated, then the curve-fitting error is automatically accounted for by the uncertainty of the polynomial.
Initially, the random uncertainties of the input variables are estimated based on the variance of their output at flow-off conditions (instrument noise). The systematic uncertainties are evaluated based on the instruments' documentation and calibration. The total temperature and total pressure are measured inside the calibration nozzle by means of a K-type thermocouple and a Validyne differential pressure transducer, respectively. The static pressure at the measurement point is the ambient pressure, measured by a Druck DPI 150 pressure indicator (Druck, Leicester, Leicestershire, UK). The thermocouple and the pressure transducer have been calibrated against a PT100 thermometer and a Druck DPI 610 pressure indicator, respectively. The elemental uncertainties originating from the instruments are summarized in Table 2. The uncertainty values s represent the uncertainty at the standard-deviation level for normal distributions, while for uniform distributions the values a and b represent the lower and upper limits of the interval.
Table 2: Elemental uncertainties of the instruments used in the HW calibration chain

| Property | Type | Distribution | Value |
|---|---|---|---|
| HW voltage | Random | Normal | s = 0.001 V |
| Ambient pressure | Systematic | Uniform | a = b = 5 Pa |
| PT100 | Systematic | Uniform | a = b = 0.1 K |
| Thermocouple | Random | Normal | s = 0.0015 K |
| Thermocouple | Systematic | Uniform | a = b = 1.1 K |
| Druck DPI 610 | Systematic | Uniform | a = b = 17.5 Pa |
| Validyne | Random | Normal | s = 0.0012 V |
| Validyne | Systematic | Uniform | a = b = 32.5 Pa |
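A sketch of this calibration MC loop is given below, reusing to_nu_re() and the synthetic calibration data from the calibration sketch. The elemental magnitudes follow Table 2, but the mapping from instrument readings to flow quantities is deliberately simplified (e.g., the jet velocity is treated as known and the pressure-related systematic errors are lumped together), so this illustrates the structure of the loop rather than the paper's exact model.

```python
# Sketch of the MC loop for the calibration-polynomial uncertainty: the elemental
# uncertainties of Table 2 perturb the calibration inputs at every iteration and a
# fourth-order polynomial is refitted; the coefficient distributions are collected.
import numpy as np

rng_cal = np.random.default_rng(2)
M_iter = 5000
coefs = np.empty((M_iter, 5))          # five quartic coefficients per iteration

for i in range(M_iter):
    # Systematic sources: one draw per iteration, shared by all calibration points.
    b_amb = rng_cal.uniform(-5.0, 5.0)                                 # ambient pressure (Pa)
    b_T = rng_cal.uniform(-0.1, 0.1) + rng_cal.uniform(-1.1, 1.1)      # PT100 + thermocouple (K)
    b_p = rng_cal.uniform(-32.5, 32.5) + rng_cal.uniform(-17.5, 17.5)  # Validyne + DPI 610 (Pa)
    # Random sources: one draw per calibration point.
    s_E = rng_cal.normal(0.0, 0.001, E_w.shape)                        # HW voltage noise (V)
    s_T = rng_cal.normal(0.0, 0.0015, T0.shape)                        # thermocouple noise (K)
    Re_i, Nu_i = to_nu_re(E_w + s_E, U, T0 + b_T + s_T, P0 + b_amb + b_p)
    coefs[i] = np.polynomial.Polynomial.fit(Re_i, Nu_i, deg=4).convert().coef

# Each column of `coefs` is the MC distribution of one coefficient of the polynomial p.
```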
Property | Type | Distribution | Value |
---|---|---|---|
HW voltage | Random | Normal | s = 0.001 V |
Ambient pressure | Systematic | Uniform | a = b = 5 Pa |
PT100 | Systematic | Uniform | a = b = 0.1 K |
Thermocouple | Random | Normal | s = 0.0015 K |
Thermocouple | Systematic | Uniform | a = b = 1.1 K |
Druck DPI 610 | Systematic | Uniform | a = b = 17.5 Pa |
Validyne | Random | Normal | s = 0.0012 V |
Validyne | Systematic | Uniform | a = b = 32.5 Pa |
where Nui contains the uncertainty of the measured Nusselt number, pi the uncertainty of the calibration polynomial parameters, and si the uncertainty due to the fitting error. The uncertainty si is a systematic term for the measured Reynolds number, as the calibration curve is fixed. The parameter s has a mean value of zero and standard deviations of 0.0273 and 0.1176 for HW1 and HW2, respectively.
Hot-wire calibration curve with uncertainty bars (left) and a detailed view of the calibration curve (right)
Uncertainty of Local Flow Properties.
The flow properties at the HW measurement position are used to retrieve the velocity from the HW calibration. These flow properties are the total temperature T0, the total pressure P0, and the static pressure Ps. As previously explained, the profiles of P0 and T0 along the span are defined by measurements with different instruments. The values are normalized by the value at midspan for each test. The nondimensional profiles and measurement at midspan at each specific test are used to obtain the local flow data at each HW measurement position. The uncertainty therefore stems from the uncertainty in the determination of the profile and the bias error due to the measurement at midspan during each test.
where the mean value at each span location is perturbed by a term drawn from a normal distribution with zero mean and standard deviation equal to the measured scatter at that location. For each MC iteration i, the perturbed values at all H span positions are fitted with a Fourier model to obtain the spanwise profile. The same procedure applies to the measurements of total pressure. An analogous procedure is followed for Ps: the values at hub and tip are computed, and the profile is obtained by linear interpolation between these two values. The uncertainty of the profiles at the 95% confidence level is shown in Fig. 11.
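One way to implement this profile reconstruction is sketched below; the span positions, the normalized values, their scatter, and the single-harmonic Fourier model are placeholder assumptions.

```python
# Sketch of the per-iteration construction of a normalized spanwise profile: the mean
# value at each span position is perturbed with its own normal scatter and the
# perturbed points are fitted with a low-order Fourier series.
import numpy as np

rng = np.random.default_rng(3)

span = np.array([0.10, 0.35, 0.60, 0.85])          # measurement positions (fraction of span)
T0_mean = np.array([0.985, 1.000, 1.003, 0.990])   # T0 normalized by the midspan value
T0_std = np.array([0.002, 0.001, 0.001, 0.002])    # scatter at each position

def fourier_design(h, n_harmonics=1):
    """Design matrix [1, cos(2*pi*k*h), sin(2*pi*k*h), ...] for a least-squares fit."""
    cols = [np.ones_like(h)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * k * h), np.sin(2 * np.pi * k * h)]
    return np.column_stack(cols)

A = fourier_design(span)
h_grid = np.linspace(0.0, 1.0, 101)
A_grid = fourier_design(h_grid)

profiles = np.empty((2000, h_grid.size))
for i in range(2000):
    perturbed = T0_mean + rng.normal(0.0, T0_std)          # one draw per span position
    c, *_ = np.linalg.lstsq(A, perturbed, rcond=None)      # Fourier-series coefficients
    profiles[i] = A_grid @ c                               # profile for this MC iteration

ci = np.percentile(profiles, [2.5, 97.5], axis=0)          # 95% band of the fitted profile
```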
The bias error of the T0 measurement is assumed to follow a normal distribution with zero mean and standard deviation equal to 0.25 K. The uncertainty of the P0 measurement, being a differential measurement, originates from two error sources: the error in the measurement of the differential pressure and the error in the measurement of the ambient pressure. The former is estimated to follow a normal distribution with zero mean and standard deviation equal to 0.25 mbar, while the latter follows a uniform distribution with limits at ±0.05 mbar (the instrument's resolution). The methodology to compute the flow properties at the HW span position for one MC iteration is schematically presented in Fig. 12, where the midspan measurements of T0 and P0 for the specific test serve as inputs.
Temporal and Spatial Resolution Effects.
The measurement of turbulence statistics is affected by errors arising from the instrument's temporal and spatial resolution. The HW's cutoff frequency (corresponding to an attenuation of the signal of −3 dB) has been estimated to be between 17 and 20 kHz based on the result of the anemometer's embedded square-wave test. The anemometer has been adjusted to feature an optimally damped second-order response [24] which corresponds to a maximally flat frequency response up to the system's cutoff frequency. In other words, the response of the HW resembles the behavior of a low-pass filter, resulting in attenuation of the energy of the turbulent fluctuations over a certain frequency.
The HW's finite length can lead to spatial filtering when the turbulent flow scales are comparable to or smaller than the heated sensor size [25,26]. This means that the magnitude of the turbulent fluctuations is attenuated above a certain wavenumber due to spatial resolution issues, independently of the frequency response. The temporal and spatial resolutions are thus inherently connected.
where PSD is the power spectral density of the velocity fluctuations. Figure 13 shows that the RMS of the velocity fluctuations converges to its final value, within 0.002 m/s, at around flimit = 8.5 kHz.
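This check can be sketched as follows, reusing the synthetic signal u and sampling frequency fs from the earlier sketches; the Welch estimator settings and the tolerance applied to a dimensionless toy signal are illustrative.

```python
# Sketch of the frequency-content check: the running RMS obtained by integrating the
# PSD up to a variable frequency limit is monitored to find where it stops growing.
import numpy as np
from scipy.signal import welch
from scipy.integrate import cumulative_trapezoid

f, psd = welch(u - u.mean(), fs=fs, nperseg=2048)              # one-sided PSD estimate
u_rms_f = np.sqrt(cumulative_trapezoid(psd, f, initial=0.0))   # RMS up to each frequency
f_limit = f[np.argmax(u_rms_f >= u_rms_f[-1] - 0.002)]         # first f within the tolerance
print(f_limit, u_rms_f[-1])
```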
Fig. 12: Methodology for the computation of the flow properties T0, P0, and Ps at the spanwise position of the HW probe
As the limit frequency is less than half of the estimated HW cutoff frequency, we can conclude that the frequency range which contributes to the RMS value is not affected by the system's finite temporal resolution, and this uncertainty source can be safely neglected.
To assess the effect of the spatial resolution, Taylor's frozen-turbulence hypothesis [21] is used to compute the eddy streamwise length scale corresponding to each frequency (leddy = Uconv/f). In Fig. 13, the second x-axis represents the ratio of the wire length to the local eddy length scale. The dashed (black) vertical line identifies the frequency for which this ratio is equal to 1. This frequency is approximately 38 kHz for the flow under investigation, more than four times higher than the limit frequency of 8.5 kHz. Spatial filtering initiates at frequencies lower than those corresponding to leddy/lw = 1, depending on the nature of the turbulent flow field. This effect becomes more severe with increasing anisotropy of the flow, see, e.g., Cameron [27]. Since only single-wire measurements were conducted in this work, there is no information on the degree of anisotropy of the flow field. Although the flow is expected to feature a certain degree of anisotropy, it is assumed that the spatial resolution has a negligible contribution to the uncertainty of the measured RMS.
The errors due to the HW temporal and spatial resolution can be considered negligible for the velocity RMS, the turbulence intensity, and the integral length scale. However, this is not the case for other quantities such as the Kolmogorov length and time scales and the skewness. The Kolmogorov scales correspond to a part of the velocity spectrum that is likely attenuated by the spatial filtering and by the wire's finite time response, even at frequencies below the system cutoff frequency. The skewness is affected by the nonlinear dynamic response of the anemometer, especially for large turbulence intensities [28], and by spatial filtering [29].
Uncertainty of the Time Series.
All the turbulence statistics of interest are also computed at each iteration i. After M iterations, the output of the MC loop is the distribution GY (composed of M values) for each turbulence parameter Y and a distribution for each sample of the velocity time series. The measurement uncertainty is defined for each parameter by computing the confidence intervals at the 95% level. The methodology is presented in Fig. 14. The velocity time series with 95% confidence intervals is presented in Fig. 15, and the histograms for two measurement samples A and B are presented in Fig. 16. The distributions are slightly skewed, and the median of the velocity is therefore a more suitable parameter than the mean. The same degree of skewness is observed for both points. This is because the systematic uncertainty sources considered mainly affect the ensemble of the time series by introducing a bias. The only factor causing variability is the calibration polynomial uncertainty pi.
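The post-processing of the MC output can be sketched as follows. U_mc is a placeholder matrix of MC velocity realizations, percentile intervals are used for brevity (the paper adopts the shortest coverage interval, as in the first MC sketch), and turbulence_statistics() and fs are reused from the earlier sketches.

```python
# Sketch of the reduction of the MC output: per-sample median and 95% interval of the
# velocity, and the distribution G_Y of one turbulence statistic over the iterations.
import numpy as np

M_iter, N_samples = 500, 6000
U_mc = 40.0 + np.random.default_rng(5).normal(0.0, 1.5, (M_iter, N_samples))  # placeholder

U_median = np.median(U_mc, axis=0)                     # median velocity time series
U_lo, U_hi = np.percentile(U_mc, [2.5, 97.5], axis=0)  # per-sample 95% interval

Tu_mc = np.array([turbulence_statistics(U_mc[i], fs)[0] for i in range(M_iter)])
Tu_lo, Tu_hi = np.percentile(Tu_mc, [2.5, 97.5])       # measurement uncertainty of Tu
```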
Statistical Uncertainty.
The statistical uncertainty of the turbulence statistics is computed by applying the MBB to the median velocity time series (Fig. 15). The output is a distribution FY for each turbulence parameter, and the computed confidence intervals express the statistical uncertainty at the 95% confidence level. The statistical uncertainty is independent of the measurement uncertainty, as the latter only considers systematic uncertainty sources. Applying the bootstrap to the mean velocity time series, or to any time series Ui(t) corresponding to the ith MC iteration, yields the same statistical uncertainty, with only a small variation of the mean value.
The histogram of the velocity time series for one test is presented in Fig. 17. The distribution deviates from normality, featuring positive skewness. This reinforces the necessity to use a nonparametric method which does not require a priori assumptions on the probability function of the velocity distribution.
The application of the MBB requires the selection of two parameters: the number of the bootstrap series B and the optimal block length copt for which the intrinsic correlation of the signal is maintained. copt was selected by a sensitivity analysis as the block length for which the mean integral length scale, computed by Eq. (12), converges to its final value. This indicates the convergence of the power spectrum. Convergence was reached when the block length was approximately one-tenth of the whole signal, i.e., copt = 1800 samples.
B is selected by monitoring the convergence of all the turbulence statistics of interest. Indicatively, the convergence for the turbulence intensity and the integral length scale is reported in Figs. 18 and 19, respectively. It can be seen that the computation of the standard deviation requires a smaller number of iterations than the computation of the confidence interval limits; the computation of a confidence interval is therefore computationally more expensive than the computation of the variance. Moreover, the number of bootstrap series B required for convergence is higher for the integral length scale (78,000 series) than for the turbulence intensity (35,000 series). Considering the convergence histories of all the statistics, B = 10⁵ was selected. The histograms of the bootstrapped turbulence quantities for the same test are displayed in Fig. 20. The empirical distributions are well approximated by Gaussian distributions.
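The convergence check on B can be sketched as follows, reusing moving_block_bootstrap(), the synthetic signal u (shifted to a nonzero mean), and the block length of 1800 samples; the turbulence intensity is computed directly as the std-to-mean ratio to keep the loop inexpensive.

```python
# Sketch of the convergence monitoring with the number of bootstrap series B: mean,
# standard deviation, and 95% interval limits of the bootstrapped turbulence intensity.
import numpy as np

theta = moving_block_bootstrap(40.0 + u, c=1800, B=20_000,
                               statistic=lambda s: s.std() / s.mean())

for B in (1_000, 2_000, 5_000, 10_000, 20_000):
    sub = theta[:B]
    lo, hi = np.percentile(sub, [2.5, 97.5])
    print(B, sub.mean(), sub.std(), lo, hi)   # the interval limits converge more slowly
```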
Fig. 18: Convergence of the mean, standard deviation, and confidence interval limits of the turbulence intensity with the number of bootstrap series B
Fig. 19: Convergence of the mean, standard deviation, and confidence interval limits of the integral length scale with the number of bootstrap series B
Results and Discussion
The spanwise distributions of the turbulence statistics measured at the turbine inlet plane are presented in Fig. 21. The measured moments are bounded by error bars that indicate the total uncertainty at the 95% confidence level. These distributions are also accompanied by bar graphs that show the contributions of each macro-uncertainty source for three selected tests with the HW probe at different span positions.
Fig. 21: Spanwise distribution of turbulence statistics with 95% uncertainty (left) and uncertainty budget at three spanwise positions (right): (a) mean velocity, (b) turbulence intensity, (c) integral length scale, (d) Kolmogorov length scale, and (e) skewness
The turbulence intensity features values of 8–10% at midspan and increases toward the endwalls, reaching ∼20% and ∼25% close to the hub and tip, respectively. These values are in agreement with measurements performed in the past in the same plane by Yasa et al. [30]. The length scale distributions feature the opposite trend, with length scales being larger at midspan and gradually decreasing toward the walls. The skewness presents positive values for most span positions, with almost-Gaussian values observed very close to the endwalls. As explained by Jiménez [31], the shape of the PDF of turbulent velocity fluctuations is dominated by large-scale motion. Therefore, the positive skewness between 10% and 90% of the span can be attributed to the presence of large coherent structures in the channel, while close-to-Gaussian values are observed inside the hub and tip boundary layers.
The measurement uncertainty is the predominant uncertainty source in the computation of the mean velocity. This contribution is significant due to the propagation of the uncertainty of all the flow properties onto the HW signal. For the other statistics, the statistical uncertainty prevails. It can be seen that the measurement uncertainty is important for the length scales, as the mean velocity value is required in their computation (Eqs. (12) and (13)). The measurement uncertainty of the corresponding time scales is significantly smaller, as they depend mainly on the shape of the power spectrum. Similarly, the measurement uncertainty is almost negligible for the skewness, as only the fluctuating velocity component is needed for its computation (Eq. (9)). Finally, the integral length scale is more affected by the statistical uncertainty than the Kolmogorov scale, as the short signal duration limits the convergence of the power spectrum in the low-frequency range.
As the measurement uncertainty does not have a significant effect on the computation of the velocity fluctuations, the measurement uncertainty of the turbulence statistics can be reduced by reducing the uncertainty of the mean velocity. One possibility could be the use of a second measurement of the time-averaged flow velocity by another instrument featuring lower uncertainty over the range of test flow conditions (e.g., pneumatic pressure probes). The statistical uncertainty can be reduced by increasing the duration of the velocity signal, which is limited by the test duration in short-duration facilities.
Conclusions
In this work, a method for the quantification of the uncertainty of turbulence statistics is presented. The method considers both the measurement uncertainty stemming from the experimental process and the statistical uncertainty originating from the statistical treatment of the velocity time series. It is applied to HWA measurements performed at the inlet of a transonic turbine stage in a short-duration wind tunnel. The application is presented in detail in order to serve as an example for the uncertainty computation in this type of facility or, more generally, for any time-constrained fluid dynamics experiment.
The method consists of identifying the measurement uncertainty sources and propagating them through the measurement model using a Monte Carlo method, resulting in the computation of the uncertainty of the velocity time series and of the measurement uncertainty of the turbulence statistics. This uncertainty does not consider the added uncertainty due to the finite signal duration. This contribution, the statistical uncertainty, is computed by applying the moving block bootstrap, a nonparametric resampling algorithm, to the median velocity time series. The strongly non-Gaussian values of the skewness measured along the span confirm the need for a nonparametric method that does not require a priori assumptions on the distribution of the velocity time series.
The uncertainty budget of the computed statistics reveals that the measurement uncertainty is the predominant source only for the mean velocity, while for the other parameters the statistical uncertainty is the major contributor. The measurement uncertainty is important only for turbulence statistics that require the mean velocity value for their estimation. On the other hand, the statistical uncertainty depends mainly on the duration of the velocity time signal that is bounded by the time constraints imposed by the experimental setup.
To conclude, this work presents a practical application of an uncertainty quantification method for turbulence statistics that can be applied to any type of experiment and measurement technique. In this context, an exhaustive list of uncertainty sources affecting HWA in particular (e.g., turbulence anisotropy, velocity gradients, etc.) has not been included in this work. The important statistical uncertainty contributions observed in this analysis confirm the need to consider this type of uncertainty in turbulence experiments, in particular for short-duration tests. The proposed method has the advantage of being nonparametric, thus requiring no a priori assumptions on the probability distribution of the turbulent velocity. The computation of the measurement uncertainty requires a reliable estimation of the elemental uncertainties and their probability distributions, but thanks to the MC propagation, it avoids the computation of the complex sensitivity derivatives required by the common uncertainty propagation based on the Taylor series method. Therefore, besides providing a complete estimation of the uncertainty, the method is also easy to implement and can prove to be a useful tool in turbulence research.
Acknowledgment
The authors would like to thank Roberto Cosentino for his significant contribution at the beginning of this work and Bogdan Cernat for his contribution to the experimental campaign. The first author acknowledges the support of the Fonds de la Recherche Scientifique-FNRS, through the award of a FRIA doctoral grant.
Funding Data
The first author has been supported by a FRIA doctoral grant received by the Belgian Fund for Scientific Research (Fonds de la Recherche Scientifique-FNRS, Funder ID: 10.13039/501100002661).
Nomenclature
- b =
systematic uncertainty source
- B =
number of bootstrap series
- c =
MBB block length
- copt =
optimum block length
- dw =
wire diameter (m)
- Eb =
HW bridge voltage (V)
- f =
frequency (Hz)
- GUM =
Guide to the Expression of Uncertainty in Measurement
- H =
blade span position (%)
- HWA =
hot-wire anemometry
- k =
thermal conductivity of air (W/(m K))
- leddy =
turbulent eddy length scale (m)
- lw =
active wire length (m)
- Lint =
integral length scale (m)
- M =
number of MC samples
- Ma =
Mach number
- MBB =
moving block bootstrap
- MC =
Monte Carlo
- N =
number of samples of original signal
- Nu =
Nusselt number
- p =
HW calibration polynomial
- P =
flow pressure (Pa)
- Pamb =
ambient pressure (Pa)
- Pr =
Prandtl number
- PDF =
probability density function
- r =
recovery factor
- Rs =
resistance of hot-wire probe's elements (Ω)
- Rt =
top resistance of the HWA circuit (Ω)
- Rw =
wire resistance (Ω)
- Re =
Reynolds number
- RMS =
root-mean-square
- s =
random uncertainty source
- Su =
skewness
- T =
temperature (K)
- Tint =
integral time scale (s)
- Tu =
turbulence intensity (%)
- u =
fluctuating longitudinal velocity (m/s)
- U =
longitudinal velocity (m/s)
- =
rms of turbulent velocity fluctuations (m/s)
- VKI =
von Karman Institute
- 2D =
two-dimensional
- 3D =
three-dimensional