The eye sees only what the mind is prepared to comprehend.
― Robertson Davies
In the classical view of uncertainty, we picture nicely distributed errors, often associated with the Gaussian probability distribution. In this view, uncertainty is symmetric and smoothly distributed. In most real cases our lack of knowledge looks nothing like this; there is significant bias in what we don’t know. In addition, we have systems that are calibrated so that models (simulations) have sufficient congruence with measurements to give comfort to those who use them. When we have a calibrated result in particular, the bias in the uncertainty is utterly essential to represent if we hope to predict with any justifiable confidence. Wherever calibration is used, bias is inherent in the process and should be reflected in the uncertainty. If we are engaged in genuine prediction, these biases can have rather profound impacts away from where we have data, which is precisely where we are actually predicting results.
In many modeling exercises it is quite difficult to separate the act of calibrating models from validating them. In the simplest setting, models are calibrated using some sort of regression technique, most often a form of least squares. These techniques only amplify the knee-jerk view of uncertainty, because the errors between the data and the best fit are Gaussian in least-squares regression. Once we embed these models in a more complex nonlinear system model, we begin to diverge systematically from this idealized circumstance.
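The least-squares picture can be sketched in a few lines. This is an illustration on synthetic data (the linear "truth," the noise level, and the sample size are all hypothetical): the calibration residuals are centered on zero and symmetric by construction, which is exactly what encourages the unbiased, Gaussian view of uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: a linear "truth" plus symmetric Gaussian noise.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=x.size)

# Least-squares calibration of a linear model.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# By construction of least squares (with an intercept), the residuals average
# to zero and look symmetric -- the picture that invites an unbiased view.
print(abs(residuals.mean()))   # essentially zero
print(slope, intercept)        # near the "true" 2.0 and 1.0
```

Nothing in this picture warns us that embedding the calibrated model in a larger nonlinear system will break the symmetry.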
For very many systems the model needs to represent a careful balance between large nonlinear terms. Many important scientific and engineering problems have at their core a system very close to an equilibrium state. Our weather and climate epitomize this principle. In these systems the answer is determined by small deviations away from the equilibrium state. Another wonderful example is a Type II supernova, which teeters on the edge of this equilibrium while complex processes determine whether the star explodes or collapses into a black hole. A well-represented equilibrium is more important to modeling than the numerical or physical accuracy of the individual terms. In these cases the numerical error can be sufficient to upset the equilibrium, rendering the simulation functionally useless. Calibration of a poorly understood model is then used to compensate for the numerical error and place the system back into equilibrium. This common circumstance presents a distinct challenge for uncertainty estimation.
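The fragility of such balances is easy to demonstrate with arithmetic. In this minimal sketch (the magnitudes are hypothetical, chosen only for illustration), the quantity that matters is the small residual of two large, nearly equal and opposite terms; a tiny relative error in just one term overwhelms that residual.

```python
# Two large, nearly equal-and-opposite terms whose small residual drives the
# dynamics (hypothetical magnitudes, for illustration only).
production = 1.0e6
destruction = -1.0e6 + 1.0      # the true net tendency is 1.0

true_net = production + destruction   # 1.0

# A 0.01% relative error in just one of the large terms...
perturbed_net = production * (1.0 + 1.0e-4) + destruction

# ...is negligible for that term but catastrophic for the balance itself:
print(true_net)        # 1.0
print(perturbed_net)   # ~101.0 -- two orders of magnitude off the net tendency
```

This is why a term-by-term accuracy assessment can look fine while the simulated equilibrium is ruined, and why calibration gets drafted to restore the balance.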
Quite often in these systems the numerical error in the model is rather substantial, even with today’s massive computers. Moreover, the numerical error will remain important for any foreseeable future; we cannot simply build computers powerful enough to erase the issue. So that we can continue to conduct useful modeling and achieve the appropriate dynamic balances (where large, nearly equal and opposite terms are modeled), we must calibrate the model and compensate for the numerical error. In a vast number of cases this calibration is thrown into the broad category of turbulence effects, justified by the general lack of understanding that turbulence engenders. In this light, any view of numerical error as an unbiased error is harmful. The standard view of the V&V community (Roache, Oberkampf & Roy, AIAA, ASME, DoE, …) treats numerical error in exactly this way. In my opinion this unbiased view renders the numerical uncertainty almost useless for many applied cases. Well-behaved and controlled numerical error is invariably a strong bias error when we model in a mesh-converged manner. We see results approach the “mesh independent” solution (a term I hate, by the way, because it lets people off the hook for doing verification and error estimation). If a solution is approached in a manner consistent with convergence, the error is almost always a bias. Presenting it as such serves the purpose of improving simulations directly.
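The biased, one-sided character of convergent numerical error can be seen in any discretization with a known exact answer. A minimal sketch, using the composite trapezoid rule on ∫₀^π sin(x) dx = 2 as a stand-in for mesh refinement: under refinement every error has the same sign (a bias, not noise), the observed order of convergence is recoverable, and Richardson extrapolation exploits exactly this bias structure.

```python
import numpy as np

def trapz_integral(n):
    # Composite trapezoid-rule approximation to the exact value 2.0,
    # on a "mesh" of n intervals over [0, pi].
    x = np.linspace(0.0, np.pi, n + 1)
    f = np.sin(x)
    h = np.pi / n
    return h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

exact = 2.0
errors = [trapz_integral(n) - exact for n in (16, 32, 64)]

# Under mesh refinement every error has the same sign: a bias, not noise.
print(errors)   # all negative for this problem

# Observed order of convergence; approaches 2 for the trapezoid rule.
p = np.log2(errors[0] / errors[1])
print(p)

# Richardson extrapolation uses the bias structure (error ~ C*h^2) to
# estimate the exact value from two mesh levels.
richardson = trapz_integral(64) + (trapz_integral(64) - trapz_integral(32)) / (2**2 - 1)
print(richardson)   # far closer to 2.0 than either mesh level alone
```

Treating these signed, structured errors as symmetric noise throws away precisely the information that verification and error estimation are supposed to deliver.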
The path forward for useful uncertainty quantification for these systems must acknowledge the generally biased nature of the numerical error, and the anti-bias that the calibration often takes on. These two effects cancel each other out where calibration data exist, but when we extrapolate into a predictive scenario we can expect these uncertainties to grow. This can be seen in the standard view of uncertainty for regression, where the uncertainty grows quite large once data no longer tether the calibration. We might therefore expect numerical errors to remain similar in the extrapolated region, while the compensating modeling uncertainty grows away from the calibration data. This could have profound consequences for many of the simulations of near-equilibrium systems that dominate high-consequence investigations. To obtain a reasonable and defensible uncertainty for these predictions, the bias is absolutely essential. Without the bias in the uncertainty, and the effects of its cancellation in calibrated results, the uncertainty will be poorly and inaccurately represented. If the bias is not included in the assessment, we can expect the predicted uncertainty to be wildly out of character with the reality of the modeled prediction.
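The regression analogy above is quantitative. For simple linear regression, the standard error of the fitted mean at a point x is s·sqrt(1/n + (x − x̄)²/Sxx), which is smallest at the center of the data and grows without bound in extrapolation. A minimal sketch on hypothetical calibration data confined to [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration data confined to x in [0, 1].
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=x.size)

# Ordinary least-squares fit and residual standard deviation.
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
s = np.sqrt(resid @ resid / (x.size - 2))

def mean_prediction_se(x_new):
    # Standard error of the fitted mean at x_new for simple linear regression:
    # s * sqrt(1/n + (x_new - xbar)^2 / Sxx).  Grows away from the data.
    sxx = ((x - x.mean()) ** 2).sum()
    return s * np.sqrt(1.0 / x.size + (x_new - x.mean()) ** 2 / sxx)

# Inside the calibration range vs. extrapolated well outside it:
print(mean_prediction_se(0.5))   # smallest, at the data's center
print(mean_prediction_se(3.0))   # several times larger
```

The same qualitative growth should be expected for the calibrated compensation terms in a complex model, while the numerical bias they were canceling stays roughly the same, so the cancellation fails precisely where prediction matters.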
We all see only that which we are trained to see.
― Robert Anton Wilson
Oberkampf, William L., and Christopher J. Roy. Verification and validation in scientific computing. Cambridge University Press, 2010.
Roache, Patrick J. Verification and validation in computational science and engineering. Albuquerque, NM: Hermosa Publishers, 1998.
ASME. “V&V 10-2006: Guide for verification and validation in computational solid mechanics.” New York: American Society of Mechanical Engineers (ASME), 2006.