Experiment is the sole source of truth. It alone can teach us something new; it alone can give us certainty.

― Henri Poincaré

In looking at the dynamic surrounding verification and validation recently, I’ve noticed a very grim evolution of the narrative. Two things have happened to undermine the maturity of V&V. One, which I’ve spoken about in the past, is the tendency to drop verification and focus solely on validation. In the absence of verification, validation becomes rather strained and drifts toward calibration; assurances that one is properly solving the model one claims to be solving are unsupported by evidence. This is bad enough all by itself. The use of V&V as a vehicle for improving modeling and simulation credibility is threatened by this alone, but something worse looms even larger.

A more common and pervasive trend is the conflation of validation with uncertainty quantification. It has become very common for uncertainty quantification (UQ) to be defined as the whole of validation. To some extent this is fueled by a focus on high performance computing, where UQ provides a huge appetite for computational cycles and thus elicits lots of love and support in HPC circles. Validation must be about experiments and a broad cross section of uncertainties that may only be examined through a devotion to multi-disciplinary work and collaboration. One must always remember that validation can never be separated from measurements in the real world, whether experimental or observational. The experiment-simulation connection in validation is primal and non-negotiable.

There are three types of lies — lies, damn lies, and statistics.

― Benjamin Disraeli

A second part of the issue is the hot topic nature of UQ. UQ has become a buzzword and seems to be a hot issue in publishing and research. Saying you’re doing UQ seems to be a means of squeezing money out of funding agencies. In addition, UQ can be done relatively automatically and mechanically. Tools and techniques exist to enable UQ to be done without much deep thought even though it touches upon many deep technical topics. Actual validation is far harder and more holistic. The core of any work in validation is serious experimental expertise and hard-nosed comparison with simulations. The detailed nature of the experiment and its intrinsic errors and uncertainties is the key to any comparison. Without knowing the experimental uncertainty, any computational uncertainty is context free. My grumpy intellectual side would quip that validation requires thinking, and that leads people to avoid it because thinking is so hard. The deeper issue is that validation is complex and multi-disciplinary in nature, making it collaborative and difficult. Experts in a single discipline can do UQ, so it is an easy out.

Five percent of the people think;

ten percent of the people think they think;

and the other eighty-five percent would rather die than think.

― Thomas A. Edison

Thinking is hard work; that’s why so few do it.

― Albert Einstein

One of the biggest issues is the stunning incompleteness of UQ in general. Most commonly, UQ is done via an exploration of the variation of parameters in models. Complex models of reality have a lot of constants that are not known with great precision. Various techniques may be utilized to efficiently examine the variation in computational solutions due to changes in these parameters. Among the methods used are Markov chain Monte Carlo (MCMC), polynomial chaos, and other sampling methods. The results from this work are useful and sound, but they form a rather incomplete view of uncertainty. Even in these cases the sampling itself is often uncertain, with the assessment limited by the difficulty of determining uncertainty in high dimensional spaces. Modeling and simulation suffers from a host of other uncertainties not covered by these methodologies. For example, most simulations have some degree of numerical error that may be quite large, and numerous techniques exist for exploring its magnitude and nature. Many systems being modeled have some stochastic character or variability associated with them. Modeling assumptions are often made in simulating a system or experiment, and the solution may change greatly on the basis of these assumptions or approximations. A different computational modeler may make quite different assumptions and produce a different solution.
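To make the parametric flavor of this kind of UQ concrete, here is a minimal sketch of forward propagation of parameter uncertainty by plain Monte Carlo sampling. The model, the parameter values, and their error bars are hypothetical placeholders; a real study might use MCMC or a polynomial chaos surrogate instead of brute-force sampling of an expensive code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for an expensive simulation: the output depends on two
# uncertain physical constants, k and c (hypothetical placeholders).
def model(k, c):
    return k * np.exp(-c)

# Assumed parameter uncertainties, e.g. handbook values with quoted error bars.
n_samples = 10_000
k_samples = rng.normal(loc=2.0, scale=0.1, size=n_samples)   # k = 2.0 +/- 0.1
c_samples = rng.normal(loc=0.5, scale=0.05, size=n_samples)  # c = 0.5 +/- 0.05

# Forward propagation: evaluate the model at each sampled parameter set.
outputs = model(k_samples, c_samples)

# Summarize the parametric uncertainty in the output.
mean = outputs.mean()
lo, hi = np.percentile(outputs, [2.5, 97.5])
print(f"mean = {mean:.4f}, 95% interval = [{lo:.4f}, {hi:.4f}]")
```

The point is what the sketch leaves out: it says nothing about numerical error, model-form assumptions, intrinsic variability, or the experimental uncertainty the results must eventually be compared against.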

Judge a man by his questions rather than by his answers.

― Voltaire

If validation is to be done properly, a fairly complete accounting of modeling and simulation uncertainty is needed. One also needs to understand the experimental error and uncertainty with equal completeness. One must be acutely aware of the intrinsic lack of certainty in the estimation of uncertainty itself. The combination of the solutions and the sizes of each uncertainty puts a modeling and simulation result into proper context. Without knowledge of the uncertainties in each data source, the distance between solutions cannot be judged. For example, if the experimental precision is very good and the uncertainty is quite small, the simulation needs to be equally precise to be judged well. Conversely, a large experimental uncertainty would allow the model to be much looser and still be judged well. More critically, the experiment wouldn’t provide actionable evidence on research needs, and expert judgment would reign.
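As a rough illustration of how the two uncertainties set the context for a comparison, here is a small sketch that measures the simulation-experiment discrepancy in units of their combined uncertainty. The numbers and the quadrature combination are assumptions for illustration, not a prescribed validation metric.

```python
import math

def validation_ratio(sim_value, sim_unc, exp_value, exp_unc):
    """Discrepancy between simulation and experiment in units of combined uncertainty.

    Assumes independent, roughly Gaussian uncertainties that combine in
    quadrature; a ratio well below ~2 suggests agreement within the stated
    uncertainties, while a large ratio flags a real discrepancy.
    """
    combined = math.sqrt(sim_unc**2 + exp_unc**2)
    return abs(sim_value - exp_value) / combined

# Tight experiment: the simulation must be comparably precise to look good.
print(validation_ratio(sim_value=10.3, sim_unc=0.4, exp_value=10.0, exp_unc=0.1))  # ~0.73

# Loose experiment: the same model discrepancy passes easily,
# but the experiment no longer constrains the model.
print(validation_ratio(sim_value=10.3, sim_unc=0.4, exp_value=10.0, exp_unc=1.0))  # ~0.28
```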

The whole of the uncertainty provides an important source of scientific tension. If experimental uncertainty is small, it requires modeling and simulation to be equally precise to imply good results. It pushes the modeling to improve to meet the high standard of the experiment. If the modeling and simulation is very good, but the experiments have large uncertainty, it should push the experiments to improve because they fail to constrain and distinguish between models. By having a deep and complete understanding of uncertainty, we can define where we need to put resources to improve. We know what aspects of our current knowledge are most in need of attention and are limiting progress.

One must always be aware of the significant attraction of shortchanging uncertainty estimation. Doing a complete job of estimating uncertainty almost always results in an increase in the magnitude of uncertainty. This is where science as a fundamentally human enterprise comes into play. People would rather think uncertainties are small than large. Uncertainty is uncomfortable, and people shy away from discomfort. By under-estimating uncertainty, people unconsciously put themselves at ease through incomplete work. A more rigorous and complete approach almost always produces a discomforting result. When one combines discomfort with difficulty of accomplishment, the ingredients for a lack of effort and completeness become clear. With this temptation in mind, the tendency to take the easy route must be acknowledged.

The bottom line is that understanding uncertainty in a holistic manner can produce a useful and defensible context for science. It can allow us to understand where we need to improve our knowledge or practice. Without this accounting, the whole issue falls into the realm of relying upon expert judgment or politics to make the decisions. We fail to understand where our knowledge is weak and may overlook experiments necessary for understanding. We may have the right experiments, but cannot make measurements of sufficient accuracy. We might have models of insufficient complexity, or numerical solutions with too much numerical error. All of these spell out different demands for resource allocation.

Much of the tension is captured in these two quotes, although I hope Eddington was being ironic!

Never trust an experimental result until it has been confirmed by theory.

― Arthur Stanley Eddington

It doesn’t matter how beautiful your theory is … If it doesn’t agree with experiment, it’s wrong.

― Richard Feynman
