Sometimes the hardest thing any of us can hope for is finding the courage to be honest with ourselves.

― Kira Saito

Today I’m writing about the unfortunate practice of failing to address uncertainty, which implicitly sets its value to zero, the smallest possible value. This approach is pernicious and ubiquitous in computational science (and a lot of other science). It is a direct threat to progress and far too acceptable in practice. I wrote at length decrying this standard practice, but it remains the most common one in uncertainty quantification (https://wjrider.wordpress.com/2016/04/22/the-default-uncertainty-is-always-zero/). In a nutshell, when someone doesn’t know what the uncertainty is, they simply assign it a value of zero. We can do something better, but first this practice needs to be recognized for what it is: systematic and accepted ignorance.

The reasons for not estimating uncertainties are legion. Sometimes it is just too hard (or people are lazy). Sometimes the way a problem is examined ignores the uncertainty by construction (a common route to ignoring experimental variability and numerical error). In other cases the uncertainty is large, and it is far more comfortable to be delusional about its size. Small uncertainty is comforting and implies a level of mastery that exudes confidence. Large uncertainty is worrying and implies a lack of control. For this reason, getting away with choosing a zero uncertainty is a source of false confidence and unfounded comfort, but a deeply common human trait.

If we can manage to overcome the multitude of human failings underpinning the choice of the default zero uncertainty, we are still left with the task of doing something better. To be clear, the major impediment is recognizing that the zero estimate of uncertainty is not acceptable (most “customers” like the zero estimate because it seems better, even though it assuredly is not!). Most of the time we have a complete absence of information on which to base uncertainty estimates. In some cases we can avoid zero uncertainty estimates by being more disciplined and industrious; in other cases we can think about the estimation from the beginning of the study and build it into the work. In many cases we only have expert judgment to rely upon for estimation. In that case we need a very simple and well-defined technique for providing an estimate.

Learning is not the accumulation of knowledge, but rather, one thing only: understanding

― Donna Jo Napoli

The best way to explore estimates is the time-honored approach of bounding the uncertainty. One should be able to provide clear evidence that the uncertainty is both larger than one known value and smaller than another. This provides bounds for the magnitude of the uncertainty. Depending on the purpose of the study, these magnitudes allow the results to be used more appropriately. This, in turn, provides some sort of reasonable, evidence-based uncertainty to energize progress and underpin credibility. If the estimate of the smallest possible uncertainty is that ubiquitous zero, the estimate should be rejected out of hand. The uncertainty is never ZERO, not ever. Nothing is known with absolute certainty. If the uncertainty is very small, there should be very strong evidence to support that bold assertion. We do know some things extremely well, like Planck’s constant, but even it has an uncertainty of finite size.
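The bounding discipline above can be sketched in a few lines of code. This is a minimal illustration, not a prescribed implementation; the function name and the example numbers are assumptions chosen for clarity.

```python
def bounded_uncertainty(lower, upper):
    """Record an evidence-based interval [lower, upper] for an uncertainty.

    A lower bound of exactly zero is rejected out of hand: nothing is
    known with absolute certainty, so the interval must exclude zero.
    (Illustrative sketch; names and numbers are assumptions.)
    """
    if lower <= 0.0:
        raise ValueError("the uncertainty is never zero; supply a "
                         "positive, evidence-based lower bound")
    if upper < lower:
        raise ValueError("upper bound must be at least the lower bound")
    return (lower, upper)

# Example: a quantity known to better than 5% but no better than 0.1%.
bounds = bounded_uncertainty(0.001, 0.05)
```

The point of encoding the rule is that the ubiquitous zero estimate fails loudly instead of passing silently.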

The flip side of the lower bound is the upper bound for the uncertainty. Generally speaking, there will be a worst case to consider, or something more severe than the scenario at hand. Such large uncertainties are likely to be quite uncomfortable for those engaged in the work. This should be uncomfortable if we are doing things right. The goal of the exercise is not to minimize uncertainties, but to get things right. If such bounding uncertainties are unavailable, one does not have the right to do high-consequence decision-making with the results. This is the unpleasant aspect of the process: it requires delivering the worst case. To be more concrete about the need for this part of the bounding exercise: if you don’t know how bad the uncertainty is, you have no business using the results for anything serious. As stated before, the bounding process needs to be evidence-based; the assignment of lower and upper bounds for uncertainty should have a specific and defensible basis.

Belief can be manipulated. Only knowledge is dangerous.

― Frank Herbert

Once the bounds for the uncertainty are established, along with the associated evidence, some choices need to be made to use the information. To a large extent the most conservative choice is the easiest to defend, meaning that the upper bound for the uncertainty should be used. If the work is engaged in honestly, this would be pessimistic, perhaps in the extreme. If one thinks about things in a probabilistic sense, the bounds establish an interval for the potential uncertainty. This interval is most defensibly treated with a uniform distribution. For most cases, using the midpoint that averages the lower and upper bounds is a reasonable choice. If the application associated with the decision-making is extremely important, the upper bound, or something skewed in that direction, is probably advisable.
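The choices just described, a uniform distribution over the interval, the midpoint as a routine estimate, and the upper bound for high-consequence work, can be sketched as follows. The function names and the specific interval are illustrative assumptions.

```python
import random

def point_estimate(lo, hi, conservative=False):
    """Midpoint of the interval as the routine estimate;
    the upper bound when a conservative choice is warranted."""
    return hi if conservative else 0.5 * (lo + hi)

def sample_uncertainty(lo, hi, n=1000, seed=1):
    """Treat the unknown uncertainty as uniform on [lo, hi]."""
    rng = random.Random(seed)
    return [rng.uniform(lo, hi) for _ in range(n)]

mid = point_estimate(0.001, 0.05)                        # 0.0255
worst = point_estimate(0.001, 0.05, conservative=True)   # 0.05
draws = sample_uncertainty(0.001, 0.05)
```

The uniform distribution is the defensible default here precisely because the bounds are the only evidence in hand; any more structured distribution would claim knowledge the bounding exercise did not produce.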

To some extent this is a rather easy lift intellectually. Cultural difficulty is another thing altogether. The indefensible optimism associated with the default zero uncertainty is extremely appealing. It provides the user with a feeling that the results are good. People tend to feel that there is a single correct answer, and the smaller the uncertainty, the better they feel about that answer. Large uncertainty is associated with lack of knowledge and with low achievement. The precision usually communicated by the default, standard approach is highly seductive. It takes a great deal of courage to take on the full depth of uncertainty along with the honest admission of how much is not known. It is far easier to simply do nothing and assert far greater knowledge while providing no evidence for the assertion.

Uncertainty is a discomforting concept for people. Certainty is easy and comfortable, while uncertainty is difficult and carries doubt. It is problematic to consider the role of chance in events, and the fickle nature of reality. A great many important events occur largely by chance and could quite easily have turned out differently. Consider how often you encounter a near miss in life, something where danger seemed far too close and just missed you. When these events turn out disastrously they can be tragedies. How often have similar tragedies been barely averted? This same dynamic plays out in experiments that are repeated. An attempt is made to make the experiment reproducible. Occasionally something completely different unfolds. The repeated results are never exactly the same; there is always a small variation. These variations are the uncertainty, and depending on the experiment, they have a characteristic magnitude.

What happens when you do the experiment exactly once? The simplest thing to do is to consider this experiment a completely determined event with no uncertainty at all. This is the knee-jerk response: to treat the single event as utterly and completely deterministic, with no variation whatsoever. If the experiment were repeated, with every attempt made to reproduce it as perfectly as possible, it would turn out slightly differently. This comes from the myriad of details associated with the experiment that determine the outcome. Generally, the more complex and energetic the phenomenon being examined, the greater the variation (unless there are powerful forces attracting a very specific solution). There is always a variation; the only question is how large it is. It is never, ever identically zero. The choice to view the experiment as perfectly repeatable is usually an unconscious one with no credible basis. It is an incorrect and unjustified assumption that is usually made without a second thought. As such, the choice is unquestionably bad for science and engineering. In many cases this unconscious choice is dangerous and represents nothing more than wishful thinking.
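The point about repetition can be made with a toy computation: a simulated “experiment” run under nominally identical conditions, where the uncontrolled details are modeled as a small random perturbation. The model, the noise level, and the names are illustrative assumptions, not a claim about any particular experiment.

```python
import random

def run_experiment(rng, nominal=1.0, jitter=0.01):
    """One realization of a toy experiment: the nominal outcome plus a
    small perturbation standing in for the uncontrolled details."""
    return nominal * (1.0 + rng.gauss(0.0, jitter))

rng = random.Random(42)
results = [run_experiment(rng) for _ in range(20)]
spread = max(results) - min(results)
# The spread is small, but it is never identically zero.
```

A single run of this toy experiment reveals nothing about the spread; only repetition, or an honest bounding estimate standing in for repetition, does.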

to hope was to expect

― Jane Austen