As long as you’re moving, it’s easier to steer.

― Anonymous

**Just to be clear, this isn’t a good thing; it is a very bad thing!**

I have noticed that we tend to accept a phenomenally common and undeniably unfortunate practice where a failure to assess uncertainty means that the uncertainty reported (acknowledged, accepted) is identically ZERO. **In other words, if we do nothing at all, no work, no judgment, the work (modeling, simulation, experiment, test) is allowed to claim an uncertainty of ZERO.** This encourages scientists and engineers to continue to do nothing because this wildly optimistic assessment is a seeming benefit. If somebody does work to estimate the uncertainty, the reported uncertainty always gets larger as a result. This practice is desperately harmful to the practice and progress of science, and incredibly common.

Of course this isn’t the reality; the uncertainty actually has some value, but the lack of an assessed uncertainty is allowed to pass as ZERO. The problem is the failure of other scientists and engineers to demand an assessment instead of simply accepting the lack of due diligence, or of outright curiosity and common sense. **The reality is that when the lack of knowledge is this dramatic, the estimated uncertainty should actually be much larger to account for it.** Instead we create a cynical cycle where more information is greeted by more uncertainty rather than less. The only way to create a virtuous cycle is the acknowledgment that little information should mean large uncertainties, and that part of the reward for good work is greater certainty (and lower uncertainty).

This entire post is built on a rather simple observation with broad implications for how science and engineering are practiced today. **A great deal of work has this zero uncertainty writ large, i.e., there is no reported uncertainty at all, none, ZERO.** Yet, despite these demonstrable and manifest shortcomings, a gullible or lazy community readily accepts the incomplete work. Some of the better work has uncertainties associated with it, but almost always with varying degrees of incompleteness. Of course one should acknowledge up front that uncertainty estimation is always incomplete, but the degree of incompleteness can be spellbindingly large.

One way to deal with all of this uncertainty is to introduce a taxonomy of uncertainty where we can start to organize our lack of knowledge. **For modeling and simulation exercises I’m suggesting that three big bins for uncertainty be used: numerical, epistemic modeling, and modeling discrepancy.** Each of these categories has additional subcategories that may be used to organize the work toward a better and more complete technical assessment. The definition of each category conveys an idea of its texture, along with an explicit view of its intrinsic incompleteness.

**Numerical**: discretization (time, space, distribution), nonlinear approximation, linear convergence, mesh, geometry, parallel computation, roundoff, …

**Epistemic modeling**: black box parametric, Bayesian, white box testing, evidence theory, polynomial chaos, boundary conditions, initial conditions, statistical, …

**Modeling discrepancy**: data uncertainty, model form, mean uncertainty, systematic bias, boundary conditions, initial conditions, measurement, statistical, …

A very specific thing to note is that our ability to assess any of these uncertainties is always incomplete and inadequate. Admitting and deferring to this nature is extremely important in getting to a better state of affairs. **A general principle to strive for in uncertainty estimation is a state where the application of greater effort yields smaller uncertainties.** A way to achieve this is to penalize the uncertainty estimate to account for incomplete information. Statistical methods always account for sampling by shrinking the standard error in proportion to the square root of the number of samples, so there is an explicit benefit to gathering more data to reduce the uncertainty. This sort of measure is well suited to encourage a virtuous cycle of information collection. Instead, modeling and simulation accepts a poisonous cycle where doing more work implicitly penalizes the effort by increasing the reported uncertainty.
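The statistical contrast is easy to make concrete. As a minimal sketch (Python, with entirely made-up data), the standard error of a sample mean shrinks as the sample grows, so gathering more data is explicitly rewarded with a smaller reported uncertainty:

```python
import math
import random

random.seed(1)

def standard_error(samples):
    """Standard error of the mean: s / sqrt(n)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(var / n)

# Hypothetical measurements drawn from some underlying distribution.
population = [random.gauss(10.0, 2.0) for _ in range(10000)]

# More data -> smaller reported uncertainty: the virtuous cycle.
for n in (10, 100, 1000):
    print(n, standard_error(population[:n]))
```

This is exactly the incentive structure modeling and simulation lacks: here, effort (more samples) is visibly repaid with lower uncertainty.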

This whole post is predicated on the observation that we willingly enter into a system where effort increases the uncertainty. The direct opposite should be the objective: more effort should result in smaller uncertainty. **We also need to embrace a state where we recognize that the universe has an irreducible core of uncertainty.** Admitting that perfect knowledge and prediction are impossible will allow us to focus more acutely on what we can predict. This is really a situation where we are willfully ignorant and over-confident about our knowledge. One might trace some of the general issues with reproducibility and replicability in science to the same phenomenon. Any effort that purports to provide a perfect set of data perfectly predicting reality should be rejected as utterly ridiculous.

One of the next things to bring to the table is the application of expert knowledge and judgment to fill in where stronger technical work is missing. **Today expert judgment is implicitly present in the lack of assessment. It is a dangerous situation where experts simply assert that things are true or certain.** Instead of this expert judgment being directly identified, it is embedded in the results. A much better state of affairs is to ask for the uncertainty and the evidence for its value. If there has been work to assess the uncertainty, this can be provided. If instead the uncertainty is based on some sort of expert judgment or previous experience, the evidence can be provided in that form.

Now let us be more concrete about what this sort of evidence might look like within the expressed taxonomy for uncertainty. I’ll start with numerical uncertainty estimation, the most commonly and completely non-assessed uncertainty. Far too often a single calculation is simply shown and used without any discussion. In slightly better cases, the calculation will be given with some comments on the sensitivity of the results to the mesh and the statement that numerical errors are negligible on the mesh used. **Don’t buy it! This is usually complete bullshit!** In every case where no quantitative uncertainty is explicitly provided, you should be suspicious. In other cases, unless the reasoning is stated as being expertise or experience, it should be questioned. If it is stated as experiential, then the basis for this experience and its documentation should be given explicitly, along with evidence that it is directly relevant.

So what does a better assessment look like?

Under ideal circumstances you would use a model for the error (uncertainty) and do enough computational work to determine the model. **The model or models would characterize all of the numerical effects influencing results. Most commonly, the discretization error is assumed to be the dominant numerical uncertainty (again, evidence should be given).** If the error can be defined as depending on a single spatial length scale, the standard error model can be used; it requires three meshes to determine its coefficients. This best practice is remarkably uncommon in practice. If fewer meshes are used, the model is under-determined and information in the form of expert judgment should be added. I have worked on the case where only two meshes are used, and it is clear what to do in that case.
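A minimal sketch of that three-mesh error model, assuming a single length scale h and the standard Richardson-style form f(h) ≈ f_exact + C·h^p (the manufactured solution below is purely illustrative):

```python
import math

def richardson(f1, f2, f3, r):
    """Fit the error model f(h) ~ f_exact + C*h**p to three meshes.

    f1, f2, f3: solutions on fine, medium, coarse meshes with
    spacings h, r*h, r**2*h; r > 1 is the refinement ratio.
    Returns (observed order p, extrapolated value, fine-mesh error estimate).
    """
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)
    f_exact = f1 + (f1 - f2) / (r**p - 1)
    err = abs(f1 - f_exact)
    return p, f_exact, err

# Manufactured example: f(h) = 1.0 + 0.5*h**2, sampled at h, 2h, 4h,
# so the fitted order should come out near 2 and the limit near 1.0.
h = 0.1
f = lambda h: 1.0 + 0.5 * h**2
p, fx, err = richardson(f(h), f(2 * h), f(4 * h), 2.0)
print(p, fx, err)
```

The error estimate `err` is the quantitative discretization uncertainty that should accompany the fine-mesh result instead of an implicit ZERO.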

In many cases there is no second mesh to provide any basis for standard numerical error estimation. Far too many computational efforts provide a single calculation without any idea of the requisite uncertainties. In a nutshell, the philosophy in many cases is that the goal is to complete the best single calculation possible, and producing a calculation that can be assessed is not a priority. **In other words, the value proposition for computation is framed as the best single calculation without any idea of the uncertainty versus a lower quality simulation with a well-defined assessment of uncertainty.** Today the best single calculation is the default approach. This best single calculation then carries the default uncertainty estimate of exactly ZERO because nothing else is done. We need to adopt an attitude that rejects this approach because of the dangers of accepting a calculation without any quality assessment.

In the absence of data and direct work to support a strong technical assessment of uncertainty, we have no choice except to provide evidence via expert judgment and experience. A significant advance would be a general expectation that such assessments be provided and that the default ZERO uncertainty never be accepted. For example, there are situations where single experiments are conducted without any knowledge of how the results fit within any distribution of results. **The standard approach to modeling is a desire to exactly replicate the results, as if the experiment were a well-posed initial value problem instead of one realization from a distribution of results.** We end up chasing our tails in the process and inhibiting progress. Again we are left in the same boat as before: the default uncertainty in the experimental data is ZERO, and there is no serious attempt to examine the width and nature of the distribution in our assessments. The result is a lack of focus on the true nature of our problems and an inhibition of progress.

The problems continue in the assessment of various uncertainty sources. In many cases the practice of uncertainty estimation is viewed only as establishing the degree of uncertainty in modeling parameters used in various closure models. This is often termed epistemic uncertainty, or lack of knowledge. **This sometimes provides the only identified uncertainty in a calculation because tools exist for creating this data from calculations (often using a Monte Carlo sampling approach).** In other words, the parametric uncertainty is often presented as being all the uncertainty! Such studies are rarely complete and always fail to include the full spectrum of parameters in modeling. They are intrinsically limited by being embedded in a code that has other unchallenged assumptions.
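As an illustrative sketch of such a Monte Carlo parametric study (the closure model and its coefficient distribution here are entirely hypothetical): sample the uncertain closure parameter, push each sample through the model, and summarize the spread of the outputs.

```python
import random
import statistics

random.seed(0)

# Hypothetical closure model: drag = c_d * v**2, where the closure
# coefficient c_d is only known to within a distribution (epistemic).
def model(c_d, v=10.0):
    return c_d * v**2

# Propagate the parameter uncertainty through the model by sampling.
samples = [model(random.gauss(0.47, 0.05)) for _ in range(5000)]
mean = statistics.mean(samples)
sd = statistics.stdev(samples)
print(f"drag = {mean:.1f} +/- {sd:.1f}")
```

Note what this sketch does *not* capture: the model form itself (is drag really c_d·v²?) is held fixed, which is exactly the unchallenged-assumption problem described above.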

Parametric study is a virtue, but it ignores broader modeling issues almost completely. For example, the basic equations and models used in simulations are rarely, if ever, questioned. The governing equations, minus the closure, are assumed to be correct *a priori*. This is an extremely dangerous situation because these equations are not handed down from the creator on stone tablets; they are full of assumptions that should be challenged and validated with regularity. **Instead, such challenges are vanishingly rare, even in cases where model form is the dominant source of error.** When this is true, creating a predictive simulation is impossible. Take the application of the incompressible flow equations, which is rarely questioned. These equations make a number of stark approximations that are taken as truth almost without thought, and their various unphysical aspects are ignored. For compressible flow the equations rest on equilibrium assumptions, which are rarely challenged or studied.

A second area of systematic and egregious oversight by the community is aleatory, or random, uncertainty. This sort of uncertainty is overlooked by our modeling approach in a way that most people fail to appreciate. Our models and governing equations are oriented toward solving for the average, or mean, solution of a given engineering or science problem. This key distinction is usually muddled in modeling by adopting an approach that mixes a specific experimental event with a model focused on the average, yielding a model with an unclear separation of the general and the specific. **Few experiments or events being simulated are viewed in the context that they are simply a single instantiation of a distribution of possible outcomes.** The distribution of possible outcomes is generally completely unknown and not even considered. This leaves an important source of systematic uncertainty completely ignored.

It doesn’t matter how beautiful your theory is … If it doesn’t agree with experiment, it’s wrong.

― Richard Feynman

Almost every validation exercise tries to treat the experiment as a well-posed initial value problem with a single correct answer instead of a single possible realization from an unknown distribution. More and more, the nature of the distribution is the core of the scientific or engineering question we want to answer, yet our modeling approach is hopelessly stuck in the past because we are not framing the question thoughtfully. Often the key question is how likely a certain bad outcome is. **We want to know the likelihood of extreme events given a set of changes in a system.** Think about what a hundred-year flood looks like under a scenario of climate change, or the likelihood that a mechanical part will fail under normal usage. Instead, our fundamental models, being the average response of the system, are left to infer these extreme events from the average, often without any knowledge of the underlying distributions. This implies a need to change the fundamental approach we take to modeling, but we won’t until we start to ask the right questions and characterize the right uncertainties.
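A toy illustration of why the mean alone misleads (the flood model, threshold, and numbers are invented for the sketch): the average answer says the system is safe, but the tail of the distribution is the actual engineering question.

```python
import random

random.seed(2)

# Hypothetical peak flood levels, as a distribution of outcomes
# rather than a single "correct" answer.
threshold = 12.0  # say, a levee height
levels = [random.gauss(10.0, 1.5) for _ in range(100000)]

mean_level = sum(levels) / len(levels)
p_exceed = sum(1 for x in levels if x > threshold) / len(levels)

print(mean_level)  # the mean (~10) sits comfortably below the levee
print(p_exceed)    # yet a meaningful fraction of realizations overtop it
```

A model that predicts only `mean_level` answers the wrong question; `p_exceed` is what the decision actually depends on, and it requires knowing the distribution.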

One should avoid carrying out an experiment requiring more than 10 per cent accuracy.

― Walther Nernst

The key to progress is working toward best practices that avoid these pitfalls. First and foremost, a modeling and simulation activity should never allow itself to report, or even imply, that key uncertainties are ZERO. **If one has lots of data and makes the effort to assess it, uncertainties can be assigned through strong technical arguments. This is terribly, even embarrassingly, uncommon today. If one does not have the data or calculations to support uncertainty estimation, then significant amounts of expert judgment and strong assumptions are necessary to estimate uncertainties.** The key is to make a significant commitment to being honest about what isn’t known and to take a penalty for lack of knowledge and understanding. That penalty should be well grounded in evidence and experience. Progress in these areas is essential to making modeling and simulation a vehicle worthy of the hype we hear all the time.

Stagnation is self-abdication.

― Ryan Talbot

Modeling and simulation is looked at as one of the great opportunities for industrial, scientific, and engineering improvements for society. **Right now we are hinging our improvements on a mass of software being moved onto increasingly exotic (and powerful) computers.** Increasingly, the whole of our effort in modeling and simulation is being reduced to nothing but a software development activity. The holistic and integrating nature of modeling and simulation is being hollowed out and lost to a series of fatal assumptions. One thing computing power cannot change is how we practice our computational efforts; it can only enable those practices by making more computation possible. The key to fixing this dynamic is a commitment to understanding the nature and limits of our capability. Today we simply assume that our modeling and simulation has mastery, and that no such assessment is needed.

Computational capability does nothing to improve experimental science’s necessary role in challenging our theory. **Moreover, the whole sequence of necessary activities, model development and analysis, method and algorithm development, along with experimental science and engineering, is receiving almost no attention today.** These activities are absolutely necessary for modeling and simulation success, along with the sort of systematic practices I’ve elaborated on in this post. Without a sea change in attitude toward how modeling and simulation is practiced and what it depends upon, its promise as a technology will be stillborn, nullified by our collective hubris.

It is high time for those working to advance modeling and simulation to focus energy and effort where it is needed. **Today we are avoiding a rational discussion of how to make modeling and simulation successful, relying instead on hype to govern our decisions.** The goal should not be to assure that high performance computing is healthy, but rather that modeling and simulation (or big data analysis) is healthy. High performance computing is simply a necessary tool for these capabilities, not the soul of either. **We need to make sure the soul of modeling and simulation is healthy, rather than the corrupted mass of stagnation we have today.**

You view the world from within a model.

― Nassim Nicholas Taleb
