The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.

― Daniel J. Boorstin

All knowledge hurts.

― Cassandra Clare

Science is an important source and process for generating knowledge for humanity. Much of science is a core of well-known and well-tested knowledge about the universe. Most of the activity of working scientists is at the boundaries of our knowledge, working to push back the current limits on what is known. The scientific method is there to provide structure and order to the expansion of knowledge. We have well-chosen and well-understood ways to test proposed knowledge. One method of using and testing our theoretical knowledge in science is computational simulation. Within computational work, the use of verification and validation with uncertainty quantification is essentially the scientific method in action (https://wjrider.wordpress.com/2016/12/22/verification-and-validation-with-uncertainty-quantification-is-the-scientific-method/ ). One of the key activities in building our understanding of theory is the determination of uncertainties.

Unfortunately, what we call “uncertainty quantification” is only a small piece of the uncertainty that needs to be evaluated in testing knowledge. Too often people do only this narrow part of uncertainty quantification and falsely believe it is sufficient for science.

The basic premise is that we can test our knowledge via the systematic understanding of uncertainties. If we examine the uncertainty in an honest and complete manner, the limits of knowledge can be explored. Some uncertainty can be reduced via greater computational effort, some can be reduced with improved modeling, and some is irreducible. It is absolutely essential to understand, systematically, the nature of what we do and don’t know. Careful and honest study makes this clear, yet the strong tendency is to view all uncertainty as bad. This is not necessarily true. Uncertainty is completely unavoidable, and understanding the degree to which this is true can be a great unburdening. If an uncertainty is unavoidable, one can dispense with attempting to reduce it and simply figure out how to live with it.

If the uncertainty is irreducible and unavoidable, the problem of not assessing uncertainty and taking an implied value of ZERO for it becomes truly dangerous (https://wjrider.wordpress.com/2016/04/22/the-default-uncertainty-is-always-zero/). In this case there is an uncertainty that should be there, and instead of building this knowledge into our work, we mindlessly ignore it. Sometimes it is actually mindfully ignored, which is utterly contemptible. This situation is so common as to be laughable, but it actually provides the source for tragedy. Weather and climate provide innumerable situations where ignorance of uncertainty may prove deadly in rather commonly encountered circumstances. As systems become more complex and energetic, chaotic character becomes more acute and common. This chaotic character leads to solutions with natural variability. Understanding this natural variability is essential to understanding the system. Building this knowledge is the first step toward a capability to control and engineer it, and perhaps, if we are wise, to reduce it. If one does not understand what the variability is, it cannot be addressed via systematic engineering or accommodation.
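As a concrete illustration of how chaotic dynamics produces natural variability, here is a minimal sketch (a toy example, not tied to any particular weather or climate code) that evolves the chaotic logistic map from a cloud of nearly identical initial conditions. The parameter values and perturbation size are placeholders chosen for illustration.

```python
import numpy as np

def logistic_map(x0, r=3.9, n_steps=200):
    """Iterate the chaotic logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    x = x0
    for _ in range(n_steps):
        x = r * x * (1.0 - x)
    return x

rng = np.random.default_rng(0)
# An ensemble of initial conditions that differ only at the 1e-8 level.
x0 = 0.4 + 1e-8 * rng.standard_normal(10_000)
final = logistic_map(x0)

# Despite nearly identical starting points, the outcomes fill a broad range.
print(f"mean = {final.mean():.3f}, std = {final.std():.3f}")
print(f"5th-95th percentile range: "
      f"[{np.percentile(final, 5):.3f}, {np.percentile(final, 95):.3f}]")
```

The spread of final states is the natural variability of the system; no amount of mesh refinement or parameter tuning will make it go away.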

Courage doesn’t happen when you have all the answers. It happens when you are ready to face the questions you have been avoiding your whole life.

― Shannon L. Alder

This entire issue comes up frequently. In the case of experiments on complex systems, the problem arises from a lack of repeated experiments. Such systems are often complex and expensive, so tests are carried out once. Even though any thoughtful examination of the situation would conclude that the results of the experiment are likely – almost certainly – highly variable, the experiment is treated as a unique event. Computational simulations of the experiment are viewed the same way; the calculation should try to reproduce this single experiment. This is potentially a recipe for disaster. A healthier point of view would be to look at this experiment as a single instance drawn from a probability distribution. If the experiment were repeated, the results would differ. A truly predictive computational simulation would do exactly the same thing. Of course, the simulations themselves are designed to compute the average response of such an experiment (based on mean field equations). All of this conspires to create big problems should the experiment actually draw a low-probability outcome (i.e., from the tails of the distribution).
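A minimal sketch of this framing, assuming a hypothetical experiment whose outcome is normally distributed: a mean-field simulation is built to predict the ensemble average, while the one affordable test is a single draw that may sit anywhere in the distribution. All numbers below are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical experiment: outcomes vary naturally with mean 100.0 and std 8.0.
true_mean, true_std = 100.0, 8.0

# A mean-field simulation is designed to predict the ensemble average.
simulation_prediction = 100.0

# The single experiment we can afford is one draw from this distribution.
single_experiment = rng.normal(true_mean, true_std)

# Treating the one test as "truth" yields an apparent model error...
print(f"apparent error vs. single test: "
      f"{abs(simulation_prediction - single_experiment):.2f}")

# ...while an ensemble of repeats (if we could afford them) tells the full story.
repeats = rng.normal(true_mean, true_std, size=1000)
print(f"ensemble mean = {repeats.mean():.2f}, ensemble std = {repeats.std():.2f}")
print(f"single test sits at the "
      f"{100.0 * (repeats < single_experiment).mean():.0f}th percentile")
```

The apparent disagreement with the single test says as much about where that test happened to fall in the distribution as it does about the model.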

To address this systemic issue we need to reframe both experimental and theoretical practice. If an experiment fails to give repeatable results, we need to work to unveil the statistical nature of the results, especially for complex, high-consequence systems. Theoretical models need to carry the same statistical character mindfully rather than producing variability by happenstance (mean field models produce variability almost serendipitously). Computational simulations follow the theory by construction, and great benefits to our knowledge and understanding would come from more structurally and mindfully building the variability into the theory. Failure to address this issue systematically is an ever-growing limit on science. We have a major scientific gap open in front of us, and we are failing to acknowledge and attack it with our scientific tools. It is simply ignored, almost by fiat. Changing our perspective would make a huge difference in experimental and theoretical science and pull our collective heads out of the sand on this matter.

Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less.

― Marie Curie

At a deeper and more fundamental level, the whole exploration of the true uncertainty is a search for understanding. We seek to define our level of precision in modeling something by systematically examining the levels of certainty (by proxy of studying uncertainty). We need to understand the quality of our knowledge, and a complete survey of uncertainty is a path to this end. For computational modeling there are three broad categories of uncertainty: the model itself, the model’s numerical solution, and the experimental fidelity used to grant confidence to the model. Each of these uncertainties can in turn be broken down into more detailed pieces. For example, when comparing with experiment we want to assess the model itself rather than the error in the numerical solution of the model. It is important to choose some structure for the uncertainty and commit to an estimate of every portion of that structure. One should never take a zero magnitude for the uncertainty of any structural element by ignoring it.
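One common way to commit to every element of that structure is a root-sum-square combination of the numerical, input, and experimental contributions, in the spirit of the ASME V&V 20 validation approach. The sketch below uses placeholder magnitudes purely for illustration; the point is that no term is silently set to zero.

```python
import numpy as np

# Placeholder estimates for each element of the uncertainty structure.
u_num   = 0.8   # numerical (discretization, iterative, round-off) uncertainty
u_input = 1.5   # parametric / input uncertainty propagated through the model
u_exp   = 0.6   # experimental measurement uncertainty

# Root-sum-square combination into a validation uncertainty.
u_val = np.sqrt(u_num**2 + u_input**2 + u_exp**2)

comparison_error = 2.1  # |simulation - experiment| for the quantity of interest
print(f"validation uncertainty u_val = {u_val:.2f}")
print(f"comparison error E = {comparison_error:.2f}")
if abs(comparison_error) > u_val:
    print("E exceeds u_val: evidence of model-form error beyond the stated uncertainties")
else:
    print("E lies within u_val: model-form error is indistinguishable from the noise")
```

The comparison between the error and the combined uncertainty is what lets us say whether the model itself, rather than the numerics or the data, is the problem.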

Any fool can know. The point is to understand.

― Albert Einstein

One of the clearest ways to undermine this quest for understanding the boundaries of our knowledge is to ignore a key uncertainty. There are several usual suspects for willful uncertainty ignorance. Probably the most common uncertainty to be willfully ignorant of is numerical error. The key numerical error is discretization error, which arises from the need to make a continuous problem discrete and computable. The basic premise of computing is that more discrete degrees of freedom should produce a more accurate answer. By examining the rate at which this happens, the magnitude of the error can be estimated. Other estimates can be had through making some assumptions about the solution and relating the error to the nature of the solution (like the magnitude of estimated derivatives). Other, generally smaller, numerical errors arise from solving systems of equations to a specified tolerance, parallel consistency error, and round-off error. In most circumstances these are much smaller than discretization error, but they are still non-zero.
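A standard way to estimate discretization error from the observed convergence rate is a grid-refinement study with Richardson extrapolation. The sketch below assumes three solutions of a quantity of interest on systematically refined meshes with a constant refinement ratio; the numerical values are placeholders.

```python
import numpy as np

# Quantity of interest computed on three meshes refined by a constant ratio r.
r = 2.0
f_coarse, f_medium, f_fine = 1.112, 1.053, 1.028   # placeholder values

# Observed order of convergence from the three solutions.
p = np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)

# Richardson extrapolation toward the zero-mesh-size limit.
f_exact_est = f_fine + (f_fine - f_medium) / (r**p - 1.0)

# Estimated discretization error on the finest mesh.
error_fine = abs(f_fine - f_exact_est)
print(f"observed order p          = {p:.2f}")
print(f"extrapolated value        = {f_exact_est:.4f}")
print(f"fine-grid error estimate  = {error_fine:.4f}")
```

If the observed order departs badly from the expected order of the scheme, that itself is a warning that the error estimate (and the calculation) deserves scrutiny.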

Experimental observations are only experience carefully planned in advance, and designed to form a secure basis of new knowledge.

― Sir Ronald Fisher

A second category of uncertainty that is highly prone to being ignored is experimental variability. Often this is the direct consequence of doing only a single experiment. Rather than knowing the physics of the problem well enough to conclude that the experiment will be highly variable, this is ignored, and we endeavor to model the single experiment as a unique, well-determined event. The result of this set of unjustified assumptions is wholesale ignorance of a systematic and irreducible uncertainty. This is truly scientific malpractice. Addressing this shortcoming should be the focus of significant effort experimentally, as well as in modeling and its numerical solution. It is a very large and largely unaddressed issue in science.

In addition to ignoring the intrinsic variability in the experiments, the more classical and even less excusable uncertainty often ignored is measurement error. This error is always present, even in cases where the experiment is well posed and utterly reproducible. Measurements always have finite precision and some degree of error and uncertainty. This finite value should always be reported as part of the experimental comparison, even when the uncertainty is small. In a deep sense a small uncertainty is the more important one to report because it lends credence to the claim that the data are high quality. Not reporting the uncertainty simply leaves this vague and undefined.
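A minimal sketch of reporting a measurement with its uncertainty rather than as a bare number, combining a repeatability estimate with a stated instrument contribution by root-sum-square; the readings and instrument accuracy are invented placeholders.

```python
import numpy as np

readings = np.array([4.98, 5.03, 5.01, 4.99, 5.02])  # repeated readings (placeholder)
instrument_u = 0.02                                    # stated instrument accuracy (placeholder)

mean = readings.mean()
# Standard uncertainty of the mean from repeatability...
u_repeat = readings.std(ddof=1) / np.sqrt(readings.size)
# ...combined with the instrument contribution by root-sum-square.
u_total = np.sqrt(u_repeat**2 + instrument_u**2)

print(f"measured value: {mean:.3f} +/- {u_total:.3f}")
```

Reporting the value this way makes the quality of the data explicit instead of leaving it vague and undefined.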

The last area of uncertainty is modeling uncertainty. In the vast majority of cases this will be the largest source of uncertainty, though of course there are exceptions. It has three major components: the choice of the overall discrete model, the choice of the models or equations themselves, and the coefficients defining the specific model. The first two areas are usually the largest part of the uncertainty and, unfortunately, the most commonly ignored in assessments. The last area is the most commonly addressed because it is amenable to automatic evaluation. Even in this case the work is generally incomplete and lacks full disclosure of the uncertainty.

Today, in examining modeling uncertainty, we most often attack the least important piece systematically, while the more important modeling uncertainties are ignored. The “easy” uncertainty to attack is the coefficients in the model. This can be achieved using well-developed methods such as MCMC (Markov chain Monte Carlo). One defines a set of parameters to be varied and ranges for the variation. The calculations can be repeated using values drawn to efficiently sample the probability space of the calculation and produce the uncertainty. This sampling is done over a very high-dimensional space and carries significant errors. More often than not, the degree of error associated with the undersampling is not included in the results. It most certainly should be.
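A minimal sketch of the parametric piece, using plain Monte Carlo propagation rather than a full MCMC calibration: the model, parameter names, and ranges below are placeholders chosen for illustration. The point is that the sampling error on the estimated statistics should be reported alongside the parametric spread itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(a, b):
    """Placeholder model response for parameters a and b."""
    return a * np.exp(-b)

# Parameter ranges chosen by the analyst (placeholders).
n = 2000
a = rng.uniform(0.9, 1.1, n)
b = rng.uniform(0.4, 0.6, n)

q = model(a, b)
q_mean, q_std = q.mean(), q.std(ddof=1)

# The sampling (undersampling) error on the estimated mean should be reported too.
sampling_error = q_std / np.sqrt(n)

print(f"QoI mean = {q_mean:.4f} +/- {sampling_error:.4f} (sampling error)")
print(f"parametric spread (std) = {q_std:.4f}")
```

In a genuine study the uniform draws would be replaced by a posterior calibrated against data (e.g., via MCMC), but the obligation to report the sampling error remains the same.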

The other two uncertainties are generally larger and thus more important to characterize. Unfortunately, neither is amenable to the sort of turnkey, black-box approach that parametric uncertainty allows. As a result these uncertainties are usually completely ignored. These two areas of uncertainty are closely related. Any complex problem can be modeled in a variety of ways. An analyst ends up making innumerable assumptions and choices in the course of modeling a complex problem. One choice is the code used for the analysis and the specific settings within the code. Beyond this there are choices about how the problem is meshed, boundary conditions, initial conditions, submodels to close the equations, and the analysis of results. Each and every one of these choices can change the results. It is very uncommon to see a problem solved in different or remotely independent ways. As a result the uncertainty from the modeling and analysis is usually completely hidden.
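One simple, non-turnkey way to expose this hidden uncertainty is to solve the same problem under several defensible modeling choices and report the spread. The sketch below uses invented labels and values as placeholders; the labels stand in for different closures, meshes, or codes.

```python
import numpy as np

# Results for the same quantity of interest under different defensible choices
# (placeholder values standing in for different closures, meshes, or codes).
results = {
    "closure A, mesh 1": 3.42,
    "closure A, mesh 2": 3.51,
    "closure B, mesh 1": 3.18,
    "code X, defaults":  3.60,
}

values = np.array(list(results.values()))
center = values.mean()
spread = values.max() - values.min()

print(f"central value     = {center:.2f}")
print(f"model-form spread = {spread:.2f} (range across modeling choices)")
```

The spread is a crude but honest lower bound on the model-form uncertainty; reporting it is far better than the implicit zero that silence provides.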

Science, my boy, is made up of mistakes, but they are mistakes which it is useful to make, because they lead little by little to the truth.

― Jules Verne

To truly test our knowledge of a topic we must be open to a full and honest evaluation of the uncertainties. Knowledge and uncertainty are two sides of the same coin. If you don’t know the uncertainty, you don’t know the extent of your knowledge. Too often we only do the assessment that is easy. In this assessment strategy we also implicitly choose uncertainty estimates of ZERO for sources we know are uncertain but difficult to assess. This is a pernicious and dangerous practice. Due diligence and responsibility dictate that some sort of uncertainty be estimated for all sources. If we cannot estimate these uncertainties in a credible and knowledgeable manner, we have no business modeling, and our experiments are not yielding their full value. The only responsible act in this case is to produce work that would make such a bounding assessment possible.

Negative results are just what I want. They’re just as valuable to me as positive results. I can never find the thing that does the job best until I find the ones that don’t.

― Thomas A. Edison
