Science is not about making predictions or performing experiments. Science is about explaining.

― Bill Gaede

We would be far better off removing the word “predictive” as a focus for science. If we replaced the emphasis on prediction with a focus on explanation and understanding, our science would improve overnight. The sense that our science must predict carries connotations that are unrelentingly counter-productive to the conduct of science. The side-effects of this emphasis on predictivity undermine the scientific method at every turn. The goal of understanding nature and explaining what happens in the natural world is consistent with the conduct of high-quality science. In many respects large swaths of the natural world are unpredictable in highly predictable ways. Our weather is a canonical example of this: it becomes unpredictable in a bounded manner as time scales grow longer. Science that has focused on understanding and explanation has revealed these truths. Attempting to focus on prediction under some circumstances is both foolhardy and technically impossible. As such, prediction needs to be entered into carefully and thoughtfully under well-chosen circumstances. We also need the freedom to find out that we are wrong and incapable of prediction. Ultimately, we need to find the limits on prediction and work to improve or accept those limits.
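To make the bounded-unpredictability point concrete, here is a minimal sketch using the Lorenz-63 system, a classic toy model of atmospheric convection rather than the weather itself (the model choice, parameters, and perturbation size below are my own illustrative assumptions). Two initial conditions differing by one part in a million diverge rapidly, yet both trajectories remain on the same bounded attractor.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system, a toy model of convection."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.005, steps=4000):
    """Plain fourth-order Runge-Kutta time integration; returns the trajectory."""
    traj = [state]
    for _ in range(steps):
        k1 = lorenz63(state)
        k2 = lorenz63(state + 0.5 * dt * k1)
        k3 = lorenz63(state + 0.5 * dt * k2)
        k4 = lorenz63(state + dt * k3)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj.append(state)
    return np.array(traj)

# Two initial conditions differing by one part in a million.
a = integrate(np.array([1.0, 1.0, 1.0]))
b = integrate(np.array([1.0 + 1e-6, 1.0, 1.0]))

# The separation grows roughly exponentially and then saturates: the flow is
# unpredictable in detail, but the trajectories stay on a bounded attractor.
separation = np.linalg.norm(a - b, axis=1)
print(separation[::500])
```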

“Predictive Science” is mostly just a buzzword. We put it in our proposals to improve the chances of hitting funding. A slightly less cynical take would treat predictivity as a completely aspirational objective for science. In the context of our current world, we strive for predictive science as a means of confirming our mastery over a scientific subject. In this context the word predictive implies that we understand the science well enough to foresee outcomes. We should also practice some deep humility in what this means. Predictivity is always a limited statement, and these limitations should always be firmly in mind. First, predictions are limited to some subset of what can be measured and fail for other quantities. The question is whether the predictions are correct for what matters. Secondly, the understanding is always waiting to be disproved by a reality that is more complex than we realize. Good science is acutely aware of these limitations and actively probes the boundary of our understanding.

In the modern world we constantly have new tools to help expand our understanding of science. Among the most important of these new tools is modeling and simulation. Modeling and simulation is simply an extension of the classical scientific approach. Computers allow us to solve our models in science more generally than classical means allow. This has increased the importance and role of models in science. With computational solutions we can envision more complex models having more general solutions. Part of this power comes with some substantial responsibility; computational simulations are highly technical and difficult. They come with a host of potential flaws, errors and uncertainties that cloud results and need focused assessment. Getting the science of computation correct, and assessed well enough to play a significant role in the scientific enterprise, requires a broad multidisciplinary approach with substantial rigor. Playing a broad integrating role in predictive science is verification and validation (V&V). In a nutshell, V&V is the scientific method as applied to modeling and simulation. Its outcomes are essential for making any claims regarding how predictive your science is.

Experiment is the sole source of truth. It alone can teach us something new; it alone can give us certainty.

― Henri Poincaré

We can take a moment to articulate the scientific method and then restate it in a modern context using computational simulation. The scientific method involves making hypotheses about the universe and testing those hypotheses against observations of the natural world. One of the key ways to make observations is through experiments, where the measurements of reality are controlled and focused to elucidate nature more clearly. These hypotheses or theories usually produce models of reality, which take the form of mathematical statements. These models can be used to make predictions about what an observation will be; when the observation matches, the hypothesis is confirmed. If the observations are in conflict with the model’s predictions, the hypothesis and model need to be discarded or modified. Over time observations become more accurate, often showing the flaws in models. This usually means a model needs to be refined rather than thrown out. This process is the source of progress in science. In a sense it is a competition between what we observe and how well we observe it, and the quality of our models of reality. Predictions are the crucible where this tension can be realized.

The quest for absolute certainty is an immature, if not infantile, trait of thinking.

― Herbert Feigl

One of the best ways to understand how to do predictive science in the context of modeling and simulation is a simple realization: V&V is basically a methodology that encodes the scientific method into modeling and simulation. All of the content of V&V is aimed at assuring that science is being done with a simulation and that we aren’t fooling ourselves. The first half of verification is making sure the implementation of the model and its solution are credible and correct. The second half of verification is estimating the errors in the numerical solution of the model. We need to assess the numerical uncertainty and the degree to which it clouds the model’s solution.
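As one concrete flavor of the first half, here is a minimal sketch of order-of-accuracy verification on a problem with a known exact solution (the problem, discretization, and mesh sequence below are illustrative assumptions, not a prescription). If the observed order approaches the formal order of the method, that is evidence the implementation is correct.

```python
import numpy as np

def solve_poisson(n):
    """Solve -u'' = f on (0,1) with u(0) = u(1) = 0 by second-order central
    differences; f is manufactured so the exact solution is u(x) = sin(pi x)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.pi**2 * np.sin(np.pi * x[1:-1])
    # Tridiagonal system for the interior unknowns.
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f)
    return x, u

errors = []
for n in [16, 32, 64, 128]:
    x, u = solve_poisson(n)
    errors.append(np.max(np.abs(u - np.sin(np.pi * x))))

# Observed order of accuracy between successive meshes; it should approach
# the formal order (2) if the implementation is correct.
for e_coarse, e_fine in zip(errors[:-1], errors[1:]):
    print(np.log2(e_coarse / e_fine))
```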

Validation is then the structured comparison of the simulated model’s solution with observations. Validation is not something that is completed, but rather an ongoing assessment of the work. At the end of the validation process evidence has been accumulated as to the state of the model. Is the model consistent with the observations? The uncertainties in the modeling and simulation process, along with the uncertainties in the observations, may lead to the conclusion that the model is correct enough to be used. In many cases the model is found to be inadequate for the purpose and needs to be modified or changed completely. This process is simply the hypothesis testing so central to the conduct of science.
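A minimal sketch of what such a comparison can look like, assuming we have replicate measurements, a handful of simulation samples, and a numerical error estimate in hand (all of the numbers below are invented purely for illustration, and the consistency check is a crude one, not a formal validation metric):

```python
import numpy as np

# Hypothetical data: replicate measurements of a quantity of interest and
# simulation predictions sampled under input uncertainty.
experiment_replicates = np.array([0.96, 1.02, 0.99, 1.05, 0.97])
simulation_samples = np.array([1.08, 1.11, 1.06, 1.09, 1.12, 1.07])
numerical_error_estimate = 0.02  # from solution verification on this problem

exp_mean = experiment_replicates.mean()
exp_unc = experiment_replicates.std(ddof=1) / np.sqrt(len(experiment_replicates))
sim_mean = simulation_samples.mean()
sim_unc = simulation_samples.std(ddof=1)

discrepancy = abs(sim_mean - exp_mean)
combined_unc = np.sqrt(exp_unc**2 + sim_unc**2 + numerical_error_estimate**2)

print(f"discrepancy = {discrepancy:.3f}, combined uncertainty = {combined_unc:.3f}")
if discrepancy > 2.0 * combined_unc:
    print("Model is inconsistent with the observations at this condition.")
else:
    print("Model is consistent with the observations, within stated uncertainties.")
```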

Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.

― George Box

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V and temperature T of an “ideal” gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules.

― George Box

The George Box maxim about all models being wrong, but some being useful, is important and key in the conduct of V&V. It is also central to modeling and simulation’s most important perspective, the constant necessity for improvement. Every model is a mathematical abstraction that has limited capacity for explaining nature. At the same time the model has a utility that may be sufficient for explaining everything we can measure. This does not mean that the model is right, or perfect; it means the model is adequate. The creative tension in science is the narrative arc of refining hypotheses and models of reality, or improving measurements and experiments to more acutely test the models. V&V is a process for achieving this end in computational simulations. Our goal should always be to find inadequacy in models and define the demand for improvement. If we do not have the measurements to demonstrate a model’s incorrectness, the experiments and measurements need to improve. All of this serves progress in science in a clear manner.

The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

― Sir Arthur Stanley Eddington

Let’s take a well-regarded and highly accepted model, the incompressible Navier-Stokes equations. This model is thought to largely contain the proper physics of fluid mechanics, most notably turbulence. Perhaps this is true, although our lack of progress in turbulence might indicate that something is amiss. I will state without doubt that the incompressible Navier-Stokes equations are wrong in some clear and unambiguous ways. The deepest problem with the model is incompressibility. Incompressible fluids do not exist, and the form of the mass equation, with its divergence-free velocity field, implies several deeply unphysical things. All materials in the universe are compressible and support sound waves, and the divergence-free constraint contradicts this truth. Incompressible flow is largely divorced from thermodynamics, and materials are thermodynamic. The system of equations violates causality rather severely: sound waves travel at infinite speed. All of this is true, but at the same time this system of equations is undeniably useful. There are large categories of fluid physics that it explains quite remarkably. Nonetheless the equations are also obviously unphysical. Whether or not this unphysical character is consequential should be something people keep in mind.
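For concreteness, the model in question is written below in a standard constant-density form (the notation is mine). The first equation is the divergence-free constraint that removes sound waves and thermodynamics from the model; the pressure is no longer a thermodynamic variable but a Lagrange multiplier determined instantaneously by an elliptic equation, which is exactly where the infinite signal speed shows up.

```latex
\begin{aligned}
\nabla \cdot \mathbf{u} &= 0,\\
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  &= -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u},
\qquad
\nabla^{2} p = -\rho\,\nabla\cdot\!\left[(\mathbf{u}\cdot\nabla)\,\mathbf{u}\right].
\end{aligned}
```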

It is impossible to trap modern physics into predicting anything with perfect determinism because it deals with probabilities from the outset.

― Arthur Stanley Eddington

In conducting predictive science one of the most important things you can do is make a prediction. While you might start with something where you expect the prediction to be correct (or correct enough), the real learning comes from making predictions that turn out to be wrong. It is wrong predictions that will teach you something. Sometimes the thing you learn is something about your measurement or experiment that needs to be refined. At other times the wrong prediction can be traced back to the model itself. This is your demand and opportunity to improve the model. Is the difference due to something fundamental in the model’s assumptions? Or is it simply something that can be fixed by adjusting the closure of the model? Too often we view failed predictions as problems when instead they are opportunities to improve the state of affairs. I might posit that even if you succeed with a prediction, it is a call to improvement; either improve the measurement and experiment, or the model. Experiments should set out to show flaws in the models. If this is done, the model needs to be improved. Successful predictions are simply not vehicles for improving scientific knowledge; they tell us we need to do better.

When the number of factors coming into play in a phenomenological complex is too large scientific method in most cases fails. One need only think of the weather, in which case the prediction even for a few days ahead is impossible.

― Albert Einstein

In this context we can view predictions as things that at some level we want to fail at. If the prediction is too easy, the experiment is not sufficiently challenging. Success and failure exist on a continuum. For simple enough predictions our models will always work, and for complex enough predictions the models will always fail. The trick is finding the spot where the predictions are on the edge of credibility, and progress is needed and ripe. Too often the mindset is taken that predictions need to be successful. An experiment that is easy to predict is not a success; it is a waste. I would rather see predictions focused at the edge of success and failure. If we are interested in making progress, predictions need to fail so that models can improve. By the same token a successful prediction indicates that the experiment and measurement need to be improved to more properly challenge the models. The real art of predictive science is working at the edge of our predictive modeling capability.

A healthy focus on predictive science with a taste for failure produces a strong driver for lubricating the scientific method and successfully integrating modeling and simulation as a valuable tool. Prediction requires two sides of science to work in concert: the experimental observation of the natural world, and the modeling of the natural world via mathematical abstraction. The better the observations and experiments, the greater the challenge to models. Conversely, the better the model, the greater the challenge to observations. We need to tee up the tension between how we sense and perceive the natural world, and how we understand that world through modeling. It is important to examine where the ascendancy in science exists. Are the observations too good for the models? Or can no observation challenge the models? This tells us clearly where we should prioritize.

We need to understand where progress is needed to advance science. We need to take advantage of technology in moving ahead in either vein. If observations are already quite refined, but new technology exists to improve them, it behooves us to take advantage of it. By the same token, modeling can be improved via new technology such as solution methods, algorithmic improvements and faster computers. What is lacking from the current dialog is a clear focus on where the progress imperative exists. A part of integrating predictive science well is determining where the progress is most needed. We can bias our efforts toward where the progress is most needed while keeping opportunities to make improvements in mind.

The important word I haven’t mentioned yet is “uncertainty”. We cannot have predictive science without dealing with uncertainty and its sources. In general, we systematically or perhaps even pathologically underestimate how uncertain our knowledge is. We like to believe our experiments and models are more certain than they actually are. This is really easy to do in practice. For many categories of experiments, we ignore sources of uncertainty and simply get away with an estimate of zero for that uncertainty. If we do a single experiment, we never have to explicitly confront the fact that the experiment isn’t completely reproducible. On the modeling side we treat the particular experiment as something to be modeled precisely even if the phenomena of interest are highly variable. This is common and a source of willful cognitive dissonance. Rather than confront this rather fundamental uncertainty, we willfully ignore it. We do not run replicate experiments and measure the variation in results. We do not subject the modeling to reasonable variations in the experimental conditions and check the variation in the results. We pretend that the experiment is completely well-posed, and the model is too. In doing this we fail at the scientific method rather profoundly.
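A minimal sketch of the modeling half of this habit, assuming a hypothetical response function standing in for an expensive simulation and an assumed 3% shot-to-shot variability in one experimental condition (both assumptions are mine, purely for illustration): sampling the condition rather than running only at the nominal value reveals a spread that a single calculation hides.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(drive_pressure):
    """Stand-in for an expensive simulation: here just a nonlinear response.
    In practice this would be a full calculation run at the sampled condition."""
    return 3.2 * drive_pressure**1.5

# The experiment quotes a nominal drive condition, but it varies shot to shot.
nominal = 1.00
shot_to_shot_sd = 0.03  # assumed 3% variability in the as-fielded condition

samples = rng.normal(nominal, shot_to_shot_sd, size=1000)
outputs = model(samples)

print(f"output at nominal condition: {model(nominal):.3f}")
print(f"output over plausible conditions: "
      f"{outputs.mean():.3f} +/- {outputs.std(ddof=1):.3f}")
# Comparing one nominal calculation to one experiment hides this spread entirely.
```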

Another key source of uncertainty is numerical error. It is still common to present results without any sense of the numerical error. Typically, the mesh used for the calculation is asserted to be fine enough without any evidence. More commonly the results are simply given without any comment at all. At the same time the nation is investing huge amounts of money in faster computers, implicitly assuming a priori that faster computers yield better solutions. This entire dialog often proceeds without any support from evidence. It is 100% assumption. When one examines these issues directly there is often a large amount of numerical error that is being ignored. Numerical error is small in simple problems without complications. For real problems with real geometry and real boundary conditions with real constitutive models, the numerical errors are invariably significant. One should expect some evidence to be presented regarding their magnitude, and be suspicious if it is not there. Too often we simply give simulations a pass on this detail and fail due diligence.
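The standard way to generate that evidence without an exact solution is systematic mesh refinement with Richardson extrapolation and a grid convergence index. Here is a minimal sketch using three invented values of a quantity of interest and a refinement ratio of two (the numbers and the factor of safety are illustrative, not from any particular calculation).

```python
import numpy as np

# Quantity of interest from three systematically refined meshes (coarse to
# fine) with a constant refinement ratio r. The values are illustrative.
f_coarse, f_medium, f_fine = 1.220, 1.160, 1.135
r = 2.0

# Observed order of convergence implied by the three solutions.
p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)

# Richardson extrapolation to an estimated mesh-converged value, and the
# implied error remaining in the finest-mesh solution.
f_converged_est = f_fine + (f_fine - f_medium) / (r**p - 1.0)
error_fine = abs(f_fine - f_converged_est)

# Grid convergence index with a customary factor of safety of 1.25.
gci_fine = 1.25 * abs(f_fine - f_medium) / (abs(f_fine) * (r**p - 1.0))

print(f"observed order p = {p:.2f}")
print(f"estimated converged value = {f_converged_est:.4f}")
print(f"estimated error on fine mesh = {error_fine:.4f}")
print(f"GCI (fine mesh) = {100 * gci_fine:.2f}%")
```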

Truth has nothing to do with the conclusion, and everything to do with the methodology.

― Stefan Molyneux

In this sense the entirety of V&V is a set of processes for collecting evidence about credibility and uncertainty. In one respect verification is mostly an exercise in collecting evidence of credibility and due diligence for quality in computational tools. Are the models, codes and methods implemented in a credible and high-quality manner? Has the code development been conducted in a careful manner where the developers have checked and done a reasonable job of producing code without obvious bugs? Validation could be characterized as collecting uncertainties. We find upon examination that many uncertainties are ignored in both computational and experimental work. Without these uncertainties and the evidence surrounding them, the entire practice of validation is untethered from reality. We are left to investigate through assumption and supposition. This sort of validation practice has a tendency to simply regress to commonly accepted notions. In such an environment models are usually accepted as valid, and the evidence is often skewed toward that preordained conclusion. Without care and evidence, the engine of progress for science is disconnected.

In this light we can see that V&V is simply a structured way of collecting the evidence necessary for the scientific method. Collecting this evidence is difficult and requires assumptions to be challenged. Challenging assumptions is courting failure. Making progress requires failure and the invalidation of models. It requires doing experiments that we fail to be able to predict with existing models. We need to assure that the model is the problem, and that the failure isn’t due to numerical error. Determining these predictive failures requires a good understanding of uncertainty in both experiments and computational modeling. The more genuinely high quality the experimental work is, the more genuinely the validation tests the model. We can collect evidence about the correctness of the model and establish clear standards for judging improvements in the models. The same goes for the uncertainty in computations, which needs evidence so that progress can be measured.

It doesn’t matter how beautiful your theory is … If it doesn’t agree with experiment, it’s wrong.

― Richard Feynman

Now we get to the rub in the context of modeling and simulation in modern predictive science. To make progress we need to fail to be predictive. In other words, we need to fail in order to succeed. Success should be denoted by making progress in becoming more predictive. We should take the perspective that predictivity is a continuum, not a state. One of the fundamental precepts of stockpile stewardship is predictive modeling and simulation. We want confident and credible evidence that we are capable of faithfully predicting certain essential aspects of reality. The only way to succeed at this mission is continually challenging and pushing ourselves at the limit of our capability. This means that failure should be an almost constant state of being. The problem is projecting the sense of success that society demands while continually failing. We do not do this well. Instead we feel compelled to project a sense that we continually succeed at everything we promise.

In the process we create conditions where the larger goal of prediction is undermined at every turn. Rather than define success in terms of real progress, we produce artificial measures of success. A key to improving this state of affairs is an honest assessment of all of our uncertainties, both experimentally and computationally. There are genuine challenges to this honesty. Generally, the more work we do, the more uncertainty we unveil. This is true of experiments and computations. Think about examining replicate uncertainty in complex experiments. In most cases the experiment is done exactly once, and the prospect of reproducing the experiment is completely avoided. As soon as replicate experiments are conducted the uncertainty becomes larger. Before the replicates, this uncertainty was simply zero, and no one challenged that assertion. When we go back and adjust our past assessments based on current knowledge, we run the very real risk of looking like we are moving backwards. The answer is not to continue this willful ignorance, but to take a mea culpa and admit our former shortcomings. These mea culpas are similarly avoided, thus backing the forces of progress into an ever-tighter corner.

The core of the issue is relentlessly psychological. People are uncomfortable with uncertainty and want to believe things are certain. They are uncomfortable with random events, and a sense of determinism is comforting. As such, modeling reflects these desires and beliefs. Experiments are similarly biased toward these beliefs. When we allow these beliefs to go unchallenged, the entire basis of scientific progress becomes unhinged. Confronting and challenging these comforting implicit assumptions may be the single most difficult task for predictive science. We are governed by assumptions that limit our actual capacity to predict nature. Admitting flaws in these assumptions and measuring how much we don’t know is essential for creating the environment necessary for progress. The fear of saying, “I don’t know” is our biggest challenge. In many respects we are managed to never give that response. We need to admit what we don’t know and challenge ourselves to seek those answers.

Only a few centuries ago, a mere second in cosmic time, we knew nothing of where or when we were. Oblivious to the rest of the cosmos, we inhabited a kind of prison, a tiny universe bounded by a nutshell.

How did we escape from the prison? It was the work of generations of searchers who took five simple rules to heart:

  1. Question authority. No idea is true just because someone says so, including me.
  2. Think for yourself. Question yourself. Don’t believe anything just because you want to. Believing something doesn’t make it so.
  3. Test ideas by the evidence gained from observation and experiment. If a favorite idea fails a well-designed test, it’s wrong. Get over it.
  4. Follow the evidence wherever it leads. If you have no evidence, reserve judgment.

And perhaps the most important rule of all…

  5. Remember: you could be wrong. Even the best scientists have been wrong about some things. Newton, Einstein, and every other great scientist in history — they all made mistakes. Of course they did. They were human.

Science is a way to keep from fooling ourselves, and each other.

― Neil deGrasse Tyson