… Nature almost surely operates by combining chance with necessity, randomness with determinism…

― Eric Chaisson

On many occasions I’ve noted the tendency for science to see the World through a highly deterministic lens. We do this despite living in a World that includes a large degree of chance and random events. In science we might consider highly deterministic experiments to be well designed and useful. In a sense this is correct, as such experiments confirm our existing theories, which are grounded heavily in determinism. When we take this attitude into the real World of observing nature, or engineered systems, the deterministic attitude runs aground. The natural World and engineered systems rarely behave in a completely deterministic manner. We see varying degrees of non-determinism and chance in how things work. Some of this is the action of humans in a system; some of it comes from complex initial conditions or structure that deterministic models ignore. This variability, chance, and structure is typically not captured by our modeling, and as a result our modeling is of limited utility for understanding reality.

The assumption of an absolute determinism is the essential foundation of every scientific enquiry.

― Max Planck


Determinism: the universe has a starting point (the Big Bang?), and correct formulations of the laws of nature would allow the histories of all particles to be traced and predicted into the future. Everything is predictable; the universe functions like clockwork. What then of free will? This is the legacy of Sir Isaac Newton.

Mathematical models of reality are heavily grounded in a deterministic assumption. This grounding is largely the legacy of Newton, whose assumptions were heavily influenced by his religious faith and an almighty God. This God controlled the universe and determined its outcomes. These beliefs ran headlong into reality in the 20th Century with quantum physics and the need for probabilities in models. The power of non-determinism in the most fundamental laws of physics was undeniable, but at larger scales determinism still rules supreme. The usual explanation is that the law of large numbers pushes the laws of physics back toward determinism. At the same time, we have pervasive laws like the second law of thermodynamics that encapsulate the World’s disorder within the deterministic view. Is this sufficient to capture all of non-determinism’s role? I think not. In this sense the work of Newton and 19th Century thought still controls much of science today. Almost every modeling exercise follows determinism as an unspoken underlying assumption. This happens without regard to what we see each day in the real World. The second law of thermodynamics and the power of entropy are not adequate to capture the full span of disorder’s impact on our World. This assumption does untold damage, and it is time to overthrow those aspects of determinism that have outlived their utility.

We ought to regard the present state of the universe as the effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes. The perfection that the human mind has been able to give to astronomy affords but a feeble outline of such an intelligence.

― Pierre-Simon Laplace


Complex systems and experiments have a great deal of non-determinism in their fundamental behavior and outcomes. Commonly this non-determinism is completely ignored and the system is modeled with a fully deterministic approach (e.g., the second law). A better assumption is that a combination of deterministic and stochastic effects is present. The stochastic effects are largely ignored today and swept up into the deterministic model in a heavy-handed, one-size-fits-all manner. This sort of approach isn’t usually even recognized as a problem because the behavior is assumed to be totally deterministic. The consequence is misattribution: we cannot identify the proper source of the behavior because we are putting physical effects that are non-deterministic into a model that is deterministic. This should seriously limit the predictive power of our modeling.

To move forward we should embrace some degree of randomness in the fundamental models we solve. This random response naturally arises from various sources. In our deterministic models, the random response is heavily incorporated in boundary and initial conditions. The initial conditions include things like texture and structure that the standard models homogenize over. Boundary conditions are the means for the model to communicate with the broader world, whose vast complexities are grossly simplified. In reality both the initial and boundary conditions are far more complex than those our models currently use.
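As a minimal sketch of what this could look like, consider a toy relaxation equation integrated with the Euler-Maruyama method. Everything here is an assumption made for illustration (the `relax` helper, the rate k, the noise level sigma, the perturbed initial state): the drift term plays the role of the familiar deterministic model, while the noise term and the jittered initial condition stand in for the texture and boundary complexity that the standard model homogenizes away.

```python
import numpy as np

rng = np.random.default_rng(0)

def relax(x0, k=1.0, x_eq=0.0, sigma=0.0, dt=1e-3, n_steps=5000, rng=rng):
    """Euler-Maruyama integration of dx = -k (x - x_eq) dt + sigma dW.

    sigma = 0 recovers the purely deterministic model; sigma > 0 adds a
    stochastic forcing standing in for unresolved structure in the
    initial and boundary conditions.
    """
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        drift = -k * (x[i] - x_eq)
        noise = sigma * np.sqrt(dt) * rng.standard_normal()
        x[i + 1] = x[i] + drift * dt + noise
    return x

# One deterministic trajectory versus a handful of stochastic realizations
# started from initial conditions perturbed about the same nominal state.
deterministic = relax(x0=1.0, sigma=0.0)
realizations = [relax(x0=1.0 + 0.05 * rng.standard_normal(), sigma=0.3)
                for _ in range(10)]

final_states = np.array([r[-1] for r in realizations])
print("deterministic end state: %.3f" % deterministic[-1])
print("stochastic end states:   mean %.3f, std %.3f"
      % (final_states.mean(), final_states.std()))
```

The deterministic run gives a single number; the stochastic runs give a distribution of outcomes whose spread is itself part of the prediction.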

The sort of deterministic models we use today attempt to include the entire system without explicitly modeling its non-deterministic aspects. These effects are either folded into the deterministic model or end up increasing the uncertainty of the modeling effort. Our efforts could advance significantly by directly modeling the stochastic aspects. This would let us separate the effects that are completely deterministic from those that are random, along with the interactions between the two. We might expect that models with such a separation would make the deterministic part lower in uncertainty. Some amount of uncertainty in any of these systems is irreducible, and proper modeling of the non-deterministic part would capture these effects properly. Instead of being irreducible, this aspect would simply be part of the model and part of the result. It would move from being uncertainty to being part of the answer. We should not expect modeling non-deterministic dynamics with deterministic models to be the best we can do.
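One hedged illustration of this separation, using an assumed toy model (`noisy_decay` and its parameters are inventions for the sketch, not a recipe): run an ensemble and decompose it as x = ⟨x⟩ + x′. The ensemble mean plays the role of the deterministic part, and the fluctuation statistics are reported as part of the answer rather than left as unexplained uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_decay(x0, k=1.0, sigma=0.3, dt=1e-2, n_steps=500, rng=rng):
    """One realization of dx = -k x dt + sigma dW (Euler-Maruyama)."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        x[i + 1] = (x[i] - k * x[i] * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

# Build an ensemble and decompose it as x = <x> + x'.
ensemble = np.array([noisy_decay(x0=1.0) for _ in range(200)])
mean = ensemble.mean(axis=0)      # deterministic-like component <x>(t)
fluct = ensemble - mean           # stochastic component x'(t), per member
spread = fluct.std(axis=0)        # irreducible variability, now quantified

print("end of run:  <x> = %.3f,  std(x') = %.3f" % (mean[-1], spread[-1]))
```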

Applying logic to potentially illogical behavior is to construct a house on shifting foundations. The structure will inevitably collapse.

― Stewart Stafford

Another aspect of the complexity that current modeling ignores is the dynamics associated with the stochastic phenomena; when not ignored, they are lumped whole cloth into the model’s closure. In a real system the stochastic aspects evolve over time, including nonlinear interactions between the deterministic and stochastic parts. When the dynamics are completely confined to deterministic models, these nonlinearities are ignored or lumped into the deterministic mean field. When models lack the proper connection to the correct dynamics, the modeling capability is diminished. The result is greater uncertainty and less explanation of what is happening in nature. From an engineering point of view, the problem is that our ability to explicitly control for the non-deterministic aspects of systems is diminished because their influence on results isn’t directly exposed. If the actual dynamics were exposed, we could work proactively to design better systems. This is the power of understanding in science: if we understand, we can attempt to mitigate and control the phenomena. Without proper modeling we are effectively flying blind.
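A small sketch of why the nonlinear interaction matters, again with an assumed toy model (a logistic equation with multiplicative noise, parameters chosen only for illustration): because the nonlinearity couples the mean to the fluctuations, the ensemble mean of the stochastic system settles below the value the purely deterministic mean-field model predicts. Lumping the noise into the mean field misses exactly this shift.

```python
import numpy as np

rng = np.random.default_rng(3)

def logistic_ensemble(n, x0=0.5, r=1.0, K=1.0, sigma=0.0,
                      dt=1e-3, n_steps=20000, rng=rng):
    """Euler-Maruyama for dx = r x (1 - x/K) dt + sigma x dW, n members at once."""
    x = np.full(n, x0)
    for _ in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal(n)
        x = x + r * x * (1.0 - x / K) * dt + sigma * x * dW
        x = np.maximum(x, 0.0)   # guard against discrete overshoot below zero
    return x

deterministic = logistic_ensemble(1, sigma=0.0)[0]     # mean-field answer: K
stochastic = logistic_ensemble(500, sigma=0.5)

print("deterministic steady state:      %.3f" % deterministic)
print("ensemble mean with fluctuations: %.3f" % stochastic.mean())
# The nonlinear term couples the mean to the fluctuations: <x(1 - x/K)> is not
# <x>(1 - <x>/K), so the ensemble mean sits noticeably below the deterministic
# value even though the drift is identical in both runs.
```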

Die Quantenmechanik ist sehr achtung-gebietend. Aber eine innere Stimme sagt mir, daß das doch nicht der wahre Jakob ist. Die Theorie liefert viel, aber dem Geheimnis des Alten bringt sie uns kaum näher. Jedenfalls bin ich überzeugt, daß der nicht würfelt.

Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the “old one.” I, at any rate, am convinced that He does not throw dice.

–Albert Einstein, in a letter to Max Born

Today’s modeling paradigm is relentlessly deterministic. We attempt to model experiments as a single well-determined event even when significant aspects of the experiment are non-deterministic. Effectively the non-deterministic aspects are ignored or misattributed to determinism. A repeated experiment then appears inconsistent because it is treated as a different instance, instead of being recognized as the same deterministic case with a different stochastic forcing. If we model the stochastic element of the phenomena directly, we can begin to understand its impact. With our current modeling we simply drive a fundamental misunderstanding of what is happening. We are left with models that have fundamental limitations. None of these issues is going to be handled by brute force. Neither computer power, nor computational accuracy, nor algorithmic efficiency will resolve these problems. The answer is centered on modeling and on increasing the span of physical phenomena it addresses.
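To make the point concrete, here is a hypothetical "experiment": a projectile with drag, fired repeatedly under random wind gusts. The gust model and every parameter below are invented for illustration. Treated deterministically there is one predicted range, and each replica looks inconsistent with it; treated as the same deterministic model with a different stochastic forcing, the replicas are simply draws from a predicted spread.

```python
import numpy as np

rng = np.random.default_rng(2)

def shot(v0, drag=0.08, gust_sigma=0.0, dt=0.01, rng=rng):
    """Horizontal range of a projectile with linear drag and crude random gusts.

    gust_sigma = 0 is the single 'well-determined' deterministic shot;
    gust_sigma > 0 treats each firing as the same model driven by a
    different realization of the stochastic forcing.
    """
    x, y = 0.0, 0.0
    vx, vy = v0
    while y >= 0.0:
        gust = gust_sigma * rng.standard_normal()   # random horizontal gust
        ax = -drag * vx + gust
        ay = -9.81 - drag * vy
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x

deterministic_range = shot(v0=(40.0, 40.0))
replicas = np.array([shot(v0=(40.0, 40.0), gust_sigma=5.0) for _ in range(50)])

print("single deterministic prediction: %.1f m" % deterministic_range)
print("two 'experiments':               %.1f m, %.1f m" % (replicas[0], replicas[1]))
print("ensemble mean +/- spread:        %.1f +/- %.1f m"
      % (replicas.mean(), replicas.std()))
```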

The impediments to changing our modeling are massive. We have a strong tendency to lump all of the non-deterministic effects into constitutive laws and closures that lend themselves to relatively easy modification. Changing or expanding the governing equations in a code can be utterly daunting and is usually not supported by current funding. The entire enterprise of developing new equations is difficult and risky in nature. Our system today is utterly opposed to anything risky and actively undermines attempting anything difficult at every turn. Our computational science is extremely invested in existing models, and most paths to improvement are routed through them. Increasingly we pursue the most brutish and painfully naïve path to improvement by investing almost entirely in faster computers. The most painful aspect of this path is its lack of timeliness: the ease of creating faster computers has ended with the death of Moore’s law. Getting a faster computer is now extremely expensive and inefficient. Other paths to improvement are not favored, and we have almost forgotten how to do science like that. The capper to this sad tale is the utter inability of these computers to fix faulty models. We have lost the ability to conduct intellectually rigorous work.

The sort of science needed is enormously risky. I am proposing that we have reached the end of utility for models used for hundreds of years. This is a rather bold assertion on the face of it. On the other hand, the models we are using have a legacy going back to when only analytical solutions, or only very crude numerical tools, were available. Now our modeling is dominated by numerical solutions and computing, from the desktop (or handheld) to supercomputers of unyielding size and complexity. Why should we expect models derived in the 18th and 19th centuries to remain sufficient today? Shouldn’t our modeling advance as much as our solution methods have? Shouldn’t all aspects of modeling and simulation be advancing? The answer is a dismal no.

The reasons for this dismal state of affairs are somewhat understandable. The models defined over the past few centuries long defied general solution. Computing offered a path to solution that analytical methods failed to provide. As a result, computing came to provide useful solutions to models that had been of limited utility for a very long time. Models that had been out of reach for ages are now routinely solved numerically. Yet the numerical work is often done quite poorly with marginal quality control. Assessment of the quality of numerical work is usually slipshod and casual. The “eyeball” and “view graph” norms rule in place of quantified uncertainty and error. Most good results using these models are heavily calibrated and lack any true predictive power. In the absence of experiments, we are generally lost and rarely hit the mark. Instead of seeing any of this as shortcomings in the models, we seek to continue using the same models and focus primarily on computing power as a remedy. This is both foolhardy and intellectually empty, if not outright dishonest.

The evidence that our models are inadequate is overwhelming. Our response to this evidence has been to virtually ignore it. We continue to invest in the same areas that have failed to improve results over a long period of time. We continue to sell massive computing power as the fix-all remedy for our problems. We fail to recognize that neither computing power nor solution accuracy will cure any problem if the fundamental model is flawed. Our fundamental models are flawed, and the routes taken for improving modeling and simulation will not help. If the basic model has flaws, a faster computer, a better method, a more accurate discretization, or better scaling will not help. The only cure is to fix or change the model. One of the biggest failures in our modeling is the lack of separation between its deterministic and non-deterministic aspects.

A simple and familiar setting in which to see how this might help is weather. If we look at any of our models, at any of our scales, it is obvious that enormous variability and detail are being excluded from our modeling. One of the biggest needs in weather modeling is the prediction of extreme weather events, which dominate the financial and political consequences of weather. Analogous issues exist in a myriad of other fields where modeling and simulation impact the science. A reasonable supposition is that the fine-scale structure we average over and ignore helps produce extreme events when it interacts with the large-scale weather. It is well known that large-scale weather phenomena set the stage for, or increase the likelihood of, extreme events. The actual phenomenology of extreme events depends on how the large-scale weather interacts with local detail such as the surface topography.
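A deliberately cartoonish sketch of that supposition (every ingredient here is an assumption, not a weather model): a slow deterministic "large-scale" cycle, a fast stochastic "fine-scale" component, and a multiplicative interaction between them. The deterministic part never crosses an extreme threshold on its own; the extremes appear only when the large scale sets the stage and the fine-scale fluctuations push over it.

```python
import numpy as np

rng = np.random.default_rng(4)

dt = 0.1
n_steps = 200_000
t = np.arange(n_steps) * dt

# "Large-scale weather": a slow, fully deterministic oscillation that sets
# the stage for extremes without producing them on its own.
large_scale = 1.0 + 0.8 * np.sin(2.0 * np.pi * t / 200.0)

# "Fine-scale structure": a fast AR(1) (Ornstein-Uhlenbeck-like) surrogate
# for everything the mean-field model averages over.
tau, sigma = 1.0, 0.6
phi = np.exp(-dt / tau)
noise_std = sigma * np.sqrt(1.0 - phi**2)
fine = np.empty(n_steps)
fine[0] = 0.0
for i in range(1, n_steps):
    fine[i] = phi * fine[i - 1] + noise_std * rng.standard_normal()

# Toy "storm intensity": fine-scale fluctuations amplified multiplicatively
# by the large-scale state, so extremes come from the interaction of the two.
intensity = large_scale * (1.0 + fine)

threshold = 3.0
print("deterministic model max:    %.2f (never exceeds %.1f)"
      % (large_scale.max(), threshold))
print("stochastic model max:       %.2f" % intensity.max())
print("time steps above threshold: %d" % np.sum(intensity > threshold))
```

In this sketch the threshold exceedances cluster in the phases where the large-scale cycle is high, which is the qualitative behavior the paragraph above describes.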

Analogous phenomena happen in many other fields such as material failure and turbulence. These models are strained under the demands of the modern World, and progress is desperately needed. The solutions are not being supported; instead, the focus is on risk-averse and rather pedestrian approaches while eschewing riskier work like model creation. The focus on computing power reflects this intellectual cowardice quite acutely. Our current models are limited by their fundamental structure rather than by solution methods or computing power. Our science programs need to address these challenges in a credible manner by coupling a focus on theory with innovations in experimental science. The challenge is not refining old ideas but allowing ourselves to pursue new ones with sufficient freedom and aggression. Our greatest challenge is not the science, but rather our inability to conceive of solutions in today’s World. This work could be enormously valuable to society as a whole if we could envision it and take the risks necessary to reach success.

The difficulty lies not so much in developing new ideas as in escaping from old ones.

― John Maynard Keynes
