God save me from idealists.
― Jim Butcher
Producing detailed mathematical analysis, much less solutions of (partial) differential equations, is extremely difficult. In this important but critically difficult task, various simplifications and idealizations can make all the difference between success and failure. This difficulty highlights the power and promise of numerical methods for solving such equations, because simplifications and idealizations are not absolutely necessary for solution. Nonetheless, much of our faith in a numerical method derives from the congruence of its solutions with analytical ones. This process is known as verification, and it plays an essential role in providing evidence for the credibility of numerical simulations. Our faith in the ability of numerical simulations to solve difficult problems is thus grounded, to some degree, in the scope and span of our analytical knowledge. This tie is important both to recognize and to carefully control, because analytical knowledge is necessarily limited in ways that numerical methods should not be.
In developing and testing computational methods, we spend a great deal of time solving the ideal equations for a phenomenon. This is true in fluids, plasmas, and many other fields. These ideal equations usually come from the age of classical physics and mathematics, and most commonly they are associated with the names of the greats of science: Newton, Euler, Poincaré. This near obsession is one of the greatest dangers to progress I can think of. The focus on the ideal is the consequence of an almost religious devotion to classical ideas, and it is deeply flawed. By focusing on the classical ideal equations, many of the important, critical, and interesting aspects of reality escape attention. We remain anchored to the past in a way that undermines our ability to master reality with modernity.
Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve.
― Karl R. Popper
These ideal equations are starting points for investigations of the physical world, and they arose in an environment where analytical work was the only avenue to understanding. Simplicity and stripping away the complexities of reality were the order of the day. Today we are freed, to a very large extent, from the confines of analytical study by our capacity to approximate the solutions of equations. We are free to study the universe as it actually is and to produce a deep study of reality. Analytical methods and ideas still have utility for gaining confidence in numerical methods, but their weak grasp on reality should be recognized. Our ability to study reality should be celebrated and be the center of our focus. Our seeming devotion to the ideal simply distracts us and draws attention away from understanding the real World.
The more pernicious and harmful aspect of ideality is a reverence for divinity in solutions. The ideal equations are supposed to represent the perfect, and in a sense the “hand of God” working in the cosmos. As such they represent the antithesis of modernity, and the inappropriate injection of religiosity into the study of reality. For this reason alone the ideal equations should be deeply suspect at a philosophical level. These sorts of religious ideas should not be polluting the unfettered investigation of reality. More than this, we can see that the true engine of beauty in the cosmos has been removed from these equations. So much of what is extraordinary about the universe is the messiness driven by the second law of thermodynamics. This law takes many forms, and it always removes the ideal from the equations and injects the hard, yet beautiful, face of reality.
A thing can be fine on paper but utterly crummy in the field.
― Robert A. Heinlein
Not only are these equations suspect for philosophical reasons, they are suspect for the imposed simplicity of the time they are taken from. In many respects the ideal equations miss most of the fruits of the last century of scientific progress. We have faithfully extended our grasp of reality to include more and more “dirty” features of the actual physical World. To a very great extent, the continued ties to the ideal contribute to the lack of progress in some very important endeavors. Perhaps no case demonstrates this handicapping of progress as amply as turbulence. Our continued insistence that turbulence is tied to the ideal nature of incompressibility is becoming patently ridiculous. It highlights that important aspects of the ideal are synonymous with the unphysical.
I have spoken out about the issues with incompressibility several times in the past (https://wjrider.wordpress.com/2014/03/07/the-clay-prize-and-the-reality-of-the-navier-stokes-equations/, https://wjrider.wordpress.com/2015/03/06/science-requires-that-modeling-be-challenged/, https://wjrider.wordpress.com/2016/04/08/the-singularity-abides/, https://wjrider.wordpress.com/2016/04/15/the-essential-asymmetry-in-fluid-mechanics/, https://wjrider.wordpress.com/2016/09/27/the-success-of-computing-depends-on-more-than-computers/). Here I will simply reiterate these points from the perspective of the concept of ideal equations. Incompressibility is simple and utterly ideal in the sense that no nontrivial flow is exactly incompressible. Real and nontrivial flow fields are only approximately incompressible. It is important to recognize that approximately and exactly incompressible are very different at their core. Exactly incompressible flows are fundamentally unphysical and unrealizable in the real world. Put differently, they are absolutely pathological.
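For reference, the equations at issue here are the standard incompressible Navier-Stokes equations (written in their textbook form; the notation is mine, not quoted from any particular source):

```latex
% Incompressible Navier-Stokes: momentum equation plus the divergence
% constraint that defines exact incompressibility.
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u},
\qquad
\nabla\cdot\mathbf{u} = 0.
% Taking the divergence of the momentum equation (constant \rho, \nu)
% shows the pressure is fixed instantaneously by an elliptic equation:
\nabla^{2} p = -\rho\,\nabla\cdot\bigl[(\mathbf{u}\cdot\nabla)\mathbf{u}\bigr].
```

The elliptic pressure equation is the mathematical face of the pathology: pressure responds everywhere, instantly, to changes anywhere in the flow, which is equivalent to an infinite sound speed.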
An important thing to recognize in this discussion is how many aspects of reality are sacrificed with incompressibility. The list is stunning and gives a hint of the depth of the loss. Gone is the second law of thermodynamics, unless viscous effects are present. Gone is causality. Gone are important nonlinearities. This approximation is taken to the extreme of an unphysical constraint that produces a deeply degenerate system of equations. Of greater consequence is the demolition of physics that may be at the heart of explaining turbulence itself. The essence of turbulence seems to require singularity formation to make sense of observations. This is at the core of the Clay Prize, yet in the derivation of the incompressible equations the natural nonlinear process for singularity formation is removed by fiat. Incompressibility creates a system of equations that is simple and yet only a shadow of the more general equations it claims to represent. I fear it is an albatross about the neck of fluid mechanics.
There are other idealities that need to be overturned. In many corners of fluid mechanics, symmetries are assumed, and many scientists desire that they be maintained under all sorts of circumstances. They rarely ask whether a symmetry would survive the perturbations that would reasonably be expected to exist in reality (in fact, it is absolutely unreasonable to assume perfect symmetry). Some assumptions are reasonable in situations where the flows are stable, but in other cases any realistic flow would destroy these symmetries. Pushing a numerical method to maintain symmetry in circumstances where the instability would grow should be abhorrent and avoided. In the actual physical universe, the destruction of symmetry is the normal evolution of a system, and its preservation is rarely observed. As such, the expectation of symmetry preservation in all cases defines an unhealthy community norm.
A great example of this sort of dynamic occurs in modeling stars that end their lives in explosions, like type II supernovas. The classic picture was a static, spherical star that burned elements in a series of concentric shells of increasing mass as one went deeper into the star. Eventually the whole process becomes unstable as the nuclear reactions shift from exothermic to endothermic once iron is created. We observe explosions in such stars, but the idealized stars would not explode. Even if we forced the explosion, the post-explosion evolution could not match important observational evidence that implied deep mixing of heavy elements into the expanding envelope of the star.
This is a place where the idealized view stood in the way of progress for decades, and the release of ideality allowed progress and understanding. Once these extreme symmetries were relaxed and the star was allowed to rotate, have magnetic fields, and mix elements across the concentric shells, models and simulations started to match observations. We got exploding stars; we got the deep mixing necessary for both the explosion itself and the post-explosion evolution. The simulations began to explain what we saw in nature. The process behind these exploding stars is essential to our understanding of the universe because such stars are the birthplace of the matter our World is built from. When things were more ideal, the simulations largely failed.
This sort of issue appears over and over in science. Time and time again, the desire to study things in an ideal manner impedes the unveiling of reality. By now we should know better, but it is clear that we don’t. The idea of sustaining the ideal equations and their evolution as the gold standard is quite strong. Another great example is the concept of kinetic energy conservation. Many flows and numerical methods are designed to exactly conserve kinetic energy. This only occurs in the most ideal of circumstances, when flows have no natural dissipation (itself deeply unphysical) while retaining well-resolved, smooth structure. So the property holds only in flows that are unphysical. Many believe that such flows should be exactly preserved as the foundation for numerical methods. This belief is somehow impervious to the observation that such flows are utterly unphysical and could never be observed in reality. It is difficult to square this belief system with the desire to model anything practical.
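To make the kinetic energy claim concrete, here is a sketch of the standard identity behind such schemes (standard textbook material, not drawn from any specific method):

```latex
% Kinetic energy E = (1/2) \int_V |u|^2 dV. For inviscid, incompressible,
% smooth flow with suitable boundary conditions:
\frac{dE}{dt}
  = -\int_V \mathbf{u}\cdot\Bigl[(\mathbf{u}\cdot\nabla)\mathbf{u}
      + \frac{1}{\rho}\nabla p\Bigr]\, dV = 0,
% since both terms reduce to divergences that vanish upon integration.
% Discrete schemes built on the skew-symmetric (split) convective form
\frac{1}{2}\Bigl[(\mathbf{u}\cdot\nabla)\mathbf{u}
  + \nabla\cdot(\mathbf{u}\,\mathbf{u})\Bigr]
% mimic this cancellation exactly, so the discrete kinetic energy is
% conserved to round-off.
```

Note that every hypothesis in the derivation (no viscosity, smoothness, exact incompressibility) is one of the idealities being criticized here; the exact discrete conservation inherits all of them.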
We need to recognize the essential tension between testing methods against solutions of idealized equations and the practical simulation of reality. We need to free ourselves of the limiting aspects of the mindset surrounding the ideal equations. The importance of matching solutions of the ideal equations must be acknowledged without imposing unphysical limits on the simulation. The imperative for numerical methods is modeling reality. To match aspects of the ideal equations’ solutions, many sacrifice physical aspects of their numerical methods. Modeling reality should always be the preeminent concern for both the equations and the methods of solution. Numerical methods are released from many of the constraints that analytical approaches must abide by, and this freedom should be taken advantage of to the maximal degree.
Quite frequently, the way numerical methods developers square these choices is an unfortunate separation of modeling from the numerical solution. In some cases the philosophy followed is that the ideal equations are solved as given, with any non-ideal physics modeled explicitly; the numerical method is then expected to be unwaveringly true to the ideal equations. Quite often the problem with this approach is that the non-ideal effects are necessary for the stability and quality of the solution. Moreover, the coupling between the numerical solution and the modeling is not clean, and the modeling cannot be ignored in assessing the numerical solution.
A great example of this dichotomy is turbulent fluid mechanics and its modeling. It is instructive to explore the issues surrounding the origin of the models and their connections to purely numerical approaches. The classical thinking about modeling turbulence basically comes down to solving the ideal equations as perfectly as possible and modeling the entirety of turbulence with additional terms added to the ideal equations. It is the standard approach and, by comparison with many other areas of numerical simulation, a relative failure. Nonetheless this approach is followed with an almost religious fervor. I might surmise that the lack of progress in understanding turbulence is related to the combination of adherence to a faulty basic model (incompressibility) and a solution approach that supposes all the non-ideal physics can be modeled explicitly.
It is instructive, in closing, to peer more keenly at the whole turbulence modeling problem. A simple but very successful model for turbulence is the Smagorinsky model, originally devised for climate and weather modeling, and now forming the foundation for the practice of large eddy simulation (LES). What is underappreciated about the Smagorinsky model is its origins. This model was originally created as a way of stabilizing shock calculations by Robert Richtmyer and applied to an ideal differencing method devised by John von Neumann. The ideal equation solution without Richtmyer’s viscosity was unstable and effectively useless. With the numerically stabilizing term added, the method was incredibly powerful, and it forms the basis of shock capturing. The same term was then added to weather modeling to stabilize those equations. It did just that, and remarkably it was suddenly transformed into a “model” for turbulence. In the process we lost sight of the role it played for numerical stability, and also of the strong and undeniable connection between the entropy generated by a shock and observed turbulent behavior. This connection was then systematically ignored because the unphysical incompressible equations we assume govern turbulence do not admit shocks. In this lack of perspective we find the recipe for a lack of progress. The connection is too powerful not to be present. Such a connection creates issues that undermine core convictions in the basic understanding of turbulence, convictions that seem too tightly held to be questioned even by the lack of progress.
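The lineage sketched above can be seen in the formulas themselves. Written in their commonly cited forms (coefficients and notation as usually presented, not quoted from the original papers), the two terms are structurally the same grid-scale, gradient-squared dissipation:

```latex
% Von Neumann-Richtmyer artificial viscosity for shock capturing (1D),
% active only in compression (\partial u / \partial x < 0):
q = C_q\,\rho\,(\Delta x)^2 \left(\frac{\partial u}{\partial x}\right)^{2},
% Smagorinsky eddy viscosity for large eddy simulation, built from the
% resolved strain rate \bar{S}_{ij} and the grid scale \Delta:
\nu_t = (C_s\,\Delta)^{2}\,|\bar{S}|,
\qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}.
```

Both terms scale dissipation with the square of the mesh spacing times the square of a velocity gradient; this structural identity is the shock-turbulence connection the text argues has been systematically ignored.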
We cannot become what we need by remaining what we are.
― John C. Maxwell