Only those who dare to fail greatly can ever achieve greatly.

― Robert F. Kennedy

In modeling and simulation, numerical error is extremely important yet generally unsatisfactorily understood. For the general nonlinear problems that dominate the use and utility of high performance computing, the state of affairs is quite incomplete. Numerical error has a central role in modeling and simulation, which makes our gaps in theory, knowledge and practice rather unsettling. Theory is strong for linear problems where solutions are well behaved and smooth (i.e., continuously differentiable, or at least possessing many derivatives). Almost every problem of substance driving National investments in computing is nonlinear and rough. Thus, we have theory that largely guides practice by faith rather than rigor. We would be well served by a concerted effort to develop theoretical tools better suited to our reality.

Sometimes a clearly defined error is the only way to discover the truth.

― Benjamin Wiker

We have a fundamental existence theory for convergent solutions defined by Lax’s early work (the fundamental theorem of numerical analysis). It is quite limited, rigorously applying only to linear differential equations, yet it defines the basic approach to numerical approximation for models that are almost invariably nonlinear. The theorem states that when a stable approximation is consistent (approximates the differential equation properly), it will converge to the correct solution. By convergent we mean that the numerical solution approaches the exact solution as the approximation grows closer to the continuum, which is associated with smaller discrete steps/meshes and more computational resources. This theorem provides the basis and ultimate drive for faster, more capable computing. We apply it most of the time where it is invalid. We would be greatly served by having a theory that is freed of these limits. Today we just cobble together a set of theories, heuristics and lessons into best practices and we stumble forward.
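To make this concrete, here is a minimal sketch (mine, not from the references below) of how the theorem shows up in practice: a stable, consistent upwind discretization of the linear advection equation converges as the mesh is refined, and the observed rate approaches the formal first order. The problem choice, CFL number, and error measure are illustrative assumptions.

```python
import numpy as np

def upwind_advection(n, a=1.0, t_final=0.5):
    """First-order upwind solution of u_t + a u_x = 0 on [0,1] with periodic boundaries."""
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dx = 1.0 / n
    dt = 0.5 * dx / a                  # CFL number 0.5 satisfies the stability bound
    u = np.sin(2.0 * np.pi * x)        # smooth initial data
    t = 0.0
    while t < t_final:
        step = min(dt, t_final - t)
        u = u - a * step / dx * (u - np.roll(u, 1))   # upwind difference for a > 0
        t += step
    exact = np.sin(2.0 * np.pi * (x - a * t_final))
    return np.sqrt(dx * np.sum((u - exact) ** 2))     # discrete L2 error

# Refine the mesh; a consistent and stable scheme converges (Lax), here at first order.
errors = [upwind_advection(n) for n in (50, 100, 200, 400)]
rates = [np.log2(e1 / e2) for e1, e2 in zip(errors, errors[1:])]
print(errors)
print(rates)    # the observed rates should trend toward 1.0
```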

Part of making use of this fundamental theorem is producing a consistent approximation to the model of choice. The tools for accomplishing this are things like Taylor series, polynomials and finite elements. All of these methods depend to some degree on solutions being well behaved and nice. Most of our simulations are neither well behaved nor nice. We assume an idealized nice solution and then approximate it using some neighborhood of discrete values. Sometimes this is done using finite differences, by cutting the world into little control volumes (equivalent in simple cases), or by creating finite elements and using variational calculus to make approximations. In all cases the underlying presumption is smooth, nice solutions, while most of the utility of the approximations violates these assumptions. Reality is rarely well behaved or nice, so we have a problem. Our practice has done reasonably well and taken us far, but a better, more targeted and useful theory might truly unleash innovation and far greater utility.
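As a small illustration of how strongly these approximations lean on smoothness, the following toy sketch (my own, with assumed test functions) applies a Taylor-series-based central difference to a smooth function and to one with a kink near the evaluation point; the error collapses at second order in the first case and stalls in the second.

```python
import numpy as np

def central_diff_error(f, dfdx, x0, h):
    """Error of the central difference (f(x0+h) - f(x0-h)) / (2h) versus the exact derivative."""
    return abs((f(x0 + h) - f(x0 - h)) / (2.0 * h) - dfdx(x0))

hs = [0.1 / 2**k for k in range(5)]

# Smooth function: the Taylor expansion applies and the error shrinks as O(h^2).
smooth = [central_diff_error(np.sin, np.cos, 1.0, h) for h in hs]

# Kinked function |x - 1.001|: derivatives fail to exist nearby and the estimate stalls.
kink = [central_diff_error(lambda x: abs(x - 1.001),
                           lambda x: np.sign(x - 1.001), 1.0, h) for h in hs]

for h, es, ek in zip(hs, smooth, kink):
    print(f"h={h:7.4f}  smooth error={es:.2e}  kinked error={ek:.2e}")
```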

The aim of science is not to open the door to infinite wisdom, but to set a limit to infinite error.

― Bertolt Brecht

We don’t really know what happens when the theory falls apart, and simply rely upon bootstrapping ourselves forward. We have gotten very far with very limited theory, moving forward largely on faith. We do have some limited theoretical tools, like conservation principles (the Lax-Wendroff theorem) and entropy solutions (convergence toward solutions associated with a viscous regularization consistent with the second law of thermodynamics). What we miss is a general understanding of what governs accuracy and defines error in these cases. We cannot design methods specifically to produce accurate solutions in these circumstances; we are guided by heuristics and experience rather than rigorous theory. A more rigorous theoretical construct would provide a springboard for productive innovation. Let’s look at a few of the tools available today to put things in focus.
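One place where the limited theory we do have earns its keep is the Lax-Wendroff result that conservative schemes converge to weak solutions with the correct shock speeds. The sketch below is my own illustrative code (Burgers’ equation and simple upwind differencing are assumed choices): it contrasts a conservative flux form with a non-conservative form of the same equation, and only the former puts the shock in the right place.

```python
import numpy as np

def burgers_riemann(n=400, t_final=0.5, conservative=True):
    """Upwind discretizations of Burgers' equation, u_t + (u^2/2)_x = 0, for u_L=1, u_R=0.
    The exact solution is a single shock moving at the Rankine-Hugoniot speed 1/2."""
    x = np.linspace(-1.0, 1.0, n)
    dx = x[1] - x[0]
    dt = 0.4 * dx                          # CFL based on max |u| = 1
    u = np.where(x < 0.0, 1.0, 0.0)
    t = 0.0
    while t < t_final:
        step = min(dt, t_final - t)
        if conservative:
            f = 0.5 * u ** 2               # flux (conservation) form
            u = u - step / dx * (f - np.roll(f, 1))
        else:
            u = u - step / dx * u * (u - np.roll(u, 1))   # advective, non-conservative form
        u[0] = 1.0                         # hold the inflow state
        t += step
    return x, u

for flag in (True, False):
    x, u = burgers_riemann(conservative=flag)
    shock_x = x[np.argmax(u < 0.5)]        # first cell that has fallen below the half level
    print(f"conservative={flag}:  shock near x = {shock_x:.3f}  (exact: 0.25)")
```

The non-conservative variant leaves the discontinuity essentially frozen in place, a wrong answer that looks perfectly plausible, which is exactly why the conservation-form result matters.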

One of the first things one encounters in putting together discrete approximations in realistic circumstances is a choice. For nonlinear features leading to general and rough solutions, one can decide to track features in the solution explicitly. The archetype of this is shock tracking, where the discrete evolution of a shock wave is defined explicitly in the approximation. In essence the shock wave (or whatever wave is tracked) becomes an internal boundary condition, allowing regular methods to be used everywhere else. This typically involves the direct solution of the Rankine-Hugoniot relations (i.e., the shock jump conditions, algebraic relations holding at a discontinuous wave). The problems with this approach are extreme, including unbounded complexity if all waves are tracked, or with the solution geometry in multiple dimensions. This choice has been with us since the dawn of computation; the very first calculations at Los Alamos used this technique, but it rapidly becomes untenable.
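For a scalar conservation law the jump conditions reduce to a one-line formula, shown in the hedged sketch below (Burgers’ flux is my illustrative choice). Real shock tracking solves the full Rankine-Hugoniot system for the Euler equations, which is considerably more involved, but the structure, jump in flux over jump in state, is the same.

```python
def rankine_hugoniot_speed(f, u_left, u_right):
    """Shock speed for a scalar conservation law u_t + f(u)_x = 0:
    s = [f] / [u], the jump in flux divided by the jump in state."""
    return (f(u_right) - f(u_left)) / (u_right - u_left)

# Burgers' flux f(u) = u^2 / 2: a jump from u_L = 1 to u_R = 0 moves at s = 1/2.
print(rankine_hugoniot_speed(lambda u: 0.5 * u ** 2, 1.0, 0.0))
```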

To address the practical aspects of computation, shock capturing methods were developed. Shock capturing implicitly computes the shock wave on a background grid by detecting its presence and adding a physically motivated dissipation to stabilize its evolution. This concept has made virtually all of computational science possible. Even when tracking methods are utilized, the explosion of complexity is tamed by resorting to shock capturing away from the dominant features being tracked. The origin of the concept came from Von Neumann in 1944, but it lacked a critical element for success: dissipation, or stabilization. Richtmyer added this critical element with artificial viscosity in 1948 while working at Los Alamos on problems whose complexity was advancing beyond the capacity of shock tracking to deal with. Together Von Neumann’s finite differencing scheme and Richtmyer’s viscosity enabled shock capturing. It was a proof of principle, and its functionality was an essential springboard for others to have faith in computational science.
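The essential idea is compact enough to sketch. The snippet below is a schematic, Richtmyer-Von Neumann-style quadratic viscosity (the coefficients, the optional linear term, and the staggered-grid layout are my illustrative assumptions, not the exact values or arrangement from the 1950 paper): it activates only in compression and scales with the square of the velocity jump across a cell.

```python
import numpy as np

def richtmyer_von_neumann_q(rho, u, c_quad=2.0, c_lin=0.1, sound_speed=None):
    """Artificial viscosity in the spirit of Richtmyer and Von Neumann.
    Assumes a staggered grid: u is node-centered (n+1 values), rho is cell-centered (n values)."""
    du = np.diff(u)                        # velocity jump across each cell
    compressing = du < 0.0                 # add dissipation only where the flow compresses
    q = np.zeros_like(du)
    q[compressing] = c_quad**2 * rho[compressing] * du[compressing]**2
    if sound_speed is not None:            # optional linear term, common in later practice
        q[compressing] += c_lin * rho[compressing] * sound_speed[compressing] * np.abs(du[compressing])
    return q
```

The resulting q acts like an extra pressure in the momentum and energy updates, which is how the original scheme spreads a shock over a few cells instead of letting the computation blow up.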

What one recognizes is that when dealing with shock waves, physics must be added to the discrete representation. This happens explicitly in tracking, where the shock itself becomes a discrete element, or implicitly in shock capturing, where the approximation is adapted using the physics of shocks. Of course, shock capturing is useful for more than just shocks. It can be used to stabilize the computation of any feature. The overall methodology has some additional benefits not immediately recognized by its originators. For computing turbulence without fully resolving features (i.e., anything short of DNS, though DNS can be criticized in its practice), shock capturing methods are essential. Large eddy simulation was born out of adding the original Richtmyer-Von Neumann viscosity to weather modeling, and resulted in the creation of the Smagorinsky eddy viscosity. Other shock capturing methods developed for general purposes have provided the means for implicit large eddy simulation. These methods all have the same origin and rely upon the basic principles of shock capturing. The fact that all of this has the same origin almost certainly has a deep meaning that is lost in most of today’s dialog. We would be well served by aggressively exploring these connections in an open-minded and innovative fashion.
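For reference, the Smagorinsky closure that grew out of this lineage is itself only a few lines. The sketch below is an illustrative two-dimensional version (the constant, the filter-width definition, and the grid-indexing convention are my assumptions), showing the familiar form nu_t = (C_s Delta)^2 |S|.

```python
import numpy as np

def smagorinsky_viscosity(u, v, dx, dy, c_s=0.17):
    """Smagorinsky eddy viscosity nu_t = (C_s * Delta)^2 * |S| on a 2D grid.
    Assumes arrays indexed as field[i, j] with i along x and j along y."""
    dudx, dudy = np.gradient(u, dx, dy)
    dvdx, dvdy = np.gradient(v, dx, dy)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    strain_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))   # |S| = sqrt(2 S_ij S_ij)
    delta = np.sqrt(dx * dy)                                        # grid-based filter width
    return (c_s * delta)**2 * strain_mag
```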

One of the key things about all of this capability is the realization of how heuristic it is at its core. Far too much of what we currently do in computational science is based upon heuristics, and experience gained largely through trial and error. Far too little is based upon rigorous theory. The advancement of our current approaches through theory would be a great service to the advancement of the field. Almost none of the current efforts are remotely associated with advancing theory. If one gets down to brass tacks about the whole drive for exascale, we see that it is predicated on the concept of convergence whose theoretical support is extrapolated from circumstances that don’t apply. We are really on thin ice, and stunningly unaware of the issues. This lack of awareness then translates to lack of action, lack of priority, lack of emphasis and ultimately lack of money. In today’s world if no one pays for it, it doesn’t happen. Today’s science programs are designed to be funded, rather than designed to advance science. No one speaks out about how poorly thought through our science programs are; they simply are grateful for the funding.

When I was a kid, they had a saying, ‘to err is human but to really fuck it up takes a computer.’

― Benjamin R. Smith

There are a host of technologies and efforts flowing out from our current efforts that could all benefit from advances in the theory of numerical approximation. In addition to the development of larger computers, we see the application of adaptive mesh refinement (AMR) to provide enhanced resolution. AMR is even more highly bootstrapped and leveraged in terms of theory. By the same token, AMR’s success is predicated on best practices and experience from a wealth of applications. AMR is an exciting technology that produces stunning results. Better and more appropriate theory could turn the flashy graphics AMR produces into justifiable, credible results. A big part of moving forward is putting verification and validation into practice. Both activities are highly dependent on theory that is generally weak or non-existent. Our ability to rigorously apply modeling and simulation to important societal problems is being held back by our theoretical failings.
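A representative piece of verification practice is estimating an observed order of convergence and a Richardson-extrapolated value from solutions on a sequence of grids. The sketch below uses invented numbers purely for illustration; the formulas themselves are the standard ones for a constant refinement ratio.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed convergence order from three solutions on grids refined by a constant ratio r."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, r, p):
    """Estimate the mesh-converged value from the two finest solutions and the observed order."""
    return f_fine + (f_fine - f_medium) / (r**p - 1.0)

# Illustrative numbers only: a quantity computed on three grids refined by r = 2.
f3, f2, f1 = 0.9700, 0.9925, 0.9981          # coarse, medium, fine
p = observed_order(f3, f2, f1, r=2.0)
print(f"observed order: {p:.2f}")
print(f"extrapolated value: {richardson_extrapolate(f2, f1, r=2.0, p=p):.4f}")
```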

Another area of critical importance and utter lack of support is subgrid closure modeling, especially where it depends on the mesh scale itself. The general thinking about closure modeling is completely haphazard and heuristic. The combination of numerical modeling and closure at the mesh scale is poorly thought out, and generally lacking any theoretical support. Usually the closure models are tied directly to the mesh scale, yet numerical methods rarely produce good solutions at the smallest mesh scale, but rather over a number of mesh cells (or elements). We rarely think about how well we define or resolve solution structures and how that connects to modeling. Instead models are thought of solely geometrically in terms of scale and tied to the mesh scale. As a result we don’t have consistency between our mesh, the numerical solution and the resolution-fidelity of the numerical method. Often this leaves the modeling in the code completely mesh-dependent, with no chance of mesh independence.
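A toy calculation makes the mesh-dependence concrete: an eddy viscosity whose length scale is tied to the grid shrinks toward zero under refinement, and whether that length is one cell or the several cells a scheme actually needs to resolve a feature changes the model strength by an order of magnitude. The numbers and the assumed four-cell effective resolution are invented purely for illustration.

```python
# Illustrative only: a Smagorinsky-like eddy viscosity nu_t = (C * L)^2 * |S|
# with the length scale L tied either to one cell or to an assumed 4-cell
# effective resolution width. Both vanish under refinement; they differ by 16x.
c_s, strain = 0.17, 50.0                     # assumed model constant and strain magnitude (1/s)
for dx in (0.04, 0.02, 0.01, 0.005):
    nu_one_cell = (c_s * dx) ** 2 * strain           # length scale = one mesh cell
    nu_effective = (c_s * 4.0 * dx) ** 2 * strain    # length scale = assumed 4-cell resolution
    print(f"dx={dx:6.3f}  nu(1*dx)={nu_one_cell:.2e}  nu(4*dx)={nu_effective:.2e}")
```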

A big issue is a swath of computational science where theory is utterly inadequate, much of it involving chaotic solutions with extreme dependence on initial conditions. Turbulence is the classical problem most closely related to this issue. Our current theory and rigorous understanding are vastly inadequate to spur progress. In most cases we are let down by the physics modeling as well as the mathematical and numerical theory. In every case we have weak to non-existent rigor, leading to heuristic-filled models and numerical solvers. Extensions of any of this work are severely hampered by the lack of theory (think higher order accuracy, uncertainty quantification, optimization,…). We don’t know how any of this converges; we just act like it does and use it to justify most of our high performance computing investments. All of our efforts would be massively assisted by almost any theoretical progress. Most of the science we care about is chaotic at a very basic level, and lots of interesting things are utterly dependent on understanding this better. The amount of focus on this matter is frightfully low.
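The classic demonstration of this sensitivity is the Lorenz system: two trajectories that start within 1e-8 of each other separate by many orders of magnitude in a short time. The sketch below, a deliberately crude forward-Euler integration with the usual parameter values, is illustrative only.

```python
import numpy as np

def lorenz_trajectory(x0, n_steps=20000, dt=0.001, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Forward-Euler integration of the Lorenz system (kept deliberately simple)."""
    x = np.array(x0, dtype=float)
    out = np.empty((n_steps, 3))
    for i in range(n_steps):
        rhs = np.array([sigma * (x[1] - x[0]),
                        x[0] * (rho - x[2]) - x[1],
                        x[0] * x[1] - beta * x[2]])
        x = x + dt * rhs
        out[i] = x
    return out

a = lorenz_trajectory([1.0, 1.0, 1.0])
b = lorenz_trajectory([1.0, 1.0, 1.0 + 1e-8])   # perturb one coordinate by 1e-8
sep = np.linalg.norm(a - b, axis=1)
print(sep[::2000])    # the separation grows by orders of magnitude over the run
```

Of course the crude integrator adds its own error on top of the physical sensitivity, which is precisely the entanglement of modeling and numerical error the paragraph above is pointing at.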

My overall view is that the lack of investment and attention to our theoretical shortcomings is a significant burden. The flipside is the loss of a massive opportunity to make some incredible advances. Instead of solving a whole new class of problems powered by deeper understanding of physics and mathematics, we are laboring under vast gaps. This lowers the effectiveness of everything we do, and every dollar we spend. While a focus on advancing theory and understanding is quite risky, the benefits are extreme. If we are not prepared to fail, we will not succeed.

Success is not built on success. Not great success. Great success is built on failure, frustration, even catastrophe.

— Sumner Redstone

Lax, Peter D., and Robert D. Richtmyer. “Survey of the stability of linear finite difference equations.” Communications on pure and applied mathematics 9, no. 2 (1956): 267-293.

Von Neumann, John. “Proposal and analysis of a new numerical method for the treatment of hydrodynamical shock problems.” The collected works of John von Neumann 6 (1944).

Richtmyer, R. D. “Proposed numerical method for calculation of shocks.” LANL Report, LA 671 (1948): 1-18.

Von Neumann, John, and Robert D. Richtmyer. “A method for the numerical calculation of hydrodynamic shocks.” Journal of applied physics 21, no. 3 (1950): 232-237.

Mattsson, Ann E., and William J. Rider. “Artificial viscosity: back to the basics.” International Journal for Numerical Methods in Fluids 77, no. 7 (2015): 400-417.

Richtmyer, Robert D., and Keith W. Morton. Difference Methods for Initial-Value Problems. 2nd ed. Malabar, FL: Krieger Publishing Co., 1994.

Smagorinsky, Joseph. “General circulation experiments with the primitive equations: I. The basic experiment.” Monthly weather review 91, no. 3 (1963): 99-164.

Smagorinsky, Joseph. “The beginnings of numerical weather prediction and general circulation modeling: early recollections.” Advances in Geophysics 25 (1983): 3-37.

Boris, J. P., F. F. Grinstein, E. S. Oran, and R. L. Kolbe. “New insights into large eddy simulation.” Fluid dynamics research 10, no. 4-6 (1992): 199-228.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit large eddy simulation: computing turbulent fluid dynamics. Cambridge university press, 2007.

Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39, no. 9 (2002): 821-841.
