The fundamental law of computer science: As machines become more powerful, the efficiency of algorithms grows more important, not less.

― __Nick Trefethen__

Modern modeling and simulation is viewed as a transformative technology for science and engineering. Invariably, its utility is grounded in the solution of models via numerical approximation. **The fact that numerical approximation is the key to unlocking its potential seems largely lost in the modern perspective, and is engaged in an increasingly naïve manner**. For example, much of the dialog around high performance computing is predicated on the notion of convergence: in principle, the more computing power one applies to solving a problem, the better the solution. This is applied axiomatically, yet it relies upon a deep mathematical result in numerical approximation. That heritage and emphasis is absent from the conversation, to the detriment of its intellectual depth.

Where all think alike there is little danger of innovation.

― __Edward Abbey__

At this point, the mathematics and specifics of numerical approximation are systematically ignored by the dialog. The impact of this willful ignorance is felt across the modeling and simulation world: a general lack of progress on, and emphasis toward, numerical approximation is evident. We have produced a situation where one of the most valuable aspects of numerical modeling is not getting focused attention. People behave as if the major problems are all solved and not worthy of attention or resources. **The nature of the numerical approximation is the second most important and impactful aspect of modeling and simulation work. Virtually all the emphasis today is on the computers themselves, based on the assumption of their utility in producing better answers. The most important aspect is the modeling itself; the nature and fidelity of the models define the power of the whole process**. Once a model has been defined, its numerical solution is the second most important aspect, and that solution depends far more on the approximation methodology than on the power of the computer.

The uncreative mind can spot wrong answers, but it takes a very creative mind to spot wrong questions.

― __Anthony Jay__

People act as if the numerical error is so small as to be unimportant, while on the other hand encouraging great focus on computing power, where the implicit justification for that computing power is reducing numerical error. **To make matters worse with this corrupt logic, the most effective way to reduce numerical error is being starved of attention and resources, having little or no priority. The truth is that numerical errors are still too large, and increasing computing power is a lousy, inefficient way to make them smaller**. We are committed to a low-risk path that is also highly inefficient, because the argument is accessible to the most naïve people in the room.

What is important is seldom urgent and what is urgent is seldom important.

― __Dwight D. Eisenhower__

Another way of getting to the heart of the issue is the efficacy of using gains in computer power to get better solutions. Increases in computing power are a terrible way to produce better results; they are woefully inefficient. One simply needs to examine the rate of solution improvement using scaling arguments. **First, we need to recognize that practical problems converge quite slowly in terms of the application of enhanced computational resources**. For almost any problem of true real-world applicability, high-order convergence (higher than first-order) is never seen. Generally we might expect solutions to improve at first order in the inverse of the mesh size. If we look at three-dimensional, time-dependent problems and want to halve the numerical error, we need to apply at least 16 times the computing power. Usually convergence rates are less than first order, so the situation is actually even worse. *As a result we are investing an immense amount in progressing in an incredibly inefficient manner, and starving more efficient means of progress*. To put more teeth on the impact of current programs, the exascale initiative wants to compute things with fifty times the power, which will only reduce errors by slightly more than one half. So we will spend huge effort and billions of dollars making numerical errors smaller by half. What an utterly shitty return on investment! This is doubly shitty when you realize that so much more could be done to improve matters by other means.
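The scaling arithmetic above can be sketched in a few lines. This is a hedged illustration, not anything from the exascale program itself; the function names are mine, and the assumptions (first-order accuracy, three space dimensions plus time, so cost ~ error⁻⁴) are the ones stated in the paragraph:

```python
# Scaling argument: error ~ h**order, cost ~ h**(-dims), so cost ~ error**(-dims/order).
# For a first-order, 3-D, time-dependent calculation: order = 1, dims = 4.

def compute_factor_for_error_ratio(error_ratio, order=1.0, dims=4):
    """Multiple of computing power needed to scale the error by `error_ratio`."""
    return error_ratio ** (-dims / order)

def error_ratio_for_compute_factor(compute_factor, order=1.0, dims=4):
    """Fraction of the original error remaining after `compute_factor` more power."""
    return compute_factor ** (-order / dims)

# Halving the error costs 16x the compute.
print(compute_factor_for_error_ratio(0.5))   # 16.0
# A fifty-fold (exascale-like) increase leaves about 38% of the error:
# "slightly more than one half" removed.
print(error_ratio_for_compute_factor(50.0))  # ~0.38
```

The same two functions also show how much worse sub-first-order convergence makes things: passing `order=0.5` doubles the exponent, so halving the error then costs 256 times the compute.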

The first thing we need to recognize for progress is the relative efficacy of different modes of investment. The most effective way to progress in modeling and simulation is better models. Better models require work on theory and experiment, with deeply innovative thinking based on inspiration and on evidence of the limitations of current theory and modeling. For existing and any new models, the next step is solving the models numerically. This involves detailed and innovative numerical approximations of the models. **The power of modeling and simulation with computers is predicated on the ability to solve complex models that cannot be understood analytically (or only analytically under severe restrictions or assumptions). The fidelity of the numerical approximations is the single most effective way to improve results once modeling errors have been addressed**. Numerical approximations can improve the accuracy of simulations far more effectively than computer power.

Don’t tell me about your effort. Show me your results.

― __Tim Fargo__

*So why are we so hell-bent on investing in the more inefficient manner of progressing? Because of our mindless addiction to Moore’s law, which has provided improvements in computing power over the last fifty years for what has, in effect, been free for the modeling and simulation community.*

**Our modeling and simulation programs are addicted to Moore’s law as surely as a crackhead is addicted to crack**. Moore’s law has provided a means to progress without planning or intervention for decades; time passes and capability grows almost as if by magic. The problem we have is that Moore’s law is dead, and rather than moving on, the modeling and simulation community is attempting to raise the dead. By this analogy, the exascale program is basically designed to create zombie computers that completely suck to use. They are not built to get results or do science; they are built to get exascale performance on some sort of bullshit benchmark.

This gets to the core of the issue: our appetite for risk and failure. Improving numerical approximations is risky and depends on breakthroughs and innovative thinking. Moore’s law has sheltered the modeling and simulation community from risk and failure in computing hardware for a very long time. **If you want innovation you need to accept risk and failure; innovation without risk and failure simply does not happen**. We are intolerant of risk and failure as a society, and this intolerance dooms innovation, literally strangling it to death in its crib. Moore’s law allowed progress without risk, as if it came for free. The exascale program will be the funeral pyre for Moore’s law, and we are threatening the future of modeling and simulation with our unhealthy addiction to it.

If failure is not an option, then neither is success.

― __Seth Godin__

There is only one thing that makes a dream impossible to achieve: the fear of failure.

― __Paulo Coelho__

The key thing to realize about this discussion is that improving numerical approximations is risky and highly prone to failure. You can invest in improving numerical approximations for a very long time without any seeming progress until you get a quantum leap in performance. The issue in the modern world is the lack of predictability of such improvements. **Breakthroughs cannot be predicted and cannot be relied upon to happen on a regular schedule. A breakthrough requires innovative thinking and a lot of trial and error. The ultimate quantum leap in performance is founded on many failures and false starts**. If these failures are engaged in a mode where we continually learn and adapt our approach, we eventually solve problems. The problem is that this must be approached as an article of faith, and cannot be planned. Today’s management environment is completely intolerant of such things, and demands continual results. The result is squalid incrementalism and an utter lack of innovative leaps forward.

Civilizations… cannot flourish if they are beset with troublesome infections of mistaken beliefs.

― __Harry G. Frankfurt__

*What is the payoff for methods improvement?*

If we improve a method we can achieve significantly better results without a finer computational mesh. This yields a large saving in computational cost as long as the improved method isn’t too expensive. As I mentioned before, one needs 16 times the computational resources to knock the error down by half for a 3-D, time-dependent calculation. **If I produce a method with half the error, it is more efficient as long as it is less than 16 times as expensive**. In other words, the method can use up to 16 times the computational resources and still come out ahead. This is a lot of headroom to work with!

The most dangerous ideas are not those that challenge the status quo. The most dangerous ideas are those so embedded in the status quo, so wrapped in a cloud of inevitability, that we forget they are ideas at all.

― __Jacob M. Appel__

For some cases the payoff is far more extreme than these simple arguments suggest. The archetype of this extreme payoff is the difference between first- and second-order monotone schemes. For general fluid flows, second-order monotone schemes produce results that are almost infinitely more accurate than first-order schemes. **The reason for this stunning claim is the acute difference in results arising from the form of the truncation error, expressed via the modified equations (the equations solved more accurately by the numerical methods)**. For first-order methods there is a large viscous effect that makes all flows laminar. Second-order methods are necessary for simulating high Reynolds number turbulent flows because their dissipation doesn’t interfere directly with the fundamental physics.
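A minimal sketch of the modified-equation effect on linear advection, using first-order upwind against Lax-Wendroff. Lax-Wendroff is only a convenient second-order stand-in here (it is not monotone, unlike the schemes discussed above), but it illustrates the point: the upwind scheme's leading truncation error acts as a large numerical viscosity, roughly a·Δx·(1 − CFL)/2, that smears the solution, while the second-order scheme's leading error is dispersive and far smaller:

```python
import numpy as np

# Advect a smooth Gaussian one full period (speed a = 1, periodic domain)
# with two consistent schemes and compare the L1 errors.
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
cfl = 0.5
steps = int(round(n / cfl))            # dt = cfl*dx, so this is one period
u0 = np.exp(-200.0 * (x - 0.5) ** 2)   # smooth Gaussian pulse

up, lw = u0.copy(), u0.copy()
for _ in range(steps):
    # First-order upwind: modified equation carries a diffusion term
    up = up - cfl * (up - np.roll(up, 1))
    # Second-order Lax-Wendroff: leading error is dispersive, not diffusive
    lw = (lw - 0.5 * cfl * (np.roll(lw, -1) - np.roll(lw, 1))
          + 0.5 * cfl**2 * (np.roll(lw, -1) - 2.0 * lw + np.roll(lw, 1)))

# After one period the exact solution is the initial condition.
err_up = np.mean(np.abs(up - u0))
err_lw = np.mean(np.abs(lw - u0))
print(err_up, err_lw)  # the upwind error is far larger
```

At this resolution the upwind result is visibly flattened by its artificial viscosity; the second-order result is nearly indistinguishable from the exact answer, which is the one-dimensional shadow of the laminar-versus-turbulent distinction made above.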

As technology advances, the ingenious ideas that make progress possible vanish into the inner workings of our machines, where only experts may be aware of their existence. Numerical algorithms, being exceptionally uninteresting and incomprehensible to the public, vanish exceptionally fast.

― __Nick Trefethen__

We don’t generally have good tools for numerical error estimation in non-standard (or unresolved) cases. One distillation of the key problems is found in Banks, Aslam, and Rider, where sub-first-order convergence is described and analyzed for solutions of a discontinuous problem for the one-way wave equation. The key result in this paper is the nature of mesh convergence for discontinuous or non-differentiable solutions. **In this case we see sub-linear, fractional-order convergence. The key result is a general relationship between the convergence rate and the formal order of accuracy of the method, p, namely a rate of p/(p+1).** This comes from the analysis of the solution to the modified equation including the leading-order truncation error. For nonlinear discontinuous solutions, the observed result is first order, where one establishes a balance between the regularization and the self-steepening in shock waves. At present there is no theory of what this looks like. Seemingly this system of equations could be analyzed as we did for the linear equations; perhaps this might provide guidance for numerical method development. It would be worthy progress if we could analyze such systems more theoretically, providing a way to understand actual accuracy.
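The p/(p+1) relationship is easy to check numerically. A sketch, assuming the same setup as the paper's simplest case: first-order upwind (p = 1) advecting a square wave one period, where the measured L1 convergence rate should land near 1/2 rather than the formal first order:

```python
import numpy as np

def upwind_l1_error(n, cfl=0.5):
    """L1 error of first-order upwind after one period on a square wave."""
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = np.where((x > 0.25) & (x < 0.75), 1.0, 0.0)  # discontinuous data
    exact = u.copy()                                  # one period returns it
    for _ in range(int(round(n / cfl))):              # dt = cfl*dx, time = 1
        u = u - cfl * (u - np.roll(u, 1))
    return np.mean(np.abs(u - exact))

e_coarse = upwind_l1_error(100)
e_fine = upwind_l1_error(200)
rate = np.log2(e_coarse / e_fine)  # observed convergence rate
print(rate)  # near 0.5 = p/(p+1) for p = 1, not the formal order 1
```

Running the same experiment with a smooth initial condition recovers the formal first-order rate, which is exactly the gap between formal and actual accuracy the paragraph describes.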

Another key limitation of existing theory is chaotic solutions, classically associated with turbulent or turbulent-like flows. These solutions are extremely (perhaps even infinitely) sensitive to initial conditions. It is impossible to get convergence results for point values, and the only convergence is for integral measures. **These measures generally converge very slowly and are highly mesh-dependent**. This issue is huge in high performance computing. One area of study is *measure-valued solutions*, where convergence is examined statistically. This is a completely reasonable approach to convergence of general solutions to hyperbolic PDEs.

The much less well-appreciated aspect comes with the practice of direct numerical simulation (DNS) of turbulence (really DNS of anything). **One might think that having a DNS means the solution is completely resolved and highly accurate. It is not!** Indeed, DNS results are not highly convergent even for integral measures. Generally speaking, one gets first-order accuracy or less under mesh refinement. The problem is the highly sensitive nature of the solutions and the scaling of the mesh with the Kolmogorov scale, which is a mean-squared measure of the turbulence scale. Clearly there are effects that come from scales much smaller than the Kolmogorov scale, associated with highly intermittent behavior. To fully resolve such flows, the scale of turbulence would need to be described by the maximum norm of the velocity gradient instead of the RMS.

If you want something new, you have to stop doing something old

― __Peter F. Drucker__

When we get to the real foundational aspects of numerical error and limitations, we come to the fundamental theorem of numerical analysis. **For PDEs it applies only to linear equations, and states that for a consistent approximation, stability is equivalent to convergence**. Everything is tied to this. Consistency means you are solving the equations with a valid and correct approximation; stability means getting a result that doesn’t blow up. What is missing is the theoretical extension to more general nonlinear equations, along with deeper relationships among accuracy, consistency, and stability. This theorem was derived back in the early 1950s, and we probably need something more, but there is no effort or emphasis on this today. We need great effort and immensely talented people to make progress. While I’m convinced that we have no shortage of talent today, we lack effort, and perhaps don’t develop or encourage the talent appropriately.
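A toy demonstration of why the stability half of the theorem has teeth. Both schemes below are consistent approximations to the one-way wave equation, but forward-time centered-space (FTCS) is unconditionally unstable, so round-off in its high-frequency modes is amplified every step and the "solution" explodes, while upwind (stable for CFL ≤ 1) stays bounded:

```python
import numpy as np

# One-way wave equation u_t + u_x = 0 on a periodic domain,
# square-wave initial data (rich in the high frequencies FTCS amplifies).
n, cfl, steps = 100, 0.5, 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.where((x > 0.25) & (x < 0.75), 1.0, 0.0)

ftcs, up = u0.copy(), u0.copy()
for _ in range(steps):
    # Consistent but unstable: centered space difference, forward time
    ftcs = ftcs - 0.5 * cfl * (np.roll(ftcs, -1) - np.roll(ftcs, 1))
    # Consistent and stable (a convex combination for cfl <= 1)
    up = up - cfl * (up - np.roll(up, 1))

print(np.max(np.abs(ftcs)))  # enormous: the instability has taken over
print(np.max(np.abs(up)))    # bounded by the initial data, at most 1
```

Consistency alone buys nothing; only the consistent *and* stable scheme converges, which is the Lax-Richtmyer content in one screenful.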

Beyond the issues with hardware emphasis, today’s focus on software is almost equally harmful to progress. Our programs work steadfastly on maintaining large volumes of source code full of the ideas of the past. Instead of building on the theory, methods, algorithms and ideas of the past, we are simply worshiping them. This is the construction of a false ideology. **We would do far greater homage to the work of the past if we were building on that work. The theory is not done, by a long shot.** Our current attitudes toward high performance computing are a travesty, embodied in a national program that makes the situation worse only to serve the interests of the willfully naïve. We are undermining the very foundation upon which the utility of computing is built. We are going to end up wasting a lot of money and getting very little value for it.

We now live in a world where counter-intuitive bullshitting is valorized, where the pose of argument is more important than the actual pursuit of truth, where clever answers take precedence over profound questions.

― __Ta-Nehisi Coates__

Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39, no. 9 (2002): 821-841.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit large eddy simulation: computing turbulent fluid dynamics. Cambridge University Press, 2007.

Banks, Jeffrey W., T. Aslam, and William J. Rider. “On sub-linear convergence for linearly degenerate waves in capturing schemes.” Journal of Computational Physics 227, no. 14 (2008): 6985-7002.

Fjordholm, Ulrik S., Roger Käppeli, Siddhartha Mishra, and Eitan Tadmor. “Construction of approximate entropy measure-valued solutions for hyperbolic systems of conservation laws.” Foundations of Computational Mathematics (2015): 1-65.

Lax, Peter D., and Robert D. Richtmyer. “Survey of the stability of linear finite difference equations.” Communications on pure and applied mathematics 9, no. 2 (1956): 267-293.

David Lloyd-Jones

said: One is reminded that the improvement in horseflesh times over eight furlongs over the past century is zilch.

Bill Rider

said: What is the potential here? Seriously?

Brian Taylor

said: If you’re going to mention the benefits of improved numerical methods over brute-force increases in grid resolution, you could do worse than to discuss Thornber’s low Mach number correction, e.g. , or the benefits of using the Piecewise Parabolic Boltzmann method for mixing problems, e.g. .