Recently, I wrote about priorities in code development, putting accuracy and efficiency last on the list. Part of the not-so-implied critique is that the relative emphasis in development today is very nearly the opposite of my list. High performance computing and applied mathematics are today mostly concerned with efficiency (first) and accuracy (second). I believe these priorities do us a disservice, represent a surplus of hubris, and fail to recognize some rather bold unfinished business with respect to stability theory.

All that it is reasonable to ask for in a scientific calculation is stability, not accuracy.

–Nick Trefethen

I thought about what I wrote a few weeks ago and realized that when I say robust, I mean almost the same thing as stable. Well, almost the same is not the same. Robust is actually a stronger statement, since it implies that the answer is useful in some sense. A stable calculation can certainly produce utter and complete gibberish (it may be even more dangerous to produce realistic-looking, but qualitatively and quantitatively useless, results). I might posit that robustness could be viewed as a stronger form of stability, one that provides a guarantee that the result should not be regarded as bullshit.

Perhaps this is the path forward I'm suggesting. The theory of PDE stability is rather sparse and barren compared to ODE theory. PDE stability is really quite simple conceptually, while ODE stability theory is rich with detail and nuance. One has useful and important concepts such as A-stability, L-stability and so on. There are appealing concepts such as relative stability and order stars, which have no parallel in PDE stability. I might be so bold as to suggest that PDE stability theory is incomplete and unfinished. We have moved toward accuracy and efficiency and never returned to finish the foundation they should be built upon. We are left with a field that has serious problems with determining the quality and correctness of solutions.
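To make the ODE-side concepts concrete, here is a minimal sketch (my own illustration, not drawn from any of the referenced papers) that spot-checks A-stability and L-stability through the stability function R(z) of two standard implicit one-step methods:

```python
# A minimal sketch (my own illustration) of the ODE stability concepts
# named above. For the test problem y' = lambda*y, a one-step method
# produces y_{n+1} = R(z) * y_n with z = lambda*dt; the stability
# function R determines the method's linear stability behavior.

def R_backward_euler(z):
    # Backward Euler: y_{n+1} = y_n + z*y_{n+1}  =>  R(z) = 1/(1 - z)
    return 1.0 / (1.0 - z)

def R_trapezoidal(z):
    # Trapezoidal rule: R(z) = (1 + z/2)/(1 - z/2)
    return (1.0 + z / 2.0) / (1.0 - z / 2.0)

# A-stability: |R(z)| <= 1 for every z with Re(z) <= 0.
# Spot-check on a sample of left-half-plane points.
samples = [complex(re, im) for re in (-0.1, -1.0, -10.0, -100.0)
           for im in (0.0, 1.0, 10.0)]
a_stable = all(abs(R(z)) <= 1.0 + 1e-12
               for R in (R_backward_euler, R_trapezoidal)
               for z in samples)

# L-stability additionally demands R(z) -> 0 as Re(z) -> -infinity,
# i.e., very stiff components are damped, not merely bounded.
z_far = complex(-1e8, 0.0)
print(a_stable)                        # both pass the A-stability spot check
print(abs(R_backward_euler(z_far)))   # tiny: backward Euler is L-stable
print(abs(R_trapezoidal(z_far)))      # near 1: trapezoidal is not L-stable
```

The contrast in the last two lines is the whole point of L-stability: both methods are A-stable, yet only backward Euler annihilates stiff transients rather than letting them oscillate undamped.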

Maybe a useful concept would be robust stability. What are the conditions under which we can expect the results to be physical and nonlinearly stable? Instead, the concept of robustness often gets a bad name because it implies tricks and artifices used to produce results reliably. A key point is that robustness is necessary for codes to do useful work, yet doing the work of making methods robust is looked down upon. Doing this sort of work successfully resulted in the following backhanded compliment/slight being thrown my way:

you’re really good at engineering methods.

Thanks, I think. It sounds a lot like,

you’re a really good liar

In thinking about numerical methods, perhaps the preeminent consideration is stability. As I stated, it is foundational for everything. Despite its centrality to the discussion today, stability is a rather late arrival to the basic repertoire of the numerical analyst, only being invented in 1947, while many basic concepts and methods precede it. Moreover, its invention in numerical analysis is extremely revealing about the fundamental nature of computational methods. Having computers and problems to solve with them drives the development of methods.

Recently I gave a talk on the early history of CFD and did a bit of research on the origin of some basic concepts. One of my suppositions was that numerical stability theory for ODEs must have preceded that for PDEs. Instead, this was not true! PDEs came first. The reason for this is the availability and use of automatic computation (i.e., computers). Because of the applications of PDEs to important defense work during and after World War II, the problem of stability had to be confronted. Large-scale use of computers for integrating ODEs didn't come along until a few years later. The origins of stability theory and its recognition are related in a marvelous paper by Dahlquist [Dahlquist], which I wrote about earlier. There I expressed my annoyance at the style of mathematics papers that obscures the necessary human element in science in what I believe to be a harmful manner. The lack of a proper narrative allows the history and impact of applied math to be lost in the sands of time.

The PDE stability theory came first, clearly articulated by John von Neumann: first communicated during lectures in February 1947, and in a report that same year [VNR47]. These same concepts appeared in print, albeit obliquely, in von Neumann and Goldstine [VNG47] and in Crank and Nicolson's classic [CN47]. Joe Grcar gives a stunning and full accounting of the work of von Neumann and Goldstine and its impact on applied mathematics and computing in SIAM Review [Grcar]. Since von Neumann had access to computing and saw its power, he saw stability issues first hand, and tackled them. He had to; it bit him hard in 1944 [MR14]. His stability analysis methodology is still the gold standard for PDEs.
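Von Neumann's method substitutes a single Fourier mode into the difference scheme and demands that its amplification factor stay bounded by one in magnitude. A minimal Python sketch (my own illustration, applying the standard analysis to the textbook forward-time centered-space scheme for the heat equation u_t = nu*u_xx) recovers the classic stability limit r = nu*dt/dx^2 <= 1/2:

```python
import numpy as np

def amplification_factor(r, theta):
    # Substituting the Fourier mode u_j^n = g**n * exp(i*j*theta) into
    # the FTCS scheme for u_t = nu*u_xx gives
    #   g(theta) = 1 - 4*r*sin(theta/2)**2,
    # where r = nu*dt/dx**2 and theta = k*dx is the mode's phase angle.
    return 1.0 - 4.0 * r * np.sin(theta / 2.0) ** 2

# Sweep the full range of resolvable phase angles on the grid.
thetas = np.linspace(0.0, np.pi, 2001)

def is_stable(r):
    # Von Neumann condition: |g(theta)| <= 1 for every Fourier mode.
    return bool(np.all(np.abs(amplification_factor(r, thetas)) <= 1.0 + 1e-12))

print(is_stable(0.5))   # True: r = 1/2 is exactly the stability limit
print(is_stable(0.51))  # False: the worst mode (theta = pi) has |g| > 1
```

The worst offender is always the highest-frequency mode the grid can represent (theta = pi), which is why a marginally too-large time step produces the characteristic sawtooth blowup.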

Another theme worth restating is the role that the (mis-)classification of the early reports had in muddying the history. LA-657, the report containing the first mention of stability in numerical analysis, remained classified until 1993 even though its content is clearly unclassified in nature. As it turned out, the official unveiling of the ideas regarding the stability of PDEs came in two papers in 1950 [VNR50, CFVN50].

As Dahlquist relays, the PDE world had a head start, and other important work was conducted, perhaps most significantly the equivalence theorem of Lax [LaxEquiv]. This theorem was largely recreated independently by Dahlquist two or three years later (he reports that Lax gave the theory in a seminar in 1953). The equivalence theorem states that the combination of stability and consistency is equivalent to convergence. To be rather flip about it: stability means getting an answer, and consistency means solving the right problem.
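The theorem is easy to watch in action. The sketch below (my own illustration; the scheme, final time, and thresholds are choices of mine) integrates the heat equation with the consistent FTCS scheme: at a stable mesh ratio, grid refinement drives the error down, while at an unstable ratio the very same consistent scheme fails to converge at all.

```python
import numpy as np

def ftcs_heat(n, r, steps):
    # FTCS update for u_t = u_xx on [0,1] with u(0)=u(1)=0 and
    # initial condition u0 = sin(pi*x), whose exact solution is
    # u(x,t) = exp(-pi**2 * t) * sin(pi*x).
    x = np.linspace(0.0, 1.0, n + 1)
    u = np.sin(np.pi * x)
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return x, u

def max_error(n, r):
    dx = 1.0 / n
    dt = r * dx * dx            # mesh ratio r = dt/dx**2 held fixed
    steps = int(0.1 / dt)       # integrate to (roughly) t = 0.1
    x, u = ftcs_heat(n, r, steps)
    exact = np.exp(-np.pi**2 * steps * dt) * np.sin(np.pi * x)
    return float(np.max(np.abs(u - exact)))

# Stable (r = 0.4 < 1/2): consistency + stability => convergence.
errs = [max_error(n, 0.4) for n in (20, 40, 80)]
print(errs[0] > errs[1] > errs[2])  # True: error shrinks under refinement

# Unstable (r = 0.6 > 1/2): same consistent scheme, no convergence.
print(max_error(80, 0.6) > 1e3)     # True: roundoff noise is amplified
```

Note that in the unstable run the smooth initial data never excites the bad mode directly; floating-point rounding does, and the scheme amplifies that noise without bound, which is exactly why stability, not consistency alone, is the load-bearing hypothesis.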


From there the ODE theory flowered and grew into the impressive tapestry we have today. A meaningful observation is that we have a grasp of the analytical theory for the solution of ODEs that still eludes us with PDEs. Perhaps the PDE theory would flow like water from a breaking dam were such an analytical theory available. I'm not so sure. Maybe the ODE theory is more the consequence of the efforts of a few people, or of a culture that was different from the culture responsible for the PDE work. It's worth thought and discussion.

The investigator should have a robust faith – and yet not believe.

–Claude Bernard

[LaxEquiv] Lax, Peter D., and Robert D. Richtmyer. “Survey of the stability of linear finite difference equations.” Communications on Pure and Applied Mathematics 9, no. 2 (1956): 267-293.

[VNG47] Von Neumann, John, and Herman H. Goldstine. “Numerical inverting of matrices of high order.” Bulletin of the American Mathematical Society 53, no. 11 (1947): 1021-1099.

[Dahlquist] Dahlquist, Germund. “33 years of numerical instability, Part I.” BIT Numerical Mathematics 25, no. 1 (1985): 188-204.

[CN47] Crank, John, and Phyllis Nicolson. “A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type.” Mathematical Proceedings of the Cambridge Philosophical Society 43, no. 1 (1947): 50-67.

[CFVN50] Charney, Jule G., Ragnar Fjørtoft, and John von Neumann. “Numerical integration of the barotropic vorticity equation.” Tellus 2, no. 4 (1950): 237-254.

[VNR50] Von Neumann, John, and Robert D. Richtmyer. “A method for the numerical calculation of hydrodynamic shocks.” Journal of Applied Physics 21, no. 3 (1950): 232-237.

[VNR47] Von Neumann, John, and Robert D. Richtmyer. “On the numerical solution of partial differential equations of parabolic type.” Los Alamos Scientific Laboratory Report LA-657, December 1947.

[Grcar] Grcar, Joseph F. “John von Neumann’s analysis of Gaussian elimination and the origins of modern numerical analysis.” SIAM Review 53, no. 4 (2011): 607-682.

[MR14] Mattsson, Ann E., and William J. Rider. “Artificial viscosity: back to the basics.” International Journal for Numerical Methods in Fluids (2014). DOI 10.1002/fld.3981