I’ll post a footnote to my thoughts of a week ago (it wasn’t my plan A). It comes from trying to piece together the early history of CFD, and from my frustration with the lack of detail and context in the scientific and mathematical writing. Upon reflection I think it is actually a deeper problem with deeper consequences.

“A generation which ignores history has no past — and no future.” ― Robert A. Heinlein

The title is the proverbial chicken-and-egg question, but for each case there is a definite chicken or egg answer. The problem is that far too often the literature does not contain the information by which the answer may be determined. Our community history is thus lost, and lessons about how knowledge was obtained are not passed along.

“We make our own monsters, then fear them for what they show us about ourselves.” ― Mike Carey & Peter Gross

The question in the title concerns how advances in computational science relate to the math used to explain them. The core of the question is whether a method is first demonstrated, or a problem first encountered, before the mathematical analysis explains why. My general belief is that most of the time the computed results precede the math. The math provides rigor, explanation, and bounds for applying techniques. This bears on where the balance of effort should be placed in driving innovative solutions. Generally speaking, I would posit that computational experimentation should come first, followed by mathematical rigor, followed by more experimentation, and so on… This structure is often hidden by the manner in which mathematics is presented in the literature.

In developing the history of CFD I am trying to express a broader perspective than currently exists on the topic. Part of that perspective is defining the foundation that existed before computational science was even a conceptual leap in Von Neumann’s mind. I knew that a number of numerical methods existed, including the integration of ODEs (the work of Runge, Kutta, Adams, Bashforth, etc.). One of Von Neumann’s great contributions to numerical methods was stability analysis, and now I’m convinced it was even greater than I had imagined.
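To make the technique concrete, here is a minimal sketch (my own illustration, not from Von Neumann’s papers) of his analysis applied to the forward-time, centered-space (FTCS) scheme for linear advection, u_t + a u_x = 0. Substituting a Fourier mode u_j^n = g^n e^{ijθ} into the scheme yields an amplification factor g(θ); the scheme is stable only if |g(θ)| ≤ 1 for every mode θ.

```python
import numpy as np

def ftcs_amplification(c, theta):
    """Von Neumann amplification factor for the FTCS scheme applied to
    u_t + a u_x = 0, with Courant number c = a*dt/dx.
    The update u_j^{n+1} = u_j^n - (c/2)*(u_{j+1}^n - u_{j-1}^n),
    with u_j^n = g^n * exp(i*j*theta), gives g(theta) = 1 - i*c*sin(theta)."""
    return 1.0 - 1j * c * np.sin(theta)

thetas = np.linspace(0.0, np.pi, 181)
g = ftcs_amplification(0.5, thetas)
# |g|^2 = 1 + c^2 * sin^2(theta) >= 1, so FTCS amplifies every oblique
# mode: the scheme is unstable for any c > 0, exactly what the analysis
# predicts and what a trial computation reveals by blowing up.
print(np.max(np.abs(g)))
```

The punch line is the chicken-and-egg point of this post: the blow-up is what a computation shows you first; the amplification factor is the math that explains it afterwards.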

I had incorrectly assumed that ODE stability theory preceded Von Neumann’s work, but instead it came in his wake. To me this is utterly remarkable, because ODE theory is much simpler. Note that a few weeks ago I used it to introduce the analysis of methods and Von Neumann’s stability technique; historically, however, the more difficult thing was done first.

Think about it. None of the precursors to the modern era in ODE integration had explored the time stability of the methods. The issue was clearly present and surely observed. It took the availability of (mechanical) computers to generate the impetus to study the topic. Perhaps the human computing of the earlier era was too unreliable for the instability to warrant a deeper mathematical investigation. The problem is that the writing about the topic shines little or no light on the reasoning. None. This comes down to the style of the writing, which provides no context for the work; instead it hops right into the math. Any context in the literature seems to come only when the work is completed and the author is famous (and old). Then the work is discussed in a historical overview, which provides details that are completely absent from the earlier (technical) works. If the author dies early (e.g., Von Neumann) no such retrospective is available.
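The time instability those early integrators would have met is easy to reproduce. A minimal sketch (my own illustration, not the historical computation): forward Euler applied to the decaying test problem y′ = −λy has amplification factor 1 − hλ, so the numerical solution decays only when |1 − hλ| ≤ 1, i.e. h ≤ 2/λ; take the step a bit larger and the solution oscillates and explodes.

```python
def forward_euler(lam, h, steps, y0=1.0):
    """Integrate y' = -lam*y with forward Euler from y(0) = y0.
    Each step multiplies y by (1 - h*lam); the exact solution decays,
    but the numerical one decays only when |1 - h*lam| <= 1."""
    y = y0
    for _ in range(steps):
        y = y - h * lam * y  # y_{n+1} = (1 - h*lam) * y_n
    return y

lam = 10.0
stable = forward_euler(lam, 0.1, 100)    # h*lam = 1.0: within the bound
unstable = forward_euler(lam, 0.25, 100)  # h*lam = 2.5: outside the bound
print(abs(stable), abs(unstable))
```

Anyone stepping such an equation by hand or by machine sees this behavior immediately; the absolute-stability theory that names and bounds it came later, which is precisely the chicken-and-egg order the literature obscures.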

“For balance to be restored, lessons must be learned.” ― Sameh Elsayed

The true reasoning and inspiration behind many of the great works of numerical mathematics is hidden by the accepted practices of the field. This is counter-productive and antithetical to pedagogical discourse. Too often in the modern literature, work is done without any reason to believe it will have utility in actual computing, and in many cases it never does. In my opinion, over the past several decades the literature has moved away from numerical analysis that demonstrates practical utility. In reading the older ODE literature I see that this is an amplification of earlier tendencies.

This personally infuriates me because I often find no reason to actually digest the detailed mathematics without some sense that it will be useful. It also encourages the publication of results that have no practical value. This frustrating state of affairs is at the core of my comments last week, which, in hindsight, may have been aimed at the wrong target.

What is lost from the literary record is profound. Often the greatest discoveries in applied math come from trying a well-crafted heuristic on a difficult problem and finding that it works far better than could be expected. The math then comes in to provide an ordered structural explanation for the empirical observation. Lost in the fray is the fact that the device was heuristic, and perhaps a leap or an inspiration from some other source. In other cases progress comes from a failure or problem with something that should work. We explain why it doesn’t in a rigorous fashion with a barrier theorem. These barrier theorems are essential to progress. The math then forms the foundation for the next leap. The problem is that the process is undocumented, and this ill prepares the uninitiated for how to make the next leap. Experimentation and heuristics are key, and often the math only follows.

Worse yet, this tendency is only getting more acute. I’m not sure why the literature is like this. Is it that people are too insecure to admit the pedestrian events that led to creation? Do parts of the work just seem too close to engineering? I think these tendencies lead to bigger problems than simply historical inaccuracy and incompleteness; they lead to less progress and less innovation. This tendency is actually holding back the field of numerical methods for scientific computing.

I’ve noted a general lack of progress with algorithms in the last 20–30 years. Perhaps part of the issue is the lack of priority given to simply experimenting with methods and trying things, then doing the math. Instead there is too much just doing math, or, even worse, pursuing only the methods that produce the math you already know how to do. We need to find methods that work, and then invent math that explains why they work. A more fruitful path would involve working hard to solve problems that we don’t know how to attack, finding some fruitful avenues for progress, and then trying to systematically explain that progress. Along the way we might try being a bit more honest about how the work was accomplished.

“In science if you know what you are doing you should not be doing it. In engineering if you do not know what you are doing you should not be doing it. Of course, you seldom, if ever, see the pure state.” – Richard Hamming
