Linearity Breeds Contempt

— Peter Lax

The development of high-resolution numerical methods for hyperbolic conservation laws has been an outstanding achievement for computational physics. These methods have provided an essential balance of accuracy (fidelity, or resolution), physical admissibility, and computational tractability. While these methods were a tremendous achievement, their progress has stalled in several respects. After a flurry of development, the pace has slowed and adoption of new methods into “production” codes has come to a halt. There are good reasons for this, worth exploring if we wish to see whether progress can be restarted. This post builds upon the two posts from last week, which describe tools that may be used to develop methods.

Things have stalled for the very reasons that the adoption was so rapid and complete. The sort of radical success that the high-resolution (monotonicity-preserving) methods experienced may be difficult, if not impossible, to replicate with new methods. The biggest thing the monotonicity-preserving methods did was change the fundamental nature of the numerical viscosity: from a form that inherently computed laminar, syrupy-looking flows to one that allows flows to take on the energetic character that gives fluid dynamics its power and beauty.

Why did the revolution simply stop at the rise from first- to second-order methods? One reason is probably deeply connected to the fundamental nature of nonlinear fluid systems and their inherent dissipation when viscosity is small. As it turns out, the canonical small-viscosity structures in flows, shocks and turbulence, exhibit the same general scaling of dissipation. Both show a dissipation proportional to the cube of the change in velocity in the direction of the flow (the normal, or longitudinal, velocity). The fundamental numerical error for a second-order method in conservation (or control-volume) form produces the correct asymptotic nonlinear dissipation of flow energy into heat. The monotonicity-preserving numerical methods allowed this term to achieve prominence in solutions, helping to produce more physically appealing results.
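The cube scaling can be made concrete with two standard results (stated here as a sketch, not drawn from the post itself). For Burgers’ equation, the kinetic energy dissipated across a shock depends only on the cube of the jump, independent of the viscosity; for turbulence, Kolmogorov’s 4/5 law makes the analogous statement for the longitudinal velocity increment:

```latex
% Burgers' equation u_t + (u^2/2)_x = 0: a shock with jump
% \Delta u = u_L - u_R > 0 dissipates kinetic energy at the rate
\frac{d}{dt}\int \frac{u^2}{2}\,dx \;=\; -\frac{(\Delta u)^3}{12},
% while Kolmogorov's 4/5 law for the longitudinal velocity increment
% \delta u_L(r), with mean dissipation rate \varepsilon, reads
\left\langle \big(\delta u_L(r)\big)^3 \right\rangle \;=\; -\frac{4}{5}\,\varepsilon\, r .
```

In both cases the dissipation is set by the cube of a longitudinal velocity difference, which is the same form the leading truncation error of a second-order conservative scheme takes.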

The issue is that newer methods need to cure smaller ills in the current solutions. Sometimes monotonicity is not the right thing to enforce: extreme values in the solution are damped severely, and shapes are greatly distorted. These problems are important, but not nearly as profound as the set of issues that monotonicity-preserving methods cured. We cannot expect the same level of transformative solution improvement. The case for adopting any newer method is bound to be more subtle and less clear-cut. Existing approaches that improve on monotonicity preservation are generally quite expensive without providing a commensurate improvement in solution quality.

Moreover, the improvements in the solution from the new methods are generally highly subjective and lack a quantitative edge. At best, the case for using them is far from clear-cut. Adding to the problems are three key aspects of numerical analysis that further cloud and undermine the case for advancing. First, solutions inherently lack the smoothness needed to give high-order accurate methods the full benefit of their properties. Second, the stability of the newer methods is substantially weaker than that of the adopted generation of methods, which provide almost as much robustness as the first-order methods they easily displaced. The newer methods are far too fragile for production codes. Part of the reason for this is the tendency to eschew ever using the first-order approximation, even locally. Third, the nonlinear stability principle associated with the newer methods is far weaker than monotonicity preservation and sometimes even precipitates new modes of instability in the solution.

Three strikes and you’re out

— A rule in Baseball, the “American Pastime”

Now we can revisit the three main issues for the new methods and provide suggestions that may allow progress to begin anew. The three shortcomings I identified above are the following:

- Lack of smoothness, and full utility for high-order accuracy
- Removal of the first-order method as a robustness mechanism
- A much weaker nonlinear stability principle, and new potential instabilities

It might be useful to revisit some old advice from Bell, Colella and Trangenstein [BCT89] before unveiling my suggestions on principles to use in developing high-resolution methods using monotonicity preservation:

- Use a good foundational high-order method, upstream centered, or high-order centered
- Use an entropy satisfying Riemann solver
- Add additional dissipation at strongly nonlinear or degenerate discontinuities.
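As a concrete illustration of these maxims, here is a minimal sketch, entirely my own and not taken from [BCT89], of a minmod-limited MUSCL step for Burgers’ equation with an entropy-satisfying Godunov Riemann solver. All function names are illustrative:

```python
# Hedged sketch: a minmod-limited MUSCL step for Burgers' equation
# u_t + (u^2/2)_x = 0 on a periodic grid, illustrating the BCT maxims:
# a second-order base method, an entropy-satisfying Riemann solver,
# and limiting that reduces to first order at discontinuities.
import numpy as np

def minmod(a, b):
    """Minmod limiter: zero at extrema/jumps, smaller slope otherwise."""
    return np.where(a * b > 0.0,
                    np.where(np.abs(a) < np.abs(b), a, b),
                    0.0)

def godunov_flux(ul, ur):
    """Entropy-satisfying Godunov flux for the convex flux f(u) = u^2/2."""
    fl, fr = 0.5 * ul**2, 0.5 * ur**2
    return np.where(ul > ur,                        # shock: pick upwind side
                    np.where(ul + ur > 0.0, fl, fr),
                    np.where(ul > 0.0, fl,          # rarefaction
                             np.where(ur < 0.0, fr,
                                      0.0)))        # transonic fan: f(0) = 0

def muscl_step(u, dx, dt):
    """One forward-Euler MUSCL update with periodic boundaries."""
    du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slope
    ul = u + 0.5 * du                    # left state at interface i+1/2
    ur = np.roll(u - 0.5 * du, -1)       # right state at interface i+1/2
    f = godunov_flux(ul, ur)
    return u - dt / dx * (f - np.roll(f, 1))
```

Note how the minmod limiter zeroes the reconstructed slope at extrema and jumps, which is precisely the first-order “safety net” behavior, while the Godunov flux handles the transonic rarefaction correctly instead of admitting an entropy-violating expansion shock.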

These maxims are closely related to the reasons why this class of methods has been so successful:

- Solutions are monotonicity-preserving without the linear, laminar viscosity, which is replaced by a hyperviscosity that provides more energetic, physically meaningful solutions,
- It unveils the prominence of the control-volume term, which provides an asymptotically appropriate stabilizing nonlinear dissipation term,
- When the solution is significantly under-resolved or in danger of becoming unphysical, a first-order method serves as a “safety net”.

Taking all of these considerations into account, and thinking about how to move past the current state of the art while addressing the weaknesses of current methods, I can write down my three suggestions for advancing beyond monotonicity preservation (keeping BCT’s advice as a base):

- Base the nonlinear stability principle on monotonicity preservation, with detection of extrema and careful relaxation of those conditions
- Continue to use the high-order base method unless it produces a distinctly monotonicity-violating solution.
- Carefully analyze strongly nonlinear discontinuities to assure that the approximation is locally nonlinearly stable. If the solution is highly under-resolved, give up high-order accuracy and fall back to first order.
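The first suggestion can be sketched in code. The detector below, comparing the signs of neighboring second differences to distinguish a resolved smooth extremum from the onset of an oscillation, is my own illustrative choice, not an established recipe from the post:

```python
# Hedged sketch: minmod limiting with relaxation at smooth extrema.
# Plain minmod clips every extremum to a zero slope (first order); here a
# curvature-based detector restores the central slope at extrema that look
# smooth, while jumps and oscillatory cells still get the full limiter.
import numpy as np

def relaxed_slope(u):
    """Limited slope per cell (periodic grid), relaxed at smooth extrema."""
    dm = u - np.roll(u, 1)            # backward difference
    dp = np.roll(u, -1) - u           # forward difference
    central = 0.5 * (dm + dp)
    mm = np.where(dm * dp > 0.0,      # ordinary minmod slope
                  np.sign(central) * np.minimum(np.abs(dm), np.abs(dp)),
                  0.0)
    # Smooth-extremum detector: at an extremum, require the curvature to
    # keep one sign across the neighboring cells, signaling a resolved
    # smooth extremum rather than the start of an oscillation.
    d2 = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
    smooth = (d2 * np.roll(d2, 1) > 0.0) & (d2 * np.roll(d2, -1) > 0.0)
    extremum = dm * dp <= 0.0
    return np.where(extremum & smooth, central, mm)
```

On a sampled sine wave this returns the unlimited central slope at the peak cell instead of clipping it to zero, while on a square wave every slope is still zero, so the monotonicity-preserving safety net survives where it matters.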

Next week I’ll unveil some new analysis that sheds light on how the methods work based on modified equation analysis. It also points to some details of what might work in the future.

[BCT89] Bell, John B., Phillip Colella, and John A. Trangenstein. “Higher order Godunov methods for general systems of hyperbolic conservation laws.” Journal of Computational Physics 82.2 (1989): 362-397.

[Hersh] Hersh, Reuben. Peter Lax, Mathematician: An Illustrated Memoir. Vol. 88. American Mathematical Soc., 2014.

[Lax1978] Lax, Peter D. “Accuracy and resolution in the computation of solutions of linear and nonlinear equations.” Selected Papers Volume I (2005): 184-194.
