A method which can solve this problem well should be able to handle just about anything which can arise in one-dimensional pure hydrodynamic flow. PPM is such a scheme.
– P.R. Woodward
Colella, Phillip, and Paul R. Woodward. “The piecewise parabolic method (PPM) for gas-dynamical simulations.” Journal of Computational Physics 54, no. 1 (1984): 174-201.
This is one of the most important methods from the early history of the revolutionary developments in solving hyperbolic PDEs during the 1980s. For a long time it was one of the best methods available for solving the Euler equations, and it still outperforms most of the methods in common use today. In astrophysics it is the method of choice, and it also made major inroads into the weather and climate modeling communities. In spite of having over 4000 citations, I can’t help but think that this paper wasn’t as influential as it could have been. That is saying a lot, but I believe it is completely true. This is partly due to its style and its relative difficulty as a read; in other words, the paper is not as pedagogically effective as it could have been. The most complex and difficult-to-understand version of the method is the one presented in the paper. The paper could have introduced the reader to a simplified version first and then delivered the more complex approach as a specific instance, to great effect. Nonetheless, the paper was a massive milestone in the field.
It was certainly clear that high-order schemes were not necessarily bringing greater accuracy so physics would have to step in to shore up the failing numerics.
– Jay Boris
Part of the problem with the paper is the concise and compact introduction to the two methods used in the accompanying review article, PPMLR and PPMDE. The LR stands for Lagrange-remap, where the solution is advanced on a Lagrangian grid and then remapped back to the original grid for an ultimately Eulerian solution. Both the Lagrangian and Eulerian grids are unevenly spaced, and this results in far more elaborate formulas. As a result it is hard to recognize the simpler core method lurking inside the pages of the paper. The DE stands for direct Eulerian, which can be very simple for the basic discretization. Unfortunately, the complication for the DE flavor of PPM comes with the Riemann solver, which is far more complex in the Eulerian frame. The Lagrangian-frame Riemann solver is very simple and easy to evaluate numerically. Not so for the Eulerian version, which has many special cases and requires some exceedingly complex evaluations of the analytical structure of the Riemann solution. Later advances greatly simplified and clarified this presentation. This is a specific difficulty of being an early adopter of methods: the clarity of presentation and understanding is dimmed by purely narrative effects. Many of these shortcomings have been addressed in the recent literature discussed below.
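The simplicity of the Lagrangian-frame solver is easy to appreciate with a small sketch. Below is a linearized (acoustic) Lagrangian Riemann solver in Python; the actual PPM solver iterates a two-shock version of this idea with shock-corrected impedances, and the function and variable names here are mine, not the paper’s:

```python
def lagrangian_riemann(pl, ul, zl, pr, ur, zr):
    """Linearized (acoustic) Riemann solver in the Lagrangian frame.
    pl, ul / pr, ur are the left/right pressures and velocities;
    z = rho * c is the acoustic impedance of each state.
    Returns the star-state pressure and velocity at the interface."""
    pstar = (zr * pl + zl * pr - zl * zr * (ur - ul)) / (zl + zr)
    ustar = (zl * ul + zr * ur - (pr - pl)) / (zl + zr)
    return pstar, ustar

# Example: two equal-pressure streams colliding head-on raise the
# interface pressure and bring the interface velocity to rest.
p_star, u_star = lagrangian_riemann(1.0, 1.0, 1.0, 1.0, -1.0, 1.0)
```

A single closed-form evaluation suffices in this frame; the Eulerian version must instead sort through the full wave structure, which is where the special cases multiply.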
The development of the PPM gas dynamics scheme grew out of earlier work in the mid 1970s with Bram van Leer on the MUSCL scheme. The work of Godunov inspired essential aspects of MUSCL.
– Paul R. Woodward
The paper had a host of interesting and important sub-techniques for solving hyperbolic PDEs. Many of these “bells” and “whistles” are not part of the repertoire of most methods today. The field actually suffers to some extent from not adopting these strategies for attacking difficult problems. It is useful to list the special approaches along with a description and context that might make them easier to adopt more broadly (https://wjrider.wordpress.com/2016/06/14/an-essential-foundation-for-progress/, https://wjrider.wordpress.com/2017/06/30/tricks-of-the-trade-making-a-method-robust/, https://wjrider.wordpress.com/2016/08/08/the-benefits-of-using-primitive-variables/). The paper is written in such a way that these algorithms seem specifically tailored to PPM, but they are far broader in utility. Generalizing their use would serve the quality of numerical solutions immensely. To a large extent Phil Colella extended many of these techniques to the piecewise linear methods that form the standard approach in production codes today.
- Shock flattening – Shocks are horrifically nonlinear and difficult, at once forgiving and brutal. This technique acknowledges the issue by blending a bit of a safe first-order method into the nonlinearly adaptive high-order method when a strong shock is encountered. The principle is to use more first-order differencing where the shock is strong, because there oscillations can escape the limiter; for weak shocks this is unnecessary. Rather than penalize the solution everywhere, the method is made locally more dissipative where the danger is greatest.
- Contact steepening – Contact discontinuities will smear without limit if dissipation is applied to them; in other words, errors made in their solution stay with you forever. To keep this from happening, the amount of dissipation applied at these waves is minimized. This sort of technique must be applied with great caution because at a shock wave it is exceedingly dangerous. Additionally, the mechanism that limits the dissipation yields a very good interface tracking method that is far simpler than elaborate methodologies based on interface geometry. It is a useful, pragmatic way to move interfaces with little dissipation and relative simplicity. This basic approach is the actual interface tracking method in many production codes today, although few use methods as elaborate or as high quality as the one in the original PPM.
- Extra dissipation – Monotonicity preservation and Riemann solvers are two elaborate ways of producing dissipation while achieving high quality. For very nonlinear problems this is not enough. The paper describes several ways of adding a little more: one is the shock flattening, and another is an artificial viscosity. Rather than use the classical von Neumann-Richtmyer approach (which really is more like the Riemann solver), they add a small amount of viscosity using a technique developed by Lapidus that is appropriate for conservation-form solvers. There are other techniques, such as grid jiggling, that only really work with PPMLR and may not have broader utility. Nonetheless, aspects of the thought process may still be useful.
- High-order edges – One of PPM’s greatest virtues is the use of formally higher-order principles in the method. Classic PPM uses fourth-order approximations for its edge values. As a result, as the Courant number goes to zero, the method becomes formally fourth-order accurate. This is a really powerful aspect of the method, and one of the clear points where it can be generalized: we can use whatever high-order edge value we like for PPM. One maxim to take from this approach is the power of including very high-order discretizations even within otherwise lower-order approximation methods. The impact of the high order is profoundly positive.
- Steepened edge values – For horribly nonlinear problems, the simple use of high-order differencing is not advisable. The high-order approximation can be decomposed into several pieces, and the approximation can then be built more carefully and appropriately for complex problems. In this way the high-order edge values are somewhat hierarchical. This is partly elaboration, but it also reflects a commitment to quality that is eminently laudable.
- Generalized monotonicity – PPM uses a parabola, so the well-known limiters for linear profiles do not provide monotone results. As a result, the limiter for PPM takes two steps instead of the single step needed for a linear profile. I don’t like the original presentation in the paper and recast the limiter into an equivalent algorithm that uses two applications of the median function per edge. The first step makes sure the edge value being used is bounded by the cell averages adjacent to it. The second step asks whether the parabola is monotone in the cell and, if not, limits it to one that is monotone by construction (https://wjrider.wordpress.com/2016/06/07/the-marvelous-magical-median/, https://wjrider.wordpress.com/2016/06/22/a-path-to-better-limiters/, https://wjrider.wordpress.com/2015/08/06/a-simple-general-purpose-limiter/, https://wjrider.wordpress.com/2014/01/11/practical-nonlinear-stability-considerations/, https://wjrider.wordpress.com/2015/08/07/edge-or-face-values-are-the-path-to-method-variety-and-performance/).
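The two-step, median-based limiter described above can be sketched concretely. Below is a minimal Python version reflecting my recasting of the limiter, not the paper’s original presentation; the function names are my own. The classic fourth-order edge value is included for context:

```python
def median(a, b, c):
    """Middle value of three numbers."""
    return max(min(a, b), min(max(a, b), c))

def edge_value(am1, a0, a1, a2):
    """Classic fourth-order PPM interface value between cells a0 and a1."""
    return (7.0 * (a0 + a1) - (am1 + a2)) / 12.0

def limit_cell(am1, a0, a1, aL, aR):
    """Two applications of the median per edge.
    Step 1: bound each edge value by the cell averages adjacent to it.
    Step 2: if the parabola is not monotone in the cell, pull the
    offending edge value back to the monotone limit."""
    aL = median(a0, aL, am1)
    aR = median(a0, aR, a1)
    aL = median(a0, aL, 3.0 * a0 - 2.0 * aR)
    aR = median(a0, aR, 3.0 * a0 - 2.0 * aL)
    return aL, aR
```

For smooth data the limiter leaves the fourth-order edge values untouched; near a jump it clips them back toward the neighboring cell averages, which is exactly the locally adaptive behavior Godunov’s theorem forces on any monotone high-order method.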
Before launching into a systematic description of the PPM algorithm, it is worthwhile to first explain the goals and constraints that have influenced its design. These are:
- Directional operator splitting.
- Robustness for problems involving very strong shocks.
- Contact discontinuity steepening.
- Fundamental data in the form of cell averages only.
- Minimal dissipation.
- Numerical errors nevertheless dominated by dissipation, as opposed to dispersion.
- Preservation of signals, if possible, even if their shapes are modified, so long as they travel at roughly the right speeds.
- Minimal degradation of accuracy as the Courant number decreases toward 0.
– Paul R. Woodward
Over time PPM has mostly been interpreted monolithically rather than as a set of basic principles. PPM is really a wonderful foundation, with the paper providing only a single instantiation of a panoply of powerful methods. This aspect has come to the fore more recently, but it would have served the community better far earlier. Some of these comments come with the gift of 20/20 hindsight. A great deal of the pedagogical clarity regarding Godunov-type methods is the result of its success, and only came into common use in the late 1980s, if not the 1990s. For example, the language to describe Riemann solvers with clarity and refinement hadn’t been developed by 1984. Nevertheless, the monolithic implementation of PPM has been a workhorse method for computational science. Through Paul Woodward’s efforts it is often the first real method to be applied to brand-new supercomputers, and it generates the first scientific results of note on them.
The paper served as a companion to the adjacent paper that reviewed the performance of numerical methods for strong shocks. The review was as needed as it was controversial. The field of numerical methods for shock waves was set to explode into importance and creative energy. The authors, Phil Colella and Paul Woodward, would both play key roles in the coming revolution in methods. Woodward had already made a huge difference by spending time in Europe with Bram van Leer. Paul helped Bram implement advanced numerical methods using methodologies Paul had learned at the Livermore labs, and Bram exposed Paul to his revolutionary ideas about numerical methods, chronicled in Bram’s famous series of papers (https://wjrider.wordpress.com/2014/01/11/designing-new-schemes-based-on-van-leers-ideas/, https://wjrider.wordpress.com/2014/01/06/van-leers-1977-paper-paper-iv-in-the-quest-for-the-ultimate/, https://wjrider.wordpress.com/2014/01/05/review-of-the-analysis-of-van-leers-six-schemes/). The ideas therein were immensely influential in changing how hyperbolic equations are solved.
One of the great successes in numerical methods for hyperbolic conservation laws has been the use of nonlinear hybridization techniques, known as limiters, to maintain positivity and monotonicity in the presence of discontinuities and underresolved gradients.
– Michael Sekora and Phil Colella
Bram’s ideas created a genuine successor to Godunov’s method. The methods he created were novel in producing a nonlinearly adaptive numerical method, one that adapts locally to the nature of the solution. This overcame the limitations of Godunov’s theorem regarding the accuracy of numerical methods for hyperbolic equations. Bram’s ideas were geometric in nature and reflected the approach of a physicist. Paul, being a physicist, gravitated to the same view and added a genuine dose of pragmatism. Bram also wasn’t the first person to overcome Godunov’s theorem; he may actually have been the third (or fourth). The first was most likely Jay Boris, who invented the flux-corrected transport (FCT) method in 1971. In addition, Kolgan in the Soviet Union and Ami Harten might lay claim to overcoming Godunov’s barrier theorem. Several of these methods played a role in the comparisons in the review article by Woodward and Colella. In the light of history, many of the differences in the results were due more to the approaches to systems of equations and related difficulties than to the nonlinearly adaptive principles in the methods.
The strong, fluid-dynamic shock problem had become the number one computational roadblock by the fall of 1970 so I was urged to concentrate on the problem full time, finally developing the FCT convection algorithm in the winter.
– Jay Boris
In totality, the methods developed by these three or four men in the early 1970s set the stage for revolutionary gains in method performance. At the time, the differences between the methods were fiercely debated and hotly contested; the reviews of the papers were contentious and resulted in bitter feelings. Looking back with the virtues of time and perspective, several things stand out. All the methods represented a quantum leap in performance and behavior over what was available before. The competition and hotly contested ideas probably helped spur developments, but ultimately became counterproductive as the field matured. It seems clear the time was ripe for the breakthrough: a combination of computers, mathematics, and applications seeded the developments. For the same basic idea to arise independently within a short period means the ideas were dangling just out of reach. The foundations for the breakthrough were common and well known.
Paul Woodward is an astrophysicist, and PPM found its most common and greatest use in his field. For a long time the nature of PPM’s description meant that the exact version of the method described in the canonical 1984 paper was the version used in other codes. Part of this results from PPM being a highly tuned, high-performance method with a delicate balance between high-resolution methodology and the various safety measures needed for difficult, highly nonlinear problems. In a manner of speaking it is a recipe that produces really great results. Imagine PPM as something akin to the Toll House chocolate chip cookie recipe: the cookies you get by following the package exactly are really, really good. At the same time, you can modify the recipe to produce something even better while staying true to the basic framework. The basic cookies will get you far, but with some modification you might just win contests or simply impress your friends. PPM is just like that.
At this point I’ve said rather little about the method itself. The core of the method is a parabolic representation of the solution locally in each cell; the method is entirely one-dimensional in nature. This parabola is determined by the integral average in the cell and the point values of the quantity at the cell edges. What is not so widely appreciated is the connection of PPM to the fifth scheme in Van Leer’s 1977 paper. That scheme is interesting because it evolves both cell averages, as in any finite volume code, and the point values at the cell boundaries. It is compact and supremely accurate compared with other third-order methods. PPM is a way of getting some of the nice properties of this scheme from a finite volume method: rather than evolve the point values on the edges, they are recovered from the finite volumes.
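The parabola itself is easy to write down. In a local cell coordinate xi in [0, 1], the cell average abar and the two edge values aL and aR determine it completely. A minimal sketch in Python, following the common presentation of the PPM parabola (the function names are mine):

```python
def ppm_parabola(abar, aL, aR):
    """Coefficients of the PPM parabola on one cell:
    p(xi) = aL + xi * (da + a6 * (1 - xi)),  xi in [0, 1],
    determined by the cell average abar and edge values aL, aR."""
    da = aR - aL                          # jump across the cell
    a6 = 6.0 * (abar - 0.5 * (aL + aR))   # curvature term
    return da, a6

def eval_parabola(aL, da, a6, xi):
    """Evaluate the parabola at local coordinate xi."""
    return aL + xi * (da + a6 * (1.0 - xi))
```

By construction the parabola matches the edge values at xi = 0 and xi = 1 and integrates exactly to the cell average, which is what makes it a conservative reconstruction from cell-averaged data.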
Rather than belabor the technical details of PPM, I’ll point to the recent trends that have extended the method beyond its classical form. One of the original authors has used the parabola to represent valid extrema in the solution rather than damping them by forcing monotonicity. I did the same thing in my own work, largely paralleling Phil’s. In addition, changing the high-order edge reconstruction has been recognized and implemented to good effect by Phil, Paul, myself, and others. The choice of Riemann solver has also been generalized. All of this reflects the true power of the method when projected onto the vast body of work that arose after the publication of this paper. Even today PPM remains one of the very best methods in existence, especially with the modifications recently introduced.
I’ve come to know both Phil and Paul personally and professionally. Both men have played significant roles in the numerical solution of hyperbolic PDEs and witnessed history being made; they helped make CFD what it is today. It’s always an interesting experience to read someone’s work and then come to know the person. A big part of a deeper appreciation is finding out the underlying truths of the paper. You start to realize that the written, published record is a poor reflection of the real story. Some of this comes through the hard work of reading and re-reading a paper, then deriving everything in it for yourself. A deeper appreciation came from expressing the same method in my own language and mathematics, and finally taking each of these expressions into conversations with the authors, who clarified most of the remaining questions. The academic literature is a scrubbed and largely white-washed reflection of reality. What we are allowed to read and see is not the truth, but an agreed-upon distortion.
When the numerics fails, substitute the physics.
– Steve Zalesak
the scientists who use such algorithms must have both input to and knowledge of their design. There may come a day when we no longer hold to this view, when the design of such algorithms can be left to expert numerical analysts alone, but that day has not yet arrived.
– Steve Zalesak
Woodward, Paul, and Phillip Colella. “The numerical simulation of two-dimensional fluid flow with strong shocks.” Journal of Computational Physics 54, no. 1 (1984): 115-173.
Carpenter Jr., Richard L., Kelvin K. Droegemeier, Paul R. Woodward, and Carl E. Hane. “Application of the piecewise parabolic method (PPM) to meteorological modeling.” Monthly Weather Review 118, no. 3 (1990): 586-612.
Woodward, Paul R. “Piecewise-parabolic methods for astrophysical fluid dynamics.” In Astrophysical Radiation Hydrodynamics, pp. 245-326. Springer Netherlands, 1986.
Godunov, S. K. “A finite difference method for the computation of discontinuous solutions of the equations of fluid dynamics.” Sbornik: Mathematics 47, no. 8-9 (1959): 357-393.
Plewa, Tomasz, and Ewald Mueller. “The consistent multi-fluid advection method.” arXiv preprint astro-ph/9807241 (1998).
Van Leer, Bram. “Towards the ultimate conservative difference scheme. V. A second-order sequel to Godunov’s method.” Journal of Computational Physics 32, no. 1 (1979): 101-136.
Van Leer, Bram. “Towards the ultimate conservative difference scheme. IV. A new approach to numerical convection.” Journal of Computational Physics 23, no. 3 (1977): 276-299.
Bell, John B., Phillip Colella, and John A. Trangenstein. “Higher order Godunov methods for general systems of hyperbolic conservation laws.” Journal of Computational Physics 82, no. 2 (1989): 362-397.
Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit Large Eddy Simulation: Computing Turbulent Fluid Dynamics. Cambridge University Press, 2007.
Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity- and extrema-preserving methods through adaptive nonlinear hybridizations.” Journal of Computational Physics 225, no. 2 (2007): 1827-1848.
Rider, William J. “Reconsidering remap methods.” International Journal for Numerical Methods in Fluids 76, no. 9 (2014): 587-610.
Kolgan, V. P. “Application of the principle of minimum values of the derivative to the construction of finite-difference schemes for calculating discontinuous gasdynamics solutions.” TsAGI, Uchenye Zapiski 3, no. 6 (1972): 68-77.
Boris, J. P. “A Fluid Transport Algorithm That Works.” Proceedings of the Seminar Course on Computing as a Language of Physics, 2-20 August 1971, International Centre for Theoretical Physics, Trieste, Italy.