Some of these options were profitably framed a long time ago in one of Van Leer’s papers (the fourth in his series, published in 1977). In that paper he described two methods that came into widespread use. The first is the second-order accurate method that lives on as the origin of much of the limiter work in the 1980s, and remains the most common modern approach in aerodynamic codes, unstructured codes, and remap in ALE codes. He also introduced the base method that became the PPM method. Ironically, these two methods were the worst of the six methods introduced. Van Leer labeled them schemes 1 and 4: scheme 1 is the “piecewise linear method” and scheme 4 is the “piecewise parabolic method” (PPM). The polynomial approximation in scheme 1 uses the integral average of the cell value and a derived gradient (or slope) to define the linear approximation. Other linear approximations could be developed using biased approximations for the slope, but those methods would not be considered upstream centered. The PPM approximation is built by defining a parabola that reconstructs the integral of a cell and its two neighbors. The more famous PPM method uses a polynomial matching the cell average together with high-order approximations to the values at the edges of the cell.
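To make the two reconstructions concrete, here is a minimal sketch (the function names are mine, and I assume a uniform grid with the local coordinate ξ = (x − x_i)/Δx on [−1/2, 1/2]; a production code would apply a limiter to the slope and curvature):

```python
def plm_reconstruct(um, u0, up):
    """Scheme-1-style piecewise linear profile on cell i,
    p(xi) = u0 + s*xi, with a central-difference slope derived
    from the neighboring cell averages um, u0, up.  A limited
    slope would be substituted here in a shock-capturing code."""
    s = 0.5 * (up - um)
    return u0, s

def ppm_base_reconstruct(um, u0, up):
    """Parabola p(xi) = a + b*xi + c*xi**2 whose integral averages
    over cells i-1, i, i+1 reproduce um, u0, up -- the
    'reconstruct the integral' idea underlying scheme 4 / PPM.
    Note a = u0 - c/12, since the cell average of xi**2 is 1/12."""
    b = 0.5 * (up - um)
    c = 0.5 * (up - 2.0 * u0 + um)
    a = u0 - c / 12.0
    return a, b, c
```

The distinction matters: the linear profile matches only the central cell average exactly, while the parabola conserves all three neighboring averages by construction.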

Two other classes of methods were provided. One of these is based on the moments of the solution (the zeroth moment being the cell average). These methods are equivalent to the discontinuous Galerkin (DG) method, which is the focus of copious research today, and one of the key approaches being explored to deal with the upcoming changes in computing architecture. This suitability comes from the scheme carrying extra degrees of freedom in each computational cell (i.e., the moments: a single moment gives a linear basis, a second moment a quadratic basis). Van Leer labeled these schemes 3 and 6. Scheme 6 was not described in detail, but it has since been developed as the PPB (piecewise parabolic Boltzmann) scheme by Paul Woodward, and as a quadratic finite element method within the DG research. The polynomial derivation matches the cell average and the first and/or second moment of the distribution.
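A quick sketch of what “carrying moments” means in practice: within one cell the solution is stored as coefficients of (scaled) Legendre polynomials, and the coefficients are exactly the moments. The helper below is my own illustration, not code from any of the schemes discussed:

```python
import numpy as np

def legendre_moments(f, n_modes=3, n_quad=64):
    """Project f on the cell xi in [-1/2, 1/2] onto scaled Legendre
    polynomials P_k(2*xi): moment k = (2k+1) * int f(xi) P_k(2*xi) dxi.
    Moment 0 is the cell average; moments 1 and 2 supply the linear
    and quadratic degrees of freedom of the moment/DG schemes."""
    xi, w = np.polynomial.legendre.leggauss(n_quad)
    xi, w = 0.5 * xi, 0.5 * w          # map [-1, 1] -> [-1/2, 1/2]
    moments = []
    for k in range(n_modes):
        Pk = np.polynomial.legendre.Legendre.basis(k)(2.0 * xi)
        moments.append((2 * k + 1) * np.sum(w * f(xi) * Pk))
    return moments
```

The point is storage: a scheme with two extra moments per cell has a quadratic in-cell representation without widening its stencil, which is exactly why DG maps well onto communication-limited architectures.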

It is notable that the bane of DG methods is the development of nonlinear limiters that do not destroy the increased accuracy and fidelity that the basic (linear) DG methods provide. Thus far it is the development of effective limiters that has most greatly “limited” DG’s broader adoption by the community. This is coupled to the next issue, related to stability.

The methods in Van Leer’s paper used characteristic information to enhance their stability, resulting in a Courant condition that was not restrictive. Most DG methods developed today rely upon method-of-lines approaches, meaning the spatial part of the operator is discretized and the time integration is handled by an ODE method. With explicit time integrators this approach results in a very restrictive Courant condition. A challenge is to produce methods of this sort with greater stability; selectively reintroducing the characteristic information may be an answer.
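To put a rough number on “very restrictive”: a commonly quoted guideline for explicit Runge–Kutta DG is a CFL limit that shrinks like 1/(2p+1) with polynomial degree p. The function below is my own sketch of that rule of thumb, not a result from Van Leer’s paper; the actual constant depends on the RK scheme and the numerical flux:

```python
def rkdg_max_dt(dx, max_wave_speed, p):
    """Heuristic explicit-RKDG time-step bound for polynomial degree p:
    dt <= dx / (a * (2p + 1)), the commonly quoted CFL guideline.
    At p = 0 this reduces to the familiar finite-volume limit dt <= dx/a;
    a quadratic basis (p = 2) already costs a factor of five in dt."""
    return dx / (max_wave_speed * (2 * p + 1))
```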

The third class of approaches in Van Leer’s paper is significantly under-developed and deserves attention. Moreover, I believe it can be profitably extended to encompass even broader design principles. As briefly noted above, these methods utilize point values of the quantity being integrated along with derivative values. In order to capture shocks, the integral of the quantity (i.e., the cell-centered value) is also integrated using the familiar control volume form. Thus these methods can be described as combinations of finite volume and finite difference methods. Van Leer’s scheme 2 uses the integral average and a cell-centered derivative to define a linear polynomial. The difference from scheme 1 is that scheme 2’s derivative is evolved in time using the derivative of the PDE. Scheme 5 is different: the parabola is defined by the integral average and the edge values. The difference from the classic PPM method is that the edge values are evolved in time by the continuous version of the PDE. By the continuous version of the PDE I mean the strong, differentiable form, not the weak form used to update the integral averages or moments.
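The scheme 5 polynomial is worth writing out, since it is the same parabola PPM uses; only the provenance of the edge values differs (evolved with the strong form of the PDE rather than interpolated). A small sketch, using my own notation with ξ on [−1/2, 1/2]:

```python
def scheme5_parabola(ubar, uL, uR):
    """Parabola p(xi) = a + b*xi + c*xi**2 fixed by the cell average
    ubar and the edge values uL = p(-1/2), uR = p(+1/2).  This is the
    classic PPM parabola; in scheme 5 the edge values are carried as
    independent unknowns updated by the differential form of the PDE."""
    b = uR - uL
    c = 3.0 * (uL + uR) - 6.0 * ubar
    a = ubar - c / 12.0
    return a, b, c
```

Because the edge values are independently evolved unknowns, no wide interpolation stencil is needed at reconstruction time, which is part of what makes this class attractive.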

There is no reason this approach can’t be taken even further to include moments as well. This provides the capability to design methods with tailored performance, both in terms of accuracy and memory use. The key is to provide the capability to introduce appropriate nonlinear stability mechanisms of equivalent impact to those common with Van Leer’s piecewise linear or parabolic methods. For example, a cubic polynomial could be derived using the integral average, the first moment, and the cell-edge values. A suggestion made by Phil Roe is to use the integral average with the cell-edge and cell-edge-derivative values to define a quartic polynomial and thus a fifth-order method.
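Roe’s quartic suggestion is just a 5-by-5 linear solve per cell: five constraints (one average, two edge values, two edge derivatives) determine the five coefficients. A sketch of that solve, again in my own ξ ∈ [−1/2, 1/2] coordinates:

```python
import numpy as np

def quartic_from_edges(ubar, uL, uR, dL, dR):
    """Quartic p(xi) = sum_k coef[k] * xi**k on xi in [-1/2, 1/2]
    matching the cell average ubar, edge values uL = p(-1/2) and
    uR = p(+1/2), and edge derivatives dL = p'(-1/2), dR = p'(+1/2).
    Rows of A: integral average, p(-1/2), p(+1/2), p'(-1/2), p'(+1/2);
    the average row uses int xi**2 = 1/12 and int xi**4 = 1/80."""
    A = np.array([
        [1.0,  0.0, 1.0 / 12.0,  0.0,   1.0 / 80.0],
        [1.0, -0.5, 0.25,       -0.125, 0.0625],
        [1.0,  0.5, 0.25,        0.125, 0.0625],
        [0.0,  1.0, -1.0,        0.75, -0.5],
        [0.0,  1.0,  1.0,        0.75,  0.5],
    ])
    return np.linalg.solve(A, np.array([ubar, uL, uR, dL, dR]))
```

In practice the inverse would be precomputed once, since the matrix is the same for every cell on a uniform grid; the interesting open question the text raises is what a limiter for the edge-derivative unknowns should look like.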

Stability and accuracy analysis for these schemes becomes necessarily complex, but tools such as Mathematica render the problem tractable. I will note that Van Leer analyzed his methods without such help, which makes the analysis contained in his paper all the more masterful.
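For readers who have not done such an analysis, the simplest possible example of what the symbolic tooling automates is a von Neumann analysis of first-order upwind advection (SymPy standing in here for Mathematica; this toy example is mine, not from the paper, and the multi-unknown schemes above require the same manipulation on matrix-valued symbols):

```python
import sympy as sp

nu, theta = sp.symbols('nu theta', real=True)

# Von Neumann symbol of first-order upwind for u_t + a*u_x = 0 with
# nu = a*dt/dx: inserting the Fourier mode u_j^n = g**n * exp(i*j*theta)
# into the update gives the amplification factor below.
g = 1 - nu * (1 - sp.exp(-sp.I * theta))

# |g|**2 simplifies to 1 - 2*nu*(1 - nu)*(1 - cos(theta)), so the
# scheme is stable (|g| <= 1) exactly when 0 <= nu <= 1: the CFL bound.
g_sq = sp.simplify(sp.expand(g * sp.conjugate(g)).rewrite(sp.cos))
```

The coupled average/derivative schemes replace the scalar g with a 2-by-2 (or larger) amplification matrix, and the stability condition becomes a bound on its spectral radius; that is where hand computation becomes heroic and computer algebra becomes indispensable.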
