A Basis for Reconsidering Remap Methods

William J. Rider

Sandia National Laboratories, Albuquerque 

Abstract

Methods for discretizing remap are based on algorithms developed for hyperbolic conservation laws.  Since its introduction in 1977, Van Leer's monotonicity-preserving piecewise linear method and its extensions have been ubiquitous in remap.  In that 1977 paper Van Leer introduced another five algorithms, which have largely not been used for remap despite his observation that the piecewise linear method had the least favorable properties.  This pattern of adoption parallels the algorithmic choices in other related fields.  Two factors have led to the lack of attraction to the other five algorithms: the simplicity and effectiveness of the piecewise linear method, and complications in the practical implementation of the alternatives.  Plainly stated, Van Leer's piecewise linear method enabled ALE methods to move forward by providing a high-resolution, monotonicity-preserving remap.  As a cell-centered scheme, its extension to remap was straightforward.  Several factors now conspire to make these methods worth reconsidering: computing architectures have become more favorable toward floating-point-intensive methods with little data movement, and we have thirty years of experience in devising nonlinear stability mechanisms (e.g., limiters).  In particular, one of the methods blends characteristics of finite volume and finite difference methods together in an ingenious manner, has exceptional numerical properties, and should be considered a viable alternative to the ubiquitous piecewise linear method.

Introduction

In the numerical solution of hyperbolic PDEs, a series of papers by Bram Van Leer is considered seminal.  It was a five-paper series with the august title "Towards the Ultimate Conservative Difference Scheme…"  In the fourth paper of the series a whole set of different methods was proposed (six in total).  Each of the methods is based on piecewise polynomial interpolation, using the available degrees of freedom to define the polynomials.  A linear (first-degree) polynomial leads to a second-order method, and a quadratic polynomial leads to a third-order method because of the integration involved in the solution.  Furthermore, each method developed was fundamentally upwind-biased.  The broad class of methods can be described as upstream-centered, meaning a centered basis for the polynomial with an upwind bias for information propagation.  This class of methods is well known to be advantageous mathematically.
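To fix ideas, a reconstruction of this kind takes the following generic form in the piecewise linear case (the notation here is mine, not Van Leer's):

    u_i(x) = \bar{u}_i + s_i \, \frac{x - x_i}{\Delta x}, \qquad x_{i-1/2} \le x \le x_{i+1/2},

where \bar{u}_i is the cell average, x_i the cell center, \Delta x the cell width, and s_i an undivided slope built from neighboring data.  The update then integrates this profile over the region swept past a cell face (or over the overlap with a new cell, in the remap setting), and that integration is what raises the formal order of accuracy one above the degree of the polynomial.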

Subsequently, most of the development of numerical methods was based upon two of these methods.  The starkest aspect of this thread is that the two base methods were actually the least attractive in terms of their fundamental numerical properties (accuracy, dissipation, and propagation or phase error).  The reason for this is related to the more elaborate techniques necessary for nonlinear stability of solutions and practical use.  These techniques go by the name of "limiters" and serve to keep solutions physical and properly bounded.

The relatively unexplored methods are interesting for several reasons.  Some of the basic design principles are quite different.  The favored methods are all based on integrating over control volumes, implying the use of conservation form (as essentially dictated to the community by the famous Lax-Wendroff theorem).  A couple of the methods use other integrated quantities, such as moments of the solution: the first moment is the integral of the quantity multiplied by a linear function, the second moment is the integral of the quantity multiplied by a quadratic function, and so on.  The evolution equations for the moments are derived using the same procedure, and naturally invoke the weak form of the PDE.  The other form uses the differential form of the PDEs and involves introducing point quantities, such as the solution at a point or derivatives of the solution at a point.  One of the key aspects of this development is to make sure these quantities all depend upon each other so that the degrees of freedom are well coupled.
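As a rough sketch of what is meant by moments (the normalization and the particular weight polynomials, often taken to be Legendre polynomials, vary from paper to paper), over a cell [x_{i-1/2}, x_{i+1/2}] of width \Delta x centered at x_i one carries

    \bar{u}_i = \frac{1}{\Delta x} \int_{x_{i-1/2}}^{x_{i+1/2}} u(x)\,dx \quad \text{(zeroth moment: the cell average)},

    u_i^{(1)} = \frac{1}{\Delta x} \int_{x_{i-1/2}}^{x_{i+1/2}} u(x)\,\frac{x - x_i}{\Delta x}\,dx \quad \text{(first moment: linear weight)},

    u_i^{(2)} = \frac{1}{\Delta x} \int_{x_{i-1/2}}^{x_{i+1/2}} u(x) \left( \frac{x - x_i}{\Delta x} \right)^{2} dx \quad \text{(second moment: quadratic weight)},

each of which is evolved by its own update equation obtained from the weak form of the PDE.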

The present-day interest should be driven by the needs of modern computers, which are already far different from previous generations, with massive and pervasive levels of parallelism.  New computers will greatly penalize memory references and communication compared to floating point operations.  In other words, once you have data in local memory, you will be greatly rewarded for doing lots of floating point operations on it.  This character will benefit methods that keep their data local and whose high-order accuracy requires many operations.  The key for all these methods is to develop robust and sensible stability-enhancing techniques that do not undermine the fundamental properties of the methods they are applied to.

As noted, by far the greatest amount of effort for solving hyperbolic PDEs* has gone into cell-centered schemes due to their intrinsic simplicity and connection to existing computational approaches.  This work has been enormously successful over the past thirty-odd years.  Starting in the early 1970s, key concepts were added to the repertoire of scientists that allowed one to simultaneously achieve the physically appealing results provided by first-order methods, such as upwind differencing, and the accuracy of second-order methods, such as Lax-Wendroff.  These techniques were probably first introduced in the form of flux-corrected transport (FCT) by Jay Boris (and later in collaboration with David Book), but also independently by Bram Van Leer in his "Towards the Ultimate" series of papers.  There is also the compelling case of a third invention of these methods by Kolgan in the Soviet Union.  Kolgan died before his work received any attention, and his invention was only recently uncovered.  A fourth stream of development, which looks similar with the passage of time, is due to the Israeli mathematician Ami Harten, who figures prominently later in our tale.

These methods are known as monotonicity preserving: the numerical solution of the PDE does not produce any new minimum or maximum values.  This mirrors a property of the exact solution of the simplest form of the equation, linear advection, which is used extensively in the development of numerical methods.  Upwind differencing is the archetypical monotonicity-preserving linear scheme, and Lax-Friedrichs is another example.  Monotone schemes are at most first-order accurate.  The work of Boris, Van Leer, and Kolgan all sought to produce monotonicity-preserving methods that were not limited to first-order accuracy.
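The archetype in symbols: for linear advection u_t + a u_x = 0 with a > 0 on a uniform grid, upwind differencing reads

    u_i^{n+1} = u_i^n - \nu \left( u_i^n - u_{i-1}^n \right), \qquad \nu = \frac{a \, \Delta t}{\Delta x}.

For 0 \le \nu \le 1 the update is a convex combination of u_{i-1}^n and u_i^n, so no new maxima or minima can be created.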

All three developments were made with knowledge of Godunov's theorem, which disallows the possibility of a linear method that is at once second-order accurate and monotone.   The key to the theorem is the word linear.  The key advancement was to introduce nonlinearity into the method, even in the case of solving a linear PDE.  What is a linear scheme?  A linear method applies the same finite difference stencil (or formula), with fixed coefficients, everywhere, while a nonlinear scheme adapts the stencil to the solution itself.  In other words, the scheme detects the nature of the solution and adapts the finite difference formula to solve it optimally.
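A concrete illustration of this data dependence is the limited slope used in piecewise linear schemes, written here with the minmod function (one common choice among many; Van Leer's own limiter differs in detail):

    s_i = \mathrm{minmod}\left( u_{i+1} - u_i, \; u_i - u_{i-1} \right), \qquad \mathrm{minmod}(a, b) = \begin{cases} \operatorname{sign}(a)\,\min(|a|, |b|), & ab > 0, \\ 0, & ab \le 0. \end{cases}

Because the slope s_i switches its formula depending on the local data, the resulting scheme is nonlinear even when the PDE being solved is linear.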

At a detailed level the developments were quite different, but the conceptual framing is nearly identical.  FCT uses a sequential process: a first-order monotone solution is computed, and the limiter then adds back as much of the antidiffusive correction as possible (equivalently, keeps as little of the first-order dissipation as necessary) while still producing monotone results.  Both Van Leer and Kolgan developed second-order extensions of Godunov's method**.
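A minimal sketch of the second-order Godunov (MUSCL-type) idea, written for constant-speed advection on a periodic grid, which is equivalent to a one-dimensional remap by a uniform displacement; the minmod limiter stands in for Van Leer's own limiter, and the function names are mine:

    import numpy as np

    def minmod(a, b):
        # limited slope: zero at extrema, smallest-magnitude difference otherwise
        return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def muscl_step(u, nu):
        # One limited piecewise-linear advection/remap step.
        # u  : cell averages on a periodic 1D grid
        # nu : Courant number a*dt/dx, assumed to satisfy 0 <= nu <= 1 (a > 0)
        s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))   # limited slope per cell
        f = u + 0.5 * (1.0 - nu) * s                        # face value from the upwind reconstruction
        return u - nu * (f - np.roll(f, 1))                 # conservative update

    # usage: advect a square pulse once around the periodic domain
    n, nu = 200, 0.5
    x = (np.arange(n) + 0.5) / n
    u = np.where(np.abs(x - 0.5) < 0.15, 1.0, 0.0)
    for _ in range(int(n / nu)):
        u = muscl_step(u, nu)

The pulse arrives back sharp and free of new overshoots or undershoots.  Setting s to zero everywhere recovers first-order upwinding, while setting s to the unlimited central difference gives a linear second-order scheme that oscillates at the discontinuities, which is Godunov's theorem made visible.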

In the field of hyperbolic PDEs, nonlinear schemes changed everything.  In many fields of computational science this development was like an event horizon.  These nonlinear differencing schemes were an enabling technology for many numerical simulations, and resulted in an order of magnitude or more increase in the effective accuracy of simulations.

From this foundation there was a flurry of research activity in the 1980s and 1990s developing, extending, and using this broad class of methods.  Examples abound in the literature, and the success of these approaches is undeniable.  Several of the more successful methods from this line of work are the piecewise parabolic method (PPM) and the weighted essentially non-oscillatory (WENO) methods, which dominate modern use.  Both of these approaches have the appeal of utilizing elements of method design that are higher-order accurate, and they produce results of significantly higher fidelity than the original nonlinear methods.  For example, PPM dominates the field of astrophysical simulation, while WENO dominates mathematical research and related simulations.  Both methods are based on developing a numerical solution for variables all defined at the center of control volumes.

A clear downside to this approach is the need to communicate the information of many control volumes to update the values of any one control volume.  This communication is becoming increasingly problematic on modern computing platforms because of the cost of communication relative to floating-point operations or memory access.  In order to get the most out of modern computers, both communication of information and memory access must be carefully managed.  Relatively speaking, floating point operations are "free," which more accurately means they are not the rate-limiting aspect of computing.   Therefore we can stand to have algorithms that are very floating point intensive, as long as they have relatively low communication intensity and memory requirements.  The key is striking the right sort of balance.  Because of this issue it is time to rethink the design of algorithms in a major way.  Recent results at a conference stunned me by showing (for the first time I can remember) new computing architectures actually providing worse performance.  The new chips were better for many things, but not scientific computing.  It has the potential to be a true "wake-up call."

Some of Van Leer's methods have received attention and extension, most notably the PPM method developed by Colella and Woodward.  This method is extensively used in astrophysics, but not significantly in remap methods, although a very close variant of Van Leer's approach (scheme IV) has been adopted by ALEGRA.    The PPM method can be thought of as a reasonable cell-centered approach to scheme V where the edge-centered variables are replaced with temporary high-order variables.  Perhaps the most extensive development of this method was undertaken by Eymann in conjunction with Roe.  This extension provided a number of advances that may prove useful in the setting of remap.  More direct implementations of scheme V have been pursued by Guinot and by Popov (with Norman).  These approaches highlight the positive aspects of the method, as we will reiterate below, but thus far have not achieved widespread adoption.  Zeng's hybrid finite-difference/finite-volume method is yet another closely related scheme that may include extensions of the approach important for practical adoption.

Discontinuous Galerkin methods have received the most attention, although the direct descendants of Van Leer's approach are far more limited; the line of work arising from Cockburn and Shu has seen the widest adoption.  Woodward has developed Van Leer's sixth method in detail despite the sparseness of its description in the 1977 paper.

*Hyperbolic PDE methods can also be described as solving an advection equation, as convection, or as remapping values from one grid to another.  Where I work, remap is the most common term.  Remap can also be done purely geometrically, with the advection form recovered through a mathematical transformation to achieve equivalence.

**Godunov's work was done largely in isolation from the progress made in the West.  He produced the theorem mentioned above, which is fundamental, and a unique numerical method for solving systems of hyperbolic PDEs.  This method is a fairly pure extension of first-order upwinding to systems.  Godunov used the Riemann problem to provide upwinding for the system, a truly inspired choice.  The Riemann problem is the analytical solution to the pure initial value problem of two semi-infinite states interacting.   In this solution we must note that the number of independent modes of transport, or waves, is equal to the number of equations in the system.  The wave speeds can vary significantly in magnitude and differ in direction.   This muddies the meaning of upwind in the classical sense.  The Riemann solution is used to sort this out in a physically appealing way.
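In symbols, for a hyperbolic system of m equations, u_t + f(u)_x = 0, the Riemann problem is the initial value problem with data

    u(x, 0) = \begin{cases} u_L, & x < 0, \\ u_R, & x > 0, \end{cases}

whose solution is self-similar, u(x, t) = w(x/t), and consists of up to m waves fanning out from the origin, each with its own speed and direction.  Godunov evaluates this solution at each cell face to decide what information flows across it.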
