Here is the answer in a nutshell: people use inappropriate approaches and test problems to characterize their methods.

The reason I started doing work in verification was directly related to my work in developing better numerical methods.  In the early 1990s I was developing an improved implementation of a volume tracking method (or VOF, where an interface is tracked through volume fractions and reconstructed geometrically).  The standard implementation of the method resulted in horrific spaghetti code and made subsequent development and debugging a nightmare.  I had developed a better, more object-oriented implementation and was eager to share it.  In the course of doing this implementation work, I engaged in extensive testing to put the method through its paces.  I felt that the standard tests were insufficiently taxing and provided poor code coverage, so I came up with several alternative test problems.  Existing tests involved translating objects on a grid or engaging in solid-body rotation; difficulty was defined by the complexity of the shape being moved.  My new problems used time-dependent flowfields that had non-zero vorticity and could be reversed in time, allowing exact error assessment.  These new problems were a hit and have largely replaced the earlier, (too) simple tests.
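To give a flavor of what such a problem looks like, here is a minimal sketch of a reversing, vortical velocity field of the kind described, patterned on the single-vortex stream function that is now common in the volume tracking literature; the function name and the default period below are illustrative choices, not a prescription from the original paper.

```python
import numpy as np

def single_vortex_velocity(x, y, t, T=8.0):
    """Time-reversing, vortical velocity field (single-vortex style).

    Derived from the stream function
        psi = (1/pi) * sin^2(pi x) * sin^2(pi y) * cos(pi t / T),
    so the flow has non-zero vorticity and strongly deforms an interface.
    Because the time factor integrates to zero over [0, T], every fluid
    particle retraces its path and the interface returns to its initial
    configuration at t = T, allowing exact error assessment against the
    initial condition.
    """
    u = np.sin(np.pi * x) ** 2 * np.sin(2.0 * np.pi * y) * np.cos(np.pi * t / T)
    v = -np.sin(2.0 * np.pi * x) * np.sin(np.pi * y) ** 2 * np.cos(np.pi * t / T)
    return u, v
```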

Now, fifteen years on, the paper is very highly cited, but mostly because of the better tests, not for the reason the paper was actually written.  I'd say about 75-90% of the citations are primarily for the tests, not the methodology.  There is a positive lesson in this: people want to test their methods rigorously.  For myself, I was a verification devotee thereafter, largely because of this experience.

In terms of numerical methods and physics, my greatest interest is the solution of hyperbolic PDEs.  I find the mathematics and physics of these equations engaging on multiple levels, and the numerical methods for solving them spellbindingly interesting.  It is also an area of numerical methods where verification is not standard practice, at least for the full nonlinear PDEs that characterize actual applications of the methods.  This isn't entirely fair: for simple linear equations with smooth solutions, or nonlinear equations with smooth solutions, verification is commonplace and expected, and authors regularly verify their methods and present the results in virtually every paper.  When a discontinuity forms, however, accuracy in the sense of high-order convergence is lost; solutions are limited to first-order convergence or lower.  Under these conditions verification is not done as a matter of course; in fact, it is exceedingly rare, to the point of being almost unheard of.

This is extremely unfortunate.  Verification is not only about order of accuracy; it is also about estimating the magnitude of numerical error.
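As a concrete illustration of the error-magnitude half of the job, here is a minimal sketch of estimating both the observed convergence rate and the error on the finest grid from three successively refined solutions, Richardson style; the function and variable names are mine and purely illustrative.

```python
import numpy as np

def estimate_order_and_error(u_h, u_h2, u_h4, r=2.0):
    """Richardson-style estimate of the observed convergence rate and the
    error magnitude from three solutions on grids refined by a constant
    ratio r, all sampled at the same (coarse-grid) points.  No exact
    solution is required."""
    d_coarse = np.mean(np.abs(u_h - u_h2))     # change from coarse to medium grid
    d_fine = np.mean(np.abs(u_h2 - u_h4))      # change from medium to fine grid
    p = np.log(d_coarse / d_fine) / np.log(r)  # observed order of accuracy
    err = d_fine / (r**p - 1.0)                # estimated error on the finest grid
    return p, err
```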

The standard approach when smoothness is not present is to plot results at a single resolution against an exact solution and present them graphically.  The main reason to show the results is to demonstrate a lack of oscillatory wiggles near the discontinuity.  The error itself is easily (almost trivially) computable, but almost never presented.  The underlying reasoning is that all results are basically first-order accurate, so the error isn't important.  Implicit in this judgment is the belief that order of accuracy is what matters and, by virtue of this, that the magnitude of the numerical error is unimportant.  Unfortunately, this point of view is not entirely correct, and it misses a key aspect of a method's character.
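To make the "almost trivially computable" point concrete, the whole calculation fits in a few lines; exact_solution here is a hypothetical stand-in for whatever analytical or Riemann-solver result the problem provides.

```python
import numpy as np

def l1_error(x, u_numerical, exact_solution, t):
    """Discrete L1 error of a numerical solution against an exact one at time t."""
    dx = x[1] - x[0]  # uniform grid assumed
    return dx * np.sum(np.abs(u_numerical - exact_solution(x, t)))
```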

When all solutions are first-order accurate, there are actually significant differences in the level of numerical error.  Under mesh refinement these differences can translate into large differences in the quality of the solution.  Consider the following example: for a first-order method, a factor-of-two increase in mesh resolution halves the numerical error.  If instead you changed the numerical method to halve the error on the original grid, the savings in compute time could be large.  Halving the mesh spacing increases the computational work by a factor of four in one dimension (assuming the Courant number is held constant).  In two dimensions the increase in effort is a factor of eight, and in three dimensions a factor of sixteen.  If the more accurate numerical method does not require four or eight or sixteen times the effort, it is a win.  More accurate methods are more computationally intensive, but rarely so much so that they aren't more efficient than lower-order methods.  The cases where these dynamics play out are the closest to actual applications, where the problems are discontinuous and never mathematically well behaved.  Indeed, it is this character that explains the rapid adoption of the second-order MUSCL scheme (Van Leer differencing for you weapons lab folks) over first-order methods.
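The break-even arithmetic is simple enough to write down explicitly; a small sketch, with the spatial dimension and refinement factor as the only inputs:

```python
def break_even_work_factor(dim, refine=2.0):
    """Work multiplier from refining an explicit calculation by 'refine' in
    each direction while holding the Courant number fixed: the cell count
    grows as refine**dim and the number of time steps grows as refine, so
    the total work grows as refine**(dim + 1)."""
    return refine ** (dim + 1)

# A method that halves the error on the original grid wins as long as it
# costs less than this factor more per cell per step:
for dim in (1, 2, 3):
    print(dim, break_even_work_factor(dim))  # 4.0, 8.0, 16.0 for halving the mesh
```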

The differences are not geometric, as they would be with high-order accuracy and smooth solutions, but quite frankly no one really gives a damn about smooth solutions where high-order accuracy is achievable.  They are literally only of academic interest.  Seriously.  So why isn't more rigorous error estimation on more difficult problems the status quo?  Fundamentally, people simply haven't come to terms with the inevitable loss of accuracy for real problems, and with the consequences this has for how methods should be designed and evaluated.

Despite this, the community solving hyperbolic PDEs believes that verification is being done routinely.  It is.  It is being done on cases that matter little, and it only reflects the order of accuracy of the method in circumstances no one cares about.  For people doing applications work, this makes verification seem like a pitiful and useless activity.  My suggestion that verification be done for error characterization and estimation is not considered useful.  I looked back through three and a half decades of the literature at results for the venerable Sod shock tube problem (en.wikipedia.org/wiki/Sod_shock_tube) and could find no examples of accuracy being quoted with computed results.  Sod reported runtimes in his 1978 paper.  The implication is that the accuracy of all the results is comparable and only the computational cost at a fixed resolution matters.  This seems to be an article of faith in the community, and it is wrong!

The fact is that both speed and accuracy matter, and results should be presented accordingly.

There are large differences in accuracy between methods.  If one takes as the measure the computational effort required to achieve a fixed accuracy, one can find a factor-of-30 difference in the efficiency of different approaches (http://www.sciencedirect.com/science/article/pii/S0021999107000897).  This difference in efficiency is in one dimension, with the potential for dramatically larger differences in two or three dimensions.  Yet the community almost systematically ignores this issue, along with verification of methods for practical problems.
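One way to phrase that measure as a formula, under the assumption that a method's error behaves like C * h**p on the problem of interest and its cost scales like h**-(dim + 1): the work to reach a target error, and therefore the efficiency ratio between two methods, follows directly.  The constants below are placeholders, not numbers from the cited paper.

```python
def work_to_reach_error(target_error, C, p, dim, cost_per_cell_step=1.0):
    """Work needed to reach a target error when error ~ C * h**p and work
    ~ cost_per_cell_step * h**-(dim + 1) (explicit method, fixed Courant number)."""
    h_needed = (target_error / C) ** (1.0 / p)
    return cost_per_cell_step * h_needed ** -(dim + 1)

# Efficiency ratio of two hypothetical first-order methods at the same target error:
cheap_accurate = work_to_reach_error(1.0e-3, C=0.5, p=1.0, dim=1, cost_per_cell_step=2.0)
costly_diffusive = work_to_reach_error(1.0e-3, C=2.0, p=1.0, dim=1, cost_per_cell_step=1.0)
print(costly_diffusive / cheap_accurate)  # the more accurate method wins despite costing more per cell
```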

This is a long-winded way of saying that there are a lot of cases where people prefer to use methods that perform worse.  Part of this preference is driven by acceptance of the sort of verification practice discussed above.  An example would be WENO (http://www.sciencedirect.com/science/article/pii/S0021999196901308), where elegance and the promise of high-order accuracy seem to drive people despite the method's low resolution.  WENO stands for weighted essentially non-oscillatory; the method blends preservation of the order of accuracy with freedom from Gibbs oscillations in an elegant, almost algebraic algorithm.  Most of the development of WENO since its inception has revolved around formal accuracy, a focus compounded by computing results for smooth, linear problems.  This focus on order of accuracy results in a method that is quite poor at resolving discontinuities, with performance at those points of the solution on par with the minmod second-order method (the worst and most dissipative of the second-order TVD methods!).  Instead of improving WENO for practical problems and focusing on its efficiency, the mathematical community has focused on its ability to achieve formally high-order accuracy in situations no one really cares about.
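For readers who haven't seen it, here is a sketch of that "almost algebraic" core in its classic fifth-order Jiang-Shu form: a single left-biased reconstruction of a cell-edge value from five cell averages, with eps the conventional small safety constant.

```python
def weno5_reconstruct(vm2, vm1, v0, vp1, vp2, eps=1.0e-6):
    """Fifth-order WENO reconstruction of the value at the right edge of cell i
    from the cell averages v_{i-2}..v_{i+2} (Jiang-Shu weights)."""
    # Three third-order candidate reconstructions on shifted stencils.
    p0 = (2.0 * vm2 - 7.0 * vm1 + 11.0 * v0) / 6.0
    p1 = (-vm1 + 5.0 * v0 + 2.0 * vp1) / 6.0
    p2 = (2.0 * v0 + 5.0 * vp1 - vp2) / 6.0
    # Smoothness indicators: large where the stencil crosses a discontinuity.
    b0 = 13.0 / 12.0 * (vm2 - 2.0 * vm1 + v0) ** 2 + 0.25 * (vm2 - 4.0 * vm1 + 3.0 * v0) ** 2
    b1 = 13.0 / 12.0 * (vm1 - 2.0 * v0 + vp1) ** 2 + 0.25 * (vm1 - vp1) ** 2
    b2 = 13.0 / 12.0 * (v0 - 2.0 * vp1 + vp2) ** 2 + 0.25 * (3.0 * v0 - 4.0 * vp1 + vp2) ** 2
    # Nonlinear weights: approach the ideal weights (0.1, 0.6, 0.3) on smooth
    # data and shut off any stencil containing a discontinuity.
    a0, a1, a2 = 0.1 / (eps + b0) ** 2, 0.6 / (eps + b1) ** 2, 0.3 / (eps + b2) ** 2
    s = a0 + a1 + a2
    return (a0 * p0 + a1 * p1 + a2 * p2) / s
```

On smooth data the weights approach the ideal values and the combination is fifth-order accurate; when a stencil crosses a discontinuity its smoothness indicator grows and its weight collapses, which is exactly where the resolution degrades in the way described above.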

This might explain the lack of penetration of this class of method into practical computations in spite of thousands of citations. It is an utterly absurd state of affairs.

MUSCL (http://en.wikipedia.org/wiki/MUSCL_scheme) is an example of a much more successful method, and as I showed with Jeff Greenough, a well-written MUSCL code kills a WENO code in actual efficiency (http://www.sciencedirect.com/science/article/pii/S0021999103005965).  This was just in one dimension, and the gains for MUSCL over WENO in multiple dimensions should be at least as great.  It isn't a surprise that WENO has not been as successful for applications as MUSCL.  A change in focus by the community doing WENO development might serve the method well.
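For contrast with the WENO sketch above, the corresponding MUSCL-style piecewise-linear reconstruction is tiny; here is a sketch using the minmod limiter mentioned earlier (swapping in a less dissipative limiter only changes that one function).

```python
import numpy as np

def minmod(a, b):
    """The most dissipative of the second-order TVD slope limiters."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)),
                    0.0)

def muscl_edge_values(vm1, v0, vp1):
    """Limited piecewise-linear (MUSCL) reconstruction in a cell: returns the
    left and right edge values from the cell averages v_{i-1}, v_i, v_{i+1}."""
    slope = minmod(v0 - vm1, vp1 - v0)
    return v0 - 0.5 * slope, v0 + 0.5 * slope
```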

PPM (http://crd.lbl.gov/assets/pubs_presos/AMCS/ANAG/A141984.pdf) is much better than either WENO or MUSCL.  The gains in efficiency are real (about a factor of two over MUSCL in 1-D).  Part of this advantage is due to the difference between a linear and a parabolic local representation of the solution, and to the parabolic profile's capacity to represent local extrema.  PPM can also exhibit bona fide high-order accuracy through the selection of high-order approximations for the edge values used to determine the parabola.  The bottom line is that the PPM method has some intrinsic flexibility that can be ruthlessly exploited.  PPM is widely used by astrophysicists for many applications.  Beyond astrophysics it has been less successful, and perhaps that should be studied.
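Here is a sketch of the parabolic representation being referred to, assuming a uniform grid and the standard fourth-order edge interpolation, and omitting the monotonization and steepening steps that a production PPM implementation layers on top.

```python
def ppm_edge_value(am1, a0, ap1, ap2):
    """Fourth-order estimate of the value at the edge between cells i and i+1
    from the cell averages a_{i-1}, a_i, a_{i+1}, a_{i+2} (limiting omitted)."""
    return 7.0 / 12.0 * (a0 + ap1) - 1.0 / 12.0 * (am1 + ap2)

def ppm_parabola(a_left, a_bar, a_right):
    """Coefficients of the parabola in a cell with mean a_bar and edge values
    a_left, a_right:  a(s) = a_left + s * (da + a6 * (1 - s)) for s in [0, 1].
    Unlike a limited linear profile, this parabola can represent an extremum
    interior to the cell."""
    da = a_right - a_left
    a6 = 6.0 * (a_bar - 0.5 * (a_left + a_right))
    return da, a6
```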

WENO is elegant.  It is a beautiful method, and I loved coding it up.  I would have to say that my WENO code is extremely attractive, just as the mathematics underpinning these methods is appealing.  This appealing veneer is superficial, though, and hides the true performance of the method.  It is sort of like a beautiful person with a horrible personality or a weak intellect; the attractive package hides the less-than-attractive interior.  This isn't to say that the method can't be improved in terms of efficiency.  Indeed, by taking various lessons learned from MUSCL and PPM and applying them systematically to WENO, the method could be dramatically improved.  In doing this, some of WENO's properties for smooth hyperbolic PDEs would be undermined while its properties for discontinuous problems would improve.  It could be done quite easily.

Ultimately the community has to decide where it wants impact.  Are hyperbolic PDEs important for applications?

Yes, and completely beyond a shadow of a doubt.

Why does the published literature act the way it does?  How has the community evolved to a point where actual performance on application-relevant testing matters less than beauty and elegance?

I think this whole issue says a great deal about how the applied mathematics community has drifted away from relevance.  That is a topic for another day, but it is keenly associated with everything discussed above.

Thanks for listening, happy New Year.
