Verification is an important activity for numerical computation that is too seldom done. Almost anybody with a lick of common sense will acknowledge that it is important, even vital. On the other hand, very few actually do it.
It is where arrogant hubris intersects laziness. It is where theory goes to die. Verification produces an unequivocal measure of your work. Sometimes the result you get is unpleasant and sends you back to the drawing board. It basically checks whether you are full of shit or not. Most people who are full of shit would rather not admit it.
What is actually worse is to do verification of someone else’s work. This puts one in the position of telling someone else that they are full of shit. This never makes you popular or welcome. For this reason I’ve compared V&V work to the “dark side” of the force, and by virtue of this analogy, myself to a Sith lord. At least it’s a pretty cool bad guy.
Areas where I’ve worked a good bit are methods for shock physics. There is a standard problem almost everyone working on shock physics uses to test their method: Sod’s shock tube. Gary Sod introduced his problem in a 1978 paper in the Journal of Computational Physics to compare existing methods. By modern standards the methods available in 1978 were all bad. The comparison was presented only graphically, plotting each numerical solution against the exact solution. This is important to be sure, but so much more could be done. Sod’s problem is a Riemann problem, which can be solved and evaluated exactly (in practice, solved to high accuracy using Newton’s method). The cool thing about an exact solution is that you can compute the errors in your numerical solution. Unfortunately Sod didn’t do this.
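To make the Newton's method remark concrete, here is a minimal sketch of solving Sod's Riemann problem for the pressure in the star region, using the standard textbook (Toro-style) pressure function. This is an illustration of the technique, not code from any of the papers discussed.

```python
import math

def sod_star_pressure(gamma=1.4, tol=1e-12):
    """Newton's method for the star-region pressure of Sod's shock tube.

    Uses the standard shock/rarefaction pressure functions; the left and
    right states are Sod's 1978 initial data.
    """
    rhoL, pL, uL = 1.0, 1.0, 0.0      # left state
    rhoR, pR, uR = 0.125, 0.1, 0.0    # right state

    def f_and_df(p, rho, pk):
        a = math.sqrt(gamma * pk / rho)           # sound speed
        if p > pk:   # shock branch
            A = 2.0 / ((gamma + 1.0) * rho)
            B = (gamma - 1.0) / (gamma + 1.0) * pk
            f = (p - pk) * math.sqrt(A / (p + B))
            df = math.sqrt(A / (p + B)) * (1.0 - 0.5 * (p - pk) / (p + B))
        else:        # rarefaction branch
            f = 2.0 * a / (gamma - 1.0) * (
                (p / pk) ** ((gamma - 1.0) / (2.0 * gamma)) - 1.0)
            df = (1.0 / (rho * a)) * (p / pk) ** (-(gamma + 1.0) / (2.0 * gamma))
        return f, df

    p = 0.5 * (pL + pR)   # initial guess
    for _ in range(50):
        fL, dfL = f_and_df(p, rhoL, pL)
        fR, dfR = f_and_df(p, rhoR, pR)
        dp = (fL + fR + (uR - uL)) / (dfL + dfR)
        p -= dp
        if abs(dp) < tol * p:
            break
    return p
```

The iteration converges in a handful of steps to the well-known star pressure of about 0.30313; with the star state in hand, the full exact solution (rarefaction, contact, shock) follows in closed form, and numerical errors can be computed against it.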
Instead he reported run times for the methods, as if to say, “all of these suck equally, so what matters is getting the bad answer the fastest”. A more technical response is that shock-captured solutions to problems with discontinuous solutions (Sod’s problem has a rarefaction, a contact discontinuity, and a shock wave) are intrinsically limited to first-order accuracy. In detail, the solution to Sod’s shock tube is typically less than first-order accurate because the contact discontinuity converges at less than first order, and under mesh refinement this eventually dominates the numerical error (covered in a paper by Banks, Aslam, and Rider (me)). The mindset is “well, it’s going to be first order anyway, so what’s the point of computing convergence rates?”
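The sub-first-order convergence at discontinuities is easy to see even in a scalar toy problem. The sketch below (an illustrative analogue, not the systems result from the Banks-Aslam-Rider paper) advects a square wave with first-order upwind and measures the observed L1 convergence rate, which comes out near 1/2 rather than 1:

```python
import numpy as np

def l1_error_upwind(N, cfl=0.5):
    """Advect a square wave one full period with first-order upwind
    (unit speed, periodic domain [0,1]) and return the L1 error against
    the exact solution, which returns to the initial data."""
    h = 1.0 / N
    x = (np.arange(N) + 0.5) * h
    u0 = np.where((x > 0.25) & (x < 0.75), 1.0, 0.0)
    u = u0.copy()
    steps = int(round(1.0 / (cfl * h)))   # enough steps for one period
    for _ in range(steps):
        u = u - cfl * (u - np.roll(u, 1))
    return h * float(np.abs(u - u0).sum())

# observed order between successive grids: log2(e_N / e_2N)
errors = [l1_error_upwind(N) for N in (100, 200, 400)]
rates = [np.log2(errors[i] / errors[i + 1]) for i in range(2)]
```

The rates cluster around 0.5, the classic L1 convergence order for a first-order scheme at a discontinuity, which is exactly the kind of quantitative statement a graphical comparison alone cannot make.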
I’m going to tell you what the point is. The point is that the magnitude of the errors in the solutions is actually a lot different even among a whole bunch of methods of similar construction and accuracy. Not every method is created equal and there is a good bit of difference between the performance of Godunov’s original method, Van Leer’s method, Colella’s refined PLM method, WENO, PPM, etc. Ultimately what someone really cares about is the overall accuracy of the solution for amount of computational resource, as well as scheme robustness and extensibility.
So going back to reality, people continued to use Sod’s problem and presented results with a graphical comparison, usually on a single grid, plus a comparison of run times. I never saw anyone present the numerical error. People would only provide numerical errors for problems where they expected their method to achieve its full order of accuracy, i.e., smooth problems like advecting a Gaussian function or a sine wave. The only exception came from the flux-corrected transport world, where people did present numerical accuracy for the propagation of square waves, but not for systems of equations.
In 2004-2005 Jeff Greenough and I attacked this low-hanging fruit. Jeff’s lab was using a weighted ENO (WENO) code developed at Brown University (where the inventor of WENO is a professor). Jeff worked on a code that used Colella’s PLM method (basically an improved version of Van Leer’s second-order Godunov method). I had my own implementations of both methods in a common code base. We undertook a head-to-head comparison of the two methods, with the following question in mind: “which method is the most efficient for different classes of problems?” We published the answer in the Journal of Computational Physics.
Before giving the answer to the question a little more background is needed. WENO methods are enormously popular in the research community with the fifth-order method being the archetype. I would say that the implementation of a WENO method is much more elegant and beautiful than the second-order Godunov method. It is compact and modular, and actually fun to implement. It is also expensive.
We used a set of 1-D problems: Sod’s shock tube, a stronger shock tube, a popular blast wave problem, and a shock hitting smooth density perturbations. For the first three problems, the second-order method either outperformed or matched the fifth-order method on the same mesh. On the final problem WENO was better, until we asked the efficiency question. Accounting for the runtime needed to achieve the same accuracy, the second-order method outperformed WENO on every problem. If the problem was shock dominated, the difference in efficiency wasn’t even close. WENO was only competitive on the problem with lots of smooth structure. On the advection of a Gaussian pulse, WENO wins at almost any level of error.
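The “efficiency question” boils down to reading a cost-versus-error plot at a fixed error level. A minimal sketch of that bookkeeping, with purely illustrative numbers (not measurements from the paper), might look like:

```python
import numpy as np

def cost_at_error(costs, errors, target):
    """Log-log interpolate (cost, error) pairs from a refinement study
    to estimate the runtime needed to reach a target error.  Assumes
    errors decrease monotonically as cost grows."""
    c = np.log(np.asarray(costs, dtype=float))
    e = np.log(np.asarray(errors, dtype=float))
    # np.interp needs increasing abscissae, so reverse the arrays
    return float(np.exp(np.interp(np.log(target), e[::-1], c[::-1])))

# hypothetical refinement-study data for two methods (illustration only):
plm  = dict(costs=[1.0, 4.0, 16.0], errors=[0.08, 0.04, 0.02])
weno = dict(costs=[6.0, 24.0, 96.0], errors=[0.06, 0.03, 0.015])
```

Comparing `cost_at_error` for the two methods at the same target error is what “efficiency” means here: not who is more accurate on a given mesh, but who reaches a given accuracy cheapest.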
This raises a deeper question: why would a second-order method be more efficient?
The answer gets to the heart of method design. The second-order method cheaply introduces some elements of high-order methods, e.g., it uses fourth-order approximations to the slopes. Its nonlinear stability mechanism is cheap, and it uses a single-step time integrator rather than a multistep integrator. The single-step integrator allows a more generous time step. The combination of the larger time step, the single-step integrator, and the simpler nonlinear stability mechanism equates to a factor of six in runtime. In addition, to accommodate its fifth-order formal accuracy, the WENO method actually increases the dissipation near shocks, which increases the error.
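The runtime gap compounds multiplicatively, which a back-of-envelope calculation makes plain. The particular split below is an assumption for illustration; only the overall factor of six comes from the discussion above:

```python
# Back-of-envelope split of the runtime gap (illustrative numbers; the
# exact breakdown is assumed, only the overall factor is from the text):
stages_plm  = 1    # single-step, second-order Godunov update
stages_weno = 3    # a three-stage TVD Runge-Kutta time integrator
dt_ratio    = 2.0  # assumed: the single-step scheme takes ~2x the timestep

# work per unit of simulated time, relative to the second-order method
cost_ratio = (stages_weno / stages_plm) * dt_ratio
```

Three right-hand-side evaluations per step instead of one, times half the timestep, gives the factor of six before any per-stage cost difference is even counted.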
The bottom line is that verification is about asking good questions, answering these questions, and then going back to the well… it is in essence the scientific method.