Solution verification and code verification are done for quite different reasons.  They reinforce each other, but they serve different purposes in conducting quality computational science.  Code verification provides clear evidence that the numerical methods used to solve the governing equations are implemented correctly.  Solution verification provides an estimate of the discretization error in an applied problem.  As evidence of the correctness of an implementation, solution verification is little more than a “feel-good” exercise.  The confusion between the two is understandable; in retrospect, it is the fault of a poorly chosen taxonomy.

Lately, it has become fairly clear to me that people are starting to believe actively doing verification is a good idea.  This is great: the message has been received and action has been taken.  The good work of code verification helps to provide confidence that the implementation of a solution method is correct.  To remind the reader, code verification compares a numerical solution to an analytical solution.  One of the key measures in code verification is the observed rate of convergence of the method, which is compared directly to the theoretically expected rate.  If the values match, the verification is confirmed, and this evidence accumulates to indicate that the method is implemented correctly.  If one completes this exercise over and over for different problems with analytical solutions, the evidence can become overwhelming.  I’d note for the experts out there that the expected rate of convergence depends not only on the method but also on the nature of the solution: if the analytical solution lacks smoothness (differentiability) or contains a discontinuity, the expected rate of convergence degrades from the ideal case.
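As a concrete illustration of that convergence-rate check, here is a minimal sketch in Python.  The grid sizes and error values are hypothetical stand-ins for whatever errors you actually measure against an analytical solution; only the formula for the observed order comes from the standard assumption that the error behaves like C·h^p.

```python
import numpy as np

def observed_order(h_coarse, e_coarse, h_fine, e_fine):
    """Observed convergence rate from errors on two grids.

    Assuming e ~ C * h**p, then p = log(e_coarse/e_fine) / log(h_coarse/h_fine).
    """
    return np.log(e_coarse / e_fine) / np.log(h_coarse / h_fine)

# Hypothetical example: errors against an analytical solution on a
# sequence of refined grids for a nominally second-order scheme.
h = np.array([0.1, 0.05, 0.025, 0.0125])
err = np.array([4.1e-3, 1.0e-3, 2.6e-4, 6.4e-5])  # e.g., discrete L2 errors

for i in range(len(h) - 1):
    p = observed_order(h[i], err[i], h[i + 1], err[i + 1])
    print(f"h = {h[i + 1]:.4f}: observed order ~ {p:.2f}")

# If the observed order approaches the theoretical order (2 here), that is
# evidence the method is implemented correctly -- provided the analytical
# solution is smooth enough to support the ideal rate.
```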

As I stated above, solution verification has an entirely different purpose.  It is conducted to provide an estimate of the numerical error in a solution.  The key output is the error estimate, not the rate of convergence; the rate of convergence is an auxiliary quantity of secondary interest.  If the convergence rate does not match the nominal order of the numerical method, it does not necessarily mean the method is implemented incorrectly.  It might mean that, but more often it is simply a quantity that invites caution and examination by the code’s user.  The reason is that we rarely have firm theoretical expectations for convergence in “real” applied problems.  Often the quantities of interest are functionals of the solution for which firm theoretical convergence estimates are unavailable.  The problem being studied generally does not have an analytical solution, although the error estimators could be applied to the same problems used for code verification.  In a sense, “code” verification techniques should be used to verify the error estimates produced by solution verification.
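One common way to produce such an error estimate is Richardson extrapolation from three systematically refined grids.  The sketch below assumes a constant refinement ratio and a single scalar quantity of interest; the numerical values are hypothetical and stand in for whatever functional of the solution an application actually reports.

```python
import numpy as np

def richardson_estimate(f_coarse, f_medium, f_fine, r):
    """Estimate the observed order and discretization error from three grids.

    f_* are a scalar quantity of interest computed on coarse, medium, and
    fine grids with a constant refinement ratio r (h_coarse = r * h_medium,
    h_medium = r * h_fine).
    """
    # Observed order from the ratio of successive solution changes.
    p = np.log(abs(f_medium - f_coarse) / abs(f_fine - f_medium)) / np.log(r)
    # Richardson-extrapolated value and the error estimate on the fine grid.
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    err_fine = abs(f_exact - f_fine)
    return p, f_exact, err_fine

# Hypothetical quantity of interest (e.g., a drag coefficient) on three grids.
p, f_exact, err = richardson_estimate(f_coarse=1.052, f_medium=1.031,
                                      f_fine=1.022, r=2.0)
print(f"observed order ~ {p:.2f}, extrapolated value ~ {f_exact:.4f}, "
      f"estimated error on fine grid ~ {err:.1e}")
```

Note that in this made-up example the observed order (about 1.2) does not match a nominal second-order method, which is exactly the situation described above: it does not prove the code is wrong, but it is a signal for the user to look more closely before trusting the error estimate.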

While that last sentence is correct, the language is strained and prone to confusion.  Perhaps the best thing to do is eliminate the confusion by renaming solution verification something clearer: “numerical error estimation” or “numerical uncertainty”.

That is my suggestion: we should replace the term “solution verification” with “numerical error estimation”.
