Solution verification and code verification are done for quite different reasons. They reinforce each other, but they serve different purposes in conducting quality computational science. Code verification provides clear evidence that the numerical methods used to solve the governing equations are implemented correctly. Solution verification provides an estimate of the discretization error for an applied problem. As evidence of implementation correctness, solution verification is merely a “feel-good” exercise. The frequent confusion between the two is understandable; in retrospect, it is the fault of a poorly chosen taxonomy.

Lately, it has become fairly clear to me that people are starting to believe that actively doing verification is a good idea. This is great: the message has been received and action has been taken. The good work of code verification helps to provide confidence that the implementation of a solution method is correct. To remind the reader, code verification compares a numerical solution to an analytical solution. One of the key measures in conducting code verification is the observed rate of convergence for a method, which is directly compared to what is theoretically expected. If the values match, the verification is confirmed, and this evidence is amassed to indicate that the method is implemented correctly. If one completes this exercise over and over for different problems having analytical solutions, the evidence can become overwhelming. I’d note for the experts out there that the expected rate of convergence depends not only on the method, but also on the nature of the solution. In other words, if the analytical solution lacks smoothness (differentiability) or contains a discontinuity, the expected rate of convergence will degrade from the ideal case.
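To make this concrete, here is a minimal sketch of how an observed convergence rate is computed and compared against the theoretical order. The error values are hypothetical stand-ins; in practice they come from comparing numerical solutions on two grids against the analytical solution.

```python
import math

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Observed rate p from errors on two grids, assuming E(h) ~ C*h^p,
    so that p = log(E_coarse / E_fine) / log(r)."""
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# Hypothetical errors from a nominally second-order method on grids h and h/2:
e_h, e_h2 = 4.0e-3, 1.0e-3
p = observed_order(e_h, e_h2)
print(f"observed order = {p:.2f}")  # should sit close to the theoretical p = 2
```

If the observed rate matches the theoretical one across a suite of such problems, that is the accumulating evidence of correct implementation described above.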

As I stated above, solution verification has an entirely different purpose. It is conducted to provide an estimate of the numerical error in a solution. That is the key output: the error estimate, not the rate of convergence. The rate of convergence is an outcome of secondary interest, an auxiliary quantity. If the convergence rate does not match the formal order of the numerical method, it does not necessarily mean the method is implemented incorrectly. It might mean that, but more properly it is a quantity that invites caution and examination by the code’s user. The reason is that we rarely have firm theoretical expectations for convergence in “real” applied problems. Often the quantities of interest in a problem are functionals of the solution that are immune to firm theoretical estimates of convergence. The problem being studied does not generally have an analytical solution, although the error estimates could be applied to the same problems used for code verification. In a sense, “code” verification techniques should be used to verify the error estimates produced by solution verification.
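The standard tool for producing such an error estimate is Richardson extrapolation. Below is a minimal sketch, assuming values of a scalar quantity of interest on three systematically refined grids with a constant refinement ratio; the input values are hypothetical, manufactured from f(h) = 1 + 0.1 h² with h = 0.4, 0.2, 0.1 so the script is self-contained.

```python
import math

def richardson(f_coarse, f_medium, f_fine, r=2.0):
    """From three grid levels with refinement ratio r, return the observed
    order p, an estimate of the error in f_fine, and the extrapolated value."""
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    err_fine = (f_medium - f_fine) / (r**p - 1.0)  # estimated error in f_fine
    f_star = f_fine - err_fine                     # extrapolated "exact" value
    return p, err_fine, f_star

# Hypothetical functional values converging at second order toward 1.0:
p, err, f_star = richardson(1.016, 1.004, 1.001)
print(f"p = {p:.2f}, error estimate = {err:.4f}, extrapolated = {f_star:.4f}")
```

Note that the observed p falls out as a by-product, exactly the auxiliary quantity described above: useful as a sanity check, but the error estimate is the deliverable.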

While that last sentence was correct, the language is strained and prone to confusion. Perhaps the best thing to do is eliminate the confusion by renaming solution verification something clearer: “numerical error estimation” or “numerical uncertainty.”

That is my suggestion: we should replace the term “solution verification” with “numerical error estimation.”

Vijay M

said: Valid thoughts. Intuitively, I’ve understood convergence rate verification as essentially measuring “discretization sensitivity” and solution verification as the quantification of “discretization uncertainty,” or the true numerical error polluting the solution. In that regard, V&V+UQ seems redundant, since both verification and validation are in essence measuring, in some form, the sensitivity and uncertainty of the code, discretization, and physics model with respect to parameters of interest (choice of solver algorithm, geometry description, mesh size, basis order, subgrid-model-based correlations, input data, etc.). Stressing each individual aspect is great as a divide-and-conquer strategy, but focusing on just the primary verification and then moving to validation as a single-step process is not a valid workflow, IMO. VVUQ should be iterative, and each step necessarily applies to all problems of interest.

There is typically a lot of work on using MMS to perform code verification with smooth solution profiles, and not enough on performing solution verification via Richardson/Wynn extrapolation and the like for more realistic problems. Granted, the latter is harder to do because of the need for consistent interpolation/projection methods. Still, the frequent claim that, since code verification succeeded for some chosen MMS, the code has been completely verified even for a different realistic problem with vastly different scales is somewhat false. My 2 cents.

wjrider

said: Thanks! I really liked the comment you made about convergence rate being like a sensitivity. Great observation.