Verification, as usually defined for computational science, is an important but often confusing aspect of simulation. Its core concerns are threefold:

- Does the simulation converge?
- At what rate does it converge?
- What is the magnitude of the numerical error?

Answering these questions provides evidence of the quality of the numerical solution, which is an important component of the overall quality of a simulation. Scientists, who tend to focus on the fidelity of the physical modeling, often systematically overlook this aspect of simulation quality.

V&V often comes with a confusing “word soup” of important terms, and this soup may be at its worst in verification. In everyday usage, verification and validation are near synonyms, but in the framing of simulation quality each has a quite specific technical meaning, and the activities surrounding each are distinctly different; both are needed for the overall simulation quality to be assessed and understood. The pithy statement of what the two words mean is useful: verification is the determination of whether the model is being solved correctly, and validation is the determination of whether the model itself is correct. Each involves the accumulation of evidence that this correctness is present.

Scientists tend to focus on the correctness of the model itself. Determining the correctness of the solution of the model is a mathematical problem involving basic numerical analysis. Validation necessarily involves observational or experimental data and its comparison to the simulation. Crucially, two distinct error sources color any such comparison: the numerical error in solving the model, and the experimental or observational error. Too often, one or both of these are overlooked. For high-quality work, both must be accounted for in the assessment of model correctness.

Verification is also used to estimate the numerical error, but this use differs from the verification process that establishes the correctness of the model's solution. Thus a distinction is made between the two uses of verification. Code verification is the process of determining model solution correctness, and it necessarily involves comparing numerical solutions with analytical solutions that are unambiguously correct. For error estimation several procedures may be used, but solution (or calculation) verification is perhaps the most convincing methodology.

A big issue in the practice of verification is the confounding definitions and purposes of its two forms; the outcomes of the different forms of verification focus on differing metrics. I am going to try to address these confounding definitions and idiosyncrasies clearly.

For code verification, where the correctness of the implementation and solution procedure is at stake, the key metric is the rate of convergence. This observed rate is compared with the formal order of accuracy derived from analysis of the method being tested. If the solution to the problem is sufficiently smooth, the computed order of accuracy should closely match the order of accuracy from the numerical analysis as the mesh is refined.
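As a concrete sketch, the observed order of accuracy can be computed from the errors against the analytical solution on a sequence of refined meshes. The error values below are hypothetical stand-ins for measured norms from a nominally second-order method, not data from any particular code:

```python
import math

def observed_order(h1, h2, e1, e2):
    """Observed order of accuracy from errors e1, e2 on mesh spacings h1 > h2."""
    return math.log(e1 / e2) / math.log(h1 / h2)

# Hypothetical L2 errors against an analytical solution for a nominally
# second-order method; e(h) = C*h^2 stands in for measured values.
meshes = [0.1, 0.05, 0.025]
errors = [2.0e-3 * h**2 for h in meshes]

for k in range(len(meshes) - 1):
    p = observed_order(meshes[k], meshes[k + 1], errors[k], errors[k + 1])
    print(f"h: {meshes[k]:.3f} -> {meshes[k + 1]:.3f}, observed order = {p:.2f}")
```

In real use the errors come from the code under test; a second-order method on a smooth problem should drive the observed order toward 2 under refinement.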

In addition, code verification yields the magnitude of the error. The code and computational physics communities systematically overlook the practical use of this error magnitude, which could be used to great effect in determining the efficacy of numerical methods. The determination of order of accuracy and error magnitude is not limited to smooth solutions. Yet when solutions are discontinuous, the convergence rate and error magnitude are usually overlooked entirely, and comparisons between the analytical solution and the numerical result are limited to the viewgraph or eyeball norm. This is a mistake and a missed opportunity to discuss the impact of numerical methods. Most practical problems have various forms of discontinuous behavior, and the magnitude of error for these problems defines the efficiency of the method.
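To illustrate with a toy example of my own (not drawn from any particular code): advecting a step discontinuity with first-order upwind and measuring the L1 error against the exact shifted step gives a quantitative rate and magnitude, not just an eyeball comparison. The observed rate degrades well below the design order of one, roughly toward one half:

```python
import math

def upwind_step_error(n, t_final=0.25, cfl=0.5):
    """Advect a unit step on [0, 1] (speed 1, periodic) with first-order
    upwind, and return the L1 error against the exact shifted step."""
    h = 1.0 / n
    steps = max(1, round(t_final / (cfl * h)))
    nu = (t_final / steps) / h  # Courant number actually used
    u = [1.0 if (i + 0.5) * h < 0.5 else 0.0 for i in range(n)]
    for _ in range(steps):
        u = [u[i] - nu * (u[i] - u[i - 1]) for i in range(n)]  # periodic via u[-1]
    exact = [1.0 if ((i + 0.5) * h - t_final) % 1.0 < 0.5 else 0.0 for i in range(n)]
    return h * sum(abs(a - b) for a, b in zip(u, exact))

e1, e2 = upwind_step_error(100), upwind_step_error(200)
rate = math.log(e1 / e2) / math.log(2.0)
print(f"L1 errors {e1:.3e} -> {e2:.3e}, observed rate {rate:.2f}")
```

Even on a discontinuous problem the error magnitude and rate are perfectly measurable; they are simply rarely reported.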

Solution verification is important for estimating the numerical error in applied simulation. No analytical solution exists in these cases, and the goal is twofold: determine whether the model is converging toward a mesh-independent solution, and estimate the magnitude of the error. Often scientists will show a couple of mesh solutions to assess whether the solution is sensitive to the mesh resolution. This is better than nothing, but only just: it does not provide the key property, the magnitude of the numerical error in the solution. The error magnitude is a function of the mesh resolution; different mesh resolutions yield different error magnitudes (for a convergent simulation). The rate of convergence is an auxiliary quantity of interest, but the error magnitude is the primary metric.
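A standard way to get that error magnitude without an analytical solution is Richardson extrapolation on three meshes with a constant refinement ratio; it yields the observed rate, an extrapolated value, and the error estimate that a two-mesh eyeball check never provides. The quantity of interest below is a made-up stand-in behaving like a second-order-convergent result:

```python
import math

def richardson_error_estimate(f_coarse, f_med, f_fine, r):
    """Observed order, extrapolated value, and fine-mesh error estimate
    from a scalar quantity of interest on three meshes refined by ratio r."""
    p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)  # extrapolated value
    err_fine = abs(f_fine - f_exact)                    # error-magnitude estimate
    return p, f_exact, err_fine

# Hypothetical quantity of interest behaving like f(h) = 1.0 + 0.5*h^2
vals = [1.0 + 0.5 * h**2 for h in (0.4, 0.2, 0.1)]
p, f_star, err = richardson_error_estimate(*vals, r=2.0)
print(f"observed order {p:.2f}, extrapolated value {f_star:.4f}, "
      f"estimated fine-mesh error {err:.2e}")
```

Procedures such as the grid convergence index build a safety factor on top of exactly this estimate; the sketch above is the bare mechanics.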

Lastly, the expectations for the rate of convergence are not often stated clearly enough. Concisely, the rate of convergence is a function of the details of the numerical method and the nature of the solution. This is true for both code and solution verification. If the solution lacks sufficient smoothness (regularity), or contains certain degenerate features, the convergence rate will deviate from the design order of accuracy, which a numerical method can achieve only under ideal circumstances. Typically, the observed convergence rate is expected to be the minimum of the design order of accuracy and the solution regularity.
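Stated as a formula (with $q$ denoting the solution's regularity, a symbol of my choosing):

```latex
p_{\text{observed}} \approx \min\left(p_{\text{design}},\; q\right)
```

So a formally second-order method applied to a solution with regularity one should be expected to converge at first order, not second.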

If instead another error estimation procedure is utilized (such as adjoint methods, PDE-based methods, Zienkiewicz-Zhu (Z-Z), etc.), there is a secondary burden for the simulation code to address. In these cases the error estimation itself needs to be verified: code verified by comparison against analytical error estimates, and checked by solution verification in applied use. I have rarely observed the successful use of verification for such estimation procedures.
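One simple check when code-verifying an error estimator is the effectivity index, the ratio of the estimated error to the exact error on a problem with an analytical solution. The numbers below are hypothetical, but a verified estimator should drive this ratio toward one under mesh refinement:

```python
def effectivity_index(estimated_error, exact_error):
    """Ratio of estimator output to the true (analytical) error.
    Values near 1 indicate a reliable, verified estimator."""
    return estimated_error / exact_error

# Hypothetical numbers: an estimator applied on a refined sequence of meshes
# to a problem with a known exact solution (true errors computed directly).
true_errs = [4.0e-3, 1.0e-3, 2.5e-4]
est_errs = [4.4e-3, 1.05e-3, 2.55e-4]  # stand-in estimator outputs

effs = [effectivity_index(est, true) for est, true in zip(est_errs, true_errs)]
print(["%.3f" % e for e in effs])
```

Without this kind of evidence, an error estimate in an applied calculation is itself an unverified number.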

Finally, I’ll mention the concerns I have about commercial CFD codes, or codes downloaded and used without detailed knowledge of the solution procedures therein. In the vast majority of cases these codes do not have a well-evidenced pedigree. They often purport to have a good pedigree, but the evidence is sorely lacking; those writing, selling, and distributing these codes rarely provide what is needed to place good faith in them. This lack of evidenced pedigree and of deep knowledge of the solution procedures greatly limits the effective estimation of numerical error when using such codes.