Tiny details imperceptible to us decide everything!

― W.G. Sebald

The standards of practice in verification of computer codes and applied calculations are generally appalling. Most of the time when I encounter work, I’m just happy to see anything at all done to verify a code. Put differently, most of the published literature accepts slipshod practice in verification. In some areas like shock physics, the viewgraph norm still reigns supreme. It actually rules supreme across a far broader swath of science, but you talk about what you know. **The missing element in most of the literature is quantitative analysis of results. Even when the work is better and includes detailed quantitative analysis, it usually lacks a deep connection with numerical analysis results**. The typical best practice in verification only includes comparing the observed rate of convergence with the theoretical rate. Worse yet, that result is asymptotic, and codes are rarely run in practice on asymptotic meshes. Thus, standard practice is largely superficial, and only scratches the surface of the connections with numerical analysis.

The Devil is in the details, but so is salvation.

― Hyman G. Rickover

The generic problem is that verification rarely occurs at all, much less is practiced well, let alone with genuine excellence. Thus, the first step to take is regular, pedestrian application of standard analysis; what masquerades as excellence today is quite threadbare. We verify the order of convergence in code verification under circumstances that usually don’t meet the conditions where the theory formally applies. The theoretical order of convergence only applies in the limit where the mesh is asymptotically fine. **Today, the finite size of the discretization is not taken directly into account. This can be done, and I’ll show how below**. Beyond this rather great leap of faith, verification does not usually focus on the magnitude of error, numerical stability, or the nature of the problem being solved. All of these results are available through competent numerical analysis, in many cases via utterly classical techniques.

*A maxim of verification that is important to emphasize is that the results are a combination of theoretical expectations, the finite resolution, and the nature of the problem being solved. All of these factors should be considered in interpreting results*.

Before I highlight all of the ways we might make verification a deeper and more valuable investigation, a few other points are worth making about the standards of practice. The first thing to note is the texture within verification, and its two flavors. Code verification is used to investigate the correctness of a code’s implementation. This is accomplished by solving problems with an analytical (exact or nearly-exact) solution. The key is to connect the properties of the method defined by analysis with the observed behavior in the code. **The “gold standard” is verifying that the order of convergence observed matches that expected from analysis**.
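This “gold standard” procedure is simple enough to sketch concretely. The following is a minimal illustration, not from the post itself: it assumes the model problem u′ = λu (which the post uses later for forward Euler) and measures the observed order by comparing errors against the exact solution on two meshes.

```python
import math

def forward_euler(lam, u0, T, n):
    """Integrate u' = lam * u from 0 to T with n forward Euler steps."""
    h = T / n
    u = u0
    for _ in range(n):
        u += h * lam * u
    return u

def observed_order(lam, u0, T, n):
    """Observed convergence rate from errors on meshes of n and 2n steps."""
    exact = u0 * math.exp(lam * T)
    e_h = abs(forward_euler(lam, u0, T, n) - exact)
    e_h2 = abs(forward_euler(lam, u0, T, 2 * n) - exact)
    return math.log2(e_h / e_h2)

print(observed_order(-1.0, 1.0, 1.0, 64))  # close to the theoretical order of one
```

Code verification then amounts to checking that this observed order matches the order predicted by analysis of the method.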

Truth is only relative to those that ignore hard evidence.

― A.E. Samaan

The second flavor of verification is solution (calculation) verification. In solution verification, the objective is to estimate the error in the numerical solution of an applied problem. **The error estimate is for the numerical component of the overall error, separated from modeling errors**. It is an important component in the overall uncertainty estimate for a calculation. The numerical uncertainty is usually derived from the numerical error estimate. The rate or order of convergence is usually available as an auxiliary output of the process. Properly practiced, the rate of convergence provides context for the overall exercise.
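A minimal sketch of this process, under the same assumed model problem, uses three mesh levels and Richardson-style extrapolation to estimate both the observed order and the numerical error with no exact solution in hand:

```python
import math

def forward_euler(lam, u0, T, n):
    """Integrate u' = lam * u from 0 to T with n forward Euler steps."""
    h = T / n
    u = u0
    for _ in range(n):
        u += h * lam * u
    return u

def solution_verification(u_h, u_h2, u_h4):
    """Estimate observed order and finest-mesh numerical error from three
    mesh levels (h, h/2, h/4), without reference to an exact solution."""
    p = math.log2(abs(u_h2 - u_h) / abs(u_h4 - u_h2))
    err = abs(u_h4 - u_h2) / (2.0 ** p - 1.0)  # Richardson-style estimate
    return p, err

u16, u32, u64 = (forward_euler(-1.0, 1.0, 1.0, n) for n in (16, 32, 64))
p, err = solution_verification(u16, u32, u64)
```

Here the error estimate is the primary product and the observed order p is the auxiliary result that lends it credibility, exactly the inverse of the code verification case.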

One of the things to understand is that code verification also contains a complete accounting of the numerical error. This error can be used to compare methods with “identical” orders of accuracy by their levels of numerical error, which can be useful in making decisions about code options. **By the same token, solution verification provides information about the observed order of accuracy. Because applied problems are not analytical or smooth enough, they generally can’t be expected to exhibit the theoretical order of convergence**. The rate of convergence is then an auxiliary result of the solution verification exercise, just as the error is an auxiliary result of code verification. It contains useful information about the solution, but it is subservient to the error estimate. Conversely, the error provided in code verification is subservient to the order of accuracy. Nonetheless, the current practice simply scratches the surface of what could be done via verification and its unambiguous ties to numerical analysis.

Little details have special talents in creating big problems!

― Mehmet Murat ildan

If one looks at the fundamental (or equivalence) theorem of numerical analysis, the two aspects of the theorem are stability and consistency implying convergence (https://wjrider.wordpress.com/2016/05/20/the-lax-equivalence-theorem-its-importance-and-limitations/). Verification usually uses a combination of error estimation and convergence testing to imply consistency. Stability is merely assumed. **This all highlights the relatively superficial nature of the current practice**. The result being tested is completely asymptotic, and stability is merely assumed and never strictly tested. Some methods are unconditionally stable, which might also be tested. In all cases the failure to stress test the results of numerical analysis is short-sighted.

One of the most important results in numerical analysis is the stability of the approximation. Failures of stability are one of the most horrific things to encounter in practice. Stability results should be easy and revealing to explore via verification. It also offers the ability to explore what failure of a method looks like, and the sharpness of the estimates of stability. **Tests could be devised to examine the stability of a method and confirm this rather fundamental aspect of a numerical method**. In addition to confirming this rather fundamental behavior, the character of instability will be made clear if it should arise. Generally, one would expect calculations to diverge under mesh refinement and the instability to manifest itself earlier and earlier as the mesh is refined. I might suggest that stability could be examined via mesh refinement, and observing the conditions where the convergence character changes.
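Such a stress test is easy to sketch. Assuming the model problem u′ = λu again, forward Euler has the classical stability limit h ≤ 2/|λ|; marching with a step on either side of the limit makes the character of instability, and the sharpness of the estimate, directly observable:

```python
def max_magnitude(lam, u0, h, n):
    """March forward Euler with a fixed step and track the largest |u| seen."""
    u, biggest = u0, abs(u0)
    for _ in range(n):
        u += h * lam * u
        biggest = max(biggest, abs(u))
    return biggest

# Stability limit for forward Euler on u' = lam*u is h <= 2/|lam|.
stable = max_magnitude(-1.0, 1.0, 0.5, 40)    # |1 + h*lam| = 0.5: decays
unstable = max_magnitude(-1.0, 1.0, 2.5, 40)  # |1 + h*lam| = 1.5: blows up
print(stable, unstable)
```

Refining the mesh from the unstable side shows the divergence appearing earlier and earlier in the run, which is precisely the refinement-based diagnostic suggested above.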

One of the most unpleasant issues with verification is the deviation of the observed rate of convergence from what is expected theoretically. No one seems to have a good answer to how close is close enough. Sometimes we can observe that we systematically get closer and closer as the mesh is refined. **This is quite typical, but systematic deviations are common. As I will show, the deviations are expected and may be predicted by detailed numerical analysis**. The key is to realize that the effects of finite resolution can be included in the analysis. As such, for simple problems we can predict the rate of convergence observed and its deviation from the asymptotic rate. Beyond the ability to predict the rate of convergence, this analysis provides a systematic explanation for these oft-seen results.

This can be done very easily using classical methods for numerical analysis (see the previous blog post https://wjrider.wordpress.com/2014/07/15/conducting-von-neumann-stability-analysis/). We can start with the knowledge that detailed numerical analysis uses an analytical solution to the equations as its basis. **We can then analyze the deviations from the analytical solution and their precise character, including the effects of finite resolution**. As noted in that previous post, the order of accuracy is examined via a series expansion in the limit where the step size or mesh spacing is vanishingly small. We also know that this limit is only approached and never actually reached in any practical calculation.
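To make the symbol comparison concrete, here is a small numerical check (an illustration, not from the post): forward Euler on u′ = λu has the symbol G = 1 + hλ, the exact symbol is exp(hλ), and their difference, scaled by h², should approach λ²/2 as h shrinks, confirming first-order accuracy via the leading error term.

```python
import math

lam = 1.0
# Forward Euler symbol is G = 1 + h*lam; the exact symbol is exp(h*lam).
# The one-step error exp(h*lam) - (1 + h*lam) ~ (lam*h)**2 / 2 as h -> 0.
for h in (1e-1, 1e-2, 1e-3):
    scaled_error = (math.exp(h * lam) - (1.0 + h * lam)) / h**2
    print(h, scaled_error)  # approaches lam**2 / 2 = 0.5
```

The same expansion truncated at finite h, rather than taken to the limit, is what drives the finite-resolution results below.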

For the simple problems amenable to these classical analyses, we can derive the exact rate of convergence for a given step size (this result is limited to the ideal problem central to the analysis). The key part of this approach is using the exact solution to the model equation and the numerical symbol to provide an error estimate. Consider the forward Euler method for the model ODE $u' = \lambda u$, $u_{n+1} = (1 + h\lambda) u_n$; after $N = T/h$ steps the error is $E(h) = e^{\lambda T} - (1 + h\lambda)^{T/h}$. **We can now compute the error for any step size and analytically estimate the convergence rate we would observe in practice**. If we employ the relatively standard practice of mesh halving for verification, we get the estimate of the rate of convergence, $p = \log_2\!\left(E(h)/E(h/2)\right)$. A key point to remember is that the solution with the halved time step takes twice the number of steps. Using this methodology, we can easily see the impact of finite resolution. For the forward Euler method, we can see that steps larger than zero raise the rate of convergence above the theoretical value of one. This is exactly what we see in practice.
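This prediction can be evaluated directly, since for the model problem the error formula is closed form. A minimal sketch (assuming the decaying case λ = -1, u(0) = 1 for illustration):

```python
import math

def global_error(lam, T, n):
    """Exact forward Euler error for u' = lam*u, u(0) = 1, using n steps."""
    h = T / n
    return abs(math.exp(lam * T) - (1.0 + h * lam) ** n)

def predicted_rate(lam, T, n):
    """Convergence rate a mesh-halving study would report at this resolution.
    Note the halved step takes twice as many steps, 2n."""
    return math.log2(global_error(lam, T, n) / global_error(lam, T, 2 * n))

for n in (2, 8, 32):
    print(n, predicted_rate(-1.0, 1.0, n))  # above one on coarse meshes, tending to one
```

At coarse resolution the predicted rate sits noticeably above one and decays toward the asymptotic value as the mesh is refined, which is the systematic deviation seen in practice.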

When one starts to examine what we expect through analysis, a number of interesting things can be observed. If the coarsest step size is slightly unstable, the method will exhibit very large rates of convergence. Remarkably, we see this all the time. Sometimes verification produces seemingly absurdly high rates of convergence. **Rather than indicating that everything is great, this is an indication that the calculation is highly suspect**. The naïve practitioner will often celebrate the absurd result as a triumph when it is actually a symptom of problems requiring greater attention. With the addition of a refined analysis, this sort of result can be seen as pathological.
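The same closed-form error makes this pathology easy to reproduce. In this sketch (parameters chosen for illustration), the coarsest mesh violates the forward Euler stability limit, so its error is enormous; halving the step restores stability, and the mesh-halving formula reports an absurdly high "rate of convergence":

```python
import math

def global_error(lam, T, n):
    """Exact forward Euler error for u' = lam*u, u(0) = 1, using n steps."""
    h = T / n
    return abs(math.exp(lam * T) - (1.0 + h * lam) ** n)

# Coarsest mesh (n = 2, so h = 2.5) violates the stability limit h <= 2/|lam|;
# halving to n = 4 (h = 1.25) is stable, so the error collapses and the
# apparent "order of convergence" is wildly inflated.
rate = math.log2(global_error(-1.0, 5.0, 2) / global_error(-1.0, 5.0, 4))
print(rate)
```

A rate this far above the theoretical value of one is a red flag, not a triumph.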

*Recognize that we have immediately obtained a significant result from the analysis of perhaps the simplest numerical method in existence*. Think of the untapped capacity for explaining the behavior observed in computational practice. **Moreover, this significant result explains a serious and deeply pernicious problem in verification: the misreading of results. Even where the verification practice is quite good, the deviation of convergence rates from the theoretical rates is pervasive. We can easily see that this is completely expected behavior**. This ought to “bait the hook” for conducting more analysis and connecting it to verification results.

*There is a lot more that could be done here, I’ve merely scratched the surface.*

The truth of the story lies in the details.

― Paul Auster

obliqueperspectives

said:Bill, When I was at LANL (CCS-1 &3) in visualization, I was working on uncertainty (esp visualization of it) and comparative visualization. I could not dissuade people from assuming that the viewgraph norm was the acme of analysis. Worked primarily on climate and turbulence, with support from Jim Kamm and Beth Wingate. To me, the crux of the matter was to look at the science in the images or data, not just the values. The viz folks chose to ignore the structures (ie vortices, etc) the net effect of which was to take the science out of scientific visualization. My approach was to look at the data as an assemblage of self similar forms whose measure was a demographic of maturities and sizes. While I retired from LANL 2 yrs ago for medical reasons. I am still working on the problem and would welcome interaction. Richard Strelitz, PhD,


Bill Rider

said:Hi Richard! So good to hear from you! What a treat. I’m still out here fighting the good fight. The disingenuous and naive are seemingly ever more taken as superior to depth and understanding. We seem to be in a time of the superficial over the substantial… I love hearing from you!

Pingback: This Week in CFD | Another Fine Mesh

Bill Rider

said:Thanks for the attention!

Bill Rider

said:A postscript for this blog post: I wrote this post being inspired by a flaw in a paper I reviewed. In response the authors removed the material rather than improving it. I was rather put off by their unwillingness to do a little more work (it took me 10 minutes to analyze the method using Mathematica). The associate editor decided to accept the paper even though it was now dull as dirt. Another stunning example of how flawed the publishing process is.

wdf

said:Bill,

I think you (rather rudely) took our unwillingness to be the first to document this method as our inability to. Please feel free to publish the details of this method yourself and then I would have no problem including the original (and updated) results using it and citing it as W.J. Rider’s two-step leap-frog method, or whatever you want to call it. (P.J. Roache’s suggestion of alternating Forward/Backward Euler is pretty good too.) But I’m not going to do it. I don’t think it’s a terribly good scheme and I don’t want to be credited with inventing it. No thank you.

Will

(William D. Fullmer)

Bill Rider

said:Will. I’ve had to do analysis of methods that I’ve not invented over and over in my career. I always learned something. This was no exception. People use methods for reasons that may be entirely unclear in retrospect. They are usually the product of convenience for the state of a code during some retrofitted improvement. Nonetheless, it is a learning experience. The method in your paper was unique. Leapfrog methods are used several fields rather extensively (geophysics & plasma physics). The documentation and development of a method adds to knowledge. It is possible that a gem is sitting there, it just needs to have some issues fixed. I won’t back away from the conclusion that an opportunity is missed by not doing the analysis. I’m more than happy to be disagreed with too, it makes life more interesting.

Bill Rider

said:Will. Since the paper was accepted, I’ll let you know that I thought it was a good paper, but an opportunity to make a great paper was missed. The method was genuinely interesting and a more complete analysis would have been a wonderful contribution. It was a shame not to include it.

More to the point of the blog post, verification is generally practiced in a superficial manner, divorced from detailed knowledge of the analysis. Analysis results are confirmed in a very superficial manner. Your paper could have really done things right. You’d have no ownership of this flawed method, but you would own being a poster child for doing verification properly and completely.

I’m sorry that you found my review to be rude. For that I apologize, truly, I’ve been on the receiving end of that too often. The paper was genuinely inspiring because it was so close to being wonderful. I was saddened by the ultimate outcome of not stepping toward greatness.

wdf

said:Bill,

Thanks for the follow up remarks. To be honest, we didn’t set out to write a great paper. Actually we had a serious discussion before writing it up if this was even worth publishing or not. Ultimately we decided we should archive this work largely because a) it’s interesting that a problem like the HCS that’s been beaten to death over the years somehow has escaped being used in this manner and b) we were also using the HCS to “benchmark” CFD-DEM simulations which is some type of verification exercise, but not a clean code verification as that work is, and pointing out this difference could be useful. [Aside: I would be curious to know your thoughts on that, in other words what would you call comparing results from molecular dynamics to an analytical solution of the Euler or NS equations? I am writing something up on CFD-DEM simulations in the HCS now for the V&V meeting and I’m just calling it all “benchmarking” which seems ambiguous enough to be safe and cover all bases.] I did take pause at your comment that the VVUQ journal should be reserved for really great works, not standard practices or minor works. You are probably right on that account and maybe we picked the wrong journal to archive a minor contribution. I will also admit that I sent back a rather hostile reply. My frustration level with this one was very high to begin with since this this short, simple paper was submitted in August of last year and was still under review some 14+ months later. I would guess that it didn’t even hit your desk until after we asked our AE to reject the paper so that we could be released from copyright and submit the paper elsewhere. Hopefully any future encounters will be under better circumstances. I do enjoy your blog posts.

Will

Bill Rider

said:Will. I really appreciate the healthy dialog. My feelings about validation are simple: it needs to involve an observation of the natural world, reality. Verification is analytical, which might mean a numerical evaluation, but one with defined precision.

Benchmarks are left over and may be very important within a given community, but they lack some aspects of “truth”. Still for some communities the benchmarks have great importance.

I would say that any example where verification or validation is done correctly is a great contribution because of how uncommon it is.

Pingback: The Best of the Regularized Singularity | The Regularized Singularity