Tiny details imperceptible to us decide everything!

― W.G. Sebald

The standards of practice in verification of computer codes and applied calculations are generally appalling. Most of the time when I encounter work, I’m just happy to see anything at all done to verify a code. Put differently, most of the published literature accepts slipshod practice in verification. In some areas like shock physics, the viewgraph norm still reigns supreme. It actually rules supreme across a far broader swath of science, but you talk about what you know. The missing element in most of the literature is quantitative analysis of results. Even when the work is better and includes detailed quantitative analysis, it usually lacks a deep connection with numerical analysis. The typical best practice in verification only includes the comparison of the observed rate of convergence with the theoretical rate. Worse yet, that theoretical result is asymptotic, and codes are rarely used in practice on asymptotic meshes. Thus, standard practice is largely superficial and only scratches the surface of the connections with numerical analysis.

The Devil is in the details, but so is salvation.

― Hyman G. Rickover

The generic problem is that verification rarely occurs at all, much less is practiced well; only after it becomes routine might we aspire to do it with genuine excellence. The first step is therefore the regular, pedestrian application of standard analysis, because what masquerades as excellence today is quite threadbare. We verify the order of convergence in code verification under circumstances that usually don’t meet the conditions where the theory formally applies. The theoretical order of convergence only holds in the limit where the mesh is asymptotically fine. Today, the finite size of the discretization is not taken directly into account. It can be, and I’ll show you how below. Beyond this rather great leap of faith, verification does not usually examine the magnitude of the error, numerical stability, or the nature of the problem being solved. All of these results are available through competent numerical analysis, in many cases via utterly classical techniques.

A maxim of verification worth emphasizing is that results are a combination of theoretical expectations, the finite resolution, and the nature of the problem being solved. All of these factors should be considered when interpreting results.

Before I highlight all of the ways we might make verification a deeper and more valuable investigation, a few other points are worth making about the standards of practice. The first thing to note is the texture within verification, and its two flavors. Code verification is used to investigate the correctness of a code’s implementation. This is accomplished by solving problems with an analytical (exact or nearly-exact) solution. The key is to connect the properties of the method defined by analysis with the observed behavior in the code. The “gold standard” is verifying that the order of convergence observed matches that expected from analysis.

Truth is only relative to those that ignore hard evidence.

― A.E. Samaan

The second flavor of verification is solution (calculation) verification. In solution verification, the objective is to estimate the error in the numerical solution of an applied problem. The error estimate covers the numerical component of the overall error, separated from modeling errors. It is an important component in the overall uncertainty estimate for a calculation, and the numerical uncertainty is usually derived from the numerical error estimate. The rate or order of convergence is usually available as an auxiliary output of the process. Properly practiced, the rate of convergence provides context for the overall exercise.

One of the things to understand is that code verification also provides a complete accounting of the numerical error. This error can be used to compare the levels of numerical error from methods with “identical” orders of accuracy, which can be useful in making decisions about code options. By the same token, solution verification provides information about the observed order of accuracy. Because applied problems are not analytical, and are rarely smooth enough, they generally can’t be expected to exhibit the theoretical order of convergence. The rate of convergence is then an auxiliary result of the solution verification exercise, just as the error is an auxiliary result for code verification. It contains useful information on the solution, but it is subservient to the error estimate. Conversely, the error provided in code verification is subservient to the order of accuracy. Nonetheless, the current practice simply scratches the surface of what could be done via verification and its unambiguous ties to numerical analysis.
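
To make this concrete, here is a minimal sketch of the kind of estimate solution verification produces, using a standard Richardson-extrapolation-style calculation on a scalar quantity of interest computed on three systematically refined meshes. The function and the numbers are illustrative assumptions, not taken from this post.

```python
import math

def observed_order_and_error(f_coarse, f_medium, f_fine, r=2.0):
    """Richardson-style estimate from three systematically refined solutions.

    f_coarse, f_medium, f_fine: a scalar quantity of interest computed on
    meshes with spacing h*r^2, h*r, and h (constant refinement ratio r).
    Returns (observed order p, estimated numerical error in the fine solution).
    """
    # Observed order from the ratio of successive solution differences.
    p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
    # Richardson extrapolation estimates the exact value, and hence the
    # numerical error remaining on the finest mesh.
    f_exact_est = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    return p, abs(f_exact_est - f_fine)

# Illustrative numbers only: a quantity converging at roughly second order.
p, err = observed_order_and_error(0.9700, 0.9925, 0.9981, r=2.0)
print(f"observed order ~ {p:.2f}, estimated error on fine mesh ~ {err:.2e}")
```

The observed order comes out as a by-product of the error estimate, exactly the auxiliary role described above.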

Little details have special talents in creating big problems!

― Mehmet Murat ildan

If one looks at the fundamental (or equivalence) theorem of numerical analysis, its two pillars are stability and consistency, which together imply convergence (https://wjrider.wordpress.com/2016/05/20/the-lax-equivalence-theorem-its-importance-and-limitations/ ). Verification usually uses a combination of error estimation and convergence testing to imply consistency, while stability is merely assumed. This highlights the relatively superficial nature of current practice: the result being tested is completely asymptotic, and stability is never really tested at all. Some methods are unconditionally stable, a property that might also be tested. In all cases, the failure to stress test the results of numerical analysis is short-sighted.

One of the most important results in numerical analysis is the stability of an approximation. Failures of stability are among the most horrific things to encounter in practice. Stability results should be easy and revealing to explore via verification. Doing so also offers the ability to see what failure of a method looks like, and how sharp the stability estimates are. Tests could be devised to examine and confirm this rather fundamental aspect of a numerical method, and in the process the character of instability would be made clear should it arise. Generally, one would expect calculations to diverge under mesh refinement, with the instability manifesting itself earlier and earlier as the mesh is refined. I might suggest that stability could be examined via mesh refinement, observing the conditions where the convergence character changes; a sketch of such a probe appears below.
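
As a concrete illustration, here is a minimal sketch of such a stability probe, assuming the classical model problem u' = \lambda u solved with forward Euler, whose stability condition |1 + h\lambda| \le 1 is textbook material. The choice of \lambda = -1, the final time, and the step sizes are assumptions made purely for illustration.

```python
import math

def forward_euler_error(lam, dt, t_final=10.0):
    """Error at t_final for u' = lam*u, u(0) = 1, integrated with forward Euler."""
    n_steps = int(round(t_final / dt))
    u = 1.0
    for _ in range(n_steps):
        u += dt * lam * u
    return abs(u - math.exp(lam * t_final))

# For lam < 0, forward Euler is stable only when |1 + dt*lam| <= 1, i.e. dt <= 2/|lam|.
lam = -1.0
for dt in [5.0, 2.5, 2.0, 1.0, 0.5, 0.25, 0.125]:  # chosen to divide t_final evenly
    regime = "unstable" if abs(1.0 + dt * lam) > 1.0 else "stable"
    print(f"dt = {dt:6.3f} ({regime:>8s})  error = {forward_euler_error(lam, dt):.3e}")
```

Scanning the step size across the stability boundary exposes the change in character the analysis predicts: large, growth-dominated errors on one side and small, smoothly shrinking errors on the other.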

One of the most unpleasant issues with verification is the deviation of the observed rate of convergence from what is expected theoretically. No one seems to have a good answer to the question of how close is close enough. Sometimes we can observe that we get systematically closer as the mesh is refined. This is quite typical, but systematic deviations are common.

[Figure: The expected convergence rate for a single time step using forward Euler for a linear ODE]

As I will show, these deviations are expected and may be predicted by detailed numerical analysis. The key is to realize that the effects of finite resolution can be included in the analysis. For simple problems we can then predict the rate of convergence we will observe and its deviation from the asymptotic rate. Beyond the ability to predict the rate of convergence, this analysis provides a systematic explanation for an oft-seen result.

This can be done very easily using classical methods of numerical analysis (see a previous blog post, https://wjrider.wordpress.com/2014/07/15/conducting-von-neumann-stability-analysis/). We start from the knowledge that detailed numerical analysis uses an analytical solution to the equations as its basis. We can then analyze the deviations from that analytical solution and their precise character, including the effect of finite resolution. As noted in that previous post, the order of accuracy is examined via a series expansion in the limit where the step size or mesh spacing is vanishingly small. We also know that this limit is only approached and never actually reached in any practical calculation.
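
The expansion step is easy to check symbolically for the forward Euler example developed below. This is a minimal sketch using SymPy (my choice of tool, not something prescribed in the post) that expands the single-step error about h = 0 and exposes its leading term, which sets the asymptotic order.

```python
import sympy as sp

h, lam = sp.symbols('h lambda')

# Single-step error of forward Euler against the exact solution of u' = lam*u:
# the numerical update factor is (1 + h*lam), the exact one is exp(h*lam).
error = sp.exp(h * lam) - (1 + h * lam)

# Expanding about h = 0 exposes the leading-order term, and hence the
# asymptotic order of accuracy in the limit of vanishing step size.
print(sp.series(error, h, 0, 4))
# expected leading terms: h**2*lambda**2/2 + h**3*lambda**3/6 + O(h**4)
```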

For the simple problems amenable to these classical analyses, we can derive the exact rate of convergence for a given step size (the result is limited to the ideal problem central to the analysis). The key to this approach is using the exact solution to the model equation together with the numerical symbol to provide an error estimate. Consider the forward Euler method for ODEs, u^{n+1} = u^n + h \lambda u^n; the error for a single step is E(h) = |1 + h \lambda - \exp(h \lambda) |. We can now evaluate the error for any step size and analytically estimate the convergence rate we would observe in practice. If we employ the relatively standard practice of mesh halving for verification, we get the estimate of the rate of convergence, n(h) = \log\left[E(h)/E(h/2)\right]/\log(2). A key point to remember is that the solution with the halved time step takes twice the number of steps to reach the same time. Using this methodology, we can easily see the impact of finite resolution. For the forward Euler method, we see that step sizes larger than zero raise the observed rate of convergence above the theoretical value of one. This is exactly what we see in practice.
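
Here is a minimal sketch of that calculation, assuming the model problem is u' = \lambda u with u(0) = 1 and a decaying mode \lambda = -1; these parameter choices are mine for illustration, since the post does not list them. Following the text, the halved-step error is accumulated over twice as many steps.

```python
import math

def fe_error(lam, h, n_steps):
    """Exact error of forward Euler after n_steps steps of size h on u' = lam*u, u(0) = 1."""
    numerical = (1.0 + h * lam) ** n_steps
    exact = math.exp(lam * h * n_steps)
    return abs(numerical - exact)

def observed_rate(lam, h, n_steps=1):
    """n(h) = log[E(h)/E(h/2)]/log(2); the halved step takes twice as many steps."""
    return math.log(fe_error(lam, h, n_steps) / fe_error(lam, h / 2, 2 * n_steps)) / math.log(2.0)

lam = -1.0  # a decaying mode; an assumption for illustration
for h in [1.0, 0.5, 0.25, 0.125, 0.0625]:
    print(f"h = {h:7.4f}   observed rate = {observed_rate(lam, h):.4f}")
```

With these assumptions the computed rates start well above one on coarse steps and decay toward the asymptotic value of one as h shrinks, which is the behavior described in the text.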

[Figure: The expected convergence rate for ten time steps using forward Euler for a linear ODE]

When one starts to examine what the analysis leads us to expect, a number of interesting things can be observed. If the coarsest step size is slightly unstable, the method will exhibit very large rates of convergence. Remarkably, we see this all the time. Sometimes verification produces seemingly absurdly high rates of convergence. Rather than indicating that everything is great, this is a sign that the calculation is highly suspect. The naïve practitioner will often celebrate the absurd result as a triumph when it is actually a symptom of problems requiring greater attention. With the addition of a refined analysis, this sort of result can be recognized as pathological.
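
To see how dramatic this pathology can be, here is a small numerical illustration using the same model problem and rate formula as above; the choice of ten coarse steps and of step sizes just beyond the forward Euler stability limit h = 2/|\lambda| is an assumption made purely for illustration.

```python
import math

def fe_error(lam, h, n_steps):
    """Exact forward Euler error after n_steps steps of size h on u' = lam*u, u(0) = 1."""
    return abs((1.0 + h * lam) ** n_steps - math.exp(lam * h * n_steps))

lam = -1.0
n_coarse = 10  # ten coarse steps; the refined solution takes twenty
for h in [2.2, 2.1, 2.05, 2.01]:  # just beyond the stability limit h = 2/|lam|
    rate = math.log(fe_error(lam, h, n_coarse) / fe_error(lam, h / 2, 2 * n_coarse)) / math.log(2.0)
    print(f"h = {h:5.2f}   apparent rate of convergence = {rate:6.1f}")
```

Apparent rates in the tens emerge, absurdly far above the first-order behavior of the method, exactly the sort of result that should prompt suspicion rather than celebration.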

[Figure: The expected convergence rate for one hundred time steps using forward Euler for a linear ODE]

Notice that we have obtained a significant result from the analysis of perhaps the simplest numerical method in existence. Think of the untapped capacity for explaining the behavior observed in computational practice. Moreover, this result explains a serious and pernicious problem in verification: the misreading of results. Even where verification practice is quite good, the deviation of observed convergence rates from theoretical rates is pervasive. We can now see that this behavior is completely expected. This ought to “bait the hook” for conducting more analysis and connecting it to verification results.

There is a lot more that could be done here; I’ve merely scratched the surface.

The truth of the story lies in the details.

― Paul Auster

 
