A very great deal more truth can become known than can be proven.

― Richard Feynman

Solution verification involves examining errors and results without knowledge of the exact solution. This makes it a more difficult task than code verification, where an exact solution is known, removing a major uncertainty. A secondary issue associated with not knowing the exact solution is the implication for the nature of the solution itself. With an exact solution, a mathematical structure exists allowing the solution to be achieved analytically. Furthermore, exact solutions are limited to relatively simple models that often cannot capture reality. Thus, the modeling approach to which solution verification is applied is necessarily more complex. All of these factors are confounding and produce a more perilous environment in which to conduct verification. The key product of solution verification is an estimate of numerical error, and the secondary product is the rate of convergence. Both of these quantities are important to consider in the analysis.

The way to cope with this generally more hostile analysis environment involves improved analysis methods. One of the key elements in the analysis is contending with the lack of certainty about the solution, its nature and its mathematical character. For this reason the knowledge and guarantees about the results are missing. For instance, we don’t know what order of convergence to reasonably expect from the analysis and cannot use this to screen our results. Generally speaking, if the verification result shows convergence at the theoretical rate for the method, we can be sure we are solving a relatively simple “easy” problem. Usually the applied problems that modeling & simulation are attacking are mathematically difficult. Philosophically, the whole reason for modeling & simulation is solving problems that are beyond our analytical grasp. In a deep sense the complex and difficult character of these problems is unavoidable for the practical use of modeling with computers. When we’ve successfully attacked the problem of verification for a problem without an exact solution, the same analysis methodology can improve our code verification practice.

It is important to understand solution verification within the broader context of computational modeling. Solution verification contributes to the overall enterprise of analysis uncertainty quantification. The most classical investigation will involve comparing the modeled results with observations in the real world (ideally an experiment). There are many elements to the uncertainty in this case including the model parameters, the constitutive properties, the experimental measurements and the numerical solution. Solution verification is the process for examining and estimating the numerical error and specifying its uncertainty. Sometimes computational modeling is applied for purposes of decision-making or scenario testing where no real-world data exists. In this case the numerical error is an important element in the overall lack of certainty about the results. If the numerical error is well behaved it will be a bias from the exact continuum solution to the model. This bias is important to understand in terms of how it might skew the results and any advice.

There are two ways to do great mathematics. The first is to be smarter than everybody else. The second way is to be stupider than everybody else — but persistent.

― Raoul Bott

When one lays out the mathematical framework for solution verification, the immediate impression is that the added difficulty compared to code verification is the lack of direct knowledge of the exact solution. The full solution to the problem is inferred from the inaccurate numerical solutions. The equation to solve is the following: S_0 = S_k + C h_k^a, where the new unknown is the ostensible estimate of the exact solution, S_0, that is, the solution where h = 0. The solutions used to determine this estimate are the S_k, the solutions found with mesh spacings h_k. We notice that we have three unknowns, S_0, C and a, meaning the well-determined solution requires three pieces of data, S_k. As we will discuss, this problem can be solved in a variety of ways including under-, fully- and over-determined forms.
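Written out explicitly for three grid spacings h_1 > h_2 > h_3, the fully determined system reads

S_0 = S_1 + C h_1^a, \quad S_0 = S_2 + C h_2^a, \quad S_0 = S_3 + C h_3^a,

three nonlinear equations in the three unknowns S_0, C and a.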

One of the key issues to recognize with solving this problem is its complexity, because the determination of the model is generally nonlinear. The solution to this coupled system of nonlinear equations is generally subtle, and necessarily solved numerically. As such, the solution can have its own errors requiring some care and verification. The system of equations admits a simple analytical solution in the special case where the discrete solutions use a sequence of meshes for which the refinement ratio r = h_k/h_{k+1} is constant. In this case we can write the rate in closed form, a = \log (E_{1,2}/E_{2,3}) / \log (r), where E_{k,k+1} = S_k - S_{k+1}. More generally we need to attack this with a coupled nonlinear solve. If we deal with an over-determined version of the problem, the knee-jerk response is to use a nonlinear least squares solver. As we discuss next, thinking about this decision opens the door to some more interesting and robust choices.
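As a concrete illustration, here is a minimal Python sketch of that closed form (the function name and the choice of data are ours; the data are the three coarsest forward Euler values tabulated later, which happen to have a constant ratio r = 2):

```python
import math

def richardson_three_grid(S1, S2, S3, r):
    """Closed-form rate and extrapolated value from three solutions computed on
    grids with a constant refinement ratio r = h1/h2 = h2/h3 > 1 (h1 coarsest)."""
    E12 = S1 - S2                                 # coarse-to-medium difference
    E23 = S2 - S3                                 # medium-to-fine difference
    a = math.log(abs(E12 / E23)) / math.log(r)    # observed convergence rate
    S0 = S3 - E23 / (r**a - 1.0)                  # Richardson-extrapolated S(h = 0)
    return a, S0

# Example with the forward Euler data at h = 0.2, 0.1, 0.05 (see the table below):
a, S0 = richardson_three_grid(0.3277, 0.3487, 0.3585, r=2.0)
print(a, S0)   # roughly a ≈ 1.10 and S0 ≈ 0.367 for this data
```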

The general over-determined version of the solution verification equation (i.e., more than three grids) would be amenable to solution via a nonlinear least squares method. This is not the only choice, and consideration of this point opens the door to other choices. The solution to the over-determined problem is not unique, and the solution carries the imprint of the method of solution. As such the choice of least squares implies a number of implicit assumptions that the typical practitioner doesn’t even know they are making. For example, one may choose to solve the over-determined problem in a different norm than the two norm (i.e., least squares). One may choose to solve a constrained problem instead of an unconstrained problem. In addition, one could consider solving an under-determined problem by adding either constraints or regularization. A classical example of regularization is the Tikhonov method, where a penalty is added to make the problem well determined. A popular recent approach focuses on a similar regularization, but in the one norm (compressed sensing, LASSO, …).
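As a sketch of the knee-jerk choice, a constrained nonlinear least-squares fit of S(h) = S_0 + C h^a might look like the following (Python with SciPy; the function and parameter names are ours, and the rate cap a_max is a stand-in for the expert-judgment constraints discussed below):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_convergence(h, S, a_max=4.0, a_formal=1.0):
    """Fit S(h) = S0 + C*h**a to grid solutions S on spacings h.

    The bound 0 <= a <= a_max is a simple constraint that excludes absurdly
    high convergence rates up front; a_formal seeds the initial guess with
    the method's formal order of accuracy.
    """
    h = np.asarray(h, dtype=float)
    S = np.asarray(S, dtype=float)

    def residual(p):
        S0, C, a = p
        return S0 + C * h**a - S

    # initial guess: the finest-grid solution, a unit coefficient, the formal order
    p0 = [S[np.argmin(h)], 1.0, a_formal]
    bounds = ([-np.inf, -np.inf, 0.0], [np.inf, np.inf, a_max])
    sol = least_squares(residual, p0, bounds=bounds)
    S0, C, a = sol.x
    return S0, C, a
```

The robust approach described next varies the residual norm, the constraints and the data used rather than committing to this single formulation.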

There are several practical issues related to this whole thread of discussion. One often encountered and extremely problematic issue is insanely high convergence rates. After one has been doing verification or seeing others do verification for a while, the analysis will sometimes provide an extremely high convergence rate. For example, a second-order method used to solve a problem will produce a sequence that appears to converge at 15th order (this example is given later). This is ridiculous and results in woeful estimates of numerical error. A result like this usually indicates a solution on a tremendously unresolved mesh, and a generally unreliable simulation. This is one of those things that analysts should be mindful of. Constrained solution of the nonlinear equations can mitigate this possibility and exclude it a priori. This general approach, including the solution with other norms, constraints and other aspects, is explored in the paper on Robust Verification. The key concept is that the solution to the error estimation problem is not unique and is highly dependent upon assumptions. Different assumptions lead to different results, and this can be harnessed to make the analysis more robust and impervious to issues that might derail it.

The techniques discussed in that paper were originally devised to deal with the all-too-common case where only one or two different grids are used and the error estimation problem is under-determined. The approach taken to solve this problem involves adding constraints to the solution based on expert knowledge and judgment. The overall approach was then generalized when it was realized that the under-, fully- and over-determined cases should all be treated consistently. The verification problem is solved repeatedly using different assumptions, resulting in a natural variation in the results that provides uncertainty in the error estimation and the rate of convergence. If the data is self-consistent with a well-defined solution, the uncertainty in the error will itself be small and the convergence rate will also be certain. Conversely, if the data is conflicting or opposes expert expectations, the uncertainty will be large. This entire methodology produces a more robust numerical uncertainty that adapts to the data, and avoids using fixed-size safety factors. It turns out that this expert judgment is usually called into action with verification, but in an ad hoc manner and only when the issues are serious. Robust verification adds the expert judgment from the outset so that more subtle issues are subject to the same treatment.

Instead of solving the verification equation once using a nonlinear least squares approach, robust verification solves the problem in a multitude of ways. This involves solving the verification problem using other error norms in a constrained minimization framework. The data is also reused in different combinations. One standard assumption is that the solutions on the finer grids (smaller h) are closer to the exact solution, and this data is given more prominence in the solution. The end result of the analysis is a multitude of estimates of the numerical error and convergence rate. These results are then subjected to robust statistical examination using median statistics. We report the median of the estimates as the error and convergence rate. The median deviation is used to place an uncertainty on this estimate. One of the key benefits of this estimation is its lack of susceptibility to corruption by outliers in the analysis. Outliers are further suppressed in the analysis by the use of expert judgment as constraints. For example, the absurdly large convergence rates are removed if the rate of convergence is constrained to be below a given value.
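Schematically (this is our reading of the procedure, not the paper’s implementation), the recipe can be sketched by reusing the fit_convergence function above, with varying rate caps and data subsets standing in for the full menu of norms, weights and constraints:

```python
import numpy as np

def robust_verify(h, S, a_caps=(1.5, 2.0, 3.0), min_points=3):
    """Sketch of robust verification: solve the error-estimation problem many
    ways and summarize with median statistics (median and median deviation)."""
    h = np.asarray(h, dtype=float)
    S = np.asarray(S, dtype=float)
    idx = np.argsort(h)                       # finest grids first
    estimates = []                            # collected (S0, a) pairs
    for cap in a_caps:                        # vary the expert-judgment rate cap
        for k in range(min_points, len(h) + 1):
            sub = idx[:k]                     # favor the finer grids
            S0, C, a = fit_convergence(h[sub], S[sub], a_max=cap)
            estimates.append((S0, a))
    S0s, rates = np.array(estimates).T
    S0_med, a_med = np.median(S0s), np.median(rates)
    S0_dev = np.median(np.abs(S0s - S0_med))  # median deviation as uncertainty
    a_dev = np.median(np.abs(rates - a_med))
    return (S0_med, S0_dev), (a_med, a_dev)
```

An error estimate for the finest grid then follows from the difference between its solution and the extrapolated S_0, with the median deviation supplying the uncertainty.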

Before moving to examples of solution verification, we will show how robust verification can be used for code verification work. Since the error is known, the only uncertainty in the analysis is the rate of convergence. We can immediately notice that this technique removes a crucial ambiguity in the analysis. In standard code verification analysis, the rate of convergence is never the exact formal order, and expert judgment is used to determine if the result is close enough. With robust verification, the convergence rate has an uncertainty, and the question of whether the exact value is included in the uncertainty band can be asked. Before showing the results for this application of robust verification, we need to note that the exact rate of convergence is only the asymptotic rate in the limit of h = 0. For a finite step size the rate of convergence should deviate from this value, and for simple cases the deviation can be derived using a modified version of classical numerical analysis.

Our first example of solution verification will repeat our examination of simple ODE integrators, but disregard our knowledge of the exact solution. It is a useful example because we can examine the efficacy of solution verification with a precise knowledge of the true errors. We can use the data from our code verification study to good effect here. Here is the raw data used for the forward Euler study.

h          Solution, t=1    Error, t=1
0.20       0.3277           0.0402
0.10       0.3487           0.0192
0.05       0.3585           0.0094
0.02       0.3642           0.0037
0.01       0.3660           0.0018
estimate   0.3678±0.0002

For the code verification part of the example, the estimated truncation error is E = 0.2030 h^{1.0245\pm0.0124}. The error bars do not take us down to the theoretical convergence rate of one; the data is consistent with the rate being above one (and this is analytically expected). Using this same data for solution verification yields the following model, S(h) = 0.3678 \pm 0.0002 - 0.2080 h^{1.0386 \pm 0.0207}. Close examination shows that this estimate is within 0.0001 of the exact solution and within the error bars. If we use the standard technique of simply least-squares fitting the data, we get the following model, S(h) = 0.3677 - 0.2239 h^{1.0717}. The error estimate here is 0.0017, which ends up being rather overgenerous when the standard safety factor of 1.25 is applied. Using the robust verification technique we get a better estimate of the exact solution, the actual convergence rate and a tighter error bound.
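For reference, the standard (non-robust) fits can be reproduced from the table above in a few lines; this is a sketch, and the digits from a simple fit will differ slightly from the robust results quoted, which come from the median over many constrained fits.

```python
import numpy as np
from scipy.optimize import curve_fit

h = np.array([0.20, 0.10, 0.05, 0.02, 0.01])
S = np.array([0.3277, 0.3487, 0.3585, 0.3642, 0.3660])   # solution at t = 1
E = np.array([0.0402, 0.0192, 0.0094, 0.0037, 0.0018])   # known error at t = 1

# Code verification: convergence rate from a log-log fit of the known error, E ~ A*h^a
a_code, logA = np.polyfit(np.log(h), np.log(E), 1)
print("code verification rate:", a_code)

# Solution verification: fit S(h) = S0 + C*h^a without using the exact answer
popt, _ = curve_fit(lambda x, S0, C, a: S0 + C * x**a, h, S, p0=[S[-1], -0.2, 1.0])
S0, C, a = popt
print("extrapolated solution:", S0, "rate:", a)
```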

Supposing is good, but finding out is better.

― Mark Twain

It is also useful to look at a pathological case where the rate of convergence is absurd and standard analysis would be prone to missing it. The case we have at our fingertips involved very coarse grid solutions of large eddy simulation in a complex geometry relevant to heat transfer and fluid flow in nuclear reactors. Early calculations were used to estimate the mesh required for well-resolved calculations. As we found out, this is a perilous enterprise. A couple of codes (one production and one research) were enlisted in this study using some initial grids that were known to be inadequate. One of the codes was relatively well trusted for this class of applications and produced three solutions that to all appearances were reasonable. One of the key parameters is the pressure drop through the test section. Using grids of 664K, 1224K and 1934K elements, we got pressure drops of 31.8 kPa, 24.6 kPa and 24.4 kPa, respectively. Using a standard curve fit with the effective mesh resolution gave an estimate of 24.3 kPa ± 0.0080 kPa for the resolved pressure drop and a convergence rate of 15.84. This is an absurd result and simply needs to be rejected immediately. Using the robust verification methodology on the same data set gives a pressure drop of 16.1 kPa ± 13.5 kPa with a convergence rate of 1.23, which is reasonable. Subsequent calculations on refined grids produced results that were remarkably close to this estimate, confirming the power of the technique even when given data that was substantially corrupted.
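The absurdity can be reproduced approximately from the quoted numbers. If we assume the effective mesh spacing scales as h \propto N^{-1/3} for N elements (our assumption; the original analysis used the codes’ effective resolution, so the exact rate will differ somewhat from 15.84), a straight three-grid solve of the verification equations returns an absurdly large apparent order and an extrapolated pressure drop essentially equal to the finest-grid value:

```python
import numpy as np
from scipy.optimize import brentq

N = np.array([664e3, 1224e3, 1934e3])       # element counts, coarse to fine
dp = np.array([31.8, 24.6, 24.4])           # pressure drops in kPa
h = N ** (-1.0 / 3.0)                       # assumed effective mesh spacing

# Three-grid system with a non-constant refinement ratio: eliminate S0 and C,
# then solve (S1 - S2)/(S2 - S3) = (h1^a - h2^a)/(h2^a - h3^a) for the rate a.
ratio = (dp[0] - dp[1]) / (dp[1] - dp[2])
f = lambda a: (h[0]**a - h[1]**a) / (h[1]**a - h[2]**a) - ratio
a = brentq(f, 0.1, 50.0)                    # bracket the root generously
C = (dp[1] - dp[2]) / (h[1]**a - h[2]**a)
S0 = dp[2] - C * h[2]**a                    # extrapolated pressure drop
print("apparent rate:", a, "extrapolated drop (kPa):", S0)
```

A cap on the allowed convergence rate, as in the constrained fits sketched earlier, excludes this solution a priori, which is exactly the role the constraints play in the robust methodology.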

Our final example is a simple case of validation using the classical phenomenon of vortex shedding over a cylinder at a relatively small Reynolds number. This is part of a reasonable effort to validate a research code before using it on more serious problems. The key experimental value to examine is the Strouhal number, defined as St = f \ell/U, the shedding frequency normalized by the size of the cylinder and the velocity, which has an experimental value of 0.164\pm 0.005 for a flow at Reynolds number 100 (the Reynolds number is the non-dimensional ratio of inertial to viscous forces in a flow).

∆t      RMS h          St
0.002   0.054111988    0.110474853
0.002   0.023801688    0.152492294
0.002   0.010786082    0.164777976
0.002   0.005264375    0.165127187
When we apply the robust verification methodology to this data we find that the code produces a Strouhal number that is slightly larger than the experimental value: St(h) = 0.1657\pm 0.0013 + C h^{1.8486\pm 0.1476}. Including the error bars recovers the experimental value. This can be regarded as a modest success for the code’s ability to be considered for more complex flows.
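That last comparison is simply a check that the simulation and experimental uncertainty intervals intersect; a minimal sketch of the arithmetic, using the numbers above, is:

```python
# Validation check: does the simulation interval overlap the experimental one?
st_sim, u_sim = 0.1657, 0.0013      # extrapolated Strouhal number and its uncertainty
st_exp, u_exp = 0.164, 0.005        # experimental value and its uncertainty

overlap = max(st_sim - u_sim, st_exp - u_exp) <= min(st_sim + u_sim, st_exp + u_exp)
print("intervals overlap:", overlap)   # True for these values
```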

The foundation of data gathering is built on asking questions. Never limit the number of hows, whats, wheres, whens, whys and whos, as you are conducting an investigation. A good researcher knows that there will always be more questions than answers.

― Karl Pippart III

Rider, William, Walt Witkowski, James R. Kamm, and Tim Wildey. “Robust verification analysis.” Journal of Computational Physics 307 (2016): 146-163.

 
