…Next time you’re faced with a choice, do the right thing. It hurts everyone less in the long run.
As best practices in scientific computing continue to improve, verification appears more and more frequently in papers and reports. The progress over the past decade has been fantastic to see. Despite this progress, some underlying problems remain pervasive in the community's practice, and their impact will ultimately slow progress. These poorly executed practices are inhibiting the characterization of methods and of their impact (positive or negative) on solutions.
First, code verification is almost always applied to problems that bear little resemblance to the problems a method is actually intended to solve. Code verification usually reports only the order of accuracy, for the purpose of matching theoretical expectations. This meets the minimal requirements of code verification as a practice. What is often ignored is the capability to report the actual numerical error for the problem being computed. Both the rate of convergence and the error carry important, useful information for the developers and users of a numerical method. Both should be reported systematically rather than just the minimum requirement.
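To make this concrete, here is a minimal sketch of code verification done both ways: computing the observed order of accuracy *and* the actual error against an exact solution. The problem (a central-difference derivative of sin) and the refinement ratio are illustrative assumptions, not from the original text.

```python
import math

def central_diff(f, x, h):
    # Second-order central-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Exact solution is known here: d/dx sin(x) at x = 1 is cos(1).
exact = math.cos(1.0)

# Compute the error on a sequence of grids refined by r = 2.
hs = [0.1, 0.05, 0.025]
errors = [abs(central_diff(math.sin, 1.0, h) - exact) for h in hs]

# Observed order of accuracy between successive grids:
#   p = log(e_coarse / e_fine) / log(r)
ps = [math.log(e_c / e_f) / math.log(2.0)
      for e_c, e_f in zip(errors, errors[1:])]

for h, e in zip(hs, errors):
    print(f"h = {h:6.3f}  error = {e:.3e}")   # report the error itself
for p in ps:
    print(f"observed order p = {p:.3f}")      # should approach 2
```

The point of the sketch is that both quantities fall out of the same convergence study: reporting the observed order while discarding the measured errors throws away half the information.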
For solution verification the problem is much worse. Even when solution verification is done, important details are missing. The biggest problem is the lack of solution verification when scientific computing is applied to real problems. Usually the problem is simply computed, graphs are overlaid, the comparison looks good enough, and success is declared. No quantitative sense of whether the solution is accurate is given. An error estimate for the solution shown, or better yet a convergence study, would provide far greater faith in the results. Beyond the numerical error, the rate of convergence would also set tangible expectations for solutions of practical problems. Today such expectations are largely left for the reader to guess.
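Solution verification does not require an exact solution: Richardson extrapolation over three successively refined grids yields both an observed order and an error estimate. The sketch below assumes a constant refinement ratio and monotone convergence of some scalar solution functional; the numerical values are hypothetical, chosen only to illustrate the arithmetic.

```python
import math

def richardson_estimate(f1, f2, f3, r=2.0):
    """Estimate the observed order of convergence and the discretization
    error from three solutions on successively refined grids:
    f1 (finest), f2 (medium), f3 (coarsest), refinement ratio r.
    No exact solution is needed."""
    # Observed order from the ratio of successive solution differences.
    p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)
    # Richardson error estimate for the fine-grid solution.
    err = abs(f2 - f1) / (r**p - 1.0)
    return p, err

# Hypothetical solution functional (say, a drag coefficient) on three grids:
f_fine, f_med, f_coarse = 1.0010, 1.0040, 1.0160
p, err = richardson_estimate(f_fine, f_med, f_coarse)
print(f"observed order = {p:.2f}, error estimate on fine grid = {err:.2e}")
```

A report that included these two numbers, rather than an overlaid graph alone, would give the reader a quantitative basis for trusting (or doubting) the result.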
In any moment of decision, the best thing you can do is the right thing. The worst thing you can do is nothing.
I believe one of the deeper issues is the belief that the rate of convergence and the numerical error only matter for problems with analytical solutions. They matter for code verification purposes, but they also matter greatly for practical problems; in fact they are probably more important there, yet they are rarely reported. To get things working better we need to move to a practice where both convergence and error are reported as a matter of course. It would be a great service to the community.