The purpose of computing is insight, not pictures.

–Nick Trefethen
This is a brief take on the intersection of several previous posts: the recent one on the viewgraph norm (https://wjrider.wordpress.com/2014/10/07/the-story-of-the-viewgraph-norm/), colorful fluid dynamics (https://wjrider.wordpress.com/2014/10/03/colorful-fluid-dynamics/), the Millennium prize for the Navier-Stokes equations (https://wjrider.wordpress.com/2014/03/07/the-clay-prize-and-the-reality-of-the-navier-stokes-equations/), and numerical viscosity (https://wjrider.wordpress.com/2013/10/04/there-is-nothing-artificial-about-artificial-viscosity/).

The basic view of quality is predicated on the belief that more “energy (disorder)” and complexity in the computed flow is directly correlated with the quality of the computation. This standard is typically applied in an intrinsically ad hoc manner that may not actually provide an accurate assessment of quality. At some point the disorder in the computation becomes too great and the quality is judged to be lower. That judgment is made purely by expert opinion, not based on any sort of clear, definitive measure or feature. The real issue is whether the computation is swirlier due to incipient errors that are on the verge of losing stability. From the numerical method point of view, this standard may inadvertently favor instability (in fact, it almost certainly does).

There are three great branches of science: theory, experiment, and computation.

–Nick Trefethen

This topic involves deep-seated issues with each of these branches.

As soon as a fluid flow becomes unstable and vortically dominated, knowledge of the exact solution is absent. These flows are exceedingly important, so the quality of calculations is of great interest, but difficult or impossible to determine specifically. At the heart of the issue is the lack of a theoretical grasp of turbulent flows. This is a fundamental limitation on our ability to reliably compute the behavior of real fluids and, correspondingly, to determine the quality of computing methods. This in turn leaves us with the current state of affairs: swirlier is better.

I became most troubled by this aspect of the determination of quality after seeing a standard applied that amounts to “the more swirly the result, the better the method” (more swirly means more vorticity). An exemplar of this approach is the paper by Shi, Zhang and Shu in the Journal of Computational Physics, 186, pp. 690 (2003) http://www3.nd.edu/~yzhang10/euler-weno9.pdf. Several problems are studied using mesh refinement (good!) including shock-driven mixing, Kelvin-Helmholtz, and Rayleigh-Taylor instabilities with high-order methods. The conclusion is that the higher order methods are better because they produce more fine-scale structure.
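To make “more swirly means more vorticity” concrete, here is a minimal sketch (my own illustration, not taken from the paper) of the usual diagnostic: the enstrophy, the integral of the squared vorticity, computed from a velocity field sampled on a uniform grid. In a refinement study one would evaluate this on each mesh; the made-up analytic vortex below is only there to keep the code self-contained.

```python
# Hedged sketch: enstrophy as a "swirliness" diagnostic for a 2D velocity field.
# The Taylor-Green-like field below is an illustrative stand-in for code output.
import numpy as np

def enstrophy(u, v, dx, dy):
    """Return 0.5 * integral of (dv/dx - du/dy)^2 over the periodic domain."""
    dvdx = np.gradient(v, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    omega = dvdx - dudy            # z-component of the vorticity
    return 0.5 * np.sum(omega**2) * dx * dy

for n in (64, 128, 256):           # successively finer meshes, as in a refinement study
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x)
    u =  np.cos(X) * np.sin(Y)
    v = -np.sin(X) * np.cos(Y)
    dx = 2.0 * np.pi / n
    print(n, enstrophy(u, v, dx, dx))
```

For a smooth analytic vortex this converges with refinement; the worry in the post is precisely that for computed unstable flows the quantity keeps growing, and growth alone cannot distinguish resolved physics from numerical trouble.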

My concern about this standard is that the higher order methods also contain insidious and problematic numerical instabilities that can contribute to physically incorrect solutions. The current “swirlier is better” standard yields little or no guidance toward improving the methods or uncovering their shortcomings. The problems with these methods can manifest themselves as entropy-violating solutions, which are by definition unphysical. An unphysical solution will produce more vorticity, hence be swirlier by the standard applied in the community, and would therefore be viewed as better. In fact it would be worse, and dangerously so.
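To give a flavor of what an admissibility check could look like, here is a rough sketch (my own construction, not a method from the cited work) of a cell entropy-residual diagnostic for the 1D Burgers equation with entropy pair η = u²/2, q = u³/3. Admissible solutions should not produce entropy, so cells with a clearly positive discrete residual are suspect; the central-difference entropy flux is only a crude stand-in for a properly designed numerical entropy flux.

```python
# Hedged sketch: flag cells that produce entropy for u_t + (u^2/2)_x = 0
# on a periodic grid, given the solution before and after one time step.
import numpy as np

def entropy_residual(u_old, u_new, dt, dx):
    eta_old = 0.5 * u_old**2
    eta_new = 0.5 * u_new**2
    q = u_new**3 / 3.0
    dqdx = (np.roll(q, -1) - np.roll(q, 1)) / (2.0 * dx)   # crude central flux difference
    return (eta_new - eta_old) / dt + dqdx

def flag_suspect_cells(u_old, u_new, dt, dx, tol=1e-8):
    """Indices where the discrete entropy production is positive (unphysical)."""
    return np.where(entropy_residual(u_old, u_new, dt, dx) > tol)[0]
```

Usage would be hypothetical and per time step: a nonempty list of flagged cells signals entropy production, which is exactly the kind of unphysical behavior that the “swirlier is better” standard cannot see.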

In chaos, there is fertility.

― Anaïs Nin

Why does this standard exist?

The first-order upwind method historically produced too much numerical dissipation. Upwind methods were robust enough to be used for applications, but also had large errors. These errors led to the destruction of vorticity, which made computed flows distinctly less swirly than reality. Modern methods provide the robustness of upwind methods with much smaller errors and much more realistic swirliness. The problem is that instabilities can lead to swirliness too, and this standard leaves no room for determining the limits of the methods. That task is left to validation against experimental data, which is thoroughly unsatisfying because there is no mathematical ground truth, and modeling and numerical effects are muddled together. Unfortunately, mathematics is not currently attacking this problem very aggressively (see my Applied Math critique https://wjrider.wordpress.com/2014/10/16/what-is-the-point-of-applied-math/). In truth, the mathematics to address this issue is not presently sufficient.
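As a deliberately simple illustration of that dissipation, here is a sketch, in one dimension and for linear advection rather than a real vortical flow, comparing first-order upwind with a minmod-limited second-order update; the setup and parameters are my own choices. After one period the upwind result is badly smeared while the limited scheme retains much more of the profile, which is the same mechanism that destroys vorticity in multidimensional flows.

```python
# Hedged sketch: numerical dissipation of first-order upwind vs. a minmod-limited
# second-order (MUSCL-type) scheme for u_t + u_x = 0 on a periodic grid.
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)),
                    0.0)

def advect(u0, nu, nsteps, limited):
    """March forward nsteps with CFL number nu = dt/dx (wave speed = 1)."""
    u = u0.copy()
    for _ in range(nsteps):
        if limited:
            s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slope
        else:
            s = np.zeros_like(u)            # zero slope recovers first-order upwind
        uface = u + 0.5 * (1.0 - nu) * s    # reconstructed value at the right face
        u = u - nu * (uface - np.roll(uface, 1))
    return u

n, nu = 200, 0.5
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.exp(-((x - 0.5) / 0.05) ** 2)       # narrow Gaussian pulse
steps = int(n / nu)                         # one full period on the periodic grid
print("first-order upwind peak after one period:", advect(u0, nu, steps, False).max())
print("minmod-limited peak after one period:   ", advect(u0, nu, steps, True).max())
```

The upwind peak decays markedly over a single period while the limited scheme stays close to the initial amplitude; the catch, as argued above, is that reduced dissipation alone does not guarantee the extra structure is physical.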

What can be done to improve matters? One way would be to rely upon experimental comparison to decide quality, but this leaves little guidance for improving the methods based on mathematical principles. Insofar as applied mathematics is concerned, a better theory for the development of these instabilities would provide guidance toward better methods; such a theory is seriously lacking today. It would also be useful to have a refined understanding of what unphysical solutions look like for these cases, but no such characterization is currently available to apply. We are left with experimental comparison and/or expert judgment.

All that it is reasonable to ask for in a scientific calculation is stability, not accuracy.

–Nick Trefethen
