Taking a new step, uttering a new word, is what people fear most.

Computing the solution to flows containing shock waves used to be exceedingly difficult, and for a lot of reasons it is now only modestly difficult. Solutions for many problems may now be considered routine, but numerous pathologies remain, and the limits of what is possible mean that continued research progress is vital. Unfortunately, those funding research seem to have little interest in making such progress; the topic goes in the pile of solved problems. Worse yet, there are numerous preconceptions about results, and standard practices for how results are presented, that conspire to inhibit progress. Here I will outline where progress is needed and how the way research results are discussed and presented furthers these inhibitions.

I’ve written on this general topic before, along with general advice on how to make good decisions in designing methods, https://wjrider.wordpress.com/2015/08/14/evolution-equations-for-developing-improved-high-resolution-schemes-part-1/. In a nutshell, shocks (discontinuities) bring a number of challenges and some difficult realities to the table. Doing the best job means making some hard choices that often fly in the face of ideal circumstances. By making these hard choices you can produce far better methods for practical use. It often means sacrificing things that might be nice in an ideal linear world to cope with the brutal reality of a nonlinear world. I would rather have something powerful and functional in reality than something of purely theoretical interest. The published literature seems to be opposed to this point of view, focusing on many issues of little practical importance.

It didn’t use to be like this. I’ve highlighted the work of Peter Lax before, https://wjrider.wordpress.com/2015/06/25/peter-laxs-philosophy-about-mathematics/, and it would be an understatement to say that his work paved the way for progress in compressible fluid mechanics. Other fields such as turbulence, solid mechanics, and electromagnetics have all suffered from the lack of similar levels of applied mathematical rigor and foundation. Despite this shining beacon of progress, other fields have failed to build upon his example. Worse yet, the difficulty of extending Lax’s work is monumental. Moving into higher dimensions invariably leads to instability and flow that begins to become turbulent, and turbulence is poorly understood. Unfortunately we are a long way from recreating Lax’s legacy in other fields (see e.g., https://wjrider.wordpress.com/2014/07/11/the-2014-siam-annual-meeting-or-what-is-the-purpose-of-applied-mathematics/).

If one takes a long hard look at the problems that pace our modeling and simulation, turbulence figures prominently. We don’t understand turbulence worth a damn. Our physical understanding is terrible and not sufficient to simply turn the problem over to supercomputers to crush (see https://wjrider.wordpress.com/2016/07/04/how-to-win-at-supercomputing/). In truth, this is an example where our computing hubris considerably exceeds our intellectual grasp. We need significantly greater modeling understanding to power progress. Such understanding is far too often assumed to exist where it does not. Progress in turbulence is stagnant and clearly lacks the key conceptual advances necessary to chart a more productive path. It is vital to do far more than simply turn codes loose on turbulent problems and expect great solutions to come out, because they won’t. Nonetheless, that is the path we are on. When you add shocks and compressibility to the mix, everything gets so much worse. Even the most benign turbulence is poorly understood, much less anything complicated. It is high time to inject some new ideas into the study rather than continue to hammer away at the failed old ones. In closing this vignette, I’ll offer up a different idea: perhaps the essence of turbulence is compressible and associated with shocks rather than being largely divorced from these physics. Instead of building on the decisively unphysical assumption of incompressibility, turbulence might be better built upon a physical foundation of compressible (thermodynamic) flows with dissipative discontinuities (shocks), a foundation that fundamental observations call for and current theories cannot explain.

Further challenges with shocked systems occur with strong shocks, where nonlinearity is ramped up to a level that exposes any lingering shortcomings. Multiple materials are another key physical difficulty that brings any solution methodology’s weaknesses into acute focus. Again and again, the greatest rigor in simpler settings provides the foundation for good performance when things get more difficult. Methods that ignore a variety of difficult and seemingly unfortunate realities will underperform compared to those that confront these realities directly. Usually the underperforming methods simply add more dissipation to get by. That dissipation is usually added in a rather heavy-handed manner because it is unguided by theory and works in opposition to unpleasant realities. Confronting these realities is not pessimism; it is pragmatism. Irrational optimism always produces worse results than pragmatic realism.

Let’s get to one of the biggest issues that confounds the computation of shocked flows: accuracy, convergence, and order of accuracy. For computing shock waves, the order of accuracy is limited to first order for everything emanating from any discontinuity (Majda & Osher 1977). Furthermore, nonlinear systems of equations will invariably and inevitably create discontinuities spontaneously (Lax 1973). In spite of these realities, the accuracy of solutions with shocks still matters, yet no one ever measures it. The reasons why it matters are more subtle and refined, and the payoff from higher accuracy is less decisive. When a flow is smooth enough to allow high-order convergence, the accuracy of the solution with high-order methods is unambiguously superior; for equivalent accuracy the highest-order method is the most efficient. When convergence is limited to first order, high-order methods effectively only lower the constant in front of the error term, a far less efficient route to smaller errors. One then has a situation where the gains from high order must be balanced against the cost of achieving it. In very many cases this balance is not achieved.
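To make this concrete, the observed rate of convergence can be measured directly from a grid-refinement study. The little sketch below uses the standard two-grid estimate; the error numbers are made-up placeholders (not results from the papers cited here) chosen to show the near-first-order rate a shocked problem typically returns even for a nominally high-order scheme.

```python
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Two-grid estimate of the observed convergence rate:
    p = log(E_coarse / E_fine) / log(r), with r the grid refinement ratio."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

# Hypothetical L1 errors from a grid-doubling study on a problem with a shock.
# Even for a formally fifth-order scheme the measured rate sits near one.
print(observed_order(err_coarse=4.0e-3, err_fine=2.1e-3))  # ~0.93
```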

What we see in the published literature is convergence and accuracy being assessed only for smooth problems where the full order of accuracy can be seen. In the cases that actually drive the development of these methods, where shocks are present, accuracy and convergence are ignored. If you look at the published papers and their examples, the order of accuracy is measured and demonstrated on smooth problems almost as a matter of course. Everyone knows that the order of accuracy cannot be maintained with a shock or discontinuity, so no one measures the solution accuracy or convergence. The problem is that these details still matter! You need convergent methods, and the magnitude of the numerical error is of real interest. Moreover, there are still significant differences in these results arising from methodological differences. To up the ante, the methodological differences carry significant changes in the cost of solution. What one typically finds is a great deal of cost to achieve formal order of accuracy that provides very little benefit with shocked flows (see Greenough & Rider 2004; Rider, Greenough & Kamm 2007). This community, in the open or behind closed doors, rarely confronts the implications of this reality. The result is a damper on all progress.
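Here is a minimal sketch of the accuracy-versus-cost bookkeeping being argued for. The method names, errors, and timings below are hypothetical placeholders used only to illustrate the comparison; the cited papers contain the real measurements.

```python
# Hypothetical results for two methods on the same shocked test problem:
# the L1 error against an exact solution and the wall-clock cost of the run.
results = {
    "second-order MUSCL": {"l1_error": 3.2e-3, "cpu_seconds": 1.0},
    "fifth-order WENO":   {"l1_error": 2.5e-3, "cpu_seconds": 4.0},
}

for name, r in results.items():
    # A crude figure of merit: error times cost (lower is better). It asks
    # whether the extra formal order buys enough accuracy to pay for its
    # extra expense.
    figure_of_merit = r["l1_error"] * r["cpu_seconds"]
    print(f"{name:>20s}: L1 = {r['l1_error']:.2e}, "
          f"cost = {r['cpu_seconds']:.1f} s, error*cost = {figure_of_merit:.2e}")
```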

The de facto standard for judging complex flows is well known, and I have documented it before (i.e., “swirlier is better,” https://wjrider.wordpress.com/2014/10/22/821/). When combined with our appallingly poor understanding of turbulence, you have a perfect recipe for computing and selling complete bullshit (https://wjrider.wordpress.com/2015/12/10/bullshit-is-corrosive/). The side dish for this banquet of bullshit is the even broader use of the viewgraph norm (https://wjrider.wordpress.com/2014/10/07/the-story-of-the-viewgraph-norm/), where nothing quantitative is used for comparing results. At its worst, the viewgraph norm is used to compare results where an analytical solution is available. So we have cases where an analytical solution is available for a complete assessment of error, and we ignore its utility, perhaps using it only for plotting. What a massive waste! More importantly, it masks problems that need attention.
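Doing better than the viewgraph norm costs almost nothing when an analytical solution is in hand. A minimal sketch, assuming the exact solution can be evaluated on the same grid as the computed one (both arrays below are placeholders standing in for real data):

```python
import numpy as np

def error_norms(numerical, exact, dx):
    """Discrete L1, L2, and L-infinity norms of the pointwise error."""
    e = np.abs(np.asarray(numerical) - np.asarray(exact))
    return {
        "L1":   float(np.sum(e) * dx),
        "L2":   float(np.sqrt(np.sum(e**2) * dx)),
        "Linf": float(np.max(e)),
    }

# Usage: `numerical` comes from the code being assessed, `exact` from the
# analytical solution sampled (or cell-averaged) on the same grid.
# norms = error_norms(numerical, exact, dx=1.0 / len(numerical))
```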

Underlying this awful practice is a viewpoint that the details and magnitude of the error do not matter. Nothing could be further from the truth: the details matter a lot, and there are huge differences from method to method. All these differences are systematically swept under the proverbial rug. With shock waves one has a delicate balance between the sharpness of the shock and the creation of post-shock oscillations. Allowing a shock wave to be slightly broader can remove many pathologies and produce a cleaner-looking solution, but it also increases the error. Determining the relative quality of the solutions is left to expert pronouncement; experts, rather than the data, determine what is good and bad. I’ve written about how to do this right several times before, and it’s not really difficult, https://wjrider.wordpress.com/2015/01/29/verification-youre-doing-it-wrong/. What ends up being difficult is honestly confronting reality and all the very real complications it brings to the table. It turns out that most of us simply prefer to be delusional.
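Putting numbers on both sides of that balance is one way to take the judgment away from expert eyeballing. A toy sketch, with a made-up one-dimensional profile and crude definitions of my own choosing for shock width and post-shock overshoot:

```python
import numpy as np

def shock_metrics(profile, left_state, right_state):
    """Crude quality measures for a captured 1-D shock: the number of cells
    the jump is smeared across, and the worst over/undershoot relative to
    the exact bounding states (assumes left_state > right_state)."""
    jump = left_state - right_state
    # Cells whose values lie strictly inside the jump form the smeared
    # transition layer (a 5% tolerance excludes the flat states).
    inside = (profile < left_state - 0.05 * jump) & (profile > right_state + 0.05 * jump)
    width_in_cells = int(np.count_nonzero(inside))
    overshoot = max(profile.max() - left_state, right_state - profile.min(), 0.0)
    return width_in_cells, float(overshoot / jump)

# A made-up profile: the shock is smeared over about three cells and carries
# a small post-shock overshoot.
p = np.array([1.0, 1.0, 1.02, 0.9, 0.5, 0.15, 0.1, 0.1])
print(shock_metrics(p, left_state=1.0, right_state=0.1))  # (3, ~0.022)
```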

In the end, shocks are a well-trod field with a great deal of theoretical support for a host of issues of broader application. If one is solving problems in any sort of real setting, the behavior of solutions is similar: you cannot expect high-order accuracy, and almost every solution converges at first order (at best). By systematically ignoring this issue, we are hurting progress toward better, more effective solutions. What we see over and over again is utility in high-order methods, but only to a degree. Rarely does the fully rigorous achievement of high-order accuracy pay off with better accuracy per unit of computational effort. On the other hand, methods that are formally only first-order accurate are complete disasters and virtually useless in practice. Is the sweet spot second-order accuracy (Margolin & Rider 2002)? Or second-order accuracy for the nonlinear parts of the solution with a limited degree of high order applied to the linear aspects? I think so.

Perfection is not attainable, but if we chase perfection we can catch excellence

Lax, Peter D. Hyperbolic systems of conservation laws and the mathematical theory of shock waves. Vol. 11. SIAM, 1973.

Majda, Andrew, and Stanley Osher. “Propagation of error into regions of smoothness for accurate difference approximations to hyperbolic equations.” Communications on Pure and Applied Mathematics 30, no. 6 (1977): 671-705.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity- and extrema-preserving methods through adaptive nonlinear hybridizations.” Journal of Computational Physics 225, no. 2 (2007): 1827-1848.

Greenough, J. A., and W. J. Rider. “A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov.” Journal of Computational Physics 196, no. 1 (2004): 259-281.

Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39, no. 9 (2002): 821-841.

 
