An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.

One of my first posts talked about Multimat 2013, a biennial meeting of scientists specializing in the computation of multi-material compressible flows. Last time, in 2013, we met in San Francisco; this time it was Würzburg, Germany. These conferences began as a minisymposium at an international congress in 2000. The first actual “Multimat” was held in 2002 in Paris. I gave the very first talk at that meeting (and it was a near disaster, an international jet-lag tale with an admonition about falling asleep when you arrive in Europe: don’t do it!). The second conference was in 2005, and the meeting has been held every two years since. The spiritual leader of the meetings and general conference chairman is Misha Shashkov, a Lab fellow at Los Alamos. Taken as a whole, the meetings mark a remarkable evolution and renaissance for numerical methods, particularly Lagrangian-frame shock capturing.

Sometimes going to a conference is completely justified by witnessing a single talk, and this was one of those meetings. Most of the time we have to justify attending conferences by giving our own talks, and Multimat 2015 was a stunning example of just how wrong-headed that point of view is. The point of going to a conference is to be exposed to ideas from a different pool than the one you usually swim in, not to market or demonstrate our own work. This is not to say that giving talks at conferences isn’t valuable; it just isn’t the principal or most important reason for going. This is a key way in which our management simply misunderstands science.

The morning of the second day I had the pleasure of seeing a talk by Michael Dumbser (University of Trento). I’ve followed his career for a while and deeply appreciate the inventive and interesting work he does. For example, I find his PnPm methods to be a powerful and flexible approach to discretizing difficult problems. Nonetheless, I was ill prepared for the magnificent work he presented at Multimat 2015. One of the things that has held discontinuous Galerkin methods back for years is nonlinear stabilization, and I believe Michael has “solved” this problem, at least conceptually.

Like many brilliant ideas, this one takes a problem that cannot be solved well and recasts it as a problem we know how to solve. The key idea is to identify elements that need nonlinear stabilization (in other words, the action of a limiter). Once identified, these elements are converted into a set of finite volume subcells corresponding to the degrees of freedom in the discontinuous basis used to discretize the larger element. Nonlinear stabilization is then applied on the subcells (using monotonicity limiters, WENO, or whatever you like). Once the stabilized solution is found on the temporary finite volumes, the evolved discontinuous basis on the original element is recovered from the finite volume solution. What an amazingly brilliant idea! It provides a methodology that retains high, sub-element-level resolution of discontinuous solutions.
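The projection back and forth between the DG polynomial and the subcell averages is the mechanical heart of the idea. Here is a minimal 1D sketch of just that step for a P1 (linear) basis; the function names and the least-squares slope recovery are my own illustration, not Dumbser's actual implementation, which handles arbitrary order and a full FV/WENO update on the subcells:

```python
import numpy as np

def dg_to_subcells(u0, u1, n_sub):
    """Scatter a linear DG polynomial u(x) = u0 + u1*x on [-1, 1]
    into n_sub equal subcell averages (exact for linear data, since the
    average of a linear function equals its value at the subcell center)."""
    edges = np.linspace(-1.0, 1.0, n_sub + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return u0 + u1 * centers

def subcells_to_dg(avgs):
    """Recover the P1 modal coefficients from (possibly limited) subcell
    averages: the mean is taken directly, preserving conservation, and the
    slope comes from a least-squares fit through the subcell centers."""
    n = len(avgs)
    edges = np.linspace(-1.0, 1.0, n + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    u0 = avgs.mean()
    u1 = np.dot(centers, avgs - u0) / np.dot(centers, centers)
    return u0, u1

# Round trip: the projection pair is conservative and exact for P1 data.
avgs = dg_to_subcells(2.0, -0.5, 8)
u0, u1 = subcells_to_dg(avgs)
```

In the full method the stabilized FV update happens between these two projections; the essential property shown here is that the conversion itself loses nothing for data the basis can represent, and preserves the cell mean regardless.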

The problem that remains is producing a nonlinear stabilization, suitable for production use, that goes beyond mere monotonicity preservation. This was the topic of my talk: how does one move to something better than monotonicity preservation as a nonlinear stabilization technique while remaining robust enough for production use? We need methods that stabilize solutions physically, yet retain accuracy to a larger degree and produce results reliably in a production setting. Once such a method is developed, it would slot into Dumbser’s approach quite easily. A good step forward would be methods that do not damp isolated, well-resolved extrema. Just as first-order methods are the foundation for monotonicity-preserving methods, I believe that monotonicity-preserving methods can form the basis for extrema-preserving methods.
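To make the distinction concrete, here is a small sketch of the difference between a monotone limiter and an extrema-aware relaxation of it. The minmod limiter is standard; the smooth-extremum test is a hypothetical illustration of my own (not any particular production scheme): where the discrete curvature varies slowly across a three-cell neighborhood, the feature is judged well resolved and the unlimited central slope is kept instead of the clipped minmod slope.

```python
import numpy as np

def minmod(a, b):
    """Monotone slope limiter: smaller one-sided slope, zero at extrema."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_slopes(u, dx, relax_at_smooth_extrema=False):
    """Limited slopes for the interior cells of a 1D cell-average field."""
    dm = (u[1:-1] - u[:-2]) / dx          # backward differences
    dp = (u[2:] - u[1:-1]) / dx           # forward differences
    s = minmod(dm, dp)                    # clips or zeroes every extremum
    if relax_at_smooth_extrema:
        central = 0.5 * (dm + dp)
        curv = np.zeros_like(u)
        curv[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        curv[0], curv[-1] = curv[1], curv[-2]
        # hypothetical resolution test: curvature varies slowly nearby
        smooth = np.abs(curv[2:] - curv[:-2]) < 0.5 * np.abs(curv[1:-1])
        s = np.where(smooth, central, s)
    return s

x = np.linspace(0.0, 1.0, 41)
u = np.sin(2.0 * np.pi * x)               # well-resolved smooth extrema
s_mono = limited_slopes(u, x[1] - x[0])
s_ext = limited_slopes(u, x[1] - x[0], relax_at_smooth_extrema=True)
```

On the smooth sine profile the monotone limiter clips the slopes flanking each peak (the source of first-order error at extrema), while the relaxed version retains them; at a genuine jump the curvature test fails and the monotone slope is kept.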

I often use a quote from Scott Adams to describe the principle for designing high-resolution methods, “Logically all things are created by a combination of simpler, less capable components,” pointed out by Culbert Laney in his book Computational Gas Dynamics. The work of Dumbser is a perfect exemplar of this principle: the existing state-of-the-art methods for gas dynamics are used as fundamental building blocks for stabilizing discontinuous Galerkin methods.

Another theme from this meeting is the continued failure of the broader hyperbolic PDE community to quantify errors, or to quantify the performance of the methods used. We fail to do this even when we have an exact solution. Well, that isn’t entirely true: we quantify the errors when we have an exact solution that is continuously differentiable. When the solution is smooth, we show the order of accuracy. Change the problem to something with a discontinuity, and the quantification always goes away, replaced with hand-waving arguments and expert judgment.
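The smooth-problem ritual is simple enough to state exactly: run the same problem on two grids, measure the error against the exact solution, and report the observed rate p = log(e_coarse/e_fine)/log(r). A minimal sketch, using a second-order central difference on a smooth function as the stand-in "method":

```python
import numpy as np

def observed_order(e_coarse, e_fine, refinement=2.0):
    """Observed convergence rate from errors on two grids:
    p = log(e_coarse / e_fine) / log(r)."""
    return np.log(e_coarse / e_fine) / np.log(refinement)

def l1_error(h):
    """L1 error of a second-order central difference for d/dx sin(x)."""
    x = np.arange(0.0, 2.0 * np.pi, h)
    approx = (np.sin(x + h) - np.sin(x - h)) / (2.0 * h)
    return np.mean(np.abs(approx - np.cos(x)))

p = observed_order(l1_error(0.02), l1_error(0.01))   # p comes out near 2
```

There is nothing about this recipe that stops working at a discontinuity; only the expected rate changes, and the error magnitudes remain perfectly reportable.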

The reason is that the rate of convergence is intrinsically limited to first order in the presence of a discontinuity. Everyone then assumes that the magnitude of the error is meaningless in this case. The truth is that the magnitude of the error can differ significantly from method to method, and an array of important details changes the error character of the methods. We have completely failed to report these results as a community. The archetype of this behavior is Sod’s shock tube, the “Hello, World” problem for shock physics. We have gotten into the habit of simply showing results for this problem that demonstrate the method is reasonable, but never reporting the error magnitude. In reality this error magnitude can vary by a factor of 10 among commonly used methods at the same grid resolution, and even larger variations occur for more difficult problems.
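Quantifying Sod's problem itself requires an exact Riemann solver, but the point about error magnitudes is visible in an even simpler discontinuous problem: linear advection of a square wave, where two completely standard schemes, both formally first order on the discontinuity, differ in L1 error by a large factor at the same resolution. A sketch (the scheme choices are mine, for illustration):

```python
import numpy as np

def advect_step(n, limit):
    """Advect a square wave once around a periodic unit domain at speed 1
    with first-order upwind, optionally adding a minmod-limited MUSCL
    correction. Returns the L1 error against the exact (returned) profile."""
    x = (np.arange(n) + 0.5) / n
    u = np.where((x > 0.25) & (x < 0.75), 1.0, 0.0)
    exact = u.copy()
    cfl = 0.5
    for _ in range(int(n / cfl)):         # one full period
        um, up = np.roll(u, 1), np.roll(u, -1)
        if limit:
            dm, dp = u - um, up - u
            s = np.where(dm * dp > 0.0,
                         np.sign(dm) * np.minimum(np.abs(dm), np.abs(dp)),
                         0.0)
            flux = u + 0.5 * (1.0 - cfl) * s   # MUSCL right-face value
        else:
            flux = u                           # pure upwind face value
        u = u - cfl * (flux - np.roll(flux, 1))
    return np.mean(np.abs(u - exact))

e_upwind = advect_step(100, limit=False)
e_minmod = advect_step(100, limit=True)
```

Both runs converge at a degraded rate on the jump, yet the limited scheme's error is several times smaller on the same 100-cell grid. That is exactly the kind of number that never appears in our papers, and should.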

The problems with the lack of quantification continue and magnify in more than one dimension. For problems with discontinuities there are virtually no exact solutions in multiple dimensions (genuinely multi-dimensional problems, as opposed to one-dimensional problems run in multiple dimensions). One of the key aspects of multiple dimensions is vorticity, which renders problems chaotic and non-unique. This only amplifies the hand waving and expert pronouncements on methods and their relative virtues. I believe we should be looking for ways to move past this habit and quantify the differences.

This in no small way is holding back progress. As long as hand waving and expert judgment are the guides for quality, progress and improvement in methods will be held hostage to personality instead of letting the scientific method guide choices.

The general issue with quantifying the performance of methods persists: where quantification is easy it isn’t done, and where it is hard it isn’t even attempted. The multidimensional problems that are interesting all have vorticity, and the results are compared in the infamous eyeball or viewgraph norm. Mixing and vorticity are essential, but never measured. As long as all comparisons are expert-based and rest on hand-waving arguments, the current experts and their methods will continue to hold sway, and progress will wane.
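Measuring vorticity does not require anything exotic. A single scalar diagnostic such as the integrated enstrophy can be tabulated and compared across methods and resolutions, instead of judging vortical features by eye. A minimal sketch for a 2D velocity field on a uniform grid (the diagnostic choice is mine; many others, such as mixing widths, would serve the same purpose):

```python
import numpy as np

def enstrophy(u, v, dx, dy):
    """Integrated enstrophy 0.5 * sum(omega**2) * dA, with vorticity
    omega = dv/dx - du/dy evaluated by central differences (np.gradient).
    Arrays are indexed [y, x], so x is axis=1 and y is axis=0."""
    omega = np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)
    return 0.5 * np.sum(omega**2) * dx * dy

# Sanity check on solid-body rotation (u, v) = (-y, x): omega = 2
# everywhere, so enstrophy over the unit square is 0.5 * 2**2 * 1 = 2.
n = 32
xc = (np.arange(n) + 0.5) / n             # cell-centered unit square
X, Y = np.meshgrid(xc, xc)
ens = enstrophy(-Y, X, 1.0 / n, 1.0 / n)
```

Publishing a table of numbers like this next to the color plots would turn the viewgraph norm into an actual comparison.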

The heart of the excitement at previous meetings, collocated, cell-centered Lagrangian methods, have now become so common and broadly used as to be passé. A number of good talks were given on this class of methods, showing a broad base of progress. It is remarkable that these methods have now perhaps displaced the classical staggered-mesh methods originating with von Neumann as the stock-in-trade of this community. The steady, iterative progress with these methods awaits the next leap in performance and the hard work of becoming a workhorse in production solutions. This work is ongoing and may ultimately provide the impetus for the next great leap forward.

Aside from this work, the meeting operates within the natural tension between physics, mathematics, engineering, and computer science, to its great benefit. In addition to this beneficial tension, the meeting also straddles the practical, pragmatic needs of code developers building production software and the concerns of university research. Over the years we have witnessed a steady stream of ideas and problems flowing back and forth between these disparate communities. As such, the meeting is a potpourri of variety and perspective, providing many great ideas for moving the overall technical community forward through the creative utilization of these tensions.

We tend to overvalue the things we can measure and undervalue the things we cannot.

Now I am looking forward to the next meeting two years hence in Santa Fe.
