or the alternative title, "These Problems Go to Eleven!"

Fortunate are those who take the first steps.

― Paulo Coelho

When thinking about problems to run with a computer code there is both a fun and a harsh way to think about them. I’ll start with the fun way, borrowing from the brilliant movie “This is Spinal Tap” and the hilarious interview with the lead guitarist from the band,

Nigel Tufnel: The numbers all go to eleven. Look, right across the board, eleven, eleven, eleven and…

Marty DiBergi: Oh, I see. And most amps go up to ten?

Nigel Tufnel: Exactly.

Marty DiBergi: Does that mean it’s louder? Is it any louder?

Nigel Tufnel: Well, it’s one louder, isn’t it? It’s not ten. You see, most blokes, you know, will be playing at ten. You’re on ten here, all the way up, all the way up, all the way up, you’re on ten on your guitar. Where can you go from there? Where?

Marty DiBergi: I don't know.

Nigel Tufnel: Nowhere. Exactly. What we do is, if we need that extra push over the cliff, you know what we do?

Marty DiBergi: Put it up to eleven.

Nigel Tufnel: Eleven. Exactly. One louder.

Marty DiBergi: Why don’t you make ten a little louder, make that the top number and make that a little louder?

Nigel Tufnel: [pauses] These go to eleven.

To make the best progress we need to look for problems that "go to eleven." Even if the difficulty is somewhat artificial, the best problems are willfully extreme, if not genuinely silly. They expose the mechanisms that break methods. Often these problems are simplified versions of what we know brings a code to its knees, and they serve as good blueprints for removing those issues from the code's methods. Alternatively, they provide proof that certain pathological behaviors do or don't exist in a code. Really brutal problems that "go to eleven" aid the development of methods by clearly highlighting where improvement is needed. Usually the simpler and cleaner problems are better: more codes and methods can run them, analysis is easier, and we can experiment with remedial measures from diverse approaches. This energizes rapid progress and deeper understanding.

The greater the obstacle, the more glory in overcoming it.

― Molière

I am a deep believer in the power of brutality, at least when it comes to simulation codes. I am a deep believer that we are generally too easy on our computer codes; we should endeavor to break them early and often. A method or code is never "good enough." The best way to break our codes is to attempt to solve really hard problems that are beyond our ability to solve today. Once the solution of these brutal problems is well enough in hand, one should find a way of making the problem a little bit harder. The testing should actually be more extreme and difficult than anything we need to do with the codes. One should always be working at, or beyond, the edge of what can be done instead of safely staying within our capabilities. Today we are too prone to simply solving problems that are well in hand instead of pushing ourselves into the unknown. This tendency is harming progress.

Stark truth is seldom met with open arms.

― Justin K. McFarlane Beau

Easy problems make codes look good because they don't push the boundaries. Easy problems are important for checking a code's ability to be correct and work when everything is going right. But codes need to continue working when everything is going wrong too. A robust code is functional under the worst circumstances one might encounter. Brutal problems are good at exposing many of the conditions where things go wrong, and at pushing codes to be genuinely robust. These conditions almost invariably appear in real problems, and good codes can navigate the difficulties without falling apart. Over time good codes can go from falling apart to solving these brutal problems accurately. When this happens we need to come up with new problems to brutalize the codes. We should always be working on the edge, or over the edge, of what is possible; safe and sound is no way to make things better.

Happiness is not the absence of problems; it’s the ability to deal with them.

― Steve Maraboli

The use of brutal problems has helped drive codes forward for decades. A long time ago, simple problems that we routinely solve today brought codes to their collective knees. Examples of such problems for shock physics codes are the Sedov-Taylor blast wave and Noh's problem. Scientists came up with new methods that brought these problems to heel. Rather than rest upon our laurels, we found new problems to bring the codes down. We still solve the older, now tamed brutal problems, but now they are easy. Once we slay one dragon, it is time to construct a bigger, more dangerous dragon to do battle with. We should always test our mettle against a worthy opponent rather than matching up with a patsy. Getting fabulous results on a super-easy problem looks great, but does little or nothing to improve us. Through this process we progress and our codes get better and better. This is how we challenge ourselves systematically, always working from the premise that we can improve. Brutal problems provide a concrete focus for improvement.

A challenge only becomes an obstacle when you bow to it.

― Ray A. Davis

What does this process look like concretely? We have problems that used to push the boundaries of our ability, such as the Sod shock tube. It used to be a problem we couldn't solve well. Shock tube problems have the added benefit of having exact solutions (Noh and Sedov-Taylor do too), so agreement and error carry little to no ambiguity. Solutions either had oscillations, or they were inaccurate and very heavily diffused. Often important features in the solution were seriously corrupted. This problem appeared in the literature at a seminal time in the development of numerical methods for shock physics and offered a standard way of testing and presenting results.
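
To make this concrete, here is a minimal runnable sketch of Sod's problem, assuming a first-order Rusanov (local Lax-Friedrichs) finite-volume scheme in Python; the scheme, the 200-cell grid, and the CFL number are my illustrative choices, and a first-order method like this exhibits exactly the heavy smearing described above.

import numpy as np

GAMMA = 1.4  # ratio of specific heats for Sod's problem

def conserved(rho, u, p):
    """Pack primitives into conserved variables [rho, rho*u, E]."""
    return np.array([rho, rho * u, p / (GAMMA - 1.0) + 0.5 * rho * u * u])

def pressure(U):
    rho, mom, E = U
    return (GAMMA - 1.0) * (E - 0.5 * mom * mom / rho)

def flux(U):
    """Exact Euler flux of a conserved state array with shape (3, ...)."""
    rho, mom, E = U
    u = mom / rho
    p = pressure(U)
    return np.array([mom, mom * u + p, (E + p) * u])

def max_wave_speed(U):
    rho, mom, _ = U
    return np.abs(mom / rho) + np.sqrt(GAMMA * pressure(U) / rho)

def rusanov(UL, UR):
    """First-order Rusanov (local Lax-Friedrichs) interface flux."""
    s = np.maximum(max_wave_speed(UL), max_wave_speed(UR))
    return 0.5 * (flux(UL) + flux(UR)) - 0.5 * s * (UR - UL)

def solve(left, right, n=200, t_end=0.2, cfl=0.5):
    """Run a shock tube on [0, 1] with a diaphragm at x = 0.5; left/right
    are (rho, u, p) triples. Boundary cells are held at their initial
    state, which is fine for times before waves reach the ends."""
    x = (np.arange(n) + 0.5) / n
    U = np.where(x < 0.5, conserved(*left)[:, None], conserved(*right)[:, None])
    dx, t = 1.0 / n, 0.0
    while t < t_end:
        dt = min(cfl * dx / np.max(max_wave_speed(U)), t_end - t)
        F = rusanov(U[:, :-1], U[:, 1:])
        U[:, 1:-1] -= (dt / dx) * (F[:, 1:] - F[:, :-1])
        t += dt
    return x, U

x, U = solve(left=(1.0, 0.0, 1.0), right=(0.125, 0.0, 0.1))
print(f"density range: {U[0].min():.4f} to {U[0].max():.4f}")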

By the time a decade had passed after the introduction of Sod's problem, almost all of the pathological solutions we used to see had disappeared and far better methods existed (not because of Sod's problem per se; something was "in the air" too). The existence of a common problem with which to test and present results was a vehicle for the community in this endeavor. Today, the Sod problem is simply a "Hello, World" for shock physics and offers no real challenge to a serious method or code. It is difficult to distinguish between very good and merely OK solutions with its results. Being able to solve the Sod problem doesn't really qualify you to attack really hard problems either. It is a fine opening ante, but never the final call. The only problem with using the Sod problem today is that too many methods and codes stop their testing there and never move on to the problems that challenge our knowledge and capability today. A key to making progress is to find problems where things don't work so well, and to focus attention on changing that.

[Figure: specific internal energy for the LeBlanc shock tube problem.]

Therefore, if we want problems to spur progress and shed light on what works, we need harder problems. For example, we could systematically increase the magnitude of the variation in the initial conditions of shock tube problems until results start to show issues. This would usefully produce harder problems, but really hasn't been done (note to self: this is a really good idea). One problem of this kind comes from the National Lab community in the form of LeBlanc's shock tube, known by its more colorful colloquial name, "the shock tube from Hell." This is a much less forgiving problem than Sod's shock tube, and that is an understatement. Rather than jumps in pressure and density of about one order of magnitude, the jump in density is 1000-to-one and the jump in pressure is one billion-to-one. This stresses methods far more than Sod, and many simple methods completely fail. Most industrial or production-quality methods can actually solve the problem, albeit with much more resolution than Sod's problem requires (I've gotten decent results on Sod's problem on mesh sequences of 4-8-16 cells!). For the most part we can solve LeBlanc's problem capably today, so it's time to look for fresh challenges.
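
The precise LeBlanc states vary a little across the literature; a commonly quoted set, which I assume here, uses gamma = 5/3 with density and specific internal energy of (1, 0.1) on the left and (0.001, 1e-7) on the right. A few lines confirm the advertised jumps:

# LeBlanc ("shock tube from Hell") initial data. These states are the ones
# commonly quoted in the literature; treat them as an assumption.
gamma = 5.0 / 3.0
rho_L, e_L = 1.0, 0.1        # left density and specific internal energy
rho_R, e_R = 1.0e-3, 1.0e-7  # right state
p_L = (gamma - 1.0) * rho_L * e_L   # ideal gas: p = (gamma - 1) * rho * e
p_R = (gamma - 1.0) * rho_R * e_R
print(f"density jump:  {rho_L / rho_R:.0e}")   # 1e+03, i.e., 1000-to-one
print(f"pressure jump: {p_L / p_R:.0e}")       # 1e+09, one billion-to-one

These states drop straight into the shock tube solver sketched earlier (changing GAMMA to 5/3); watching a low-order method mangle the internal energy peak is a quick way to see why the problem earned its name.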

One such challenge is posed by problems with void in them, or those that dynamically produce vacuum conditions. We have found that this class of problems simply is not solved effectively with current methods. Every method I've looked at fails in some way, shape, or form. None of them converge to the correct solution once the problem gets close enough to vacuum or void conditions. The only methods that appear to work at all explicitly track the position of the vacuum, so my statement of dysfunction applies to shock capturing methods. It does not seem to be a well-appreciated or well-studied problem, and thus may be ripe for the picking. Running this problem and focusing on its results would provide impetus for improving this clearly poor state of affairs.
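
One standard way to manufacture near-vacuum conditions is the double-rarefaction (receding flow) problem: the same gas state on both sides, with velocities pulling apart. The base state below is a common choice, and the exact center state follows from the Riemann invariants, so the failure is easy to quantify; the outflow speed is the dial for making the problem harder.

import math

# Double-rarefaction test: two streams pull apart and dig a near-vacuum at
# the center. The base state (gamma = 1.4, rho = 1, p = 0.4) is a common
# choice; the outflow speed is what turns the problem up to eleven.
gamma, rho, p = 1.4, 1.0, 0.4
c0 = math.sqrt(gamma * p / rho)        # initial sound speed
u_vac = 2.0 * c0 / (gamma - 1.0)       # outflow speed that reaches true vacuum

for speed in (1.0, 2.0, 3.0, u_vac):
    # The Riemann invariant u + 2c/(gamma - 1) is constant across the left
    # fan, and symmetry forces u = 0 at the center, fixing the sound speed.
    c_star = max(c0 - 0.5 * (gamma - 1.0) * speed, 0.0)
    rho_star = rho * (c_star / c0) ** (2.0 / (gamma - 1.0))
    print(f"u = ±{speed:.2f}: exact center density = {rho_star:.3e}")

Shock capturing schemes typically produce negative densities or pressures, or fail to converge, well before the true vacuum speed is reached.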

Other good problems are devised through knowing what goes wrong in practical calculations. Often those who know how to solve the problem create these problems. A good example is the issue of expansion shocks, which can be seen by taking Sod's shock tube and introducing a velocity into the initial condition. This puts a sonic point (where the characteristic speed goes to zero) in the rarefaction. We know how to remove this problem by adding an "entropy" fix to the Riemann solver, defining a class of methods that work. The test simply unveils whether the problem infests a code that may have ignored this issue. This detection is a very good side effect of a well-designed test problem.
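
The standard remedy is an entropy fix applied to the wave speeds inside a Roe-type Riemann solver. Below is a minimal sketch of the Harten-style fix; the function name and the default width delta are my illustrative choices.

import numpy as np

def entropy_fixed_speed(lam, delta=0.1):
    """Harten-style entropy fix for a Roe-type solver: near a sonic point
    the raw wave speed |lam| vanishes, which is what lets an unphysical
    expansion shock persist; the fix keeps the speed >= delta / 2. The
    width `delta` is a tunable, problem-dependent choice."""
    lam = np.asarray(lam, dtype=float)
    return np.where(np.abs(lam) >= delta,
                    np.abs(lam),
                    (lam * lam + delta * delta) / (2.0 * delta))

# A common transonic variant of Sod's problem adds a uniform velocity so the
# left rarefaction straddles the sonic point, e.g. (rho, u, p) = (1, 0.75, 1)
# on the left with Sod's usual right state.
print(entropy_fixed_speed([-0.5, -0.05, 0.0, 0.05, 0.5]))
# prints the speeds 0.5, 0.0625, 0.05, 0.0625, 0.5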

Another great example is the carbuncle instability, where mesh-aligned shock waves exhibit an unphysical, symmetry-breaking instability. This problem was seen first in blunt body simulations of re-entry vehicles, but inhabits frightfully many simulations in a host of applications. The issue still remains inadequately solved; although some remedies exist, none is fully satisfactory. The progress made to date has largely arisen through its clearer identification and the production of simplified test problems that exhibit its fundamental behavior. In my own experience, if a code doesn't explicitly stabilize the carbuncle instability, the problem will be present and manifest itself. These manifestations will often be subtle and difficult to detect, often masquerading as valid physical effects. A good test problem will expose the difficulty and force the code to be robust to it.

One of the best carbuncle-exposing problems simply propagates a strong shock wave in one dimension, hardly a challenge today. The wrinkle is to make the problem two- or three-dimensional even though the extra dimensions should be ignorable. It introduces a small perturbation to the problem and tests whether the shock wave induces unphysical growth in the resulting solution. Codes that exhibit the carbuncle instability allow the perturbations to grow and ultimately corrupt the solution. The problem is simple and elegant, but brutally effective for the purpose for which it was designed. It was inspired by the path to successful analysis of the core issues leading to the carbuncle instability.
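
Here is a sketch of how such a test can be posed; this is the flavor of the test, not Quirk's exact configuration, and the Mach number, grid size, and perturbation amplitude are illustrative assumptions.

import numpy as np

def planar_shock_2d(nx=400, ny=20, shock_x=0.1, mach=6.0, gamma=1.4,
                    eps=1e-6, seed=0):
    """Initial condition for an odd-even / carbuncle test: a strong planar
    shock moving in x on a 2-D grid, with a tiny random density
    perturbation. All parameter values are illustrative choices."""
    rng = np.random.default_rng(seed)
    x = (np.arange(nx) + 0.5) / nx
    # Pre-shock gas at rest with rho0 = 1 and p0 = 1/gamma, so the sound
    # speed is 1 and the shock speed is just `mach`; the post-shock state
    # follows from the Rankine-Hugoniot relations.
    rho0, p0 = 1.0, 1.0 / gamma
    rho1 = rho0 * (gamma + 1.0) * mach**2 / ((gamma - 1.0) * mach**2 + 2.0)
    p1 = p0 * (2.0 * gamma * mach**2 - (gamma - 1.0)) / (gamma + 1.0)
    u1 = 2.0 / (gamma + 1.0) * (mach - 1.0 / mach)
    behind = np.broadcast_to((x < shock_x)[:, None], (nx, ny))
    rho = np.where(behind, rho1, rho0)
    u = np.where(behind, u1, 0.0)
    v = np.zeros((nx, ny))              # transverse velocity: exactly zero
    p = np.where(behind, p1, p0)
    rho = rho * (1.0 + eps * rng.standard_normal((nx, ny)))  # the seed of evil
    return rho, u, v, p

def transverse_diagnostic(v):
    """Call every few steps: a clean quasi-1-D run keeps this near
    round-off; growth toward O(1) flags odd-even decoupling / carbuncle."""
    return float(np.max(np.abs(v)))

rho, u, v, p = planar_shock_2d()
print("initial transverse diagnostic:", transverse_diagnostic(v))  # 0.0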

The key idea in the modeling and simulation enterprise is identifying problems that expose weaknesses in current methods and codes. Once these weaknesses are addressed, such problems provide a codex of wisdom about problems that have been solved, and a check on whether that wisdom was accounted for in a new method or code. New problems working at the boundaries of our knowledge are needed to push the community and spur progress. Meaningful progress can be measured through the conversion of such challenging problems from those that break methods and codes, to those solved reasonably, and finally to those solved accurately. When a problem has moved from threat to success, it is time to create a new, harder problem to continue progress.

Ultimately the brutal problems are an admission of where the method or code is vulnerable. This reflects on the personal vulnerability of those who have professionally associated themselves with the code. This makes testing the code rigorously and brutally almost impossibly hard, especially in today's "no failure allowed" accountability culture. It's much easier to simply stop where everything appears to work just fine. Brutal problems spur growth, but they also challenge our belief in our mastery and success. It takes strong people to continually confront the boundaries of their knowledge and capability, push them back, and confront them anew over and over.

At a deeply personal level, testing codes becomes an exercise in leaving one's comfort zone. The comfort zone is where you know how to solve problems, and you feel like you have mastery of things. Breaking your code, fixing it, and breaking it again is a process of stepping out of that comfort zone as a principle for energizing progress. In my experience we have a culture that increasingly cannot deal with failure successfully. We must always succeed and are punished mercilessly for failure. Risk must be identified and avoided at all costs. The comfort zone is success, and for codes this looks like successful operation and bona fide healthy convergence, all of which is provided by easy problems. Brutal problems expose not just code weaknesses, but personal weaknesses and intellectual failings or gaps. Rather than chase personal excellence, we are driven to choose comfortable acquiescence to the status quo. Rather than be vulnerable and admit our limitations, we choose comfort and false confidence.

Vulnerability is the birthplace of love, belonging, joy, courage, empathy, and creativity. It is the source of hope, empathy, accountability, and authenticity. If we want greater clarity in our purpose or deeper and more meaningful spiritual lives, vulnerability is the path.

― Brené Brown

Sod, Gary A. "A survey of several finite difference methods for systems of nonlinear hyperbolic conservation laws." Journal of Computational Physics 27, no. 1 (1978): 1-31.

Pember, R. B., and R. W. Anderson. A Comparison of Staggered-Mesh Lagrange Plus Remap and Cell-Centered Direct Eulerian Godunov Schemes for Eulerian Shock Hydrodynamics. No. UCRL-JC-139820. Lawrence Livermore National Laboratory, CA (US), 2000.

Gottlieb, J. J., and C. P. T. Groth. "Assessment of Riemann solvers for unsteady one-dimensional inviscid flows of perfect gases." Journal of Computational Physics 78, no. 2 (1988): 437-458.

Munz, C.-D. "A tracking method for gas flow into vacuum based on the vacuum Riemann problem." Mathematical Methods in the Applied Sciences 17, no. 8 (1994): 597-612.

Quirk, James J. "A contribution to the great Riemann solver debate." In Upwind and High-Resolution Schemes, pp. 550-569. Springer Berlin Heidelberg, 1997.

 
