If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.

― Albert Einstein

A few years ago, I was challenged to examine the behavior of void in continuum hydrocodes. A senior colleague suggested looking at problems that might allow us to understand how the absence of material is treated in a code. The simplest version of this problem is the expansion of a real gas into a void. For an ideal gas this problem has an exact solution that can be obtained from a Riemann solver. In the process, we have discovered that these problems are not solved well by existing methods. We approximate the void with a material of very low density and pressure, and we have found that as this approximation approaches an actual void, the solutions become non-convergent and prone to other significant numerical difficulties. Even on extremely refined meshes with many thousands of cells in one dimension, convergence is not observed for a broad class of methods. These methods have solved many difficult problems, and we believe them to be robust and reliable. The problems persist for every method tested, including our fail-safe methods (e.g., first-order Godunov).
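
To make the setup concrete, here is a minimal sketch (my own illustration, not code from the study) of the approximate-void shock tube: the void is replaced by the same ideal gas with density and pressure a large factor below the driver state. The unit left state and the parameter name `jump` are assumptions for illustration; `jump = 1e12` corresponds to the twelve-orders-of-magnitude case discussed below.

```python
def approximate_void_states(jump=1.0e12):
    """Left/right states for a shock tube approximating expansion into void.

    The 'void' is modeled as the same ideal gas with density and pressure a
    factor of `jump` below the driver state (an assumed unit state)."""
    left = {"rho": 1.0, "u": 0.0, "p": 1.0}                 # "real" material
    right = {"rho": 1.0 / jump, "u": 0.0, "p": 1.0 / jump}  # near-void stand-in
    return left, right
```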

What is going on?

I’ll just say in passing that this post is a bit of a work-in-progress conversation with myself (or from me to you). My hope is that it will shake loose my thinking. It is patterned on the observation that sometimes you can solve a problem by carefully explaining it to someone else.

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

― Abraham H. Maslow


One of the difficulties of this problem is the seemingly bad behavior coming from our most reliable and robust methods. When we want a guaranteed good solution to a problem, we unleash a first-order Godunov method on it, and if we use an exact Riemann solver we can expect the solution to be convergent. The results we see with void seemingly violate this principle: we get terrible solutions in a systematic manner. To make matters worse, the first-order Godunov method is the basis and fallback position for the second- or third-order methods we actually want to use in practice. We can conclude that this problem is exposing some rather serious problems with our workhorse methods and the potential for wholesale weakness in our capability.
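
To make the structure of the workhorse method concrete, here is a minimal first-order Godunov sketch for the 1D Euler equations. It is a sketch under my own assumptions (gamma = 1.4, a unit left state, transmissive boundaries, and an HLL flux standing in for the exact Riemann solver), not the code behind the figures; tracking the peak velocity from runs like this is the same kind of diagnostic plotted below.

```python
import numpy as np

gamma = 1.4  # assumed ideal-gas ratio of specific heats

def primitives(U):
    # U holds the conserved variables: density, momentum, total energy
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return rho, u, p

def flux(U):
    rho, u, p = primitives(U)
    return np.array([rho * u, rho * u**2 + p, u * (U[2] + p)])

def hll_flux(UL, UR):
    # HLL approximate Riemann solver with simple (Davis) wave-speed estimates,
    # used here as a stand-in for the exact Riemann solver.
    rhoL, uL, pL = primitives(UL); cL = np.sqrt(gamma * pL / rhoL)
    rhoR, uR, pR = primitives(UR); cR = np.sqrt(gamma * pR / rhoR)
    SL = np.minimum(uL - cL, uR - cR)
    SR = np.maximum(uL + cL, uR + cR)
    FL, FR = flux(UL), flux(UR)
    Fhll = (SR * FL - SL * FR + SL * SR * (UR - UL)) / (SR - SL)
    return np.where(SL >= 0.0, FL, np.where(SR <= 0.0, FR, Fhll))

def godunov_shock_tube(jump=1.0e3, N=1000, t_end=0.05, cfl=0.9):
    # First-order Godunov solution of the approximate-void shock tube.
    x = (np.arange(N) + 0.5) / N
    rho = np.where(x < 0.5, 1.0, 1.0 / jump)
    p = np.where(x < 0.5, 1.0, 1.0 / jump)
    u = np.zeros(N)
    U = np.array([rho, rho * u, p / (gamma - 1.0) + 0.5 * rho * u**2])
    t = 0.0
    while t < t_end:
        rho, u, p = primitives(U)
        dt = min(cfl / (N * np.max(np.abs(u) + np.sqrt(gamma * p / rho))), t_end - t)
        Ug = np.concatenate([U[:, :1], U, U[:, -1:]], axis=1)  # transmissive BCs
        F = hll_flux(Ug[:, :-1], Ug[:, 1:])                    # interface fluxes
        U = U - dt * N * (F[:, 1:] - F[:, :-1])                # conservative update
        t += dt
    return x, primitives(U)

x, (rho, u, p) = godunov_shock_tube()
print("peak velocity:", u.max())  # the quantity tracked over time in the figures
```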

There are no facts, only interpretations.

― Friedrich Nietzsche

[Figure: Smalljumps-1stg] First-order Godunov with 1000 cells. Plotting the maximum velocity over time shows the convergence for the 100:1 and 1000:1 jumps. The velocity peaks and then decays to the correct solution.

Let’s look at what happens for the approximate-void problem. We approximate the void with a gas whose density and pressure are twelve orders of magnitude smaller than those of the “real” material. This problem has a solution that is nearly the expansion-into-vacuum solution of the Euler equations (where the tail of the rarefaction and the contact discontinuity collapse into a single structure that separates material from nothing). The problem is dominated by an enormous rarefaction that takes the density down by many orders of magnitude. What we see is a solution that appears to get worse and worse under mesh refinement; in other words, it diverges under mesh refinement. Actually, the behavior is a bit more complex than this. At very low resolution the solution lags the exact solution; as we refine the mesh, it catches up to and then passes the exact solution. Then, as we add more and more mesh, the solution just gets worse and worse. This is not supposed to happen. This is a very bad thing that needs focused attention.
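
For reference, the limiting expansion-into-vacuum solution is textbook gas dynamics; the notation here is mine, with left state $\rho_L$, $u_L$, $p_L$ and sound speed $c_L$. The flow is a single centered rarefaction,

$$u = \frac{2}{\gamma+1}\left(c_L + \frac{\gamma-1}{2}\,u_L + \frac{x}{t}\right), \qquad c = \frac{2}{\gamma+1}\left(c_L + \frac{\gamma-1}{2}\,u_L - \frac{\gamma-1}{2}\,\frac{x}{t}\right),$$

for $u_L - c_L \le x/t \le u_L + 2c_L/(\gamma-1)$, with $\rho = \rho_L (c/c_L)^{2/(\gamma-1)}$ and $p = p_L (c/c_L)^{2\gamma/(\gamma-1)}$ from the isentropic relations. The material front moves at the escape speed $u_L + 2c_L/(\gamma-1)$, which is also the largest velocity anywhere in the exact solution; for example, with $\gamma = 1.4$ and a unit left state ($\rho_L = p_L = 1$, $u_L = 0$, my assumed values) this is about $5.92$, the value the computed peak velocity should approach as the jump tends toward a true void.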

[Figure: Methods-compare-1000] Comparing first-order, PLM, and PPM solutions for the 1000:1 jump. The high-order methods converge much faster than the first-order method.

So maybe backing away from the extreme problem is worth doing. I ran a sequence of shock tube problems varying the jump in pressure and density, starting at 10:1 and slowly working up to the extreme jump that approximates an expansion into void. The shock tube is a self-similar problem, meaning that we can swap time and space through a similarity transformation. Thus, the very early time evolution on a very fine grid is essentially the same as the late time solution on a very coarse grid. What I noticed is the same pattern over and over, and more importantly, the problem gets worse as the jumps get larger. By examining the trend as the jumps become very large, we start to see the nature of our problem: as the jump grows, the solution converges more and more slowly. We can estimate the mesh resolution needed for a good result, and the situation becomes almost hopeless in the limit. I believe the solution will eventually converge given enough mesh, but the mesh needed to get a convergent solution becomes completely absurd.
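
A one-line way to see this time-space swap: the Riemann initial data contain no length scale, so the exact solution is invariant under $(x, t) \to (\lambda x, \lambda t)$ and depends only on the similarity variable $x/t$,

$$U(x,t) = U(\lambda x, \lambda t) \;\;\forall\, \lambda > 0 \quad\Longrightarrow\quad U(x,t) = W\!\left(\frac{x}{t}\right).$$

A scheme with no built-in length scale, run at a fixed CFL number, inherits this property in a discrete sense: the solution on $N$ cells at time $t$ should match the solution on $kN$ cells at time $t/k$, up to boundary effects, which is the fine-grid/coarse-grid equivalence used in this sweep.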

[Figure: large-jumps-compare] For large jumps, a million to a trillion to one, convergence is lost at 1000 cells; the solution has not even reached the peak value from which it decays toward the correct solution.

In summary, the problem with a factor-of-a-million jump converges on a modestly unreasonable mesh. As the jump grows, convergence requires a mesh that is prohibitive for any practical work. If we are going to accurately solve this class of problems, some other approach is needed. To make things worse, even when the problem converges, the rate of convergence under mesh refinement is painfully slow and, as a result, incredibly expensive.

Everywhere is walking distance if you have the time.

― Steven Wright

The second issue we see is a persistent velocity glitch at the head of the rarefaction. It is fair to say that this glitch has heretofore been viewed as a cosmetic problem. The velocity peak looks like a meaningfully wrong solution to the equations locally. It produces a shock-like structure in the sense that it violates Lax’s entropy condition: the characteristics locally converge in a shock-like manner inside a rarefaction, where they should diverge. We might expect this problem to hurt the physically meaningful solution. Not altogether surprisingly, the solution can also violate the second law of thermodynamics when using higher than first-order methods. Moreover, this character simply gets worse as the problem gets closer to a void. A reasonable supposition is that this feature in the numerical solution is a symptom of difficulties in rarefactions. Usually the feature can be categorized as a nuisance and a relatively small contributor to error, but it may be a sign of something deeper. Perhaps this nuisance becomes a significant issue as the rarefaction becomes stronger, and ultimately dominates the numerical character of the solution. We might be well served by removing it from the solution. One saving grace of the glitch is its diminishing size as the mesh is refined, and because it has an anomalous shock-like character, dissipation can damp the spike and improve the solution. The counterpoint to that fix is to avoid creating the glitch in the first place.
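
One way to make the entropy-condition statement concrete is a simple diagnostic (a sketch of my own, not a method from the post): inside the rarefaction the u - c characteristic speed should increase monotonically in x, so any place where it locally decreases is behaving in a compressive, shock-like way.

```python
import numpy as np

def entropy_glitch_cells(x, u, c):
    """Flag interfaces where the u - c characteristic speed decreases with x.

    In a genuine left-moving rarefaction the characteristics fan out, so
    u - c should be non-decreasing across the wave; locally decreasing values
    mark the shock-like, entropy-condition-violating behavior at the glitch."""
    lam = u - c                       # left acoustic characteristic speed
    dlam = np.diff(lam) / np.diff(x)  # slope between neighboring cell centers
    return dlam < 0.0                 # True where characteristics converge
```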

[Figure: convergence-plot-1e8] For a jump of 100 million to one we get convergence with 2000 and 4000 cells. The curves are also quite close to self-similar, and the slow convergence is evident in their behavior.

At this point it’s useful to back away from the immediate problem to a broader philosophical point. Shock-capturing methods are naturally focused on computing shocks. Shock waves were a big challenge for numerical methods; they remain a large challenge, and failure to treat them effectively can be fatal for a calculation. If a shock wave is not treated with care, the numerical solution can fail catastrophically or be significantly damaged. Even when the results are not catastrophic, poor treatment of a shock can significantly corrupt the solution, and the corruption often spreads from the shock to other areas of the solution. For this reason, the shock wave and its numerical treatment have been an enduring focus of numerical methods for compressible flows. Conversely, rarefactions have largely been an afterthought. Rarefactions are benign, smooth structures that do not directly threaten a calculation. A few bad things can happen in rarefactions, but they are rarely fatal. A few have been cosmetically problematic enough that major effort has ensued (e.g., the rarefaction shock). Problems in rarefactions are generally just a nuisance, and only become a focal point when the details of the solution are examined. One aspect of the details is the convergence character of the solution; shock tube problems are rarely subjected to a full convergence analysis. The problem we focus on here is dominated by a rarefaction, thus magnifying any problems immensely. What we can conclude is that strong rarefactions are not computed with high fidelity.

The trick to forgetting the big picture is to look at everything close up.

― Chuck Palahniuk

One of the key ways of dealing with shock waves is the use of upwind methods. A clear way of treating these waves and getting an upwind solution is to use a discontinuous basis to define the spatial discretization. This discontinuous basis is also used with high-order methods, and the first-order solution becomes the fallback position for those methods. The approach is very well suited to computing shocks: a discontinuous approximation for a discontinuous phenomenon. By the same token, a discontinuous basis is not well suited to a continuous phenomenon like a rarefaction. One hypothesis to explore is using different types of approximation for problems where the rarefaction dominates the solution. We may find that we can solve this class of problems far more efficiently with a continuous basis, reaching asymptotically convergent solutions far sooner. What we observe now is an ever slower approach to convergent behavior in the code. For this class of problems we see a consistent pattern: the solution starts out under-resolved and the velocity rises, it then overshoots the correct analytical result, and then slowly decays toward the correct solution. As the rarefaction becomes stronger, the mesh resolution needed to capture the full rise and reach the peak overshoot value becomes finer and finer. Ultimately, the mesh required to get a convergent solution becomes absurdly refined.
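
As a toy illustration of the basis argument (my own, not from the post), compare how well a discontinuous, piecewise-constant representation and a continuous, piecewise-linear one capture a smooth profile on the same mesh; the constant basis commits an O(Δx) error no matter how smooth the target is, while the continuous basis gains an extra order. The profile and interval below are arbitrary choices.

```python
import numpy as np

def representation_errors(N, profile=np.sin):
    """Max error of two representations of a smooth profile on [0, 1]:
    (a) one constant value per cell (the discontinuous basis behind
        first-order Godunov), and (b) continuous piecewise-linear
        interpolation through the cell faces."""
    xf = np.linspace(0.0, 1.0, N + 1)        # cell faces / nodes
    xc = 0.5 * (xf[:-1] + xf[1:])            # cell centers
    xs = np.linspace(0.0, 1.0, 20 * N + 1)   # fine sampling for the error
    exact = profile(xs)
    cell = np.minimum((xs * N).astype(int), N - 1)  # cell index of each sample
    err_const = np.abs(profile(xc)[cell] - exact).max()                # O(dx)
    err_linear = np.abs(np.interp(xs, xf, profile(xf)) - exact).max()  # O(dx^2)
    return err_const, err_linear

for N in (100, 1000):
    print(N, representation_errors(N))
```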

If this proposition is indeed correct, it implies that we need to define a hybrid approach where the basis is adaptively chosen: at discontinuous structures we want discontinuous approximations, and at continuous structures we want continuous ones. This is almost obvious, but carrying it out in practice is difficult. Clearly the current adaptive approaches are not working well enough, as evidenced by the painful and absurd degree of mesh refinement needed to get a reasonable solution. It would seem that the answer to this problem lies in developing a new method capable of solving extreme rarefactions on reasonable meshes. We need methods that can solve strong but continuous waves with higher fidelity. In all reality, these methods would still need to compute shocks adequately, albeit perhaps less effectively than methods using a discontinuous basis. The bottom line from attacking a challenging problem like this is the demonstration that our methods today are not sufficient for all our challenges.

Creativity consists of coming up with many ideas, not just that one great idea.

― Charles Thompson