Progress is discovering what you can do without.

― Marty Rubin

Note: WordPress LaTeX woes continue so equations that wouldn’t parse are separated by “$” in the TeX “code”.

After last week’s orgy of words I was a bit more restrained this week; after all, I am on vacation! The topic means a lot to me and is genuinely fun, so I’m continuing the thematic direction. Go figure! Some more thoughts on the general recent theme are presented: improvements in methods would mean much more to computational science than a bunch of expensive, hard to use, energy wasting, hard-to-impossible to program computers. **Alternatively, improvements in methods would make those precious computers radically more useful!** None of the press regarding “exascale” computing acknowledges that supercomputing is an intrinsically holistic endeavor where the models, methods and algorithmic advances do far more to empower the future than any hardware investments.

Off the soapbox, and on to the chalkboard!

I’ve been advancing the theme of methods for improving the solution of hyperbolic PDEs for several weeks now. At the end of the last post I proposed that looking to produce a high-order accurate approximation isn’t always the best idea. Quite often we must contend with solutions that are very under-resolved and rough in character. **Experience seems to dictate that this eventuality is so important that it should always be present in the method’s design.** One of the issues with the ENO and WENO methods is the absence of these ideas from high-order methods, which may be limiting their utility for broad use. Under appropriate conditions abandoning the pursuit of accuracy may be the advisable thing to do. From our experience with first-generation monotonicity-preserving methods we can see that under the conditions where accuracy cannot be maintained, the discrete conditions are incredibly useful and powerful. Perhaps there is a route to improved methods that uses this experience productively. Limiters seem to endow methods with a greater and stronger sense of nonlinear stability (i.e., robustness), to the extent of allowing one to use unstable approximations locally in a completely stable, robust manner.

**The first thing to do is review the relations for providing monotonicity-preserving methods, which can be easily stated using TVD theory.** This was discussed last week. First, the key fact that upwind differencing is a positive scheme is a foundation for things, $ U(j,n+1) = U(j,n) – \nu \left[ U(j,n) – U(j-1,n) \right] $ where $ \nu = a \Delta t / \Delta x $ is the CFL number. Next, apply this basic thought process to a second-order extension (or a high-order extension more generally), $ U(j+1/2,n) = U(j,n) + \frac{1}{2} \phi(r_j) \left[ U(j+1,n) – U(j,n) \right] $. Plugging this into our update equation, $ U(j,n+1) = U(j,n) – \nu \left[ U(j+1/2,n) – U(j-1/2,n) \right] $, produces conditions on the limiter $ \phi(r_j) $. These are functional relationships of $ r_j = \left[ U(j,n) – U(j-1,n) \right] / \left[ U(j+1,n) – U(j,n) \right] $, defined by dividing the upwind difference by the local slope on the mesh. These produce the famous diagrams introduced by Sweby under the proper CFL stability limits.
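To make the bookkeeping concrete, here is a small sketch in Python of the limited second-order update above, using minmod as the limiter; the function names and the square-wave test data are just my choices for illustration:

```python
import numpy as np

def minmod(a, b):
    """Classic minmod limiter: zero at sign changes, else the smaller slope."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def tvd_update(u, nu):
    """One step of limited second-order upwind advection (nu = CFL number, 0 < nu <= 1)."""
    # one-sided differences on a periodic mesh
    du_up = u - np.roll(u, 1)          # U(j) - U(j-1), the upwind difference
    du_dn = np.roll(u, -1) - u         # U(j+1) - U(j), the local slope
    slope = minmod(du_up, du_dn)       # limited slope in each cell
    u_edge = u + 0.5 * slope           # limited edge value U(j+1/2)
    return u - nu * (u_edge - np.roll(u_edge, 1))

u = np.array([0., 0., 1., 1., 1., 0., 0., 0.])   # square wave
u_new = tvd_update(u, 0.5)
```

On this rough profile the limiter shuts the slopes off at the jumps, so no new extrema appear and the total variation does not grow.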

Secondly, it is instructive to look at the differing approach taken by Suresh and Huynh. **Here the limiting is done on the high-order edge values with two applications of the median function.** The first step assures that the edge value is bounded by its neighboring cell values, $ U(j+1/2) = \mbox{median}\left( U(j), U(j+1/2), U(j+1) \right) $. The second step requires a bit more explanation and discussion because it differs in philosophy from the TVD limiters (or at least seemingly; actually it is completely congruent). The idea of the second application is to prevent the transport equation from producing any new extrema in the flow. Since the first test checks that the edge value is bounded by the anti-upwind value, we need to keep things bounded for the upwind value. To find the boundary for the values where things go south, we find the value where the update equals the upwind value, $ U(j-1) = U(j) – \nu \left[ U(j+1/2) – U(j-1/2) \right]$. Then we rearrange to find the bounding value, $ U(j+1/2) = U(j) + \frac{(U(j) – U(j-1))}{ \nu }$, which is a bit unsatisfying because of the Courant number dependence, but it is based on transport, so what else did we expect! The second test is then $ \mbox{median}\left( U(j), U(j+1/2), U(j) + \frac{ U(j) – U(j-1) }{\nu } \right) $. The result will be monotonicity-preserving up to the CFL limit, $ \nu \le 1 $.
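A sketch of the two median applications might look like the following; the function names are mine, and the CFL-dependent bound is taken exactly as derived above:

```python
import numpy as np

def median3(a, b, c):
    """Median of three values via direct sorting."""
    return np.sort(np.array([a, b, c]))[1]

def sh_limit_edge(u_m1, u_0, u_p1, u_edge_ho, nu):
    """Two median applications on a high-order edge value U(j+1/2).

    u_m1, u_0, u_p1 are U(j-1), U(j), U(j+1); nu is the CFL number."""
    # step 1: bound the edge by its neighboring cell values
    u_edge = median3(u_0, u_edge_ho, u_p1)
    # step 2: prevent transport from creating a new extremum,
    # using the CFL-dependent bound U(j) + (U(j) - U(j-1)) / nu
    u_bound = u_0 + (u_0 - u_m1) / nu
    return median3(u_0, u_edge, u_bound)
```

For example, an overshooting edge value of 1.3 between cell values of 1.0 and 1.0 gets clipped back to 1.0 by the first median, while a well-behaved high-order value passes through both tests untouched.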

Every solution to every problem is simple. It’s the distance between the two where the mystery lies.

― Derek Landy

These two views are utterly complementary in nature. One revolves around extending the positivity of coefficients that naturally comes from the use of upwind differencing. The second uses the combination of bounding reconstructions of a function, and the direct impact of transport on the development of new extrema in flows. **We could easily reuse either under the right conditions and, more importantly, the two views produce the “same” limiters under the right conditions.** The ideas I’m developing here are a bit of an amalgam of my own ideas with those of Suresh & Huynh and Colella & Sekora. In particular, they might work extremely well within a framework that is built around the PPM scheme, which I believe is both the most powerful and flexible method to build modern methods upon.

With this basic background in hand, let’s throw out the idea that will be developed for the rest of the post. If one looks at the simplest hyperbolic PDE, the one-way wave equation, $ u_t + a u_x = 0 $, we can see that the same equation describes the evolution of any derivative of the solution: $ (u_x)_t + a (u_x)_x = 0 $ and $ (u_{xx})_t + a (u_{xx})_x = 0 $, and so on. **As a result a limiter on the value, or derivative, or second derivative of the solution should take basically the same form. We can use basically the same ideas over and over again with appropriate accommodation of the fact that we are bounding derivatives.** The question we need to answer is when we should do this. Here is the major proposition we are making in this blog post: *if the derivative in question keeps the same sign then the limiter will be governed by either a high-order approximation, or the limit defined, but if the sign changes, the limiter will move to evaluation of the next derivative, and this may be done recursively*.
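A quick numerical sanity check of this observation: exact transport is a shift, and shifting commutes with differencing, so the discrete derivative of the evolved solution is exactly the evolved discrete derivative. The grid, profile and shift below are arbitrary choices of mine:

```python
import numpy as np

# For u_t + a u_x = 0 the exact solution is a shift: u(x, t) = u0(x - a t).
# Any derivative of u obeys the same equation, so it shifts the same way.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.exp(-100.0 * (x - 0.3) ** 2)      # smooth initial profile
shift = 25                                 # shift by 25 cells, i.e. a*t on this mesh

u_t = np.roll(u0, shift)                   # evolved (exactly transported) solution
du0 = np.roll(u0, -1) - u0                 # discrete derivative of u0
du_t = np.roll(u_t, -1) - u_t              # discrete derivative of the evolved solution

# the derivative of the shifted solution equals the shifted derivative
assert np.allclose(du_t, np.roll(du0, shift))
```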

In its simplest form we will simply have a monotonicity-preserving method. We can take a high-order approximation –fifth-order upwinding suffices– and see whether it is monotone. **If the local differences differ in sign we simply use first-order upwinding.** If the signs are the same we see whether the high-order approximation is bounded by the monotone limit and upwinding, $ U(j+1/2) = \mbox{median}\left( U(j), U(j+1/2), U(j+1) \right) $. Note that if $ \left[ U(j+1) – U(j) \right] \left[ U(j) – U(j-1) \right] < 0 $ there is an extremum. Doing something with this knowledge would seem to be a step in the right direction instead of just giving up.
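Here is a rough sketch of this simplest form, using the standard fifth-order upwind-biased edge value; the fallback logic (bound by the neighbors when the differences agree in sign, first-order upwind at a sign change) is my reading of the description above:

```python
import numpy as np

def edge_fifth(u):
    """Fifth-order upwind-biased edge value U(j+1/2), wind blowing left to right."""
    return (2.0 * np.roll(u, 2) - 13.0 * np.roll(u, 1) + 47.0 * u
            + 27.0 * np.roll(u, -1) - 3.0 * np.roll(u, -2)) / 60.0

def limited_edge(u):
    """Fall back to first-order upwinding where the local differences change sign."""
    du_up = u - np.roll(u, 1)
    du_dn = np.roll(u, -1) - u
    u_ho = edge_fifth(u)
    monotone = du_up * du_dn > 0.0           # same sign: no local extremum
    lo = np.minimum(u, np.roll(u, -1))
    hi = np.maximum(u, np.roll(u, -1))
    u_bounded = np.clip(u_ho, lo, hi)        # bound by the neighboring cell values
    return np.where(monotone, u_bounded, u)  # first-order upwind at extrema
```

On linear data the fifth-order edge value is exact (U(j) + 1/2 on unit data), and on a square wave the limiter drops everything to first-order upwinding, keeping the edges bounded.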

It takes a long time to learn how to do something simple.

― Marty Rubin

In the vein of keeping things simple, I will stick with the bounding approach for now, and avoid the complications of the transport mindset. **We basically go through a set of bounding arguments starting with basic monotonicity-preserving limiters, and graduating to higher order forms if we detect an extremum.** This idea can then be applied recursively to whatever degree you’d like or can tolerate. The concept is that one can do something if a high-order structure is detected – some sort of critical point where a derivative of the solution goes through zero – but if the structure doesn’t have a critical point, and is simply under-resolved, higher order doesn’t make sense to pursue. This is the key to the entire dialog here: sometimes high-order simply doesn’t make sense, and the pragmatic approach is to stop.

The basic idea is that we can produce a sequence of approximations using the same basic conceptualizations as the wildly popular and powerful monotonicity-preserving methods, in a manner that will promote improved accuracy to the extent we are willing to exercise it. More importantly, the methods acknowledge the impact of under-resolved solutions and the utter lunacy of trying to achieve high-order accuracy under those circumstances. **If we find an extremum in the solution, we look at whether the extremum is smooth enough to continue the use of the high-order approximation.** The first derivative is approximately zero, but may need to have a non-zero value to promote accuracy. We start by using the safest value for the first derivative using the *mineno* limiter introduced two posts ago, which is the local gradient with the smallest absolute magnitude, $ \Delta^{m} U(j) = \mbox{minabs}\left( U(j+1) – U(j), U(j) – U(j-1) \right) $. Step two would produce a set of estimates of the second derivative locally; we could easily come up with two or three centered in cell *j*. We will suggest using three different approximations, $ \Delta^2 U(j) = U(j+1) – 2 U(j) + U(j-1) $, $ \Delta^2 U(j-1/2) = \frac{1}{2}\left[ U(j+1) – U(j) – U(j-1) + U(j-2) \right] $ and $ \Delta^2 U(j+1/2) = \frac{1}{2}\left[ U(j+2) – U(j+1) – U(j) + U(j-1) \right] $. Alternatively we might simply use the classic second-order derivative in the adjacent cells instead of the edges. We then take the limiting value, $ \Delta^2_{m} U(j) = \mbox{minmod}\left( \Delta^2 U(j-1/2), \Delta^2 U(j), \Delta^2 U(j+1/2) \right) $. We then can define the bounding value for the edge as $ U(j+1/2) = U(j) + \frac{1}{2} \Delta^{m} U(j) + \frac{1}{8} \Delta^2_{m} U(j) $. Like the monotonicity-preserving method, this estimate should provide a lot of room if the solution is well enough resolved. This will be at least a second-order approximation by the properties of the *median* function and the accuracy of the *mineno*-based linear extrapolation.
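A sketch of this extremum treatment follows, using the adjacent-cell variant of the second-derivative estimates mentioned above; the function names, the `minabs` helper, and the Taylor-like 1/2 and 1/8 weights are my choices for illustration:

```python
import numpy as np

def minabs(*vals):
    """Return the argument with the smallest absolute magnitude (elementwise)."""
    stacked = np.stack(vals)
    idx = np.argmin(np.abs(stacked), axis=0)
    return np.take_along_axis(stacked, idx[None, :], axis=0)[0]

def extremum_edge(u):
    """Edge value at a detected extremum: mineno slope plus a limited second derivative."""
    du_up = u - np.roll(u, 1)
    du_dn = np.roll(u, -1) - u
    s = minabs(du_up, du_dn)                  # 'mineno' slope: smallest magnitude
    # three local estimates of the (mesh-scaled) second derivative
    d2c = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)            # centered in cell j
    d2l = u - 2.0 * np.roll(u, 1) + np.roll(u, 2)             # cell j-1
    d2r = np.roll(u, -2) - 2.0 * np.roll(u, -1) + u           # cell j+1
    same = (np.sign(d2l) == np.sign(d2c)) & (np.sign(d2r) == np.sign(d2c))
    d2 = np.where(same, minabs(d2l, d2c, d2r), 0.0)  # limit: zero at sign changes
    # Taylor-like edge value: U(j) + s/2 + d2/8
    return u + 0.5 * s + 0.125 * d2
```

On a smooth parabola (for which all three second-derivative estimates agree) the edge value recovers a second-order accurate extrapolation rather than flattening to first order.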

This should allow one to get at least second-order accuracy in the infinity norm. It may make sense to use the broader stencils for the first derivative to improve the solutions a bit. **To repeat, the general concept is to make sure any evolution of the solution does not create a new extrema in the derivatives (instead of the cell values as in basic monotonicity-preserving methods).** Now, I will sketch out what the next order higher in the limiter hierarchy would look like. This limiter would only be used in places where there are the twin critical points in the value and derivative of the solution. *A final flourish for these methods would be to add a positivity limiter that helps insure that no unphysical values are generated (i.e., negative densities or pressures, etc.).*

The next higher order in the hierarchy would require us to make a safe approximation to the first and second derivatives, then use a bounding approximation for the third derivative. In addition, if the third derivative has a sign change in the cell (or nearby vicinity), we either go to the next order in the hierarchy or truncate with the limiter based on the safe extrapolation. This highlights one of the characteristics of these limiters: the approximation will truncate to one order lower than the attempt (i.e., a safe limit). **These lower limits are all within the ENO family of schemes and should provide entropy-stable approximations.**
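The recursive descent through the hierarchy can be sketched as a little dispatcher that reports how deep the limiter should go in a given cell; the indexing convention for the differenced data (keeping the cell index across levels, so each level is offset by half a cell) is purely my own for illustration:

```python
import numpy as np

def limiter_level(u, j, max_level=3):
    """How deep in the hierarchy cell j should go: 0 means the values are
    locally monotone (plain MP limiting applies); each extra level means an
    extremum was found and the check moves to the next derivative."""
    if max_level == 0 or j < 1 or j > len(u) - 2:
        return 0                       # bottom of the hierarchy: safe truncation
    if (u[j] - u[j - 1]) * (u[j + 1] - u[j]) > 0.0:
        return 0                       # same sign: governed by the usual limiter
    # sign change: recurse on the cell differences (~ the first derivative)
    du = np.diff(u)
    return 1 + limiter_level(du, j, max_level - 1)
```

A monotone run of cells reports level 0, a smooth extremum (monotone first differences) reports level 1, and a sharp peak whose differences also carry an extremum reports level 2.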

The key to this entire line of thinking is the manner of encoding pragmatic thinking into extending limiters. TVD and monotonicity-preserving limiters are dynamically pragmatic in their makeup. **If a profile is too steep, it is under-resolved and cannot be evolved without inducing oscillations. Furthermore, the whole idea of a convergent Taylor series expansion, based on intrinsic smoothness, is not a sensible approximation strategy there.** It makes perfect sense to stop before making a bad decision. This pragmatism extends to local extrema, where we basically reject high-order ideas and use a first-order method to be safe. This safety is also built into the whole foundation of the method. The idea I am exploring here is to consider carefully whether these extrema are capable of being represented with a smoothness-based approximation; to see whether a Taylor series holds and can be used profitably, but eventually surrender to the reality of under-resolution. For problems with discontinuities and/or shocks, this is always in play.

**It is notable that with the median limiters and ENO-like schemes, the first-order approximation is always in play via the TVD-like comparison scheme.** These might be equally pursued in the case of the extrema-preserving method. In other words, instead of using a TVD comparison scheme, the extrema-preserving method may be used to good effect in creating an even more powerful high-resolution method.

The role of genius is not to complicate the simple, but to simplify the complicated.

― Criss Jami

Everything has boundaries. The same holds true with thought. You shouldn’t fear boundaries, but you should not be afraid of destroying them. That’s what is most important if you want to be free: respect for and exasperation with boundaries.

― Haruki Murakami
