A couple of notes:

  1. This is the longest post I’ve written, and it’s way too long (4500 words! TL;DR)! It’s a topic that inspires me, so sue me! The blog is for me, and if others find it useful then it’s all the better.
  2. The WordPress LaTeX capability is an awesome thing, but it’s buggy as hell and things sometimes just don’t work (it pisses me off because a couple of the equations are important and they never worked)! In those cases I’ve left the equations in their naked LaTeX form since the parser is too stupid to render them properly. At least you’ll know what I’m intending to say.

Life is really simple, but we insist on making it complicated.

― Confucius

The last couple of weeks have featured a general-purpose discussion of nonlinearity in approximation, and a clever way of introducing it. The original wave of nonlinear differencing that overcame Godunov’s theorem has been massively successful in transforming computational physics. The flux- and slope-limiting schemes provided practical, pragmatic methods for solving problems that vexed earlier generations. The trick is continuing the forward progress and overcoming the shortcomings of these successful methods. While successful and useful, the first generation of nonlinear methods is not without faults and problems. Their replacements have never been entirely successful for a variety of reasons that I’ll elaborate upon below. Along with the failure to replace the first generation, we have also seen a relative stall in further progress for reasons I’ve written about extensively. It is high time to introduce some new principles and seek a new generation of methods that make things better.

Just for the sake of completeness, linear methods are generically inadequate for solving many modern or real problems. As Godunov’s theorem dictates, a linear method cannot be both non-oscillatory and better than first-order accurate; you have to choose one or the other, and you need both. These methods may be resurrected via a variety of heavy-handed mechanisms that almost always either render a solution first-order accurate or rely upon excessive mesh resolution to resolve everything. Problems that can be fully resolved are useless idealizations anyway! Sometimes the lousy solution is simply accepted, either overly diffuse or numerically corrupted. In either case these methods were not remotely the path to success for modeling and simulation. Their replacement was transformative and opened the door to success far more than any advances in computing hardware or power. The failure to acknowledge that improved methods, rather than reliance on hardware, made this possible is vexing, intellectually dishonest, and shallow. The truth is that the nonlinear methods are an absolute and primal necessity for modeling and simulation to have its current measure of success. Computers, rather than being the enabler, are just the “icing on the cake”.

Like all magnificent things, it’s very simple.

― Natalie Babbitt

As noted before, linear methods are limited (pun intended) in their utility. Godunov’s theorem gives the reason for the limitations, and the path forward to nonlinear methods. In an “explosion” of innovation in the early 1970s, four different researchers (in the USA, Russia, the Netherlands, and Israel) came up with nonlinear (adaptive) differencing methods to overcome Godunov’s theorem. In the fullness of time these approaches have proven to be fundamentally similar and intimately related. Of course, in the heat of the moment, the fighting and turmoil over the development was bitter and tumultuous. Each method took the path of blending a first-order method together with high-order differencing by detecting where the high-order method would get into trouble (cause oscillations), and adding enough of the first-order method to get the solution out of trouble. Useful first-order methods are overly dissipative, but they produce physically relevant solutions. The detection mechanisms became widely known as limiters, acting to determine the mix of methods via analysis of the underlying local solution.

An important side note is that nonlinear differencing was used for shock wave solutions almost from the start. The first useful shock capturing method was von Neumann and Richtmyer’s, which uses artificial viscosity and is a nonlinear method. This work includes three major contributions, each having great traction in the tale unfolding below. The first is the underlying high-order method using a mesh staggered in time and space. It is second-order accurate, but catastrophically unstable without the next contribution, the artificial viscosity, which stabilizes the solution in the vicinity of shock waves by adding a shock-selective viscosity. This is a nonlinear differencing method because the dissipation is only added where there is a shock and not everywhere. Finally, they used a stability analysis method that has become the standard for analyzing the stability of methods for PDEs.

A key part of the success of the first generation’s methods was the systematic justification for the form of limiters via a deep mathematical theory. This was introduced by Ami Harten with his total variation diminishing (TVD) methods. This nice structure for limiters, with proofs of the non-oscillatory property, really allowed these methods to take off. To amp things up even more, Sweby did some analysis and introduced a handy diagram to visualize a limiter and determine simply whether it fit the bill for being a monotone limiter. The theory builds upon earlier work of Harten that showed how upwind methods provide a systematic basis for reliable computation through vanishing viscosity.
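
To make the Sweby picture concrete, here is a small Python sketch (my own illustration, not code from any of the references below) that checks a few classic limiters against the usual TVD admissibility region, 0 \le \phi(r) \le \min(2r,2) for r > 0 with \phi(r) = 0 for r \le 0 . The function names and the small tolerance are mine.

```python
import numpy as np

# Classic flux limiters as functions of the slope ratio r.
def minmod_limiter(r):
    return np.maximum(0.0, np.minimum(1.0, r))

def superbee(r):
    return np.maximum.reduce([np.zeros_like(r), np.minimum(2.0 * r, 1.0), np.minimum(r, 2.0)])

def van_leer(r):
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def in_tvd_region(phi, r):
    # Sweby's condition for a monotone (TVD) limiter: 0 <= phi(r) <= min(2r, 2), phi = 0 for r <= 0.
    upper = np.where(r > 0.0, np.minimum(2.0 * r, 2.0), 0.0)
    return bool(np.all((phi >= 0.0) & (phi <= upper + 1e-14)))

r = np.linspace(-2.0, 4.0, 601)
for name, lim in [("minmod", minmod_limiter), ("superbee", superbee), ("van Leer", van_leer)]:
    print(name, "lies in the TVD region:", in_tvd_region(lim(r), r))
```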

Simplicity is the ultimate sophistication.

― Clare Boothe Luce

The ground state method is upwind (donor cell) differencing, U(j,n+1) = U(j,n) - \nu \left(U(j,n) - U(j-1,n) \right) , which upon rearrangement shows that the new value, U(j,n+1) , is a combination of the old values with positive coefficients as long as \nu<1 . If the update to the new value is fully determined by positive coefficients, the result will be monotonicity preserving, with no new extrema (no new highs or lows not in the data). The TVD methods work in the same way except that the coefficients for the update are now nonlinear, and we can derive conditions under which the update remains positive. I’ll focus on the simplest schemes possible using a forward Euler method in time, U(j,n+1) = U(j,n) - \nu \left(U(j+1/2,n) - U(j-1/2,n) \right) . For the upwind method this update is defined by U(j+1/2,n) = U(j,n) using a positive direction for transport. For a generic second-order method one uses U(j+1/2,n) = U(j,n) + \frac{1}{2} S(j,n) , where the linear extrapolation supplies the accuracy. Plug this into the forward Euler update and write the extrapolation through a nonlinear function Q(j,n) , with S(j,n) = Q(j,n) D(j-1/2,n) and D(j-1/2,n) = U(j,n) - U(j-1,n) . Now the update is written as U(j,n+1) = U(j,n) - \nu \left(1 + \frac{1}{2} Q(j,n) - \frac{1}{2} Q(j-1,n)\right) D(j-1/2,n) . Now we can do some math using the assumption that Q(j-1,n) does nothing to help matters insofar as keeping coefficients positive. We are left with a pretty simple condition for positivity, 0 \le Q(j,n) \le \frac{2(1-\nu)}{\nu} . If 0 \le Q(j,n) \le 1 , the method is positive for \nu \le \frac{2}{3} ; if 0 \le Q(j,n) \le 2 , the method is positive for \nu \le \frac{1}{2} . The point is that we can now concretely design schemes based on these inequalities that have reliable and well-defined properties.
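
As a sanity check on these inequalities, here is a minimal sketch (mine, and only a sketch) of the forward Euler update above for linear advection with unit positive velocity on a periodic grid. A minmod-limited slope keeps 0 \le Q(j,n) \le 1 , so with \nu = 1/2 \le 2/3 the square pulse should stay inside its initial bounds, and it does.

```python
import numpy as np

def minmod(a, b):
    """Zero if the arguments differ in sign, otherwise the one of smaller magnitude."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_upwind_step(u, nu):
    """One forward Euler step of u_t + u_x = 0 (unit speed) on a periodic grid.

    Edge values are U(j+1/2) = U(j) + 0.5*S(j) with S(j) a minmod-limited slope,
    so Q(j) = S(j) / (U(j) - U(j-1)) lies in [0, 1] and positivity of the update
    requires nu <= 2/3 by the inequality derived above.
    """
    slope = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))  # limited slope S(j)
    u_edge = u + 0.5 * slope                               # U(j+1/2)
    return u - nu * (u_edge - np.roll(u_edge, 1))          # U(j,n+1)

# Advect a square pulse once around the periodic domain [0, 1).
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)
nu = 0.5                                                   # dt = nu * dx, so 400 steps per period
for _ in range(int(round(len(x) / nu))):
    u = limited_upwind_step(u, nu)
print("min and max after one period:", u.min(), u.max())   # remains within the initial bounds [0, 1]
```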

The TVD and monotonicity-preserving methods, while reliable and tunable through these functional relations, have serious shortcomings. These methods have serious limitations in accuracy, especially near detailed features in solutions. Such detailed structures are significantly degraded because methods based on these principles are only first-order accurate in those areas, basically reducing to upwind differencing there. In mathematical terms this means that the solutions are first-order in the L-infinity norm, approximately order one-and-a-half in the L2 (energy) norm, and second-order in the L1 norm. The L1 norm is natural for shocks, so it isn’t a complete disaster. While this class of methods is vastly better than their linear predecessors, effectively providing solutions completely impossible before, the methods are imperfect. The desire is to remove the accuracy limitations of these methods, especially to avoid the degradation of features in the solution.

The next generation of methods (the second nonlinear generation) sought to overcome these problems. The thing to avoid in the approximation is the use of first-order differencing at extreme values. While avoiding it is sensible, it is also dangerous: extrema are where new extrema in the solutions are created, where unphysical oscillations come from, and where the door opens for all the things these methods are supposed to cure. The upshot is that getting rid of the detriments of the TVD methods requires being really careful. This care went into the creation of the essentially non-oscillatory (ENO) methods. The gist of the design was to select, locally, the differencing option with the smoothest, smallest-magnitude variation.

Because of this the methods tend to be very dissipative. This might seem odd at first glance, but consider that upwind differencing is based on a piecewise constant approximation within each cell, the lowest-variation approximation possible; as such, the small-variation approximation within the cell used by ENO results in enhanced dissipation compared to TVD methods. This dissipative nature is one reason why these methods have not really been successful. The dissipative nature of ENO has recently been proven. That is a good thing in the sense that entropy satisfaction can be shown, but in some cases the dissipation can exceed that produced by upwinding. This is the second reason why ENO has not succeeded: exceeding upwind dissipation is actually a threat to the stability of the scheme. Thus we have a method that is both low fidelity and lacking in robustness. This ends up being the price paid for not degrading to a low-order approximation at extrema and maintaining formal high-order accuracy there. Do we always have to pay this price? I don’t think so, and the median I talked about last week might allow us to get around the problem.

A big part of understanding the issues with ENO comes down to how the method works. The approximation is made by hierarchically selecting the smoothest approximation (in some sense) at each order, working one’s way up to high order. In this way the selection of the second-order term depends on the first-order term, the third-order term depends on the selected second-order term, the fourth-order term on the third-order one, and so on. This makes the ENO method sensitive to small variations in the data, and prey to pathological data. For example, the data could be chosen so that the linearly unstable stencil is preferentially selected, leading to nasty solutions. The safety of the relatively smooth and dissipative method makes up for these dangers. Still, these problems resulted in the creation of the weighted ENO (WENO) methods, which have basically replaced ENO for all intents and purposes.
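
A toy illustration of the hierarchical selection (schematic only; production ENO schemes work with divided differences of fluxes or primitive functions and then build the reconstruction on the chosen stencil): at each order the stencil grows by one cell to whichever side shows the smaller difference.

```python
import numpy as np

def eno_stencil(u, j, order):
    """Choose an ENO stencil of `order` cells containing cell j by hierarchically
    extending it one cell at a time toward the side with the smaller (smoother)
    finite difference. Returns the leftmost cell index of the chosen stencil.
    Assumes j is far enough from the boundaries that every candidate exists."""
    left = j                                              # current stencil: u[left : left + k]
    for k in range(1, order):
        d_left = np.diff(u[left - 1 : left + k], n=k)[0]  # k-th difference if extended left
        d_right = np.diff(u[left : left + k + 1], n=k)[0] # k-th difference if extended right
        if abs(d_left) < abs(d_right):
            left -= 1
    return left

# On either side of a jump the selected stencil stays on its own smooth side
# rather than differencing across the discontinuity.
x = np.linspace(0.0, 1.0, 21)
u = np.where(x < 0.5, 0.0, 1.0)                           # step between cells 9 and 10
print(eno_stencil(u, 9, 3), eno_stencil(u, 10, 3))        # -> 7 10
```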

WENO gets rid of the biggest issues with ENO by making the approximation much less sensitive to small variations in the data, and by biasing the selection toward more numerically stable approximations. Moreover, the solution to these issues allows an even higher-order approximation to be used where the solution is smooth enough to allow it. Part of the issue is the nature of ENO’s approach to approximation. ENO is focused on diminishing the dissipation at extrema, but systematically creates lower-fidelity approximations everywhere else. If one looks at TVD approximations, the “limiters” have a lot of room for non-oscillatory solutions where the solution is locally monotone. This is part of the power of these methods, allowing great fidelity in regions away from extrema. By not capitalizing on this, the ENO methods failed to build successfully on the TVD foundation. WENO methods come closer, but still produce far too much dissipation and too little fidelity to fully replace monotonicity preservation in practical calculations. Nonetheless, ENO and WENO methods are pragmatically and empirically known to be nonlinearly stable methods.
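
For concreteness, here is a sketch of the classic fifth-order WENO edge reconstruction in the style of Jiang and Shu (my transcription of the standard coefficients; verify against the paper before relying on it): three third-order candidate values are blended with nonlinear weights built from smoothness indicators, reverting to the ideal weights (1/10, 6/10, 3/10) where the data is smooth.

```python
import numpy as np

def weno5_edge(v, eps=1e-6):
    """Fifth-order WENO reconstruction of the left state at the j+1/2 edge from
    five cell averages v = (v[j-2], v[j-1], v[j], v[j+1], v[j+2]).
    Transcribed from the Jiang-Shu construction; check the coefficients before reuse."""
    vm2, vm1, v0, vp1, vp2 = v

    # Third-order candidate reconstructions on the three substencils.
    p0 = (2.0 * vm2 - 7.0 * vm1 + 11.0 * v0) / 6.0
    p1 = (-vm1 + 5.0 * v0 + 2.0 * vp1) / 6.0
    p2 = (2.0 * v0 + 5.0 * vp1 - vp2) / 6.0

    # Smoothness indicators: large where a substencil sees rough data.
    b0 = 13.0 / 12.0 * (vm2 - 2.0 * vm1 + v0) ** 2 + 0.25 * (vm2 - 4.0 * vm1 + 3.0 * v0) ** 2
    b1 = 13.0 / 12.0 * (vm1 - 2.0 * v0 + vp1) ** 2 + 0.25 * (vm1 - vp1) ** 2
    b2 = 13.0 / 12.0 * (v0 - 2.0 * vp1 + vp2) ** 2 + 0.25 * (3.0 * v0 - 4.0 * vp1 + vp2) ** 2

    # Nonlinear weights: ideal weights penalized on rough substencils, then normalized.
    a = np.array([0.1, 0.6, 0.3]) / (eps + np.array([b0, b1, b2])) ** 2
    w = a / a.sum()
    return w @ np.array([p0, p1, p2])

print(weno5_edge(np.array([1.0, 2.0, 3.0, 4.0, 5.0])))  # smooth (linear) data: the exact edge value 3.5
print(weno5_edge(np.array([0.0, 0.0, 0.0, 1.0, 1.0])))  # step: nearly all the weight shifts to the smooth upstream substencil (result ~ 0)
```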

Given the lack of success of the second generation of methods, we have a great need for a third generation of nonlinear methods that might displace the TVD methods. Part of the issue with the ENO methods is the lack of a strict set of constraints to guide their development. While WENO methods also lack such constraints, they do have a framework that allows for design through the construction of the smoothness sensors, which help determine the weights of the scheme. It allows for a lot of creativity, but the basic framework still leads to too much loss of fidelity compared to TVD methods. When one measures the accuracy of real solutions to practical problems, WENO still has difficulty being competitive with TVD. Don’t be fooled by comparisons that show WENO being vastly better than TVD; in those comparisons the TVD method chosen is absolute shit. A good TVD method is very competitive, so the comparison is frankly disingenuous. For this reason something better is needed or progress will continue to stall.

My philosophy is to use the first two generations of nonlinear methods as the foundation. The median function is a means of combining their properties to get the best of both! This approach was taken in my 2007 paper with Jeff Greenough and Jim Kamm. As I stated then, the objective is to get the best of both worlds (TVD/monotonicity-preserving and ENO/WENO). Today I will explore whether we can go even further.

The median comes in as a selection mechanism that allows beneficial properties to be built into methods. The three properties we will focus upon are:

  • Numerical accuracy, both resolution/fidelity and order of accuracy
  • Linear stability, or L2-energy stability
  • Nonlinear stability, L1, variation or non-oscillatory

If these properties can be brought together in a provable manner, new methods can be crafted that may replace TVD methods in practical codes: accurate, high-fidelity, stable, non-oscillatory methods that produce better solutions efficiently.

The approach to take in producing better methods is to start as before with TVD methods, U(j,n+1) = U(j,n) - \nu \left(U(j+1/2,n) - U(j-1/2,n) \right) , defining U(j+1/2,n) = U(j,n) + \delta U(j,n) , where this edge value is defined to be high-order accurate. If we can define inequalities that limit the values \delta U(j,n) may take, we can have productive methods. This is very challenging because we cannot have purely positive coefficients if we want high-order accuracy, and we want to proceed beyond monotonicity preservation. The issue that continues to resist strict analysis is determining how large the negativity of the coefficients may be without creating serious problems.

Satisfaction is the biggest hurdle to success

― Pushkar Saraf

The approach that I am advocating was described in last week’s post, where the median function was used to “breed” new high-order approximations with appropriate linear and nonlinear stability properties. The “seed” for any given scheme will be a monotonicity-preserving/TVD method acting as the selector for the median. A good reason to use these methods is their nonlinear stability properties. Remember the way the median works: at least two of the three arguments must have a property in order to assure that the evaluation of the function retains that property.

Dealing with this serious deficiency in a rigorous manner is extremely difficult. One philosophy for dealing with it is to ignore the problem and introduce other mechanisms to make the solution robust. For example, use a positivity limiter approach to keep positive definite values positive during the integration of the equation. These limiters are devilishly simple and drop into this numerical framework almost seamlessly. The limiter takes a minimum acceptable value U_{\min} and modifies the approximation to remove violations of that value in the evolution. The form is U(j+1/2,n) := U(j,n) + \theta\left( U(j+1/2,n) - U(j,n) \right) with \theta = \min \left( 1, \frac{ U_{\min} - U(j,n) }{ U(j+1/2,n) - U(j,n) } \right) . The really awesome aspect of this limiter is its preservation of the accuracy of the original approximation. It could be extended to the enforcement of other bounds without difficulty.
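
In code the scaling is nearly a one-liner. The sketch below (mine, in the spirit of the Zhang-Shu limiters cited at the end) applies the \theta formula from the paragraph above only where the edge value actually violates the bound, leaving everything else untouched, which is what preserves the accuracy of the original reconstruction.

```python
import numpy as np

def limit_edge_for_minimum(u_cell, u_edge, u_min=0.0):
    """Pull a high-order edge value back toward its cell value just far enough to
    respect the lower bound u_min:
        u_edge := u_cell + theta * (u_edge - u_cell),
        theta   = min(1, (u_min - u_cell) / (u_edge - u_cell))  where u_edge < u_min.
    Edge values that already satisfy the bound are left alone."""
    d = u_edge - u_cell
    theta = np.where(u_edge < u_min, (u_min - u_cell) / np.where(d != 0.0, d, 1.0), 1.0)
    return u_cell + np.clip(theta, 0.0, 1.0) * d

# A hypothetical reconstruction overshoots below zero next to a steep gradient; the limiter clips it.
u_cell = np.array([1.0, 0.4, 0.05, 0.0])
u_edge = np.array([1.1, 0.5, -0.02, 0.0])
print(limit_edge_for_minimum(u_cell, u_edge))  # ~[1.1, 0.5, 0.0, 0.0]: only the violating edge is modified
```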

The various extrema-preserving limiters of Suresh and Huynh, of Colella and Sekora, or from myself along with Greenough and Kamm may be employed to good effect to keep the integration fully nonlinearly stable while allowing high-order accuracy. In every case these “limiters” or strategies are used to provide accurate approximations throughout the flow field being solved. This is in contrast to the previous limiter, where a non-local bound is enforced. By being more local, the problem is more difficult and accuracy is more likely to be disturbed. The approaches taken before all work to detect extrema in the solution and produce approximations that preserve valid extrema without producing new, invalid ones. A trap these approaches may fall into is providing only uniformly second-order accurate solutions, an improvement over the first-order accuracy at extrema that is standard with monotonicity preservation, but no more. The problem with these approaches is their lack of the rigorous, provable bounds and properties that helped drive the success of TVD methods. We will end by discussing an approach that might go to arbitrarily high order without inducing extreme dissipation in the bulk of the solution.

This approach is based on the properties of the median discussed last week. We are going to gang up on bad approximations with the median, using two approximations that share a desirable property to make sure any bad property is removed. A really cool thing is getting rid of two bad approximations in one fell swoop. Take the following example: argument one is low-order but linearly and nonlinearly stable (upwind perhaps, or TVD); the second is linearly unstable (nonlinearly too), but high-order; and the third is both high-order and nonlinearly stable. A median evaluation will produce a new approximation that is both nonlinearly stable and high-order, even if the evaluation does not return the original argument carrying that property. This is a really powerful tool for getting what we are looking for!
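
The median of three values is cheap to compute without sorting, and the key inheritance property is easy to check numerically: the result always lies in the interval spanned by any pair of the arguments, so a bound (or any interval-type property) shared by two of the three is automatically passed on. A small sketch, with names of my choosing:

```python
import numpy as np

def median3(a, b, c):
    """Median of three values; works elementwise on arrays."""
    return np.maximum(np.minimum(a, b), np.minimum(np.maximum(a, b), c))

# Bracketing check: the median lies inside the interval spanned by EVERY pair of
# its arguments, which is why a property held by two of the three is inherited.
rng = np.random.default_rng(0)
a, b, c = rng.normal(size=(3, 100_000))
m = median3(a, b, c)
for x, y in [(a, b), (a, c), (b, c)]:
    assert np.all((m >= np.minimum(x, y)) & (m <= np.maximum(x, y)))
print("median3 stays within the interval of every argument pair")
```

The same function can also be written as a + \mbox{minmod}(b-a, c-a) , which is the form that usually shows up in the limiter literature.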

More typically, we will focus on a single property in the evaluation of the median. In structuring and evaluating the tests one should carefully look at what common properties can be provably produced through any given test. The key is to use the evaluations to build up approximations with uniformly desirable properties. I gave several examples last week, and I will close with a new example that brings us closer to the goals set forth for a new generation of methods.

Let’s lay out the building blocks for our approximation scheme.

  1. Have a TVD method, either a “classic” one, or a monotonicity-preserving high-order scheme using the stencil described next.
  2. Have an extra high-order method using a full stencil. This is analogous to the fifth-order WENO. Use a linear combination of the three third-order stencils to get a linear fifth-order scheme.
  3. One might also evaluate the fifth-order WENO method.
  4. Finally have the three base third-order stencils recognizing that one of them is the “classic” ENO scheme.
  5. Take inventory. You have three nonlinearly stable stencils, two fifth-order stencils, and three third-order stencils. You also have three or four linearly stable stencils.

Now let the median magic happen right in front of you! Could one orchestrate the evaluations to give you a fifth-order, linearly and nonlinearly stable approximation? Can you really have it all? The answer, dear readers, is yes.

Here is how one might take on this challenge and systematically produce the desired result. Start with the playoff system discussed last week. This would use three median tests of the TVD scheme and the three third-order stencils. One of the third-order stencils is linearly unstable, another is the classic ENO stencil (they might be the same one, or they might not; such is the nature of nonlinear stability), and U_{3,2} is the third-order upstream-centered scheme, and upstream-centered is known to be very desirable. At the end of the three tests we will have a third-order scheme that is linearly and nonlinearly stable. Don’t take my word for it; let’s take it all apart. The semifinals consist of two evaluations, U_{3,A} = \mbox{median}\left(U_{3,1}, U_{3,2}, U_{2,M}\right) and U_{3,B} = \mbox{median}\left(U_{3,2}, U_{3,3}, U_{2,M}\right) . Both U_{3,A} and U_{3,B} are third-order accurate; one of them is also nonlinearly stable to the extent that the ENO approximation would be, and another one, U_{3,B} in this case, is linearly stable. Now we have the finals for step one, U_{3,C} = \mbox{median}\left(U_{3,A}, U_{3,B}, U_{2,M}\right) , which is nonlinearly stable and third-order, but linear stability isn’t automatic; it comes from choosing a seed U_{2,M} that is itself linearly and nonlinearly stable. For example, the first-order method is linearly and nonlinearly stable, and the second-order monotone Fromm scheme is linearly and nonlinearly stable. Either choice of seed gives the desired outcome: a guaranteed third-order, linearly and nonlinearly stable scheme.

Now we can use a single additional test to promote things to a fifth-order version of the same. Our final test is U_{5,LN} = \mbox{median}\left(U_{3,C}, U_{5,L}, U_{5,N}\right) . Here the third-order approximation has both desired stability properties, the fifth-order linear approximation (upstream-centered) is linearly stable, and the fifth-order WENO is nonlinearly stable. The final evaluation has it all: fifth order! Linear stability! Nonlinear stability! All using the magical, mystical median!
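
Written out as code, the whole bracket is only a few lines. The sketch below is purely schematic (the argument names are mine, and each input would be an edge value computed from the same five-cell window, for instance by the routines sketched earlier): it simply chains the median evaluations from the last two paragraphs.

```python
import numpy as np

def median3(a, b, c):
    return np.maximum(np.minimum(a, b), np.minimum(np.maximum(a, b), c))

def playoff_edge(u2_m, u3_1, u3_2, u3_3, u5_l, u5_n):
    """Median 'playoff' combining candidate edge values (all names schematic):
      u2_m       -- monotonicity-preserving/TVD edge value, the seed and selector
      u3_1..u3_3 -- the three third-order substencil values (one linearly unstable,
                    one the ENO pick, u3_2 the upstream-centered, linearly stable one)
      u5_l       -- the linear fifth-order upstream-centered value
      u5_n       -- the fifth-order WENO value
    Each median keeps any property shared by at least two of its arguments."""
    u3_a = median3(u3_1, u3_2, u2_m)   # semifinal 1: third-order survives
    u3_b = median3(u3_2, u3_3, u2_m)   # semifinal 2: third-order survives
    u3_c = median3(u3_a, u3_b, u2_m)   # final of step one: third-order, stable (given a stable seed)
    return median3(u3_c, u5_l, u5_n)   # promotion: fifth-order where the data allows it

# Near a discontinuity the linear fifth-order value overshoots; the playoff falls
# back toward the stable candidates instead.
print(playoff_edge(u2_m=1.0, u3_1=1.02, u3_2=0.98, u3_3=1.35, u5_l=1.4, u5_n=1.01))  # -> 1.01
```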

Cherish those who seek the truth but beware of those who find it.

― Voltaire

There is still a lot to do. A few things stand out on the way to more optimal methods. For example, is it a good idea to first test whether the high-order approximation you’d like to use violates monotonicity, and if it doesn’t, dispense with the whole survey of the other choices? What gives a better answer? In previous work this shortcut can yield big savings. In addition, the major issues with the monotone methods appear at extrema, where almost all of the methods degrade to first-order accuracy. Getting safe, high-fidelity approximations at extrema benefits from the sort of things done in the set of papers by myself and others trying to extend limiters. The extent to which new extrema would be created manifests itself as negative coefficients, instead of the positive definite coefficients of monotonicity-preserving approaches, and the allowable magnitude of that negativity remains undefined by rigorous analysis. Similarly, if the approximations are steeper than non-oscillation would allow in monotone regions of the flow, that too manifests itself as negative coefficients. The magnitude of the negativity and the size of the oscillations are difficult to pin down. So far they do not appear to match the expectation for ENO schemes of being O(h^n) , where n is the order of the method, which might be as high as one likes. The evidence appears to go in the opposite direction insofar as WENO goes: the very high-order WENO methods, higher than seventh order, appear to have larger oscillations and generic robustness issues.

What I will say is that the community of scientists building shock capturing methods does not, as a rule, measure the accuracy of the methods they work on under practical circumstances. Typically accuracy is only measured on problems where the design order of accuracy may be achieved. This characterizes none of the problems we develop such methods to solve! NONE! ZERO! NADA!

Thinking something does not make it true. Wanting something does not make it real.

― Michelle Hodkin

If we want continued improvement of methods, we need to measure our progress! We need to get into the business of measuring the overall efficiency of the methods. The goal of producing a new generation of methods to replace the very successful first-generation nonlinear methods should be focused on innovation with a clear purpose: better solutions for less effort, deeply nonlinear stability, high resolution, and a reliable, robust framework allowing the method to adapt to the problem at hand. Today we avoid this like the proverbial plague and it serves no one but a small cadre of insiders. It serves them poorly by maintaining an increasingly antiquated status quo, which only stands in the way of actual transformative progress.

While I’m dishing out dollops of cruel reality, I should acknowledge that for practical problems there is a certain futility in seeking formally high-order accurate solutions. Almost everything necessary to model reality conspires against us. At every turn we are confronted with a brutal reality that offers us rough and nearly singular behavior. To some extent the first-generation methods reflect this brutish reality, offering a pragmatic methodology. These methods simply reject any attempt at accuracy if the solution is too under-resolved or rough. Perhaps we must accept a degree of imperfection in the face of this truth?

There are two ways to get enough. One is to continue to accumulate more and more. The other is to desire less.

― G.K. Chesterton

The attempts to produce methods that robustly allow extrema in the solution to be evolved with less error effectively extend this principled submission to difficulty. They offer rather involved analysis of the extrema including explicit limits on their structure. The problem is that these methods have rather distinct limits on the accuracy that may be achieved with them. Maybe this is really OK? For too long we have pursued methods whose design and energy is driven by problems that are largely meaningless idealizations of the reality we should focus on. We might actually progress more effectively by accepting the intrinsic limitations of our models, and produce pragmatic improvements that manage the reality of their solution.

We all die. The goal isn’t to live forever, the goal is to create something that will.

― Chuck Palahniuk

References

Harten, Ami. “High resolution schemes for hyperbolic conservation laws.” Journal of Computational Physics 49.3 (1983): 357-393. Also: Harten, Ami. “High resolution schemes for hyperbolic conservation laws.” Journal of Computational Physics 135.2 (1997): 260-278. (See also Harten, Ami. “On a class of high resolution total-variation-stable finite-difference schemes.” SIAM Journal on Numerical Analysis 21.1 (1984): 1-23.)

Sweby, Peter K. “High resolution schemes using flux limiters for hyperbolic conservation laws.” SIAM Journal on Numerical Analysis 21.5 (1984): 995-1011.

Harten, Ami, Bjorn Engquist, Stanley Osher, and Sukumar R. Chakravarthy. “Uniformly high order accurate essentially non-oscillatory schemes, III.” Upwind and High-Resolution Schemes. Springer Berlin Heidelberg, 1987. 218-290. (See also Harten, Ami, Bjorn Engquist, Stanley Osher, and Sukumar R. Chakravarthy. “Uniformly high order accurate essentially non-oscillatory schemes, III.” Journal of Computational Physics 131, no. 1 (1997): 3-47.)

Harten, Ami, and Stanley Osher. “Uniformly high-order accurate nonoscillatory schemes. I.” SIAM Journal on Numerical Analysis 24.2 (1987): 279-309.

Fjordholm, Ulrik S., Siddhartha Mishra, and Eitan Tadmor. “Arbitrarily high-order accurate entropy stable essentially nonoscillatory schemes for systems of conservation laws.” SIAM Journal on Numerical Analysis 50.2 (2012): 544-573.

Fjordholm, Ulrik S., Siddhartha Mishra, and Eitan Tadmor. “ENO reconstruction and ENO interpolation are stable.” Foundations of Computational Mathematics 13.2 (2013): 139-159.

Jiang, Guang-Shan, and Chi-Wang Shu. “Efficient Implementation of Weighted ENO Schemes.” Journal of Computational Physics 126 (1996): 202-228.

Suresh, A., and H. T. Huynh. “Accurate monotonicity-preserving schemes with Runge–Kutta time stepping.” Journal of Computational Physics 136.1 (1997): 83-99.

Balsara, Dinshaw S., and Chi-Wang Shu. “Monotonicity preserving weighted essentially non-oscillatory schemes with increasingly high order of accuracy.” Journal of Computational Physics 160.2 (2000): 405-452.

Zhang, Xiangxiong, and Chi-Wang Shu. “Maximum-principle-satisfying and positivity-preserving high-order schemes for conservation laws: survey and new developments.” Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. Vol. 467. No. 2134. The Royal Society, 2011.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity- and extrema-preserving methods through adaptive nonlinear hybridizations.” Journal of Computational Physics 225.2 (2007): 1827-1848.

Colella, Phillip, and Michael D. Sekora. “A limiter for PPM that preserves accuracy at smooth extrema.” Journal of Computational Physics 227.15 (2008): 7069-7076. (see also Sekora, Michael, and Phillip Colella. “Extremum-preserving limiters for MUSCL and PPM.” arXiv preprint arXiv:0903.4200 (2009).)
