Every solution to every problem is simple. It’s the distance between the two where the mystery lies.

― Derek Landy

The *median* is what a lot of people think of when you say “household income”. It is a statistical measure of the central tendency of data, and a harder-to-compute alternative to the friendly, common mean. For large data sets the *median* is tedious and difficult to compute while the mean is straightforward and easy: the *median* is the middle entry of the ordered list of numerical data, so the data being studied must first be ordered, which is hard and expensive to perform, while the mean requires only a sum. **On the other hand, by almost any rational measure, the median is better than the mean.** For one thing, the *median* is not easily corrupted by outliers in the data (data that is inconsistent or corrupted); it quite effectively ignores them, while the same outliers immediately and completely corrupt the mean. The *median* is also strongly associated with the one-norm, which is completely awesome and magical. This connection is explored through the amazing and useful field of compressed sensing, which uses the one-norm to do some really sweet (cool, awesome, spectacular, …) stuff.

About 15 years ago I found myself quite taken with a mathematical tool associated with constructing “limiters” for solving hyperbolic conservation laws numerically. **This mathematical tool is the “median” function for three arguments. It simply returns the value that lies in the middle of the triad.** It is simple and powerful, but hasn’t really caught on more generally; it remains a boutique method rather than one in common use. That’s a true shame, because the *median* is pretty fucking awesome. Given the *median’s* power and utility in building numerical methods, I find its lack of “popularity” puzzling. Since I like it so much, I will wax a bit on its greatness and what I’ve done with it.

It’s easy to attack and destroy an act of creation. It’s a lot more difficult to perform one.

― Chuck Palahniuk

I came across the function and its use with limiters reading the work of HT Huynh. HT works at NASA in Cleveland and has done some really great work over the years. The *median* function made its first appearance in work on Hermite polynomial interpolation (used a lot with tabular lookups). **He wrote several of my favorite papers on solving hyperbolic PDEs: first on slope-limited second-order schemes (a real gem covering a wealth of material and presenting it in a wonderful manner; it deserves many more citations!) and a second with Suresh on limiting edge values (another tour de force!).** The *median* function appears prominently in both and allows some cool stuff to be done in a very clear and literal fashion. Like the *median* function, HT’s work is significantly under-appreciated.

If you want something new, you have to stop doing something old

― Peter F. Drucker

History is filled with brilliant people who wanted to fix things and just made them worse.

― Chuck Palahniuk

The function itself is quite elegant. If you’re in the business of limiters, you can implement it directly from another function you certainly have on hand, the *minmod* (minimum modulus) function. Jay Boris invented the *minmod* as part of the flux-corrected transport method (arguably the first limiter). It returns the argument with the minimum magnitude if the arguments have the same sign, and zero otherwise. **It can be implemented in several ways of varying elegance.** The classic way of implementing *minmod* is through conditional logic on the signs of its arguments. HT Huynh introduced a really cool way to write *minmod* that is much better, since you can use it with Taylor series expansions straightaway: $latex \mbox{minmod}(x,y) = \frac{1}{2}\left(\mbox{sgn}(x)+\mbox{sgn}(y)\right)\min(|x|,|y|)$.
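As a concrete sketch (in Python; the function names are my own), the two implementations might look like:

```python
def sign(x):
    """Sign of x: -1, 0, or +1."""
    return (x > 0) - (x < 0)

def minmod_classic(x, y):
    """Classic conditional implementation: the argument of minimum
    magnitude when the signs agree, zero otherwise."""
    if x > 0 and y > 0:
        return min(x, y)
    if x < 0 and y < 0:
        return max(x, y)
    return 0.0

def minmod(x, y):
    """Algebraic (branch-free) form, a single expression that can be
    carried through a Taylor-series analysis directly."""
    return 0.5 * (sign(x) + sign(y)) * min(abs(x), abs(y))
```

The two forms agree for all inputs; the algebraic one is simply easier to manipulate analytically.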

Once the *minmod* is in hand, the *median* is a piece of cake to implement, $latex \mbox{median}(x,y,z) = x + \mbox{minmod}(y-x, z-x)$. **In papers by Huynh and by Suresh and Huynh, the median is used to enforce bounds. This mindset is perfectly in keeping with the thinking about monotonicity-preserving methods, but the function has other properties that can take it much further.** A lack of recognition of these other properties may limit its penetration into wider use. In a general sense it is used to define the limits that an approximate value can take while producing a monotonicity-preserving answer. The median repeatedly brings an approximate value inside well-defined bounding values, although for monotonicity it requires just two applications. Huynh first applied this to linearly interpolated second-order methods, where the median can be used to implement the classical TVD or slope limiters. Suresh and Huynh took the same approach to higher-order accuracy by limiting edge value/flux approximations that might have an arbitrary order of accuracy, integrated via a method-of-lines approach, using many further applications of the *median* in the process.
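A minimal sketch in Python, reusing the algebraic minmod (the identity median(x, y, z) = x + minmod(y − x, z − x) is the one given above):

```python
def sign(x):
    return (x > 0) - (x < 0)

def minmod(x, y):
    return 0.5 * (sign(x) + sign(y)) * min(abs(x), abs(y))

def median(x, y, z):
    """Middle value of the triad, via median(x,y,z) = x + minmod(y-x, z-x)."""
    return x + minmod(y - x, z - x)
```

Called as `median(lo, u, hi)` it clips `u` into the interval `[lo, hi]`, which is exactly the bound-enforcement role described above.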

Do not get obsolete like an old technology, keep innovating yourself.

― Sukant Ratnakar

**Once the median function is in hand, it may be used to write limiters in what I think is a rather elegant fashion.** For example, take the standard monotonicity or TVD limiter for a “good” second-order method based on Fromm’s scheme, which can be written in a relatively hideous fashion as $latex \Delta u_i = \frac{1}{2}\left(\mbox{sgn}(\Delta_-)+\mbox{sgn}(\Delta_+)\right)\min\left(\left|\Delta u_i^F\right|, 2|\Delta_-|, 2|\Delta_+|\right)$. Here $latex \Delta u_i^F = \frac{1}{2}(\Delta_- + \Delta_+)$ is the unlimited slope for the linear Fromm scheme, with $latex \Delta_- = u_i - u_{i-1}$ and $latex \Delta_+ = u_{i+1} - u_i$ being the negative and positive differences of the variables on the mesh. With the *median* we can write the limiter in two compact statements, $latex s = 2\,\mbox{minmod}(\Delta_-, \Delta_+)$, $latex \Delta u_i = \mbox{median}(0, \Delta u_i^F, s)$. More importantly, as Huynh shows, this format allows significant and profitable extensions of the approach leading to higher-accuracy approximations.
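The two-statement limiter is a one-screen function; a sketch (Python, with the helper functions repeated so the snippet stands alone):

```python
def sign(x):
    return (x > 0) - (x < 0)

def minmod(x, y):
    return 0.5 * (sign(x) + sign(y)) * min(abs(x), abs(y))

def median(x, y, z):
    return x + minmod(y - x, z - x)

def fromm_limited_slope(u_m, u_c, u_p):
    """TVD-limited slope for a Fromm-type scheme on cells (i-1, i, i+1)."""
    d_minus, d_plus = u_c - u_m, u_p - u_c
    s = 2.0 * minmod(d_minus, d_plus)                 # statement 1: TVD bound
    return median(0.0, 0.5 * (d_minus + d_plus), s)   # statement 2: limit the Fromm slope
```

On smooth monotone data the unlimited Fromm slope is returned; at an extremum the slope collapses to zero, as a monotonicity-preserving limiter should.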

**It is the extensibility that may be exploited as a platform to innovate limiters and remove some of their detrimental qualities**.

The same basic recipe of bounding allows the parabolic limiter for the PPM method to be expressed quite concisely and clearly. The classic version of the PPM limiter involves “if” statements and seems rather unintuitive. **With the median it is very clear and concise.** The first step simply assures that the chosen (high-order) edge values, $latex u_{i-1/2}$ and $latex u_{i+1/2}$, are enclosed by the neighboring mesh values, $latex u_{i-1/2} = \mbox{median}(u_{i-1}, u_{i-1/2}, u_i)$ and $latex u_{i+1/2} = \mbox{median}(u_i, u_{i+1/2}, u_{i+1})$. The second step assures that the parabola does not contain a zero derivative (no local minimum or maximum) inside the cell, $latex u_{i-1/2} = \mbox{median}(u_i, u_{i-1/2}, 3u_i - 2u_{i+1/2})$ and $latex u_{i+1/2} = \mbox{median}(u_i, u_{i+1/2}, 3u_i - 2u_{i-1/2})$. With this version of PPM there is significant liberty in defining the initial left and right edge values by whatever formula, accuracy and nature you might like. This makes PPM an immensely flexible and powerful method (the original PPM is very powerful to begin with).
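A sketch of the two-step PPM edge limiting in Python (my helper names; the median is the one defined earlier in the post):

```python
def sign(x):
    return (x > 0) - (x < 0)

def minmod(x, y):
    return 0.5 * (sign(x) + sign(y)) * min(abs(x), abs(y))

def median(x, y, z):
    return x + minmod(y - x, z - x)

def ppm_limit(u_m, u_c, u_p, uL, uR):
    """Limit the left/right edge values (uL, uR) of cell i given the
    cell averages u_{i-1}=u_m, u_i=u_c, u_{i+1}=u_p."""
    # Step 1: enclose each edge value by its neighboring cell averages.
    uL = median(u_m, uL, u_c)
    uR = median(u_c, uR, u_p)
    # Step 2: forbid a zero derivative of the parabola inside the cell.
    uL = median(u_c, uL, 3.0 * u_c - 2.0 * uR)
    uR = median(u_c, uR, 3.0 * u_c - 2.0 * uL)
    return uL, uR
```

Well-behaved high-order edge values pass through untouched; an overshooting edge value is pulled back to the cell average, reproducing the flattening behavior of the classic if-based limiter.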

The *median* preserves accuracy and allows one to create new nonlinearly accurate approximations. Huynh delivered the median along with a theorem regarding its accuracy properties. **If two of the three arguments in the median function have a certain order of accuracy of approximation, then the evaluated median function will have that order of accuracy.** The proof is simple and intuitive. If one of the values with the defined order of accuracy is returned, there isn’t a question; only when the returned value does not have that order of accuracy a priori is its accuracy in doubt. But by being bounded by the two values with well-defined order of accuracy, the *median* value is a convex combination of the two accurate values, which locally means it has the same order of accuracy. Voilà! This means the *median* has the ability to “breed” new high-order approximations from existing high-order expressions. If you’re interested in producing well-behaved approximations that are also high-order, it is a super useful property.
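The accuracy-inheritance property is easy to check numerically. A sketch (Python; for simplicity I use point values of sin rather than cell averages, and the two third-order values are midpoint interpolations from quadratics, my own construction):

```python
import math

def sign(x):
    return (x > 0) - (x < 0)

def minmod(x, y):
    return 0.5 * (sign(x) + sign(y)) * min(abs(x), abs(y))

def median(x, y, z):
    return x + minmod(y - x, z - x)

def median_edge_error(h, x=1.0):
    """Error at x + h/2 of median(1st-order value, two 3rd-order values)."""
    u = math.sin
    um1, u0, up1, up2 = u(x - h), u(x), u(x + h), u(x + 2 * h)
    a = (-um1 + 6 * u0 + 3 * up1) / 8.0   # 3rd-order: quadratic through x-h, x, x+h
    b = (3 * u0 + 6 * up1 - up2) / 8.0    # 3rd-order: quadratic through x, x+h, x+2h
    m = median(u0, a, b)                  # u0 alone is only 1st-order accurate
    return abs(m - u(x + 0.5 * h))

# Halving h should cut the error by roughly 2**3 = 8, despite the
# first-order argument in the median.
ratio = median_edge_error(0.1) / median_edge_error(0.05)
```

Two of the three arguments are third-order accurate, so the theorem says the median is too; the observed convergence ratio near 8 confirms it.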

Here is a supposition that extends the accuracy property of the median to make it more powerful. The *median* can also “breed” new nonlinearly stable approximations. This would work as follows: **if two of the arguments are stable, the evaluation of the median would produce a stable value even if the argument returned is linearly unstable.** Thus we can take various stable approximations and, through the median, use them to render a linearly unstable approximation stable. This should work exactly like the order-of-accuracy argument: the evaluated median produces a value that is a convex combination of the two well-defined stable approximations. If this property can be proven, then all you need to do is keep track of the properties as you work in defining your method. If you are careful you can end up with provable methods having well-defined accuracy and stability even when some parts of the method have neither to begin with. This is the essence of a nonlinear method: it does something a linear method cannot.

This led to a concept for constructing nonlinear schemes that I originally defined as a “playoff” system. **The most accurate and stable approximation would be the champion of the playoffs and would inherit the properties of the set of schemes used to construct the playoff.** This works as long as the playoff is properly orchestrated to produce stable and high-order approximations in a careful manner, with the *median* used to settle each competition. I had discussed this as a way to produce better ENO-like schemes. Perhaps I should have pushed the biological analogy in the definition of the scheme, in terms of what sort of offspring each decision point in the scheme breeds. Here the *median* is used to breed the schemes and produce more fit offspring. The goal at the end of the scheme is to produce an approximation with a well-chosen, provable order of accuracy and stability.

The method I show here was developed with a goal in mind: produce an ENO(-like) method with all of the stability, but with better accuracy and fidelity. It would have the same order of accuracy as ENO methods, but would produce solutions of greater fidelity. **One of the big issues with ENO (and WENO) methods is that they are relatively dissipative as well as being complex.** For applied problems with shocks and discontinuous features, a well-designed second-order method will produce a much better, higher-fidelity solution than an ENO method. WENO is more competitive, but still not better than a method such as PPM until the solution has an immense amount of high-frequency structure. The goal is to produce ENO methods that inherit the sort of fidelity that second-order methods provide, but also deliver bona fide high-order accuracy.

Here is a relatively simple example of the sort of scheme I suggested and how the “playoff” system would work. **In this case, the goal is to produce a third-order approximation with well-defined stability properties.** We start with three third-order approximations, $latex U_{3,1}$, $latex U_{3,2}$ and $latex U_{3,3}$ (all written at the right edge of cell $latex i$), plus a second-order monotone approximation $latex U_M$. For example, one might use the second-order *minmod*-based scheme, $latex U_M = u_i + \frac{1}{2}\mbox{minmod}(\Delta_-, \Delta_+)$. One of the three third-order methods is the upstream-centered scheme, which has favorable properties from an accuracy and stability point of view ($latex U_{3,2} = \frac{1}{6}\left(-u_{i-1} + 5 u_i + 2 u_{i+1}\right)$ for this example).

The second thing to note is that one of the three third-order approximations is the nonlinearly stable classical ENO approximation (we don’t know which one, and don’t need to either, and that is cool). The playoff consists of two rounds and three applications of the *median*. The semi-finals consist of two choices, $latex U_{3,A} = \mbox{median}(U_M, U_{3,1}, U_{3,2})$ and $latex U_{3,B} = \mbox{median}(U_M, U_{3,2}, U_{3,3})$. Both $latex U_{3,A}$ and $latex U_{3,B}$ are third-order accurate, and one of them is also nonlinearly stable to the extent that the ENO approximation is (the second stable scheme in each semi-final is $latex U_M$). The final is then defined by the test $latex U_{3,C} = \mbox{median}(U_M, U_{3,A}, U_{3,B})$, and produces a third-order and nonlinearly stable approximation. This is a direct consequence of the median function’s use. **Two of the arguments are third-order, and two of the arguments are nonlinearly stable, so the result of the playoff final is an approximation that gives us both.** The results from my paper strongly indicate that this approach is a success, and it can be used to construct methods of much higher-order accuracy.
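A sketch of this playoff for the right-edge value (Python; the stencil coefficients are the standard third-order ENO/WENO reconstructions from cell averages, and the monotone referee is the minmod scheme, so treat the names as my own):

```python
def sign(x):
    return (x > 0) - (x < 0)

def minmod(x, y):
    return 0.5 * (sign(x) + sign(y)) * min(abs(x), abs(y))

def median(x, y, z):
    return x + minmod(y - x, z - x)

def playoff_edge(um2, um1, u0, up1, up2):
    """Third-order, nonlinearly stable right-edge value u_{i+1/2} from
    five cell averages via two median 'semi-finals' and a 'final'."""
    uM  = u0 + 0.5 * minmod(u0 - um1, up1 - u0)   # 2nd-order monotone referee
    u31 = (2 * um2 - 7 * um1 + 11 * u0) / 6.0     # stencil {i-2, i-1, i}
    u32 = (-um1 + 5 * u0 + 2 * up1) / 6.0         # stencil {i-1, i, i+1}, upstream-centered
    u33 = (2 * u0 + 5 * up1 - up2) / 6.0          # stencil {i, i+1, i+2}
    uA = median(uM, u31, u32)   # semi-final 1
    uB = median(uM, u32, u33)   # semi-final 2
    return median(uM, uA, uB)   # final
```

On smooth data all three stencils agree to third order and the playoff returns that value; near a jump the monotone referee pulls the result back inside bounds.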

If one were feeling more bold about solving problems of this ilk, one might throw the fifth-order upstream-centered approximation into the mix. This fifth-order approximation is the asymptotic limit of the fifth-order WENO method when the solution is (very) smooth, and is a linear combination of the three third-order approximations, $latex U_5 = \frac{1}{60}\left(2 u_{i-2} - 13 u_{i-1} + 47 u_i + 27 u_{i+1} - 3 u_{i+2}\right)$. **Make the fifth-order approximation the returning champion and have it play the winner of the playoff system in a final match.** This would look like $latex U_F = \mbox{median}(U_M, U_{3,C}, U_5)$. It would still only be third-order accurate in a formal manner, but would have better accuracy where the solution is well resolved. The other thing that is intrinsically different from the classic ENO approach is the guidance given by the choice of $latex U_M$. To a very large extent the result of the playoff is determined by this choice, and the result inherits the properties of this method. There is a wealth of monotonicity-preserving second-order methods to choose from in determining the fitness of the high-order approximations.
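The returning-champion round might look like this (Python sketch; the fifth-order coefficients are the standard upstream-centered ones, and `uC` stands for the playoff winner computed as above):

```python
def sign(x):
    return (x > 0) - (x < 0)

def minmod(x, y):
    return 0.5 * (sign(x) + sign(y)) * min(abs(x), abs(y))

def median(x, y, z):
    return x + minmod(y - x, z - x)

def final_match(um2, um1, u0, up1, up2, uC):
    """Fifth-order champion plays the playoff winner uC, with the
    2nd-order monotone value acting as the stable referee."""
    u5 = (2 * um2 - 13 * um1 + 47 * u0 + 27 * up1 - 3 * up2) / 60.0
    uM = u0 + 0.5 * minmod(u0 - um1, up1 - u0)
    return median(uM, uC, u5)
```

On well-resolved data the fifth-order value wins; elsewhere the third-order stable winner (or the monotone referee) prevails.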

ENO methods traditionally use a hierarchical stencil selection approach that introduces serious practical problems. Their weighted version (WENO) is a far better method, both because it is higher order and because it has better stability properties. The high-order nature of these schemes can be analyzed via the elegant continuous nature of the weights. The largest issue with both methods is their great cost, whether in floating point or in logical expression evaluation. **Using the median function can provide a route analogous to the playoff scheme above to produce an ENO method where the prospect is that the chosen scheme is the one closest to some “comparison” method chosen from the wealth of monotonicity-preserving methods.** This ENO scheme would then inherit the basic properties of the base scheme, but also have the high-order nature of ENO along with being non-oscillatory. A close relative of the *median* function is the *xmedian*, which delivers whichever argument is closer, in an absolute-value sense, to the comparison value. In the form $latex \mbox{xmedian}(a, b, c)$, the function will provide either $latex b$ or $latex c$ depending on their distance from $latex a$. The function is written down at the end of the post along with all the other functions used (I came up with these to allow the analysis of this class of methods via modified equation analysis).
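In code the xmedian is a one-liner (Python; breaking ties toward the second argument is my choice):

```python
def xmedian(a, b, c):
    """Return whichever of b, c is closer to the comparison value a
    in an absolute-value sense; b wins ties."""
    return b if abs(b - a) <= abs(c - a) else c
```

Note the contrast with the median: the xmedian always returns one of its last two arguments verbatim, never the comparison value itself.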

Here is a third-order ENO-like method based on either the *median* or *xmedian* procedure. **Both methods produce a bona fide method with third-order accuracy (confirmed via numerical tests).** The method starts with the selection of the comparison scheme, $latex U_c$. This scheme may be first- or second-order accurate, but needs to have strong nonlinear stability properties. For the third-order (or fourth-order) method, the scheme has two rounds and three total tests (i.e., semi-finals and finals). The semi-finals are $latex U_{3,A} = \mbox{xmedian}(U_c, U_{3,1}, U_{3,2})$ and $latex U_{3,B} = \mbox{xmedian}(U_c, U_{3,2}, U_{3,3})$. The finals are then easy to predict from the previous method, $latex U_{3,C} = \mbox{xmedian}\left( U_c, U_{3,A}, U_{3,B} \right)$. With the properties of the *xmedian* we can see that the scheme will select one of the third-order methods. The issue is that this playoff does not produce a selection that is obviously stable using the logic from earlier in the post. The question is whether the non-oscillatory method chosen by its relative closeness to the comparison scheme is itself nonlinearly stable.
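A sketch of the xmedian version of the playoff (Python; the stencils are the same as in the median playoff, with the minmod scheme as the comparison value, so again the names are mine):

```python
def sign(x):
    return (x > 0) - (x < 0)

def minmod(x, y):
    return 0.5 * (sign(x) + sign(y)) * min(abs(x), abs(y))

def xmedian(a, b, c):
    return b if abs(b - a) <= abs(c - a) else c

def eno3_xmedian_edge(um2, um1, u0, up1, up2):
    """ENO-like third-order right-edge value: select the third-order
    stencil value closest to a monotone comparison value."""
    uc  = u0 + 0.5 * minmod(u0 - um1, up1 - u0)   # comparison scheme
    u31 = (2 * um2 - 7 * um1 + 11 * u0) / 6.0     # stencil {i-2, i-1, i}
    u32 = (-um1 + 5 * u0 + 2 * up1) / 6.0         # stencil {i-1, i, i+1}
    u33 = (2 * u0 + 5 * up1 - up2) / 6.0          # stencil {i, i+1, i+2}
    u3A = xmedian(uc, u31, u32)   # semi-final 1
    u3B = xmedian(uc, u32, u33)   # semi-final 2
    return xmedian(uc, u3A, u3B)  # final: always one of the three stencil values
```

Unlike the median playoff, the result here is always one of the third-order stencil values, which is exactly the ENO-style selection described in the text.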

The main question is the nature of the stability. The proposition is that the comparison scheme, drawn from the category of nonlinearly stable, monotonicity-preserving methods, endows the result with stability. With the *median* there is a very strong argument for stability; for the *xmedian* this argument is absent. Stability in both cases is borne out through numerical testing, but theoretical support does not seem to have an obvious path. **I note that ENO and WENO methods don’t have a deep stability result aside from the notion that classic ENO will produce an entropy-stable flux** (while WENO does not). Practical success would seem to favor WENO to a very large degree, as it has been applied to a vast number of practical cases.

If you’re not prepared to be wrong, you’ll never come up with anything original.

― Ken Robinson

The use of the *median* and related functions begs some questions desperately in need of answers. There is a broad and deep understanding of what numerical stability is and implies for linear schemes, but the concept of nonlinear stability needs deeper thought. **The archetypes of nonlinear stability are TVD/monotonicity-preserving schemes and ENO or WENO methods.** The nature of these methods allows the local use of intrinsically unstable approximations without the usual path to catastrophe this implies. Where is the dividing line between a method being non-oscillatory (nonlinearly stable) and not? Using the *median* or its brother the *xmedian*, I can come up with a host of methods that perform in a fashion implying they are nonlinearly stable. Can we come up with a technically defensible definition that can be used to take these methods even further?

The possession of knowledge does not kill the sense of wonder and mystery. There is always more mystery.

― Anaïs Nin

**References**

Huynh, Hung T. “Accurate monotone cubic interpolation.” *SIAM Journal on Numerical Analysis* 30.1 (1993): 57-100.

Huynh, Hung T. “Accurate upwind methods for the Euler equations.” *SIAM Journal on Numerical Analysis* 32.5 (1995): 1565-1619.

Suresh, A., and H. T. Huynh. “Accurate monotonicity-preserving schemes with Runge–Kutta time stepping.” *Journal of Computational Physics* 136.1 (1997): 83-99.

Boris, Jay P., and David L. Book. “Flux-corrected transport. I. SHASTA, A fluid transport algorithm that works.” *Journal of Computational Physics* 11.1 (1973): 38-69. (Kuzmin, D., R. Löhner, and S. Turek, eds. *Flux-Corrected Transport: Principles, Algorithms, and Applications. Scientific Computation*. Springer, 2005.)

Balsara, Dinshaw S., and Chi-Wang Shu. “Monotonicity preserving weighted essentially non-oscillatory schemes with increasingly high order of accuracy.” *Journal of Computational Physics* 160.2 (2000): 405-452.

Colella, Phillip, and Paul R. Woodward. “The piecewise parabolic method (PPM) for gas-dynamical simulations.” *Journal of Computational Physics* 54.1 (1984): 174-201.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity-and extrema-preserving methods through adaptive nonlinear hybridizations.” *Journal of Computational Physics* 225.2 (2007): 1827-1848.

Harten, A., Engquist, B., Osher, S., & Chakravarthy, S. R. (1997). Uniformly high order accurate essentially non-oscillatory schemes, III. *Journal of Computational Physics*, *131*(1), 3-47. (Harten, Ami, et al. “Uniformly high order accurate essentially non-oscillatory schemes, III.” *Upwind and High-Resolution Schemes*. Springer Berlin Heidelberg, 1987. 218-290.)

Liu, Xu-Dong, Stanley Osher, and Tony Chan. “Weighted essentially non-oscillatory schemes.” *Journal of Computational Physics* 115.1 (1994): 200-212.

Shu, Chi-Wang. *Essentially non-oscillatory and weighted essentially non-oscillatory schemes for hyperbolic conservation laws*. Springer Berlin Heidelberg, 1998.

Henrick, Andrew K., Tariq D. Aslam, and Joseph M. Powers. “Mapped weighted essentially non-oscillatory schemes: achieving optimal order near critical points.” *Journal of Computational Physics* 207.2 (2005): 542-567.

Greenough, J. A., and W. J. Rider. “A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov.” *Journal of Computational Physics* 196.1 (2004): 259-281.

Rider, William J. “Building Better (Weighted) ENO Methods.” *Computational Fluid Dynamics 2006*. Springer Berlin Heidelberg, 2009. 161-166.

Fjordholm, Ulrik S., Siddhartha Mishra, and Eitan Tadmor. “Arbitrarily high-order accurate entropy stable essentially nonoscillatory schemes for systems of conservation laws.” *SIAM Journal on Numerical Analysis* 50.2 (2012): 544-573.

**Algebraically defined functions:**
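Consistent algebraic (branch-free) forms for the three functions used in the post, suitable for modified equation analysis, are as follows (the *xmedian* expression is my reconstruction matching the verbal definition above):

```latex
\mbox{minmod}(x, y) = \tfrac{1}{2}\left(\mbox{sgn}(x) + \mbox{sgn}(y)\right)\min\left(|x|, |y|\right)

\mbox{median}(x, y, z) = x + \mbox{minmod}(y - x,\; z - x)

\mbox{xmedian}(a, b, c) = \tfrac{1}{2}(b + c) + \tfrac{1}{2}\,\mbox{sgn}\!\left(|c - a| - |b - a|\right)(b - c)
```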
