To understand a science it is necessary to know its history.

― Auguste Comte

After monotonicity-preserving methods came along and revolutionized the numerical solution of hyperbolic conservation laws, people began pursuing follow-on breakthroughs. So far nothing has appeared that qualifies as a real breakthrough, although progress has been made. There are some very good reasons for this, and understanding them helps us see how and where progress might be made. As I noted several weeks ago in the blog post about Total Variation Diminishing methods, the breakthrough with monotonicity-preserving methods came in several stages. **The methods were invented by practitioners who were solving difficult practical problems. This process drove the innovation in the methods**. Once the methods received significant notice as a breakthrough, the mathematics came along to bring the methodology rigor and explanation. The math produced a series of wonderful connections to theory that gave the results legitimacy, and the theory also connected the methods to the earlier methods dominating the codes at that time. People became very confident in the methods once mathematical theory was present to provide structural explanations. With essentially non-oscillatory (ENO) methods, the math came first. This is the very heart of the problem.

Later I will elaborate on some of the technical challenges with ENO methods, but their first problem was related to their origin. Real progress is made by solving difficult problems in impossible ways. The methods preceding ENO were created to deal with real problems that could not be successfully solved. **The innovation arose to solve the problems, not to create better methods. The solution of the problems was enabled by better methods. This is key**. Solving the problem is the thing to focus on, without prejudice toward the means. Today’s research tends to define the means of progress *a priori*, which results in an unnatural process. In addition, we need to be open to a multitude of means to a solution. Progress and breakthroughs often come via serendipity and from unexpected places. ENO was a solution looking for a problem. This is why it hasn’t met with the level of success we had hoped for.

As I noted, the monotonicity-preserving methods came along first, and total variation theory followed to make them feel rigorous and tie them to solid mathematical expectations. Before this the monotonicity-preserving methods felt sort of magical and unreliable. The math solidified the hold of these methods and allowed people to trust the results they were seeing. With ENO, the math came first, with a specific mathematical intent expressed by the methods. **The methods were not created to solve hard problems, although they had some advantages for certain hard problems. This created a number of issues that these methods could not overcome**. First and foremost was fragility, followed by a lack of genuine efficacy. The methods tended to fail when confronted with real problems, and they didn’t give better results for the same cost. More deeply, the methods didn’t have the pedigree of doing something amazing that no one had seen before. ENO methods had no pull.

A bit of a deeper dive is needed here. Originally, the monotone methods were low accuracy but exceedingly reliable (monotonicity is the property of producing physical solutions without unphysical artifacts, i.e., oscillations). These low-order methods had their own artifacts: extreme dissipation that rendered solutions to every problem essentially laminar and unenergetic. These solutions did not replicate what we see in nature. Conversely, high-accuracy methods came with oscillations and unreliability. To solve real problems with high-order methods, seemingly ad hoc devices like artificial viscosity were needed to provide greater reliability. **Innovation came along and produced a solution where you could blend the high-order methods with the original monotone low-order methods in an adaptive manner. All of a sudden you could get reliability along with most of the accuracy. Most importantly, the complex energetic flows seen in nature could be simulated practically**. Flows that are turbulent suddenly looked and acted turbulent. The results were regarded as almost magical. This magic caught people’s attention and drove almost complete adoption of these methods by the community.
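The adaptive blend can be made concrete for the simplest model problem, linear advection. Below is a minimal sketch, not any particular production scheme: a first-order upwind flux (monotone but diffusive) blended with a Lax-Wendroff flux (second order but oscillatory) through a minmod-type limiter. The function names and the small epsilon guard in the ratio are illustrative choices of mine, not from any of the referenced papers.

```python
import numpy as np

def hybrid_flux(u, c):
    """Interface values at i+1/2 for u_t + a u_x = 0 with a > 0 on a
    periodic grid; c = a*dt/dx is the Courant number."""
    up = np.roll(u, -1)                        # u[i+1]
    um = np.roll(u, 1)                         # u[i-1]
    f_lo = u                                   # first-order upwind: monotone, diffusive
    f_hi = u + 0.5 * (1.0 - c) * (up - u)      # Lax-Wendroff: 2nd order, oscillatory
    r = (u - um) / (up - u + 1e-30)            # gradient ratio (guarded against 0/0)
    phi = np.maximum(0.0, np.minimum(1.0, r))  # minmod limiter: depends on u itself
    return f_lo + phi * (f_hi - f_lo)          # adaptive blend of the two fluxes

def step(u, c):
    """One conservative update on a periodic grid."""
    f = hybrid_flux(u, c)
    return u - c * (f - np.roll(f, 1))
```

With `phi = 0` everywhere this is the reliable-but-smeared monotone scheme; with `phi = 1` it is Lax-Wendroff and rings at discontinuities; the solution-dependent blend keeps sharp fronts without creating new extrema.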

Don’t mistake activity with achievement.

― John Wooden

Only after the community’s interest came along did mathematical rigor join the fray. I’ll note that the preceding state of affairs had good mathematical grounding itself, providing the foundation for progress. **Most notably, the barrier theorem of Godunov provided a clear challenge that the innovators needed to overcome. Godunov’s theorem told us that a linear second-order method cannot be monotone (non-oscillatory)**. The key to overcoming the theorem was to move to nonlinear second-order methods, where the discrete representation is a function of the solution itself. The new mathematics tied admissibility conditions for solutions together with the new nonlinear methods. We overcame existing mathematical limits by changing the rules, and tied ourselves to modest and minimal requirements for the validity of the results.
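The nonlinearity is easy to see in the simplest limiter. Even though the underlying PDE is linear, the reconstruction below depends on the signs and magnitudes of the local solution differences, which is precisely how these schemes step around Godunov’s linear barrier. A minimal sketch (the function names are illustrative):

```python
def minmod(a, b):
    """Pick the smaller-magnitude difference when the two agree in sign;
    drop to zero (first order) at a local extremum. The output depends
    on the solution itself, so the resulting scheme is nonlinear."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_slope(u, i):
    """Limited slope for a second-order reconstruction in cell i."""
    return minmod(u[i] - u[i - 1], u[i + 1] - u[i])
```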

ENO methods were devised to move the methods ahead. ENO took the adaptive discrete representation to new heights. Aside from the “adaptive” aspect, the new method was a radical departure from those that preceded it. The math itself was mostly notional and fuzzy, lacking a firm connection to the preceding work. **If you had invested in TVD methods, the basic machinery you used had to be completely overhauled for ENO. The method also came with very few guarantees of success**. Finally, it was expensive and suffered from numerous frailties. It was a postulated exploration of interesting ideas, but in the mathematical frame, not the application frame. Its development also happened at the time when applied mathematics began to abandon applications in favor of a more abstract and remote connection via packaged software.
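The heart of ENO is its adaptive stencil: starting from a single cell, the stencil grows one cell at a time toward whichever side looks smoother, as measured by differences of the data. A rough sketch of that selection logic, assuming point values on a uniform grid with no boundary handling (the function name is mine, and undivided differences stand in for divided differences since the grid is uniform):

```python
def eno_stencil(u, i, k):
    """Return the leftmost cell index of the k-cell ENO stencil around
    cell i. The stencil starts as {i} and extends left or right based on
    which candidate has the smaller difference of the current order."""
    left = i
    dd = list(u)  # zeroth-order differences are just the data
    for order in range(1, k):
        # differences of the next order; dd[j] now spans cells j..j+order
        dd = [dd[j + 1] - dd[j] for j in range(len(dd) - 1)]
        # extend the stencil toward the side with the smaller difference
        if abs(dd[left - 1]) < abs(dd[left]):
            left -= 1
    return left
```

Note that when the two candidate differences are nearly tied, an infinitesimal perturbation of the data can flip the comparison and switch the entire stencil, so the discrete operator can change discontinuously with the solution.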

80% of results come from 20% of effort/time

― Vilfredo Pareto

Over time, the intrinsic weaknesses of ENO were exposed, and the methods have certainly improved. The adaptive stencil selection in the original ENO could produce genuinely pathological results, including instabilities. **The answer to this issue has canonically been provided by weighted ENO (WENO) methods, which were constructed to be intrinsically numerically stable**. WENO also provided another benefit, albeit only partially: if a solution is sufficiently smooth locally, the domain of dependence for the discrete representation can support a higher-order method, and WENO automatically selects it. This addressed another shortcoming of ENO, the wastefulness of the method’s adaptivity in places where it was unnecessary. The original ENO could also exhibit extreme sensitivity to small changes in the solution: an infinitesimal change in the solution can result in a completely different discrete method. WENO cured this issue. **Nonetheless, WENO was not a complete answer, because of its intrinsic expense and its modification of the high-order stencil where linear and nonlinear stability did not require it**. Robustness of solutions could be compromised by unphysical states (often negative densities, pressures, or energies). New limiters were devised to provide protection from these problems and improved the methods. In spite of all this progress, for difficult problems WENO was still less accurate and more expensive than high-quality second-order methods.
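The WENO mechanics can be sketched concretely. The classic fifth-order scheme of Jiang and Shu combines three third-order substencils using weights built from smoothness indicators; where the data are smooth, the weights collapse to the fixed “optimal” linear weights and the full fifth-order stencil is recovered. A minimal left-biased scalar reconstruction on a uniform grid (the function name and the value of `eps` are illustrative):

```python
def weno5_left(f, eps=1e-6):
    """Fifth-order WENO reconstruction at the right face of the middle
    cell from five values f[0..4], using the Jiang-Shu smoothness
    indicators and nonlinear weights."""
    fm2, fm1, f0, fp1, fp2 = f
    # candidate third-order reconstructions on the three substencils
    q0 = ( 2*fm2 - 7*fm1 + 11*f0 ) / 6.0
    q1 = (  -fm1 + 5*f0  +  2*fp1) / 6.0
    q2 = ( 2*f0  + 5*fp1 -   fp2 ) / 6.0
    # smoothness indicators: large where a substencil crosses rough data
    b0 = 13/12*(fm2 - 2*fm1 + f0 )**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0  + fp1)**2 + 0.25*(fm1 - fp1)**2
    b2 = 13/12*(f0  - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
    # nonlinear weights; (0.1, 0.6, 0.3) are the optimal linear weights
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    return (a0*q0 + a1*q1 + a2*q2) / s
```

On smooth data all three indicators agree and the three candidates blend at full order; near a discontinuity the rough substencils are weighted out, and the reconstruction leans almost entirely on the smooth one.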

ENO and WENO methods were advantageous for a narrow class of problems, usually those with a great deal of fine-scale structure. **At the same time, they were not a significant (or any) improvement over the second-order accurate methods that dominate the production codes for the broadest class of important application problems**. It’s reasonable to ask what might have been done differently to produce a more effective outcome. One of the things that hurt the broader adoption of ENO and WENO methods is the increasingly impenetrable codes in which large modifications are nearly impossible, as we create a new generation of legacy codes (retaining the code base).

Part of the adoption of the monotonicity-preserving methods was the quantum leap in solution quality. This may not be achievable with other methods, ENO and WENO included. Part of the quantum leap derived from where the methods came from: innovative application solutions. Part was simply incredibly valuable low-hanging fruit harvested in the process of invention. A second part of the rapid adoption was a firm tie to the past, where a hybridization with legacy methods could produce a fantastically more powerful method. **ENO and WENO broke from this connection and were expressed as completely different methods that can’t be melded in. On the other hand, if ENO had started as a more incremental evolution from TVD methods, it could have been implemented as an extension of an existing code**. This would have made the success of the methods more inevitable than difficult. Perhaps backing away from the path we have been on, and seeking a method that steps incrementally forward, could stir real progress in methods.

Stark truth is seldom met with open arms.

― Justin K. McFarlane Beau

Harten, Ami, Bjorn Engquist, Stanley Osher, and Sukumar R. Chakravarthy. “Uniformly high order accurate essentially non-oscillatory schemes, III.” In *Upwind and high-resolution schemes*, pp. 218-290. Springer, Berlin, Heidelberg, 1987.

Shu, Chi-Wang. “Numerical experiments on the accuracy of ENO and modified ENO schemes.” *Journal of Scientific Computing* 5, no. 2 (1990): 127-149.

Liu, Xu-Dong, Stanley Osher, and Tony Chan. “Weighted essentially non-oscillatory schemes.” *Journal of Computational Physics* 115, no. 1 (1994): 200-212.

Jiang, Guang-Shan, and Chi-Wang Shu. “Efficient implementation of weighted ENO schemes.” *Journal of Computational Physics* 126, no. 1 (1996): 202-228.

Rider, William J., and Len G. Margolin. “Simple modifications of monotonicity-preserving limiter.” *Journal of Computational Physics* 174, no. 1 (2001): 473-488.

Zhang, Xiangxiong, and Chi-Wang Shu. “Maximum-principle-satisfying and positivity-preserving high-order schemes for conservation laws: survey and new developments.” In *Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences*, vol. 467, no. 2134, pp. 2752-2776. The Royal Society, 2011.

Greenough, J. A., and W. J. Rider. “A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov.” *Journal of Computational Physics* 196, no. 1 (2004): 259-281.