# The Primal Nature of Hyperbolic Conservation Laws

Your assumptions are your windows on the world. Scrub them off every once in a while, or the light won’t come in.

― Isaac Asimov

In science, models are central to practice. Modeling is paired with observation as our abstraction for understanding the World around us. Models need to be both descriptive and tractable for examining nature, and these two aspects can be in direct conflict. Observation under natural or controlled circumstances provides the core of scientific knowledge, and observation becomes science when we provide a systematic explanation for what we see. More often than not this explanation is mathematical in character. Among our mathematical devices, differential equations are among the most powerful tools. In their most basic form these equations are rate-of-change laws for some observable in the World. Most crudely, these rate equations can be empirical vehicles for putting observations into a form useful for prediction, design and optimization. A more fundamental form is partial differential equations (PDEs), which describe the underlying physics more expansively. It is important to consider the consequences of the model forms we use. Several important categories of models are intrinsically unphysical in some aspects, highlighting George Box’s aphorism that “essentially, all models are wrong”!

Assumptions are the most damaging enemies of our mind’s equilibrium…An assumption is an imaginary truth.

― A.A. Alebraheem

Partial differential equations come in three basic flavors: hyperbolic, parabolic and elliptic. These flavors describe the basic character of the equations, and they differ fundamentally in how the equations are solved, how they are understood as mathematical objects and, more importantly, their physical context. The core of this essay is physical in nature, and the point is that only hyperbolic equations are primal in physics. That is, at the most basic level everything we might describe as a physical law is hyperbolic in character. This is for a simple and very good reason: the principle of causality, meaning cause and effect, the flow of time, and the presence of a cosmic speed limit. If we adhere to these maxims, the conclusion is utterly obvious. Other forms of PDEs produce instantaneous global effects that violate this principle. This in no way implies that parabolic or elliptic models are not incredibly useful; they are. Their utility and other properties outweigh the issues with causality violations.

More on that point soon, but first a bit of digression on the other forms of PDEs. The classical elliptic equation is Laplace’s equation, $\partial_{xx} u + \partial_{yy} u = 0$. Elliptic equations are the simplest form and often describe physics where spatial terms are in equilibrium and there are no temporal rate terms. Elliptic equations can include time, but this usually implies something so deeply unphysical as to be outlawed outright. If time enters elliptically, the past is determined by the future; since we know that time flows in one direction, this is deeply and fundamentally unphysical. In other uses, elliptic PDEs arise through ignoring the temporal terms. This is a philosophical violation of the second law of thermodynamics, which can be used to establish the arrow of time. In this sense we find that elliptic equations are an asymptotic simplification of more fundamental laws. Another implication of ellipticity is an infinite speed of information, or more correctly an absence of time. If elliptic equations are found within a set of equations, we can be absolutely sure that some physics has been chosen to be ignored. In many cases the ignored physics is not important and some benefit is achieved through the simplification. On the other hand, we shouldn’t lose sight of what has been done and its potential for mischief. At some point this mischief will become relevant and disqualifying.
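The global, instantaneous character of elliptic problems is explicit in their solution formulas. For Laplace’s equation on the unit disk with boundary data $g$, the standard Poisson integral representation is

$$u(r,\theta) = \frac{1}{2\pi}\int_0^{2\pi} \frac{1-r^2}{1-2r\cos(\theta-\phi)+r^2}\, g(\phi)\, d\phi,$$

so every boundary value influences every interior point at once; there is no notion of a signal traveling at any finite speed.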

Assumptions aren’t facts; they’re opportunities for research and testing.

― Laurie Buchanan

Next along the way we have parabolic equations, and we can repeat the above discussion. Most classically, the equation of heat transfer is parabolic (along with other diffusion processes). The classical form is the heat equation, $\partial_{t} u - \partial_{xx} u = 0$. We often learn that these diffusion processes are fundamental, leading to the second law of thermodynamics. This comes with a deep problem that we should acknowledge: the parabolic equations imply an infinite propagation speed. Physically, diffusion is a rather discrete process associated with collisions among the particles that make up materials, or with discrete effects in solids (where electrons are particles that move, exchange and interact). This physical process is utterly bound by finite speeds of propagation.
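The infinite propagation speed is easy to exhibit: the exact point-source solution of the heat equation is a Gaussian kernel that is strictly positive everywhere for any positive time, however small. A minimal Python sketch (illustrative values only):

```python
import math

# Exact point-source (Green's function) solution of u_t = u_xx:
#   u(x, t) = exp(-x^2 / (4 t)) / sqrt(4 pi t)
def heat_kernel(x, t):
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

# Immediately after release, the signal is already nonzero far away:
u = heat_kernel(10.0, 0.5)   # ten units from the source, a short time later
print(u > 0.0)               # True: a formally infinite propagation speed
```

The value is astronomically small, but it is not zero, and no finite-speed physical process can reproduce that.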

With elliptic equations the strength of a signal is unabated in time, but with parabolic equations the signal diminishes in time. As such the sin of causality violation isn’t quite so profound, but it is a sin nonetheless. As before, we get parabolic equations by ignoring physics. Usually this is valid based on the time and length scales of interest, but we need to remember that at some point this ignorance will damage the ability to model. We are making simplifications that are not always justified. This point is lost quite often: people are allowed to think that elliptic or parabolic equations are fundamental when they are not.

We now get to the third category of PDEs, the hyperbolic kind. The simplest form is the wave equation, $\partial_{tt} u - \partial_{xx} u = 0$. This can be written as a system of first-order PDEs, $\partial_{t} u + \partial_{x} v = 0$ and $\partial_{t} v + \partial_{x} u = 0$. We can derive the wave equation by differentiating the first equation in time and the second in space, then substituting to eliminate $v$. The propriety of these steps depends on the variables being continuously differentiable, i.e., smooth. The first-order form is the entry point to the beautiful mathematics of hyperbolic conservation laws. As we will show, the elliptic and parabolic equations are simplifications of hyperbolic equations made by applying certain assumptions.
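By contrast with the heat equation, the first-order wave system carries information at exactly unit speed. A small sketch (hypothetical grid parameters, standard Lax-Friedrichs scheme) shows a pulse staying inside its light cone:

```python
import math

# Lax-Friedrichs update for the first-order wave system
#   u_t + v_x = 0,  v_t + u_x = 0   (characteristic speeds +/- 1)
# on a periodic grid; the grid parameters here are illustrative only.
def step(u, v, lam):
    n = len(u)
    un, vn = u[:], v[:]
    for i in range(n):
        l, r = (i - 1) % n, (i + 1) % n
        un[i] = 0.5 * (u[l] + u[r]) - 0.5 * lam * (v[r] - v[l])
        vn[i] = 0.5 * (v[l] + v[r]) - 0.5 * lam * (u[r] - u[l])
    return un, vn

n, L = 200, 20.0
dx = L / n
dt = 0.4 * dx                                  # CFL number 0.4 < 1
x = [-L / 2 + i * dx for i in range(n)]
u = [math.exp(-xi * xi) for xi in x]           # Gaussian pulse near x = 0
v = [0.0] * n
t = 0.0
for _ in range(50):
    u, v = step(u, v, dt / dx)
    t += dt
# Outside the light cone |x| <= t + (initial pulse width ~4) the solution
# is essentially zero; only a trace of numerical smearing leaks past it.
leak = max(abs(u[i]) for i in range(n) if abs(x[i]) > t + 4.0)
peak = max(abs(ui) for ui in u)
print(peak, leak)   # peak remains O(1); leak is tiny by comparison
```

The pulse splits into left- and right-moving waves of half amplitude, exactly as the characteristic structure of the system dictates.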

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

― Abraham H. Maslow

Diffusion processes are a key example where the presumed basic law of physics is, strictly speaking, wrong. One might consider Fourier’s law to be a fundamental law of physics as applied to heat conduction in parabolic form, $C \partial_t T = \nabla\cdot q$ with $q = k \nabla T$, giving $C \partial_t T = k\,\nabla\cdot\nabla T$. Instead this is a simplification of a more broadly valid law in which heat flows according to a hyperbolic equation. This requires a simple modification of Fourier’s law to $\tau \partial_t q + q = k \nabla T$. For most applications heat flow can be modeled in the parabolic form since the hyperbolic form only matters over very short time and space scales. Still, the more fundamental law is the hyperbolic one, and the classical parabolic form is derived by assuming that certain aspects of the dynamics can be ignored. We must always remember that the standard modeling of diffusion processes has an unphysical aspect baked into the equations.
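Combining the two relations shows where the finite speed comes from. With the sign convention above, take $\partial_t$ of the energy balance and substitute the modified flux law:

$$C\,\partial_t T = \nabla\cdot q, \qquad \tau\,\partial_t q + q = k\,\nabla T \quad\Longrightarrow\quad \tau C\,\partial_{tt} T + C\,\partial_t T = k\,\nabla^2 T.$$

This is a damped (telegraph) wave equation with the finite signal speed $c = \sqrt{k/(\tau C)}$; letting the relaxation time $\tau \to 0$ recovers the parabolic Fourier limit with its infinite speed.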

The hyperbolic character of heat conduction may be more important than it first appears. It is related to a material property called second sound. This property has been measured and is known to be significant under cryogenic conditions where quantum effects are significant. It is also very hard to measure. The leading and rather compelling fact is its relation to the sound speed: second sound is slower than the sound speed. This means its effective time scale is longer than the acoustic time. If this is generally true, then the time scale can’t be ignored under many conditions. The deeper question is how all of this plays with thermodynamics. So much of the grounding of thermodynamics is in an equilibrium setting, and this phenomenon adds a natural and potentially important relaxation time scale.

There is an issue with hyperbolic diffusion that we should acknowledge: this form of the equations can violate the second law of thermodynamics, which underpins the macroscopic dynamics of the universe and the arrow of time. By the same token, the imposition of the second law through a physical process in continuum physics is invariably tied to diffusion. We have thus formed a veritable technical Möbius strip. A question is whether a more fundamental equation can meaningfully violate a less physical law that rests upon its own simplification. This might call the violations of the second law by hyperbolic diffusion into question rather directly! In other words, what would change about the second law of thermodynamics if the diffusion process itself were hyperbolic? Perhaps this is a specific inroad to discussions of non-equilibrium thermodynamics, providing a necessary and distinct framing for a deeper discussion. Clearly an infinite speed of information propagation is unphysical; functionally, the second law could be reformulated to account for temporal effects.

There is nothing so expensive, really, as a big, well-developed, full-bodied preconception.

― E.B. White

The incompressible Navier-Stokes equations are a second primal example where hyperbolic equations are replaced by a parabolic-elliptic system. A starting point is the compressible equations, which are purely hyperbolic without viscosity. Of course, the viscosity could itself be replaced with hyperbolic equations to make compressible flow totally hyperbolic. The incompressible equations are the following: $\partial_t {\bf u} + {\bf u} \cdot\nabla{\bf u} +\nabla p = \nu \nabla^2 {\bf u};\ \nabla\cdot {\bf u} = 0$. Previously, we discussed the replacement of hyperbolic diffusion by parabolic terms. For incompressibility we remove the sound waves analytically. The key to doing this is removing any connection between pressure and density via the divergence-free constraint, $\nabla\cdot {\bf u} = 0$. This also turns mass into a passively advected scalar. This is a useful model for low-speed flows, but the diffusion and the suppressed sound waves both produce infinite speeds of propagation. This violates the principle of causality, of cause and effect. Instead, everywhere is impacted by everything immediately.
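The elliptic character hides in the pressure. Taking the divergence of the momentum equation and using $\nabla\cdot{\bf u} = 0$ (a viscous term, if included, also drops out for constant viscosity) leaves

$$\nabla^2 p = -\nabla\cdot\left({\bf u}\cdot\nabla{\bf u}\right),$$

a Poisson equation: the pressure at every point responds instantaneously to the velocity field everywhere, which is precisely the mechanism by which the sound waves have been removed.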

As noted repeatedly, these infinite speeds are definitely and unabashedly unphysical, and signs that the equations are intrinsically limited in modeling scope. These issues are routinely ignored by most scientists and engineers. The reason is that the assumptions behind the parabolic or elliptic equations are valid for the intended use. This will not always be true, and it should stay in the back of our minds. The message is clear: the equations will become invalid under some conditions; some length or time scale will unveil this invalidity. The question is what these scales are, and have we stumbled upon them yet? More generally, the use of parabolic or elliptic equations produces these unphysical effects as a matter of course, implying that the model equations will lose utility at some point under some conditions. We simply need to guard against this potential and keep it firmly in mind. A complication is the lack of capability to solve the non-standard, fully hyperbolic models and make a complete assessment of model validity. By the same token, these non-standard models are harder to solve and may have deleterious side effects if the full physics is retained.

A very good example of these side effects occurs in compressible flows when the Mach number is small. Solving low-Mach-number flows with compressible codes is terribly inefficient and prone to significant approximation errors. This has a great deal to do with the separation of scales, and as a result the solutions often do not meet expectations. The consequence is a proliferation of “fixes” to compressible flow solvers to remove the difficulties. The odd thing about this issue is that the compressible flow equations are decidedly more physical than the incompressible equations. This implies that the conditioning of the equations, not their physics, is the greatest problem. In addition, modern shock-capturing methods have a discontinuity implied in their construction; a continuous approximation might alleviate some of the problems, but the conditioning issue from the separation of scales remains.

For modeling and numerical work, selecting the less physical parabolic and elliptic equations provides better conditioning, and the conditioning provides a better numerical and analytical basis for the solutions. That the equations are less physical is not commonly appreciated. A broader appreciation might provide impetus for identifying when these differences are significant. Holding unphysical models as sacrosanct is always dangerous. It is important to recognize the limitations of models and to allow ourselves to question them regularly. Even fully hyperbolic models are wrong themselves; this is the very nature of models. By using hyperbolic models, we simply remove one obviously unphysical aspect of a given model. Models are abstractions of reality, not the operating system of the universe. We must never lose sight of this.

Everything must be made as simple as possible. But not simpler.

― Albert Einstein

Courant, Richard, and David Hilbert. Methods of Mathematical Physics [Methoden der mathematischen Physik], Vol. 1. CUP Archive, 1965.

Lax, Peter D. Hyperbolic partial differential equations. Vol. 14. American Mathematical Soc., 2006.

Körner, C., and H. W. Bergmann. “The physical defects of the hyperbolic heat conduction equation.” Applied Physics A 67, no. 4 (1998): 397-401.

Chester, Marvin. “Second sound in solids.” Physical Review 131, no. 5 (1963): 2013.

Christov, C. I., and P. M. Jordan. “Heat conduction paradox involving second-sound propagation in moving media.” Physical Review Letters 94, no. 15 (2005): 154301.

Fefferman, Charles L. “Existence and smoothness of the Navier-Stokes equation.” The millennium prize problems 57 (2006): 67.

Doering, Charles R. “The 3D Navier-Stokes problem.” Annual Review of Fluid Mechanics 41 (2009): 109-128.

# Our Models of Reality are Fundamentally Flawed

… Nature almost surely operates by combining chance with necessity, randomness with determinism…

― Eric Chaisson

On many occasions I’ve noted the tendency of science to see the World through a highly deterministic lens. We do this despite living in a World that includes a large degree of chance and random events. In science we might consider highly deterministic experiments to be well designed and useful. In a sense this is correct, as such experiments confirm our existing theories, themselves grounded heavily in determinism. When we take this attitude into the real World of observing nature or engineered systems, the deterministic attitude runs aground. The natural World and engineered systems rarely behave in a completely deterministic manner. We see varying degrees of non-determinism and chance in how things work. Some of this is the action of humans in a system; some of it is complex initial conditions, or structure that deterministic models ignore. This variability, chance and structure is typically not captured by our modeling, and as such modeling is limited in its utility for understanding reality.

The assumption of an absolute determinism is the essential foundation of every scientific enquiry.

― Max Planck

Determinism holds that the universe has a starting point (the Big Bang?) and that correct formulations of the laws of nature allow the histories of all particles to be traced and predicted into the future. Everything is predictable; the universe functions like clockwork. Where does that leave free will? This is the legacy of Sir Isaac Newton.

Mathematical models of reality are heavily grounded in a deterministic assumption. This grounding is largely the legacy of Newton, whose assumptions were heavily influenced by his religious faith in an almighty God who controlled the universe and determined its outcomes. These beliefs ran headlong into reality in the 20th Century with quantum physics and the need for probabilities in models. The power of non-determinism in the most fundamental laws of physics was undeniable, but at larger scales determinism still rules supreme. We explain that the law of large numbers pushes the laws of physics over into determinism. On the other hand, we have pervasive laws like the second law of thermodynamics that encapsulate the disorder of the World within the deterministic view. Is this sufficient to capture all of non-determinism’s role? I think not. In this sense the work of Newton and 19th Century thought still controls much of science today. Almost every modeling exercise follows determinism as an unspoken underlying assumption, without regard to what we see each day in the real World. The second law of thermodynamics and the power of entropy are not adequate to capture the full span of disorder’s impact on our World. This assumption does untold damage, and it is time to overthrow some aspects of determinism; it has outlived its utility.

We ought to regard the present state of the universe as the effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes. The perfection that the human mind has been able to give to astronomy affords but a feeble outline of such an intelligence.

― Pierre-Simon Laplace

Complex systems and experiments have a great deal of non-determinism in their fundamental behavior and outcomes. Commonly this non-determinism is completely ignored and the system is modeled with a fully deterministic approach (e.g., via the second law). A better assumption is that a combination of deterministic and stochastic effects is present. The stochastic effects are largely ignored today and swept into the deterministic model in a heavy-handed, one-size-fits-all manner. This approach isn’t usually even recognized as a problem because the behavior is assumed to be totally deterministic. The consequence is misattribution: we cannot identify the proper source of the behavior. We are putting physical effects that are non-deterministic into a model that is deterministic. This seriously limits the predictive power of our modeling.

To move forward we should embrace some degree of randomness in the fundamental models we solve. This random response naturally arises from various sources. In our deterministic models, the random response is heavily incorporated in boundary and initial conditions. The initial conditions include things like texture and structure that the standard models homogenize over. Boundary conditions are the means for the model to communicate with the broader world, whose vast complexities are grossly simplified. In reality both the initial and boundary conditions are far more complex than our models currently admit.

The deterministic models we use today attempt to include the entire system without explicitly modeling the non-deterministic aspects. These effects are either absorbed into the deterministic model or end up increasing the uncertainty of the modeling effort. Our efforts could advance significantly by directly modeling the stochastic aspects. This would let us separate the effects that are completely deterministic from those that are random, plus the interaction between the two. We might expect that a model with a proper separation would make the deterministic part lower in uncertainty. Some amount of uncertainty in any of these systems is irreducible, and proper modeling of the non-deterministic parts would capture these effects properly. Instead of being irreducible uncertainty, this aspect would simply be part of the model and part of the result; it would move from being uncertain to being part of the answer. We should not expect modeling non-deterministic dynamics with deterministic models to be the best we can do.
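As a sketch of what directly modeling the stochastic aspects can look like, consider a hypothetical scalar model (made-up coefficients, not drawn from any particular application) that splits the dynamics into deterministic relaxation plus explicit random forcing, integrated with the standard Euler-Maruyama method:

```python
import math
import random

# Hypothetical Ornstein-Uhlenbeck-type SDE (illustrative coefficients):
#   dX = -a (X - m) dt + s dW
# The drift is the deterministic part; the s dW term is the modeled
# stochastic forcing, advanced with the Euler-Maruyama method.
def simulate(x0, a=1.0, m=0.0, s=0.3, dt=0.01, steps=500, rng=None):
    rng = rng or random.Random(0)
    x = x0
    for _ in range(steps):
        x += -a * (x - m) * dt + s * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x

# An ensemble shares the deterministic trend (mean relaxes toward m) while
# the spread is an irreducible, physical part of the answer rather than
# an "uncertainty" to be calibrated away.
rng = random.Random(42)
ens = [simulate(5.0, rng=rng) for _ in range(2000)]
mean = sum(ens) / len(ens)
var = sum((e - mean) ** 2 for e in ens) / len(ens)
print(mean, var)   # mean near m = 0; variance near s^2 / (2 a) = 0.045
```

The point of the sketch is the separation: the deterministic trend and the stochastic spread are both explicit model outputs, not a single "answer" plus unexplained scatter.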

Applying logic to potentially illogical behavior is to construct a house on shifting foundations. The structure will inevitably collapse.

― Stewart Stafford

Another aspect of the complexity that current modeling ignores is the dynamics associated with stochastic phenomena, which are either ignored or lumped whole cloth into the model’s closure. In a real system the stochastic aspects evolve over time, including nonlinear interactions between the deterministic and stochastic parts. When the dynamics are confined to deterministic models, these nonlinearities are ignored or lumped into the deterministic mean field. When models lack the proper connection to the correct dynamics, modeling capability is diminished. The result is greater uncertainty and less explanation of what is happening in nature. From an engineering point of view, the problem is that the ability to explicitly control for the non-deterministic aspects of systems is diminished because their influence on results isn’t directly exposed. If the actual dynamics were exposed, we could work proactively to design better. This is the power of understanding in science: if we understand, we can attempt to mitigate and control the phenomena. Without proper modeling we are effectively flying blind.

Die Quantenmechanik ist sehr achtung-gebietend. Aber eine innere Stimme sagt mir, daß das doch nicht der wahre Jakob ist. Die Theorie liefert viel, aber dem Geheimnis des Alten bringt sie uns kaum näher. Jedenfalls bin ich überzeugt, daß der nicht würfelt.

Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the “old one.” I, at any rate, am convinced that He does not throw dice.

–Albert Einstein, in a letter to Max Born

Today’s modeling paradigm is relentlessly deterministic. We attempt to model an experiment as a single well-determined event even when significant aspects of it are non-deterministic. Effectively the non-deterministic aspects are ignored or misattributed to determinism. A second experiment then appears inconsistent, because it is treated as a different instance instead of the same deterministic case with a different stochastic forcing. If we model the stochastic element of the phenomena directly, we can come to understand its impact. With our current modeling we simply drive a fundamental misunderstanding of what is happening, and we are left with models that have fundamental limitations. None of these issues will be handled by brute force: neither computer power, nor computational accuracy, nor algorithmic efficiency will touch these problems. The answer is centered on modeling and increasing the span of physical phenomena it addresses.

The impediments to changing our modeling are massive. We have a strong tendency to lump all of the non-deterministic effects into constitutive laws and closures, which lend themselves to relatively easy modification. Changing or expanding the governing equations in a code can be utterly daunting and is usually not supported by current funding. The entire enterprise of developing new equations is difficult and risky. Our system today is utterly opposed to anything risky and actively undermines attempts at anything difficult. Our computational science is extremely invested in existing models, and most paths to improvement are routed through them. Increasingly we are invested in the most brutish and painfully naïve path to improvement: investing almost entirely in faster computers. The most painful aspect of this path is its lack of timeliness; the ease of creating faster computers ended with the death of Moore’s law. Getting a faster computer is now extremely expensive and inefficient. Other paths to improvement are not favored, and we have almost forgotten how to do science that way. The capper to this sad tale is the utter inability of these computers to help fix faulty models. We have lost the ability to conduct intellectually rigorous work.

The sort of science needed is enormously risky. I am proposing that we have reached the end of utility for models used for hundreds of years. This is a rather bold assertion on the face of it. On the other hand, the models we are using have a legacy going back to when only analytical solutions, or very crude numerical tools, were available. Now our modeling is dominated by numerical solutions, and by computing from the desktop (or handheld) to supercomputers of unyielding size and complexity. Why should we expect models derived in the 18th and 19th centuries to serve unchanged today? Shouldn’t our modeling advance as much as our solution methods have? Shouldn’t all aspects of modeling and simulation be advancing? The answer is a dismal no.

The reasons for this dismal state of affairs are somewhat understandable. The models defined over the past few centuries long lacked general solutions, and computing offered a path to solution that analytical methods failed to provide. As a result, computing provided useful solutions to models that had been of limited utility for a very long time; models that had been unsolvable for ages are now routinely solved numerically. The numerical work is often done quite poorly, with marginal quality control. Assessment of the quality of numerical work is usually slipshod and casual; the “eyeball” and “view graph” norms rule over quantified uncertainty and error. Most good results using these models are heavily calibrated and lack any true predictive power. In the absence of experiments, we are generally lost and rarely hit the mark. Instead of seeing any of this as shortcomings in the models, we seek to continue using the same models and focus primarily on computing power as a remedy. This is both foolhardy and intellectually empty, if not outright dishonest.

The evidence that our models are inadequate is thus overwhelming. Our response has been to virtually ignore this conclusion. We continue to invest in the same areas that have failed to improve results over a long period. We continue to sell massive computing power as the fix-all remedy for our problems. We fail to recognize that neither computing power nor solution accuracy will cure anything if the fundamental model is flawed. Our fundamental models are flawed, and the routes taken for improving modeling and simulation will not help. If the basic model has flaws, a faster computer, a better method, a more accurate discretization or better scaling will not help. The only cure is to fix or change the model. One of the biggest places where modeling fails is the separation between the deterministic and non-deterministic aspects of our models.

A simple and familiar setting to see how this might help is weather. If we look at any of our models, at any scale, it is obvious that enormous variability and detail are excluded. One of the biggest needs of weather modeling is extreme weather events, which dominate the financial and political consequences of weather. Analogous issues exist in a myriad of other fields where modeling and simulation impact the science. A reasonable supposition is that interactions with the averaged-over and ignored fine-scale structure help produce extreme events when coupled with the large-scale weather. It is well known that large-scale weather phenomena set the stage for, or increase the likelihood of, extreme events. The actual phenomenology of extreme events depends on how the large-scale weather interacts with local detail such as the surface topography.

Analogous phenomena happen in many other fields such as material failure and turbulence. These models are strained under the demands of the modern World, and progress is desperately needed. The needed solutions are not being supported; instead the focus falls on risk-averse and rather pedestrian approaches while eschewing riskier work like model creation. The focus on computing power reflects this intellectual cowardice quite acutely. Our current models are limited by their fundamental structure rather than by solution methods or computing power. Our science programs need to address these challenges credibly by coupling a focus on theory with innovations in experimental science. The challenge is not refining old ideas but allowing ourselves to pursue new ones with sufficient freedom and aggression. Our greatest challenge is not the science, but rather our inability to conceive of solutions in today’s World. This work could be enormously valuable to society as a whole if we could envision it and take the risks necessary to reach success.

The difficulty lies not so much in developing new ideas as in escaping from old ones.

― John Maynard Keynes

# Integrating Modeling and Simulation for Predictive Science

Science is not about making predictions or performing experiments. Science is about explaining.

― Bill Gaede

We would be far better off removing the word “predictive” as a focus for science. If we replaced the emphasis on prediction with a focus on explanation and understanding, our science would improve overnight. The sense that our science must predict carries connotations that are unrelentingly counterproductive to the conduct of science; the side effects of predictivity undermine the scientific method at every turn. The goal of understanding nature and explaining what happens in the natural world is consistent with the conduct of high-quality science. In many respects large swaths of the natural world are unpredictable in highly predictable ways. Our weather is a canonical example: we find the weather to be unpredictable in a bounded manner as time scales become longer. Science focused on understanding and explanation has revealed these truths. Attempting prediction under some circumstances is both foolhardy and technically impossible. As such, prediction needs to be entered into carefully and thoughtfully under well-chosen circumstances. We also need the freedom to find out that we are wrong and incapable of prediction. Ultimately, we need to find the limits on prediction and work to improve or accept them.

“Predictive Science” is mostly just a buzzword; we put it in our proposals to improve the chances of getting funded. A slightly less cynical take treats predictivity as a completely aspirational objective for science. In the context of our current world, we strive for predictive science as a means of confirming our mastery over a scientific subject. In this context the word predictive implies that we understand the science well enough to foresee outcomes. We should also practice some deep humility about what this means. Predictivity is always a limited statement, and these limitations should always be firmly in mind. First, predictions are limited to some subset of what can be measured and fail for other quantities; the question is whether the predictions are correct for what matters. Second, the understanding is always waiting to be disproved by a reality more complex than we realize. Good science is acutely aware of these limitations and actively probes the boundary of our understanding.

In the modern world we constantly have new tools to help expand our understanding of science. Among the most important of these new tools is modeling and simulation. Modeling and simulation is simply an extension of the classical scientific approach. Computers allow us to solve our models in science more generally than classical means. This has increased the importance and role of models in science. Computation lets us envision more complex models with more general solutions. Part of this power comes with substantial responsibility; computational simulations are highly technical and difficult. They come with a host of potential flaws, errors and uncertainties that cloud results and need focused assessment. Getting the science of computation correct, and assessed well enough to play a significant role in the scientific enterprise, requires a broad multidisciplinary approach with substantial rigor. Playing a broad integrating role in predictive science is verification and validation (V&V). In a nutshell, V&V is the scientific method as applied to modeling and simulation. Its outcomes are essential for making any claims regarding how predictive your science is.

Experiment is the sole source of truth. It alone can teach us something new; it alone can give us certainty.

― Henri Poincaré

We can take a moment to articulate the scientific method and then restate it in a modern context using computational simulation. The scientific method involves making hypotheses about the universe and testing those hypotheses against observations of the natural world. One of the key ways to make observations is experiments, where the measurements of reality are controlled and focused to elucidate nature more clearly. These hypotheses or theories usually produce models of reality, which take the form of mathematical statements. These models can be used to make predictions about what an observation will be, which, if borne out, confirm the hypothesis. If the observations are in conflict with the model’s predictions, the hypothesis and model need to be discarded or modified. Over time observations become more accurate, often showing the flaws in models. This usually means a model needs to be refined rather than thrown out. This process is the source of progress in science. In a sense it is a competition between what we observe and how well we observe it, and the quality of our models of reality. Predictions are the crucible where this tension is realized.

The quest for absolute certainty is an immature, if not infantile, trait of thinking.

― Herbert Feigl

One of the best ways to understand how to do predictive science in the context of modeling and simulation is a simple realization: V&V is a methodology that encodes the scientific method into modeling and simulation. All of the content of V&V is assuring that science is being done with a simulation and that we aren’t fooling ourselves. The first half of verification is making sure the implementation of the model and its solution are credible and correct. The second half of verification is estimating the errors in the numerical solution of the model. We need to assess the numerical uncertainty and the degree to which it clouds the model’s solution.
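As a concrete, hedged sketch of the error-estimation half of verification, the snippet below computes the observed order of accuracy from a quantity of interest on three successively refined grids. The function name and the numbers are purely illustrative, not drawn from any particular code.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Estimate the observed order of accuracy p from three grid
    solutions with a constant refinement ratio r (assumes monotone,
    asymptotic convergence of the quantity of interest)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

# Illustrative values: a quantity of interest on coarse, medium, fine grids.
p = observed_order(1.20, 1.05, 1.0125)
print(round(p, 3))  # close to 2 for a second-order method in its asymptotic range
```

If the observed order falls well below the method’s formal order, the solutions are not yet in the asymptotic range and any error estimate built on them should not be trusted.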

Validation is then the structured comparison of the simulated model’s solution with observations. Validation is not something that is completed; rather, it is an assessment of work. At the end of the validation process, evidence has been accumulated as to the state of the model. Is the model consistent with the observations? If the uncertainties in the modeling and simulation process, along with the uncertainties in the observations, permit it, we may conclude that the model is correct enough to be used. In many cases the model is found to be inadequate for the purpose and needs to be modified or changed completely. This process is simply the hypothesis testing so central to the conduct of science.
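A minimal sketch of such a consistency check, assuming a simple two-sigma criterion (one of many possible validation metrics; the numbers are invented for illustration):

```python
def consistent(model, data, u_model, u_data, k=2.0):
    """Simple validation check: is |model - data| within k times the
    root-sum-square of the model and measurement uncertainties?"""
    u_combined = (u_model**2 + u_data**2) ** 0.5
    return abs(model - data) <= k * u_combined

print(consistent(model=10.4, data=10.1, u_model=0.15, u_data=0.19))  # True
print(consistent(model=11.5, data=10.1, u_model=0.15, u_data=0.19))  # False
```

Real validation metrics are more sophisticated, but the structure is the same: the model-data difference is judged against the combined uncertainty, never against zero.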

Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.

― George Box

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V and temperature T of an “ideal” gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules.

― George Box
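Box’s ideal-gas example is easy to make concrete. The sketch below compares the ideal-gas pressure from PV = RT against a van der Waals correction; the a and b coefficients are approximate textbook values for CO2 and should be treated as illustrative assumptions rather than authoritative data.

```python
R = 8.314  # J/(mol K), universal gas constant

def p_ideal(v, t):
    """Ideal gas: P = RT / V, with molar volume v in m^3/mol."""
    return R * t / v

def p_vdw(v, t, a=0.364, b=4.27e-5):
    """Van der Waals: P = RT/(V - b) - a/V^2.  The a, b values here are
    approximate textbook constants for CO2 (assumed for illustration)."""
    return R * t / (v - b) - a / v**2

v, t = 1e-3, 300.0  # one liter per mole at 300 K
print(p_ideal(v, t))  # ~2.49e6 Pa
print(p_vdw(v, t))    # noticeably lower: the model's "wrongness" is measurable
```

For these illustrative numbers the ideal-gas model is off by roughly ten percent, yet for many purposes that is entirely adequate, which is precisely Box’s point.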

The George Box maxim that all models are wrong, but some are useful, is key to the conduct of V&V. It is also central to modeling and simulation’s most important perspective: the constant necessity of improvement. Every model is a mathematical abstraction that has limited capacity for explaining nature. At the same time the model has a utility that may be sufficient for explaining everything we can measure. This does not mean that the model is right, or perfect; it means the model is adequate. The creative tension in science is the narrative arc of refining hypotheses and models of reality, or improving measurements and experiments to more acutely test the models. V&V is a process for achieving this end in computational simulations. Our goal should always be to find inadequacy in models and define the demand for improvement. If we do not have the measurements to demonstrate a model’s incorrectness, the experiments and measurements need to improve. All of this serves progress in science in a clear manner.

The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

― Sir Arthur Stanley Eddington

Let’s take a well-regarded and highly accepted model, the incompressible Navier-Stokes equations. This model is thought to largely contain the proper physics of fluid mechanics, most notably turbulence. Perhaps this is true, although our lack of progress in turbulence might indicate that something is amiss. I will state without doubt that the incompressible Navier-Stokes equations are wrong in some clear and unambiguous ways. The deepest problem with the model is incompressibility itself. Incompressible fluids do not exist, and the form of the mass equation, with its divergence-free velocity field, implies several deeply unphysical things. All materials in the universe are compressible and support sound waves, and the divergence-free constraint denies this truth. Incompressible flow is largely divorced from thermodynamics, and materials are thermodynamic. The system of equations violates causality rather severely: sound waves travel at infinite speed. All of this is true, but at the same time this system of equations is undeniably useful. There are large categories of fluid physics that it explains quite remarkably. Nonetheless the equations are also obviously unphysical. Whether or not this unphysical character is consequential is something people should keep in mind.

It is impossible to trap modern physics into predicting anything with perfect determinism because it deals with probabilities from the outset.

― Arthur Stanley Eddington

In conducting predictive science one of the most important things you can do is make a prediction. While you might start with something where you expect the prediction to be correct (or correct enough), the real learning comes from making predictions that turn out to be wrong. It is wrong predictions that will teach you something. Sometimes the thing you learn is that your measurement or experiment needs to be refined. At other times the wrong prediction can be traced back to the model itself. This is your demand and opportunity to improve the model. Is the difference due to something fundamental in the model’s assumptions? Or is it simply something that can be fixed by adjusting the closure of the model? Too often we view failed predictions as problems when instead they are opportunities to improve the state of affairs. I might posit that even if you succeed with a prediction, it is a call to improvement: either improve the measurement and experiment, or the model. Experiments should set out to show flaws in the models. If this is done, the model needs to be improved. Successful predictions are simply not vehicles for improving scientific knowledge; they tell us we need to do better.

When the number of factors coming into play in a phenomenological complex is too large scientific method in most cases fails. One need only think of the weather, in which case the prediction even for a few days ahead is impossible.

― Albert Einstein

In this context we can view predictions as things that at some level we want to fail at. If the prediction is too easy, the experiment is not sufficiently challenging. Success and failure exist on a continuum. For simple enough predictions our models will always work, and for complex enough predictions the models will always fail. The trick is finding the spot where the predictions are on the edge of credibility, and progress is needed and ripe. Too often the mindset is taken that predictions need to be successful. An experiment that is easy to predict is not a success; it is a waste. I would rather see predictions focused at the edge of success and failure. If we are interested in making progress, predictions need to fail so that models can improve. By the same token a successful prediction indicates that the experiment and measurement need to be improved to more properly challenge the models. The real art of predictive science is working at the edge of our predictive modeling capability.

A healthy focus on predictive science with a taste for failure produces a strong driver for lubricating the scientific method and successfully integrating modeling and simulation as a valuable tool. Prediction requires two sides of science to work in concert: the experiment and observation of the natural world, and the modeling of the natural world via mathematical abstraction. The better the observations and experiments, the greater the challenge to models. Conversely, the better the model, the greater the challenge to observations. We need to tee up the tension between how we sense and perceive the natural world, and how we understand that world through modeling. It is important to examine where the ascendancy in science exists. Are the observations too good for the models? Or can no observation challenge the models? This tells us clearly where we should prioritize.

We need to understand where progress is needed to advance science. We need to take advantage of technology in moving ahead in either vein. If observations are already quite refined, but new technology exists to improve them, it behooves us to take advantage of it. By the same token, modeling can be improved via new technology such as solution methods, algorithmic improvements and faster computers. What is lacking from the current dialog is a clear focus on where the progress imperative exists. A part of integrating predictive science well is determining where progress is most needed. We can bias our efforts toward where progress is most needed while keeping other opportunities for improvement in mind.

The important word I haven’t mentioned yet is “uncertainty”. We cannot have predictive science without dealing with uncertainty and its sources. In general, we systematically, or perhaps even pathologically, underestimate how uncertain our knowledge is. We like to believe our experiments and models are more certain than they actually are. This is really easy to do in practice. For many categories of experiments, we ignore sources of uncertainty and simply get away with an estimate of zero for that uncertainty. If we do a single experiment, we never have to explicitly confront the fact that the experiment isn’t completely reproducible. On the modeling side we see the particular experiment as something to be modeled precisely even if the phenomena of interest are highly variable. This is common and a source of willful cognitive dissonance. Rather than confront this rather fundamental uncertainty, we willfully ignore it. We do not run replicate experiments and measure the variation in results. We do not subject the modeling to reasonable variations in the experimental conditions and check the variation in the results. We pretend that the experiment is completely well-posed, and that the model is too. In doing this we fail at the scientific method rather profoundly.
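The replicate point can be made with a few lines of code. A single experiment reports one number with an implicit uncertainty of zero; a handful of replicates immediately produces a spread and a standard error. The data below are invented purely for illustration.

```python
import statistics

# Hypothetical replicate measurements of the "same" experiment.
replicates = [9.8, 10.3, 9.6, 10.7, 10.1]

mean = statistics.mean(replicates)
stdev = statistics.stdev(replicates)        # sample standard deviation
stderr = stdev / len(replicates) ** 0.5     # standard error of the mean

print(f"mean = {mean:.2f}, spread = {stdev:.2f}, std. error = {stderr:.2f}")
# A single experiment would have reported 9.8 with no error bar at all.
```

The first measurement alone is neither the mean nor bounded by it; only the replicates reveal how reproducible the experiment actually is.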

Another key source of uncertainty is numerical error. It is still common to present results without any sense of the numerical error. Typically, the mesh used for the calculation is asserted to be fine enough without any evidence. More commonly the results are simply given without any comment at all. At the same time the nation is investing huge amounts of money in faster computers, an investment that implicitly assumes, a priori, that faster computers yield better solutions. This entire dialog often proceeds without any support from evidence. It is 100% assumption. When one examines these issues directly there is often a large amount of numerical error being ignored. Numerical error is small in simple problems without complications. For real problems with real geometry and real boundary conditions with real constitutive models, the numerical errors are invariably significant. One should expect some evidence to be presented regarding their magnitude, and you should be suspicious if it’s not there. Too often we simply give simulations a pass on this detail and fail due diligence.
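The kind of evidence one should expect is sketched below: a Richardson-extrapolation error estimate from two grid solutions, assuming a refinement ratio, an order of accuracy, and a safety factor in the spirit of common grid-convergence practice. All the numbers are illustrative assumptions, not results from any real calculation.

```python
def richardson_error(f_fine, f_coarse, r=2.0, p=2.0, fs=1.25):
    """Estimate the discretization error in the fine-grid solution via
    Richardson extrapolation, assuming refinement ratio r, order of
    accuracy p, and a safety factor fs (values follow common practice,
    not any specific standard)."""
    est = (f_coarse - f_fine) / (r**p - 1.0)
    return fs * abs(est)

# Illustrative: a quantity of interest on two grids.
err = richardson_error(f_fine=0.981, f_coarse=0.925)
print(err)  # an evidence-backed error bar instead of "the mesh is fine enough"
```

An estimate like this turns the claim that the mesh is fine enough from an assertion into a quantitative statement that can be checked and reported.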

Truth has nothing to do with the conclusion, and everything to do with the methodology.

― Stefan Molyneux

In this sense the entirety of V&V is a set of processes for collecting evidence about credibility and uncertainty. In one respect verification is mostly an exercise in collecting evidence of credibility and due diligence for quality in computational tools. Are the models, codes and methods implemented in a credible and high-quality manner? Has the code development been conducted carefully, with the developers checking their work and doing a reasonable job of producing code without obvious bugs? Validation could be characterized as collecting uncertainties. We find upon examination that many uncertainties are ignored in both computational and experimental work. Without these uncertainties and the evidence surrounding them, the entire practice of validation is untethered from reality. We are left to investigate through assumption and supposition. This sort of validation practice has a tendency to simply regress to commonly accepted notions. In such an environment models are usually accepted as valid, and evidence is often skewed toward that preordained conclusion. Without care and evidence, the engine of progress for science is disconnected.

In this light we can see that V&V is simply a structured way of collecting the evidence necessary for the scientific method. Collecting this evidence is difficult and requires assumptions to be challenged. Challenging assumptions is courting failure. Making progress requires failure and the invalidation of models. It requires doing experiments that we fail to be able to predict with existing models. We need to assure that the model is the problem, and that the failure isn’t due to numerical error. Determining these predictive failures requires a good understanding of uncertainty in both experiments and computational modeling. The more genuinely high quality the experimental work is, the more genuinely the validation tests the model. We can then collect evidence about the correctness of the model and set clear standards for judging improvements in the models. The same goes for the uncertainty in computations, which needs evidence so that progress can be measured.

It doesn’t matter how beautiful your theory is … If it doesn’t agree with experiment, it’s wrong.

― Richard Feynman

Now we get to the rub in the context of modeling and simulation in modern predictive science. To make progress we need to fail to be predictive. In other words, we need to fail in order to succeed. Success should be denoted by making progress in becoming more predictive. We should take the perspective that predictivity is a continuum, not a state. One of the fundamental precepts of stockpile stewardship is predictive modeling and simulation. We want confident and credible evidence that we are capable of faithfully predicting certain essential aspects of reality. The only way to succeed at this mission is continually challenging and pushing ourselves at the limit of our capability. This means that failure should be an almost constant state of being. The problem is projecting the sense of success that society demands while continually failing. We do not do this well. Instead we project a sense that we continually succeed at everything we promise.

In the process we create conditions where the larger goal of prediction is undermined at every turn. Rather than define success in terms of real progress, we produce artificial measures of success. A key to improving this state of affairs is an honest assessment of all of our uncertainties, both experimental and computational. There are genuine challenges to this honesty. Generally, the more work we do, the more uncertainty we unveil. This is true of experiments and computations. Think about examining replicate uncertainty in complex experiments. In most cases the experiment is done exactly once, and the prospect of reproducing it is completely avoided. As soon as replicate experiments are conducted, the uncertainty becomes larger. Before the replicates, this uncertainty was simply taken to be zero, and no one challenges that assertion. If we go back and adjust our past claims based on current knowledge, we run the very real risk of looking like we are moving backwards. The answer is not to continue this willful ignorance but to offer a mea culpa and admit our former shortcomings. These mea culpas are similarly avoided, thus backing the forces of progress into an ever-tighter corner.

The core of the issue is relentlessly psychological. People are uncomfortable with uncertainty and want to believe things are certain. They are uncomfortable with random events, and a sense of determinism is comforting. As such, modeling reflects these desires and beliefs. Experiments are similarly biased toward these beliefs. When we allow these beliefs to go unchallenged, the entire basis of scientific progress becomes unhinged. Confronting and challenging these comforting implicit assumptions may be the single most difficult task for predictive science. We are governed by assumptions that limit our actual capacity to predict nature. Admitting flaws in these assumptions and measuring how much we don’t know is essential for creating the environment necessary for progress. The fear of saying “I don’t know” is our biggest challenge. In many respects we are managed to never give that response. We need to admit what we don’t know and challenge ourselves to seek those answers.

Only a few centuries ago, a mere second in cosmic time, we knew nothing of where or when we were. Oblivious to the rest of the cosmos, we inhabited a kind of prison, a tiny universe bounded by a nutshell.

How did we escape from the prison? It was the work of generations of searchers who took five simple rules to heart:

1. Question authority. No idea is true just because someone says so, including me.
2. Think for yourself. Question yourself. Don’t believe anything just because you want to. Believing something doesn’t make it so.
3. Test ideas by the evidence gained from observation and experiment. If a favorite idea fails a well-designed test, it’s wrong. Get over it.
4. Follow the evidence wherever it leads. If you have no evidence, reserve judgment.

And perhaps the most important rule of all…

5. Remember: you could be wrong. Even the best scientists have been wrong about some things. Newton, Einstein, and every other great scientist in history — they all made mistakes. Of course they did. They were human.

Science is a way to keep from fooling ourselves, and each other.

― Neil deGrasse Tyson

# Why are experts not important today?

An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.

― Werner Heisenberg

In today’s world, from work to private life to public discourse, experts are receding in importance. They used to be respected voices who added deep knowledge to any discussion; not anymore. Time and time again experts are being rejected by the current flow of events. Experts are messy and bring painful reality into focus. With the Internet, Facebook and the manufactured reality they allow, it’s just easier to dispense with the expert. One can replace the expert with a more comforting and simpler narrative. One can provide a politically tuned narrative that is framed to support an objective. One can simply take a page from the legal world and hire one’s own expert. The expert is a pain to control, and expertise is expensive. Today we can just make shit up and it’s just as credible as the truth, and much less trouble to manage. Today our management culture with its marketing focus has no time for facts and experts to cloud matters. Why deal with the difficulties that reality offers when you can wish them away? The pitch for money is much cleaner without objective reality to make things hard. Since quality really doesn’t matter anyway, no one knows the difference. We live in the age of bullshit and lies. The expert is obsolete.

The more interviews that an expert had done with the press, Tetlock found, the worse his predictions tended to be.

― Nate Silver

I will acknowledge that this is somewhat of a modern problem, where everyone can feel like an expert. Anyone can publish stuff online, whether it’s Twitter, Facebook or a blog (fingers pointing at myself). There is a lot of self-promotion and institutionalized catfishing going on. As a result, almost anyone can be a self-identified expert, and as a consequence no one is an expert. Meanwhile our educational system is completely out of date with modernity, and people have no ability to tell shit from shinola. People are not educated to think critically; instead they are indoctrinated to be servile pawns of the management class. As a result, the masses are utterly incapable of judging the legitimacy of sources and trusting credible ones. With this illegitimate basis for reality, trust is crumbling, and people increasingly choose to listen to the voice that matches their own biases. This is creating a huge number of problems as the truth of the world begins to diverge from any objective reality. This whole process is leaking over into the technical world, although the effects are subtle. Most acutely, the tendency to produce marketable results has replaced quality as the focus. Difficult problems and unforeseen results are easily ignored when something more comforting can easily be manufactured. When that comforting creation produces funding for more work, the ruse gets all the validation it needs from our increasingly corrupt system.

Wise people understand the need to consult experts; only fools are confident they know everything.

― Ken Poirot

Listen to the experts. They got that way for a reason.

― Erica Larsen

When leaders lack expertise, nothing else works.

― V.S. Parani

Experts are complicated, difficult, hard to control, and bring painful objective reality to the table. When subjects are difficult it is appealing to choose the simpler and cheaper point-of-view. An expert will steer you away from this and keep the conversation from oversimplifying things. Experts also tend to be passionate about their topics of interest. As a result of their passion, the expert is hard to control. This is an issue with management, who want to control everything. All the realities that the expert brings up only make the work harder. Thus, the expert makes everything less in control, more difficult and expensive. What’s the point of applying expertise if you can massage and manage the results into something nice and marketable? With abysmal standards for quality there is no reason to deal with complex realities anyway. Everything is simply a sales pitch. The reality and expertise only make the job of success harder. Experts only matter if you have high standards and want to accomplish something real.

Reality is messy and full of problems. The expert knows this and provides the best way to navigate the mess. Since reality has become optional and the objective is a marketed result, the expert is also optional. Messy reality is also a problem for marketing, which needs to be simple. Problems that are hard to solve are risky too. This is also something to be avoided. The best way to avoid a potential failure is to focus on past success and simply repackage it into something new. Since there aren’t any experts there to digest the results, the ruse will likely work. This sort of cozy relationship is widespread. Repackaged and marketed results are simply easier to deal with and sell. If you remove experts and their passion for a topic, no one will notice anyway. We simply create a system where no real progress is made. All that is necessary is to keep any experts from being involved and the whole system works like a charm. Increasingly, work and education are conspiring to cut off the supply of experts too. We are simply not learning or working in a way that creates deep knowledge, and without depth of understanding there aren’t experts.

Incestuous, homogeneous fiefdoms of self-proclaimed expertise are always rank-closing and mutually self-defending, above all else.

― Glenn Greenwald

All of these horrors have been slowly dawning on me while seeing our broader world begin to go up in flames. The evening news is a cascade of ever more surreal and unbelievable events. The news has become absolutely painful to watch. A big part of this horrible discourse is the chants of “fake news” and the reality of it. The problems with fake news are permeating discourse across society. Science and scientific experts are no different. A lack of confidence and credibility in the sources of information is a broad problem. Unless the system values integrity, quality and truth, it will fade from view. Increasingly, the system values none of these things, and we are getting their opposites. Experts act as gatekeepers of integrity, quality and truth. As such they are to be pushed out of the way as impediments to success. The simple politically crafted message that comforts those with a certain point of view is welcomed by the masses. The messy objective reality with its subtle shadings and complexity is something people would rather not examine.

The crux of the problem is that the recession of experts from the dialog is happening at research institutions too. It is more muted than the trend in the broader public sphere, but it is happening quite surely. The expert is reviled as being harder to control, and generally more expensive. Experts represent a real threat to the encroachment of the management class and ethos into every leadership role. Expertise supersedes the concerns that management focuses upon, bringing up difficult topics like quality and correctness. These things cannot be managed in the shallow manner we have become accustomed to. Marketing pitches and sales do not lend themselves to the expert. Moreover, the expert will not bend to politically correct messaging, having a tendency to speak uncomfortable truths that managers would just as soon hide away. In today’s world every message is politically crafted, and facts are fungible. The credibility of experts used to transcend these concerns to the benefit of all, but now they are simply nuisances. Today experts are only used when they can be manipulated to the benefit of some vested interest.

At some point this trend must end. Our ability to function at all in the current environment is a testament to our societal largesse. If we are to recover, the expert needs to return to a more prominent role in providing competence and truth. Adherence to some basic set of facts and truths is necessary to keep us focused in a constructive direction. Deep knowledge and experience are invaluable in doing difficult things. The lack of objective reality in our dynamics is evidenced by the chaos we are descending into. Some common ground would assist the orderly solution of problems. Experts provide an objective reality that is sometimes unremittingly unpleasant and difficult to deal with. It often opposes deeply held beliefs and views. We need this sort of input to our struggles. People need to be challenged and to consider their beliefs critically. Some problems need to be confronted head on and viewed in an objective manner. We should welcome this much as we do the doctor’s or dentist’s visit: unpleasant and uncomfortable, but necessary for our health. As long as we allow ourselves to avoid the painful truth by thrusting experts out of public life, our problems will grow, and we will continue to tumble into chaos.

We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard.

― John F. Kennedy

Ultimately, leadership worth believing in and following would tell us some difficult truths. We would be encouraged to confront painful truths and make genuine efforts to overcome them. We would do some difficult things and court failure as a necessity for success. We should be committed to progressing toward a better world as a society through solving our most difficult problems. Experts are among our best sources for defining and solving such problems. As long as they are pushed out of the public sphere by political concerns, our most difficult problems will go undefined and unsolved. I would go so far as to say that our entire society is mostly headed backwards. We are not creating a better world; we are moving into a worse one. The lack of objective truths and basic facts is a big part of the recipe. Valuing, listening to and engaging expertise is one way to reverse this worrying trend. Now we need leadership that can recognize how dire the danger is and do something. The rejection of expertise in managing our affairs is widespread and pervasive. We need real leadership to recognize that solving our collective problems requires expertise to be part of the solution. We need to affirm the fundamental value of experts, facts and truth in managing our affairs. We need leadership with the courage to face reality and make difficult decisions. Expertise is a big part of defining that reality in a comprehensive manner that lends itself to action.


# Money Equals Quality and Marketing of Research is Poisoning Science

Money is a great servant but a bad master.

― Francis Bacon

One of the clearest characteristics of our current research environment is the dominance of money. This only mirrors the role of money in society at large. Money has become the one-size-fits-all measuring stick for science. This includes the view of the quality of science. If something gets a lot of money, it must be good. Quality is defined by budget. This shallow mindset is incredibly corrupting, all the way from the sort of Labs where I work to Universities and everything in between. Among the corrupting influences is the tendency for promotion of science to morph into pure marketing. Science is increasingly managed as a marketing problem, and quality is equated with the potential for being flashy. In the wake of this attitude is a loss of focus on the basics and fundamentals of managing research quality.

Making money isn’t hard in itself… What’s hard is to earn it doing something worth devoting one’s life to.

― Carlos Ruiz Zafón

Doing science properly becomes an afterthought, and ultimately a lower priority. The basic, fundamental work of high-quality research does not bring in money and thus becomes optional. More and more, the basics simply don’t get done. The core of managing research is talent management and development. It is about hiring, developing and retaining the best people for the work. If one thing is clear about our Universities and Labs, it is that talented people are no longer important. There are those who might chafe at this, but talent is now the ability to get lots of money, not to do great work. Expertise is something all of these institutions are ceasing to value. Experts are expensive, and they complicate things. Marketing is all about simplicity, and experts tend to make things hard. Things are hard because they are. All of this is consistent with the overall diminishing ethics and integrity in public life. Rather than focusing on a mission, or on high quality, money becomes the emphasis, with mission and quality sacrificed as nuisances and, troublingly, made equivalent to financial measures.

Don’t think money does everything or you are going to end up doing everything for money.

― Voltaire

Money is a tool. Just like a screwdriver, or a pencil, or a gun. We have lost sight of this fact. Money has become a thing unto itself and replaced the value it represents as an objective. Along the way the principles that should be attached to the money have also been scuttled. This entire ethos has infected society from top to bottom, with the moneyed interests at the top lording over those without money. Our research institutions are simply a focused reflection of these societal trends. They have a similar social stratification and general loss of collective purpose and identity. Managers have become the most important thing, superseding science or mission in priority. Our staff are simply necessary details and utterly replaceable, especially with quality being an exercise in messaging. Expertise is a nuisance, and expert knowledge something that only creates problems. This environment is tailored to a recession of science, knowledge and intellect from public life. This is exactly what we see in every corner of our society. In their place reign managers and the money they control. Quality and excellence are meaningless unless they come with dollars attached. This is our value system; everything is for sale.

What’s measured improves

― Peter F. Drucker

The result of the system we have created is research quality in virtual freefall. The technical class has become part of the general underclass whose well-being is not the priority of this social order. Part of the rise of the management elite as the identity of organizations is driven by this focus on money. Managers look down into organizations for glitzy marketing ammo to help the money flow. The actual quality and meaning of the research is without value unless it comes with lots of money. Send us your slide decks, and especially those beautiful colorful graphics and movies. Those things sell the program and get the money in the door. That is what we are all about, selling to the customer. The customer is always right, even when they are wrong, as long as they have the cash. The program’s value is measured in dollars. Truth is measured in dollars, and available for purchase. We are obsessed with metrics, and organizations far and wide work hard to massage them to look good. Things like peer review are to be managed and generally can be politicked into something that makes organizations look good. In the process every bit of ethics and integrity can be squeezed out. These managers have rewritten the rules to make this all kosher. They are clueless about how corrosive and damaging all of this is to the research culture.

Make no mistake, our research culture has been undermined systematically. The people at the top are acting in full accordance with rules designed to make their job better and provide them with “ethical” justification. The pay structure and benefits have been systematically slanted to their advantage. Organizations are defined by their management talent rather than their technical talent. Managers are celebrated and held up as the emblems of organizational identity. Gone is the sense that managers are there to serve their organizations and enable the best work. The issue is the low-quality, low-integrity and low-ethics culture instilled at the top. These attitudes are in lockstep with the rest of society. Across organizations from industry to academia to government we see one set of rules for the management at the top and another set of rules for the peons laboring below. Ethical lapses and low-integrity actions by peons are swiftly and mercilessly punished while the same actions by managers receive praise. Our management is creating a culture of hypocrisy and privilege, then acting utterly oblivious to the consequences. We are a society where, as the saying goes, “the fish rots from the head”. Our leaders lack ethical fiber and integrity while celebrating incompetence, all while being compensated handsomely. They will all simply claim to be acting within the written rules and avoid any discussion of the moral, ethical and culturally corrosive implications of their actions. The new cultural norm is that the top of society rules with a “do as I say, not as I do” mentality. Our leadership is morally bankrupt and ethically corrupt, yet operating fully within the parameters of the rules or laws.

On the face of it, shareholder value is the dumbest idea in the world.

― Jack Welch

Once upon a time we had incredible research organizations across our society, including industry, academia, and government. We have allowed a number of forces loose to destroy these societal treasures. One of the biggest forces undermining the quality and competence of our research is a lack of trust. This lack of trust has manifested itself as an inability to take the risks necessary for research quality. The lack of trust has also produced an immense administrative load that our management class delivers to make society happy. This is only one of the forces undermining research, albeit a powerfully destructive one. The second force is equally harmful. This is the topic today, the dominance of money in managing and measuring science. Money has become the great measure of what is good and bad. Rich is good, poor is bad. If you are poor, you are a bad person. It is your fault. A big part of this force is related to the dominant business principle of today. Profit is king, and everything is OK if it benefits stockholders. This principle is undermining society as a whole, making life awful for the vast majority of people while enriching the upper class and powering inequality to record levels. The same poisonous principles have been adopted by research institutions almost reflexively. The impact on organizational structure mirrors society. In addition to managing society’s lack of trust, the adoption of “business” principles in research has empowered the management class. Along with these principles has come a redefinition of integrity, ethics and quality to be strongly associated with money. Simply having money makes things high integrity, ethical and high quality. Without money you have the opposite, without regard to other facts (which are optional today anyway). Culture has followed suit.

Free enterprise cannot be justified as being good for business. It can be justified only as being good for society.

― Peter Drucker

This discussion cannot be approached in a rational way without addressing the nature of our highest leadership today. We are not led by people with integrity, ethics or basic competence. The United States has installed a rampant symptom of corruption and incompetence in its highest office. Trump is not the problem; he is a symptom of the issue. He may become a bigger problem if allowed to reign too long; he can become a secondary infection. He exemplifies every single issue we have with ethics, integrity and competence to an almost cartoonish magnitude. Donald Trump is the embodiment of every horrible boss you’ve ever had, amplified to an unimaginable degree. He is completely and utterly unfit for the job of President whether measured by intellect, demeanor, ethics, integrity or philosophy. He is pathologically incurious. He is a rampant narcissist whose only concern is himself. He is lazy and incompetent. He is likely a career white collar criminal who has used money and privilege to escape legal consequences. He is a gifted grifter and conman (whose greatest con is getting this office). He has no governing philosophy or moral compass. He is a racist, bigot and serial abuser of women.

He is a fucking moron.

― Rex Tillerson, Secretary of State under President Trump

In a nutshell, Donald Trump is someone you never want to meet and someone who should never wield the power of his current office. You don’t want him to be your boss; he will make your life miserable and throw you under the bus if it suits him. He is a threat to our future both physically and morally. In the context of this discussion he is the exemplar of what ails the United States, including the organizations that conduct research. He stands as the symbol of what the management class represents. He is decay. He is incompetence. He is a pathological liar. He is worthy of no respect or admiration save his ability to fool millions. He is the supremacy of marketing over substance. He has no idea how completely his mantra “make America great again” is undermined by his every breath. His rise to power is the clearest example of how our greatness as a nation has been lost, and his every action accelerates our decline. People across the World have lost faith in the United States for good reason. Any country that elected this moronic, unethical con man as leader is completely untrustworthy. No one symbolizes our fall from greatness more completely than Donald Trump as President.

Rank does not confer privilege or give power. It imposes responsibility.

― Peter F. Drucker

The deeper worry is that all of these problems will ultimately result in very real consequences. The signs are all around us and our leaders at every level do nothing. We cannot violate the fundamentals of competence and quality for so long and not suffer ill effects. Reality will descend upon us and it will not be pretty. Just as research in the United States is falling from its summit, the effects will be felt in other areas of life. The long-term impact could well be catastrophic. We can only fake it for so long before it catches up with us. We can only allow our leadership to demonstrate such radical disregard for those they lead for so long. The lack of integrity, ethics and morality from our leadership, even when approved by society, will create damage that our culture cannot sustain. Even if we measure things through the faulty lens of money, the problems are obvious. Money has been flowing steadily into the pockets of the very rich and the management class and away from societal investment. We have been starving our infrastructure for decades. Our roads are awful, and bridges will collapse. 21st Century infrastructure is a pipe dream. Our investments in research and development have been declining over the same time frame, sacrificed for short-term profit. At the same time the wealth of the rich has grown, and inequality has become profound and historically unprecedented. These figures are completely correlated. This correlation is not incidental; it reflects a change in the priorities of society to favor wealth accumulation. The decline of research is simply another symptom.

Money is not quality, and money is not the objective. Money does not replace ethics and integrity. Reality matters, and marketing does not replace quality and focus on the fundamentals. We need to prize people and prioritize talent and expertise if we want to succeed. Who we choose to lead us matters, as do the values they represent. It is time to choose differently.

Top 15 Things Money Can’t Buy

Time. Happiness. Inner Peace. Integrity. Love. Character. Manners. Health. Respect. Morals. Trust. Patience. Class. Common sense. Dignity.

― Roy T. Bennett

# The Essential Problem with Essentially Non-Oscillatory Methods

To understand a science it is necessary to know its history.

― Auguste Comte

After monotonicity-preserving methods came along and revolutionized the numerical solution of hyperbolic conservation laws, people began pursuing follow-on breakthroughs. Heretofore nothing has appeared as a real breakthrough, although progress has been made. There are some very good reasons for this, and understanding them helps us see how and where progress might be made. As I noted several weeks ago in the blog post about Total Variation Diminishing methods, the breakthrough with monotonicity preservation came in several stages. The methods were invented by practitioners who were solving difficult practical problems. This process drove the innovation in the methods. Once the methods received significant notice as a breakthrough, the math came along to bring the methodology into rigor and explanation. The math produced a series of wonderful connections to theory that gave the results legitimacy, and the theory also connected the methods to the earlier methods dominating the codes at that time. People were very confident about the methods once math theory was present to provide structural explanations. With essentially non-oscillatory (ENO) methods, the math came first. This is the very heart of the problem.

Later I will elaborate on some of the technical challenges with ENO methods, but their first problem was related to their origin. Real progress is made by solving difficult problems in seemingly impossible ways. The methods preceding ENO were created to deal with real problems that could not be successfully solved. The innovation arose to solve the problems, not to create better methods; the solution of the problems was enabled by better methods. This is key. Solving the problem is the thing to focus on, without prejudice toward the means. Today’s research tends to define the means of progress a priori, which results in an unnatural process. In addition, we need to be open to a multitude of means to a solution. Progress and breakthroughs often come via serendipity and from unexpected places. ENO was a solution looking for a problem. This is why it hasn’t met the level of success we had hoped for.

As I noted, the monotonicity-preserving methods came along first, and total variation theory followed to make them feel rigorous and tie them to solid mathematical expectations. Before this the monotonicity-preserving methods felt sort of magical and unreliable. The math solidified the hold of these methods and allowed people to trust the results they were seeing. With ENO, the math came first, with a specific mathematical intent expressed by the methods. The methods were not created to solve hard problems, although they had some advantages for some hard problems. This created a number of issues that these methods could not overcome. First and foremost was fragility, followed by a lack of genuine efficacy. The methods would tend to fail when confronted with real problems and didn’t give better results for the same cost. More deeply, the methods didn’t have the pedigree of doing something amazing that no one had seen before. ENO methods had no pull.

A bit of a deeper dive is needed here. Originally, the monotone methods were low accuracy, but exceedingly reliable (monotonicity is the property of producing physical solutions without unphysical artifacts, i.e., oscillations). These low-order methods had their own artifacts: extreme dissipation making solutions to every problem essentially laminar and unenergetic. These solutions did not replicate what we see in nature. Conversely, high-accuracy methods came with oscillations and unreliability. To solve real problems with high-order methods, seemingly ad hoc devices like artificial viscosity were needed to provide greater reliability. Innovation came along and produced a solution where you could blend the high-order methods with the original monotone low-order methods in an adaptive manner. All of a sudden you could get reliability along with most of the accuracy. Most importantly, the complex energetic flows seen in nature could be simulated practically. Flows that are turbulent suddenly looked and acted turbulent. The results were regarded as almost magical. This magic caught people’s attention and drove almost complete adoption of these methods by the community.
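The adaptive blending idea can be sketched in a few lines. This is a hypothetical illustration of the mechanism, not any particular production scheme; the function names and the choice of van Leer's limiter as the smoothness sensor are mine for this sketch.

```python
import numpy as np

def hybrid_flux(f_low, f_high, phi):
    """Adaptively blend a monotone low-order flux with a high-order flux.

    phi is a solution-dependent weight in [0, 1]: phi = 0 falls back to
    the dissipative but reliable flux, phi = 1 keeps full accuracy.
    """
    return f_low + phi * (f_high - f_low)

def van_leer(r):
    """Van Leer's limiter as a smoothness sensor, from the ratio r of
    consecutive solution gradients.

    Smooth data gives r near 1 and phi near 1; at an extremum or jump,
    r <= 0 and phi = 0, reverting to the low-order flux.
    """
    return (r + np.abs(r)) / (1.0 + np.abs(r))
```

In smooth regions (r near 1) the blend reduces to the high-order flux; near discontinuities it degrades gracefully to the monotone one, which is the adaptivity described above.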

Don’t mistake activity with achievement.

― John Wooden

Only after the interest of the community came along did mathematical rigor join the fray. I’ll note that the preceding state of affairs had a good mathematical grounding itself, providing the foundation for progress. Most notably the barrier theorem by Godunov provided a clear challenge that the innovators needed to overcome. Godunov’s theorem told us that a linear second-order method could not be monotone (non-oscillatory). The key to overcoming the theorem was to move to nonlinear second-order methods where the discrete representation is a function of the solution itself. The new mathematics tied admissibility conditions for solutions together with the new nonlinear methods. We overcame existing mathematical limits by changing the rules, and tied ourselves to modest and minimal requirements for the validity of the results.
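A minimal sketch of this escape from the barrier theorem, in the spirit of a MUSCL-type reconstruction (the names here are illustrative, not from any specific code): the reconstruction slope is itself a nonlinear function of the data, so the scheme is second-order where the solution is smooth but degenerates gracefully at extrema.

```python
import numpy as np

def minmod(a, b):
    """Minmod: pick the smaller-magnitude slope, or zero if signs differ."""
    return np.where(a * b > 0.0,
                    np.where(np.abs(a) < np.abs(b), a, b),
                    0.0)

def limited_slopes(u):
    """Solution-dependent cell slopes for a second-order reconstruction.

    Because the slope depends nonlinearly on u, the resulting scheme is
    not a linear method, and Godunov's barrier theorem no longer applies.
    """
    backward = u[1:-1] - u[:-2]   # backward differences
    forward = u[2:] - u[1:-1]     # forward differences
    return minmod(backward, forward)
```

On smooth monotone data the limiter returns the full second-order slope; at a local extremum or jump the two one-sided differences disagree in sign and the slope drops to zero, recovering the monotone first-order behavior.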

ENO methods were devised to move the methods ahead. ENO took the adaptive discrete representation to new heights. Aside from the “adaptive” aspect, the new method was a radical departure from those that preceded it. The math itself was mostly notional and fuzzy, lacking a firm connection to the preceding work. If you had invested in TVD methods, the basic machinery you used had to be completely overhauled for ENO. The method also came with very few guarantees of success. Finally, it was expensive, and suffered from numerous frailties. It was a postulated exploration of interesting ideas, but in the mathematical frame, not the application frame. Its development also happened at the time when applied mathematics began to abandon applications in favor of a more abstract and remote connection via packaged software.
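The adaptive machinery at ENO's heart can be sketched simply. The following is a simplified illustration of the stencil-selection logic using undivided differences, not the full Harten-Engquist-Osher-Chakravarthy reconstruction; it assumes an interior cell with enough neighbors on both sides.

```python
import numpy as np

def eno_stencil_start(u, i, order=3):
    """Return the leftmost cell of an ENO-style stencil for interior cell i.

    Starting from the single cell {i}, repeatedly extend the stencil left
    or right, choosing the side whose undivided difference is smaller in
    magnitude (i.e., the smoother side).  The hard if/else switch is
    exactly what makes the original ENO so sensitive to tiny data changes.
    """
    left = i  # leftmost cell currently in the stencil
    for k in range(1, order):
        # order-k undivided differences for the two candidate extensions
        d_left = np.diff(u[left - 1:left + k], n=k)[0]
        d_right = np.diff(u[left:left + k + 1], n=k)[0]
        if abs(d_left) < abs(d_right):
            left -= 1  # grow toward the smoother (left) side
    return left  # stencil is cells left, ..., left + order - 1
```

For data with a jump, the selected stencil slides away from the discontinuity, which is how ENO avoids interpolating across it; but an infinitesimal perturbation of `u` can flip the comparison and swap the entire stencil, the fragility discussed below.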

80% of results come from 20% of effort/time

― Vilfredo Pareto

Over time, the intrinsic weaknesses of ENO were exposed and the methods have certainly improved. The adaptive stencil selection in the original ENO could produce genuinely pathological results, including instabilities. The answer to this issue has canonically been provided by weighted ENO (WENO) methods. These methods were constructed to be intrinsically numerically stable. WENO also provided another benefit, albeit only partially. If a solution is sufficiently smooth locally, the domain of dependence for the discrete representation can support a higher-order method, and WENO automatically selects it. This addressed another shortcoming of ENO: the wastefulness of the method’s adaptivity in places where it was unnecessary. The original ENO could also exhibit extreme sensitivity to small changes in the solution; an infinitesimal change in the solution can result in a completely different discrete method. WENO cured this issue. Nonetheless, WENO was not a complete answer because of its intrinsic expense, and its modification of the high-order stencil when linear and nonlinear stability did not require it. Robustness of solutions could be compromised by unphysical states (often negative densities, pressures or energies). New limiters were devised to provide protection from these problems and improved the methods. In spite of all this progress, for difficult problems WENO was still less accurate and more expensive than high-quality second-order methods.
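The WENO repair of ENO's sensitivity is concrete enough to sketch. Below is an illustrative rendering of the fifth-order Jiang-Shu reconstruction from the references: three third-order candidate stencils are combined with data-dependent weights built from smoothness indicators, so smooth data recovers the optimal fifth-order stencil while a jump drives the offending stencil's weight toward zero. Treat it as a sketch rather than a production implementation.

```python
import numpy as np

def weno5_left(v, eps=1e-6):
    """Fifth-order WENO reconstruction of the left state at the interface
    between cells v[2] and v[3], from the five cell averages v[0..4].

    Uses the classic Jiang-Shu smoothness indicators and weights.
    """
    # Jiang-Shu smoothness indicators for the three candidate stencils
    b0 = 13/12*(v[0] - 2*v[1] + v[2])**2 + 0.25*(v[0] - 4*v[1] + 3*v[2])**2
    b1 = 13/12*(v[1] - 2*v[2] + v[3])**2 + 0.25*(v[1] - v[3])**2
    b2 = 13/12*(v[2] - 2*v[3] + v[4])**2 + 0.25*(3*v[2] - 4*v[3] + v[4])**2
    # Nonlinear weights from the optimal linear weights (1/10, 6/10, 3/10)
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    w0, w1, w2 = a0/s, a1/s, a2/s
    # Third-order reconstructions on each candidate stencil
    q0 = (2*v[0] - 7*v[1] + 11*v[2]) / 6
    q1 = (-v[1] + 5*v[2] + 2*v[3]) / 6
    q2 = (2*v[2] + 5*v[3] - v[4]) / 6
    return w0*q0 + w1*q1 + w2*q2
```

On smooth data all three indicators agree, the nonlinear weights collapse to the linear ones, and the full five-cell stencil is recovered; the smooth variation of the weights replaces ENO's discontinuous stencil switch, which is how WENO removes the sensitivity, at the cost of the extra arithmetic noted above.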

ENO and WENO methods were advantageous for a narrow class of problems, usually those having a great deal of fine-scale structure. At the same time, they were not a significant (or any) improvement over the second-order accurate methods that dominate the production codes for the broadest class of important application problems. It’s reasonable to ask what might have been done differently to produce a more effective outcome. One of the things that hurt the broader adoption of ENO and WENO methods is increasingly impenetrable codes, where large modifications are nearly impossible, as we create a new generation of legacy codes (retaining the code base).

Part of the adoption of the monotonicity-preserving methods was the quantum leap in solution quality. This may not be achievable with other methods, ENO and WENO included. Part of the quantum leap derived from the place the methods came from: innovative application solutions. Part was simply incredibly valuable low-hanging fruit harvested in the process of invention. A second part of the rapid adoption was a firm tie to the past, where a hybridization of legacy methods could produce a fantastically more powerful method. ENO and WENO broke from this connection and were expressed as completely different methods that can’t be melded in. On the other hand, if ENO had started as a more incremental evolution from TVD methods, it could have been implemented as an extension of an existing code. This would have made the success of the methods far more likely. Perhaps backing away from the path we have been on and seeking methods that step incrementally forward could stir real progress.

Stark truth is seldom met with open arms.

― Justin K. McFarlane Beau

Harten, Ami, Bjorn Engquist, Stanley Osher, and Sukumar R. Chakravarthy. “Uniformly high order accurate essentially non-oscillatory schemes, III.” In Upwind and high-resolution schemes, pp. 218-290. Springer, Berlin, Heidelberg, 1987.

Shu, Chi-Wang. “Numerical experiments on the accuracy of ENO and modified ENO schemes.” Journal of Scientific Computing 5, no. 2 (1990): 127-149.

Liu, Xu-Dong, Stanley Osher, and Tony Chan. “Weighted essentially non-oscillatory schemes.” Journal of Computational Physics 115, no. 1 (1994): 200-212.

Jiang, Guang-Shan, and Chi-Wang Shu. “Efficient implementation of weighted ENO schemes.” Journal of Computational Physics 126, no. 1 (1996): 202-228.

Rider, William J., and Len G. Margolin. “Simple modifications of monotonicity-preserving limiters.” Journal of Computational Physics 174, no. 1 (2001): 473-488.

Zhang, Xiangxiong, and Chi-Wang Shu. “Maximum-principle-satisfying and positivity-preserving high-order schemes for conservation laws: survey and new developments.” In Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, vol. 467, no. 2134, pp. 2752-2776. The Royal Society, 2011.

Greenough, J. A., and W. J. Rider. “A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov.” Journal of Computational Physics 196, no. 1 (2004): 259-281.

# The Fall of the Technical Class; The Rise of the Management Class

Study hard what interests you the most in the most undisciplined, irreverent and original manner possible.

― Richard Feynman

When I got my first job out of school, it was in Los Alamos, home of one of the greatest scientific institutions in the World. This Lab birthed the Atomic Age and changed the World. I went there to work, but also to learn and grow in a place where science reigned supreme and technical credibility really and truly mattered. Los Alamos did not disappoint. The place lived and breathed science, and I was bathed in knowledge and expertise. I can’t think of a better place to be a young scientist. Little did I know that the era of great science and technical superiority was drawing to a close. The place that welcomed me with so much generosity of spirit was dying. Today it is a mere shell of its former self, along with Laboratories strewn across the country whose former greatness has been replaced by rampant mediocrity, pathetic leadership and a management class that rules over this decline. Money has replaced achievement, integrity and quality as the lifeblood of science. Starting with a quote by Feynman is apt because the spirit he represents so well is the very thing we have completely beaten out of the system.

Don’t think money does everything or you are going to end up doing everything for money.

― Voltaire

If one takes a look at the people who get celebrated by organizations today, it is almost invariably managers. This happens internally to organizations, in their external faces, and in alumni recognition by universities. In almost every case the people highlighted to represent achievement are managers. One explanation is that managers have a direct connection to money. One of the key characteristics of the modern age is the centrality of money to organizational success. Money is connected to management, and increasingly disconnected from technical achievement. This is true in the industry, government and university worlds, the entire scientific universe. This whole post could have replaced “the rise of management” with “the rise of money”. We increasingly treat aggregate budget as coequal with quality. The more money an organization has, the better and more important it is. A few organizations still struggle to hang on to celebrating technical achievers, Los Alamos among them. These celebrations weaken with each passing year. The real celebration is how much budget the Lab has, and how many employees that budget can support.

People who don’t take risks generally make about two big mistakes a year. People who do take risks generally make about two big mistakes a year.

― Peter F. Drucker

The days of technical competence and scientific accomplishment are over. This foundation of American greatness has been overrun by risk aversion, fear and compliance, with a spirit of commonness. I use the word “greatness” with gritted teeth because of the perversion of its meaning by the current President. This perversion is acute in the context of science because he represents everything that is destroying the greatness of the United States. Rather than “making America great again,” he is accelerating every trend that has been eroding the foundation of American achievement. The management he epitomizes is the blunt tool bludgeoning American greatness into a bloody pulp. Trump’s pervasive incompetence masquerading as management expertise will surely push numerous American institutions further over the edge into mediocrity. His brand of management is all too prevalent today and utterly toxic to quality and integrity.

In my life, the erosion of American greatness in science is profound, evident and continual. I had a good decade of basking in the greatness of Los Alamos before the forces of mediocrity descended upon the Lab and proceeded to spoil, distort and destroy every bit of greatness in sight. A large part of the destruction was the replacement of technical excellence with management. The management is there to control the “butthead cowboys” and keep them from fucking up. Put differently, the management is there to destroy any individuality and make sure no one ever achieves anything great, because no one can take a risk sufficient to achieve something miraculous. Anyone expressing individuality is a threat and needs to be chained up. We replaced stunning World-class technical achievement with controlled staff, copious reporting, milestone setting, project management and compliance, all delivered with mediocrity. This is bad enough by itself, but for an institution responsible for maintaining our nuclear weapons stockpile, the consequences are dire. Los Alamos isn’t remotely alone. Everything in the United States is being assaulted by the arrayed forces of mediocrity. It is reasonable to ask whether the responsibilities the Labs are charged with continue to be competently discharged.

There is nothing so useless as doing efficiently that which should not be done at all.

― Peter F. Drucker

The march of the United States toward squalid mediocrity had already begun years earlier. Management has led the way at every stage of the transformation. For scientific institutions, the decline began in the 1970s with the Department of Defense Labs. Once these Labs were shining beacons of achievement, but the management unleashed on them put a stop to this. Since then we have seen NASA, Universities, and the DOE Labs all brought under the jackboots of management. All of this management was brought in to enforce formality of operations, provide a safe or secure workplace, and keep scandals at bay. The Nation has decided that phenomenal success and great achievements aren’t worth the risks or side effects of being successful. The management is the delivery vehicle for the mediocrity-inducing control. The power and achievement of the technical class is the casualty. Management is necessary, but today the precious balance between control and achievement is completely lost.

The managers aren’t evil, but neither are most of the people who simply carry out the orders of their superiors. Most managers are good people who do awful things simply because they are expected to. We now put everything except technical achievement first. Doing great technical work is always the last priority; it can always get pushed out by something else. The most important thing is compliance with all the rules and regulations, and management stands there to make sure it all gets done. This involves lots of really horrible training designed to show compliance but teach people almost nothing. We have project management to make sure we are on time and on budget. Since the biggest maxim of our pathetic management culture is never making a mistake, risks are the last thing you can take. It helps a lot when we really aren’t accomplishing anything worthwhile. When the fix is in and technical standards disappear, it doesn’t matter how terrible the work is. All work is World class by definition. Eventually everyone starts to believe the bullshit. The work is great, right? Of course it is.

All of this is now blazoned across the political landscape with an inescapable sense that America’s best days are behind us. The deeply perverse outcome of the latest National election is a president who is a cartoonish version of a successful manager. We have put an abuser, a representative of the class that has undermined our Nation’s true greatness, in the position of restoring that greatness. What a grand farce! Every day produces evidence that the current efforts toward restoring greatness rely on the very things undermining it. The level of irony is so great as to defy credulity. The current administration’s efforts are the end point of a process that started over 20 years ago, obliterating professional government service and hollowing out technical expertise in every corner. The management class that has arisen in its place cannot achieve anything but moving money and people. Their ability to create a new and wonderful foundation of technical achievement is absent.

Greatness is a product of hard work, luck and taking appropriate risks. In science it is grounded upon technical achievements arising from intellectual labors, along with a lot of failures, false starts and mistakes. In today’s highly managed World, everything that leads to greatness is undermined. Hard work is taxed by a variety of non-productive actions that compliance demands. Appropriate risks are avoided as a matter of course because risks court failure, and failure of any sort is virtually outlawed. False starts never happen anymore in today’s project-managed reality. Mistakes are fatal for careers. Risk, failure and mistakes are all necessary for learning, and the unique and advanced ideas ultimately produced are the intellectual fruit of a healthy environment. An environment that cannot tolerate failure and risk is unhealthy. It is stagnant and unproductive. This is exactly where today’s workplace has arrived.

Money is a great servant but a bad master.

― Francis Bacon

With the twin pillars of destruction coming from money’s stranglehold on science and the inability to take risks, peer review has been undermined. Our current standards of peer review lack any integrity whatsoever. Success by definition is the rule of the day. A peer review cannot point out flaws without dire consequences for the reviewers. This has fueled a massive downward spiral in the quality of technical work. Why take the risks necessary for progress when success can be so much more easily faked? Today peer review is so weak that bullshitting your way to success has become the norm. Pointing out real shortcomings in work has become unacceptable and courts scandal. It puts funding at risk and potentially produces consequences for the work that management cannot accept. In the current environment scientific achievement does not happen because achievement is invariably risk prone. Such risks cannot be taken because of the hostile environment toward any problems or failures. Without failure, we are not learning, and learning at its apex is essentially research. Weak peer review is a large contributor to the decline in technical achievement and the loss of importance for the technical contributor.

Perhaps the greatest blow to science was the end of the Cold War. The Soviet bloc represented a genuine threat to the West and a worthy adversary. Technical and scientific competence and achievement were key to the defense of the West. Good work couldn’t be faked, and everyone knew that the West needed to bring its “A” game, or risk losing. When the Soviet bloc crumbled, so did a great deal of the unfettered support for science. Society lost its taste for the sorts of risks necessary for high levels of achievement. To some extent the loss of the ability to take risks and accept failures was already underway, with the end of the Cold War simply providing a hammer blow to support for science. It ended the primacy of true achievement as a route to National security. It might be useful to note that the science behind “Star Wars” was specious from the beginning. In a very real way the bullshit science of Star Wars was a trail blazer for today’s rampant scientific charlatans. Rather than give science free rein to seek breakthroughs along with the inevitable failures, society suddenly sought guaranteed achievement at a reduced cost. In reality it got neither achievement nor economy. With the flow of money being equated to quality as opposed to results, the combination has poisoned science.

How do you defeat terrorism? Don’t be terrorized.

― Salman Rushdie

This transformation was already bad enough when the war on terror erupted to further complicate matters. The war on terror was a new cash cow for the broader defense establishment but came with all the trappings of guaranteed safety and assured results. It solidified the hold of money as the medium for science. Since terrorists represent no actual threat to society, technical success was unnecessary for victory. The only risk to society from terrorism is the self-inflicted damage we do to ourselves, and we’ve done the terrorists’ work for them masterfully. In most respects the only thing that matters at the Labs is funding. Quality, duty, integrity and virtually everything else is up for sale for money. Money has become the sole determining factor for quality and the dominant factor in every decision. Since the managers are the gatekeepers for funding, they have uprooted technical achievement and progress as the core of organizational identity. It is no overstatement to say that the dominance of financial concerns is tied to the ascendency of management and the decline of technical work. At the same time the desire for assured results produced a legion of charlatans who began to infest the research establishment. This combination has had the corrosive effect of reducing the integrity of the entire system, where money rules and results can be finessed or outright fabricated. Standards are so low now that it doesn’t really matter.

Government has three primary functions. It should provide for military defense of the nation. It should enforce contracts between individuals. It should protect citizens from crimes against themselves or their property. When government– in pursuit of good intentions tries to rearrange the economy, legislate morality, or help special interests, the cost come in inefficiency, lack of motivation, and loss of freedom. Government should be a referee, not an active player.

― Milton Friedman

One of the key trends impacting our government-funded Labs and research is the languid approach to science by the government. Spearheading this systematic decline in support is the long-term Republican approach to starving government that really took the stage in 1994 with the “Contract with America”. Since that time the funding for science has declined in real dollars along with a decrease in the support for professionalism by those in government. Over time salaries and the level of professional management have been under siege as part of an overall assault on governing. A compounding effect has been an ever-present squeeze on the rules related to conducting science. On the one hand we are told that the best business practices will be utilized to make science more efficient. Simultaneously, best practices in support for science have been denied us. The result is no efficiency, no best practices, and simply a decline in overall professionalism at the Labs. All of this deeply compounds the overall decline in support for research.

Rank does not confer privilege or give power. It imposes responsibility.

― Peter F. Drucker

What can be done to fix all this?

Sometimes the road back to effective and productive technical work seems so daunting as to defy description. I’d say that a couple of important things are needed to pave the road. Most importantly, the purpose and importance of the work must become central to the identity of science. Purpose and service need to replace money as the key organizing principle. A high-quality product needs to replace financial interests as the driving force in managing efforts. This step alone would make a huge difference and drive most of the rest of the necessary elements for a return to technical focus. First and foremost among these elements is an embrace of risk. We need to take risks and concomitantly accept failures as an essential element of success. We must let ourselves fail in attempting to achieve great progress through thoughtful risks. Learning, progress and genuine expertise need to become the measure of success and the lifeblood of our scientific and technical worlds. Management needs to shrink into the background where it becomes a service to technical achievement and an enabler for those producing the work. Organizations need to celebrate science and technical achievements as the zenith of their collective identity. As part of this we need to have enough integrity to hold ourselves to high standards, welcoming and demanding hard-hitting critiques.

In a nutshell we need to do almost the complete opposite of everything we do today.

We are trying to prove ourselves wrong as quickly as possible, because only in that way can we find progress.

― Richard Feynman

# Curing the Plague of Meetings

If you had to identify, in one word, the reason why the human race has not achieved, and never will achieve, its full potential, that word would be “meetings.”

― Dave Barry

Meetings. Meetings. Meetings. Meetings suck. Meetings are awful. Meetings are soul-sucking time wasters. Meetings are a good way to “work” without actually working. Meetings absolutely deserve the bad rap they get. Most people think that meetings should be abolished. One of the most dreaded workplace events is a day that is completely full of meetings. These days invariably feel like complete losses, draining all productive energy from what ought to be a day full of promise. I say this as an unabashed extrovert, knowing that the introvert will feel overwhelmed by the prospect.

Meetings are a symptom of bad organization. The fewer meetings the better.

– Peter Drucker

All of this is true, and yet meetings are important, even essential to a properly functioning workplace. As such, meetings need to be the focus of real effort to fix while minimizing unnecessary time spent there. Meetings are a vital humanizing element in collective, collaborative work. Deep engagement with people is enriching, educational, and necessary for fulfilling work. Making meetings better would produce immense benefits in quality, productivity and satisfaction in work.

Meetings are at the heart of an effective organization, and each meeting is an opportunity to clarify issues, set new directions, sharpen focus, create alignment, and move objectives forward.

― Paul Axtell

If there is one thing that unifies people at work, it is meetings and how much we despise them. Workplace culture is full of meetings and most of them are genuinely awful. Poorly run meetings are a veritable plague in the workplace. Meetings are also an essential human element in work, and work is a completely human and social endeavor. A large part of the problem is that running a meeting well is actually very hard, exceeding the talent and will of most people (managers). We have now gotten to the point where all of us almost reflexively expect a meeting to be awful and plan accordingly. For my own part, I take something to read, or my computer to do actual work, or the old stand-by of passing time (i.e., fucking off) on my handy dandy iPhone. I’ve even resorted to the newest meeting pastime of texting another meeting attendee to talk about how shitty the meeting is. All of this can be avoided by taking meetings more seriously and crafting time that is well spent. If this can’t be done, the meeting should be cancelled until it can be.

The least productive people are usually the ones who are most in favor of holding meetings.

― Thomas Sowell

There are a few uniform things that can be done to improve the impact of meetings on the workplace. If a meeting is mandatory, it will almost surely suck. It will almost always suck hard. No meeting should ever be mandatory, ever. By forcing people to go to mandatory meetings, those running the meeting have no reason to make the meeting enjoyable, useful or engaging. They are not competing for your time, and this allows your time to be abused. A meeting should always try to make you want to be there, and honestly compete for your time. A fundamental notion that makes all meetings better is a strong sense that you know why you are at a meeting and how you are participating. There is no reason to attend a meeting where you passively absorb information without any active role. If this is the only way to get the information, that highlights deeper problems that are all too common! Everyone should have an active role in the meeting’s life. If someone is not active, they probably don’t need to be there.

Meetings at work present great opportunities to showcase your talent. Do not let them go to waste.

― Abhishek Ratna

There are a lot of types of meetings, and generally speaking all of them are terrible. None of them really have to be awful, but they are. Some of the reasons reflect tremendously deep issues with the modern workplace. It is only a small overreach to say that better meetings would go a huge distance toward improving the average workplace and provide untold benefits in terms of productivity and morale. So, to set the stage, let’s talk about the general types of meetings that most of us encounter:

• Conferences, Talks and Symposiums
• Informational Meetings
• Organizational Meetings
• Project Meetings
• Reviews
• Phone, Skype, Video Meetings
• Working Meetings
• Training Meetings

All of these meetings can stand some serious improvement that would have immense benefits.

Meetings are indispensable when you don’t want to do anything.

–John Kenneth Galbraith

The key common step to a good meeting is planning and attention to the value of people’s time. Part of the planning is a commitment to engagement with the meeting attendees. Do those running the meeting know how to convert the attendees into participants? Part of the meeting is engaging people as social animals and building connections and bonds. The worst thing is a meeting that a person attends solely because they are supposed to be there. Too often our meetings drain energy and make people feel utterly powerless. A person should walk out of a meeting energized and empowered. Instead, meetings are energy- and morale-sucking machines. A large part of the meeting’s benefit should be a feeling of community and bonding with others. Collaborations and connections should arise naturally from a well-run meeting. All of this seems difficult, and it is, but anything less does not honor the time of those attending and the great expense their time represents. In the end, the meeting should be a valuable expenditure of time. More than simply valuable, the meeting should produce something better: a stronger human connection and common purpose among all those attending. If the meeting isn’t a better expenditure of people’s time, it probably shouldn’t happen.

A meeting consists of a group of people who have little to say – until after the meeting.

― P.K. Shaw

Conferences, Talks and Symposiums. This is a form of meeting that generally works pretty well. The conference has a huge advantage as a form of meeting. Time spent at a conference is almost always time well spent. Even at their worst, a conference should be a banquet of new information and exposure to new ideas. Of course, they can be done very poorly and the benefits can be undermined by poor execution and lack of attention to detail. Conversely, a conference’s benefits can be magnified by careful and professional planning and execution. One way to augment a conference significantly is to find really great keynote speakers to set the tone, provide energy and engage the audience. A thoughtful and thought-provoking talk delivered by an expert who is a great speaker can propel a conference to new heights and send people away with renewed energy. Conferences can also go to greater lengths to make the format and approach welcoming to greater audience participation, especially getting the audience to ask questions and stay awake and aware. It’s too easy to tune out these days with a phone or laptop. Good timekeeping and attention to the schedule is another way of making a conference work to the greatest benefit. This means staying on time and on schedule. It means paying attention to scheduling so that the best talks don’t compete with each other if there are multiple sessions. It means not letting speakers filibuster through the Q&A period. All of these maxims hold for a talk given during work hours, just on a smaller and more specific scale. There the setting, the time of the talk, and the timekeeping all help to make the experience better. Another hugely beneficial aspect of meetings is food and drink. Sharing food or drink at a meeting is a wonderful way for people to bond and seek greater depth of connection. This sort of engagement can help to foster collaboration and greater information exchange. It engages the innate human social element that meetings should foster (I will note that my workplace has mostly outlawed food and drink, helping to make our meetings suck more uniformly). Too often the aspects of the talk or conference that would make the great expense of people’s time worthwhile are skimped on, undermining and diminishing the value.

Highly engaged teams have highly engaged leaders. Leaders must be about presence not productivity. Make meetings a no phone zone.

― Janna Cachola

Informational Meetings. The informational meeting is one of the worst abuses of people’s time. Lots of these meetings are mandatory, and force people to waste time witnessing evidence of what kind of shit show they are part of. This is very often a one-way exchange where people are expected to just sit and absorb. The information content is often poorly packaged and ham-handed in delivery. The talks are usually humorless and lack any soul. The sins are all compounded by a general lack of audience engagement. Their greatest feature is serving as a really good and completely work-appropriate time-wasting exercise. You are at work and not working at all. You aren’t learning much either; it is almost always some sort of management BS delivered in a politically correct manner. Most of the time the best option is to completely eliminate these meetings. If these meetings are held, those conducting them should put some real effort into making them worthwhile and valuable. They should seek a format that engages the audience and encourages genuine participation.

When you kill time, remember that it has no resurrection.

― A.W. Tozer

Organizational Meetings. The informational meeting’s close relative is the organizational meeting. Often this is an informational meeting in disguise. This sort of meeting is called for an organization of some size to get together and hear the management give them some sort of spiel. These meetings happen at various organizational levels and almost all of them are awful. Time-wasting drivel is the norm. Corporate or organizational policies, work milestones, and cheesy awards abound. Since these meetings are more personal than the pure informational meeting, there is some soul and benefit to them. The biggest sin in these meetings is the faux engagement. Do the managers running these meetings really want questions, and are they really listening to the audience? Will they actually do anything with the feedback? More often than not, the questions and answers are handled professionally and then forgotten. The management generally has no interest in really hearing people’s opinions and doing anything with their views; it is mostly a hollow feel-good maneuver. Honest and genuine engagement is needed, and these days management needs to prove that it’s more than just a show.

People who enjoy meetings should not be in charge of anything.

― Thomas Sowell

Project Meetings. In many places this is the most common meeting type. It also tends to be one of the best meeting types, where everyone is active and participating. The meeting involves people working to common ends and promotes genuine connection between efforts. These can take a variety of forms, such as the stand-up meeting where everyone participates by construction. An important function of the project meeting is active listening. While this form of meeting tends to be good, it still needs planning and effort to keep it positive. If the project meeting is not good, it probably reflects quite fully on the project itself, and some sort of restructuring of the project is the cure. What are the signs that a project meeting is bad? If lots of people are sitting like potted plants and not engaged with the meeting, the project is probably not healthy. The project meeting should be time well spent; if people aren’t engaged, they should be doing something else.

Integrity is telling myself the truth. And honesty is telling the truth to other people.

― Spencer Johnson

Reviews. A review meeting is akin to a project meeting, but has an edge that makes it worse. Reviews often teem with political context and fear. A common form involves a project team, reviewers and then stakeholders. The project team presents work to the reviewers, and if things are working well, the reviewers ask lots of questions. The stakeholders sit nervously and watch, rarely participating. The spirit of the review is the thing that determines whether the engagement is positive and productive. The core values about which a review revolves are honesty and trust. If honesty and trust are high, those being reviewed are forthcoming and their work is presented in a way where everyone learns and benefits. If the reviewers are confident in their charge and role, they can ask probing questions and provide value to the project and the stakeholders. Under the best of circumstances, the audience of stakeholders can be profitably engaged in deepening the discussion, and can themselves learn greater context for the work. Too often, the environment is so charged that honesty is not encouraged, and the project team tends to hide unpleasant things. If reviewers do not trust the reception for a truly probing and critical review, they will pull their punches and the engagement will be needlessly and harmfully moderated. A sign that neither trust nor honesty is present is an anxious and uninvolved audience.

I think there needs to be a meeting to set an agenda for more meetings about meetings.

― Jonah Goldberg

Phone, Skype, Video Meetings. These meetings are convenient and often encouraged as part of a cost-saving strategy. Because of the nature of the medium, these meetings are often terrible. Most often they turn into a series of monologues best suited for reporting work, yet such meetings are rarely good places to hear about work. This comes from two truths: the people on the phone are often disengaged, listening while attending to other things, and it is difficult to participate in any dynamic discussion (it happens, but it is rare). Most of the content is limited to the spoken word, and lacks body language and visual content. The result is much less information being transmitted, along with a low bandwidth of listening. For the most part these meetings should be done away with. If someone has something really interesting and very timely, such a meeting might be useful, but only if we are sure the audience is paying real attention. Without dynamic participation one cannot be sure the attention is actually being paid.

Working Meetings. These are the best meetings, hands down. They are informal, voluntary and dynamic. The people are there because they want to get something done that requires collaboration. If other types of meetings could incorporate the approach and dynamic of a working meeting, all of them would improve dramatically. Quite often these meetings are deep on communication and low on hierarchical transmission. Everyone in the meeting is usually engaged and active. People are rarely passive. They are there because they want to be there, or they need to be there. In many ways all meetings could benefit mightily from examining working meetings and adopting their characteristics more broadly.

Training Meetings. The use of a meeting to conduct training is common, and these meetings are commonly bad. They could be improved greatly by adopting principles from education. Good training is educational. Again, dynamic, engaged meeting attendees are a benefit. If attendees are viewed as students, good outcomes can be had. Far too often the training is delivered in a hollow, mandatory tone that provides little real value for those receiving it. We have a lot of soulless compliance training that simply pollutes the workplace with time wasting. Compliance is often associated with hot-button issues where the organization has no interest in engaging the employees. They are simply forced to do things because those in power say so. A real discussion of this sort of training would be difficult, confrontational, and likely to cast doubt. It is easier to passively waste people’s time and get it over with. This attitude is some blend of mediocrity and cowardice that has a corrosive impact on the workplace.

One source of frustration in the workplace is the frequent mismatch between what people must do and what people can do. When what they must do exceeds their capabilities, the result is anxiety. When what they must do falls short of their capabilities, the result is boredom. But when the match is just right, the results can be glorious. This is the essence of flow.

― Daniel H. Pink

Better meetings are a mechanism by which our workplaces have an immense ability to improve. A broad principle is that a meeting needs to have a purpose and desired outcome that is well known and communicated to all participants. The meeting should engage everyone attending, and no one should be a potted plant or otherwise occupied. Everyone’s time is valuable and expensive; the meeting should be structured and executed in a manner fitting its costs. A simple way of testing the waters is people’s attitudes toward the meeting and whether they are positive or negative. Do they want to go? Are they looking forward to it? Do they know why the meeting is happening? Is there an outcome that they are invested in? If these questions are answered honestly, those calling the meeting will know a lot, and they should act accordingly.

The cure for bad meetings is recognition of their badness, and a commitment to making the effort necessary to improve them. Few things have a greater capacity to make the workplace better, more productive and improve morale.

When employees feel valued, and are more productive and engaged, they create a culture that can truly be a strategic advantage in today’s competitive market.

― Michael Hyatt

# Total Variation Diminishing (TVD) Schemes; Their Essential Contribution to Progress in Methods

Mathematics is the door and key to the sciences.

— Roger Bacon

It is time to return to great papers of the past. The past has clear lessons about how progress can be achieved. Here, I will discuss a trio of papers that came at a critical juncture in the history of numerically solving hyperbolic conservation laws. In a sense, these papers were nothing new, but provided a systematic explanation and skillful articulation of the progress at that time. In a deep sense these papers represent applied math at its zenith, providing a structural explanation along with proof to accompany progress made by others. These papers helped mark the transition of modern methods from heuristic ideas to broad adoption and common use. Interestingly, the depth of applied mathematics ended up paving the way for broader adoption in the engineering world. This episode also provides a cautionary lesson about what holds higher order methods back from broader acceptance, and the relatively limited progress since.

The three papers I will focus on are:

Harten, Ami. “High resolution schemes for hyperbolic conservation laws.” Journal of Computational Physics 49, no. 3 (1983): 357-393.

Harten, Ami. “On a class of high resolution total-variation-stable finite-difference schemes.” SIAM Journal on Numerical Analysis 21, no. 1 (1984): 1-23.

Sweby, Peter K. “High resolution schemes using flux limiters for hyperbolic conservation laws.” SIAM Journal on Numerical Analysis 21, no. 5 (1984): 995-1011.

The first two are by the late Ami Harten, providing proofs of the monotone behavior seen with the heuristic methods existing at that time. The proofs gave many people a confidence that had been lacking from the truly innovative, but largely heuristic, invention of the methods. The third paper by Peter Sweby provided a clear narrative and an important graphical tool for understanding these methods and displaying limiters, the nonlinear mechanism that produced the great results. The “Sweby diagram” reduced these complex nonlinear methods to a nonlinear function, the limiter being a switch between two commonly used classical methods. The diagram produced a simple way of seeing whether any given limiter was going to give second-order non-oscillatory results. Together these three papers paved the way for common adoption of these methods.

Mathematics is the art of giving the same name to different things.

– Henri Poincaré

In the 1970s three researchers principally invented these nonlinear methods: Jay Boris, Bram Van Leer, and Vladimir Kolgan. Of these three, Boris and Van Leer achieved fame and great professional success. The methods were developed heuristically and worked very well. Each of these methods explicitly worked to overcome Godunov’s barrier theorem, which says a second-order linear method cannot be monotone. All made the methods nonlinear by adapting the approximation based on the local structure of the solution. Interestingly, Boris and Van Leer were physicists while Kolgan was an engineer (Van Leer went on to work extensively in engineering). Kolgan was a Russian in the Soviet Union and died before his discovery could take its rightful place next to Boris and Van Leer’s (Van Leer has gone to great effort to correct the official record).
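Godunov’s barrier theorem deserves a concrete statement. The following is the standard argument in my own paraphrase and notation, not drawn from the papers above:

```latex
A linear scheme for $u_t + a u_x = 0$ takes the form
$u_j^{n+1} = \sum_k c_k(\nu)\, u_{j+k}^n$ with $\nu = a\Delta t/\Delta x$.
Monotonicity requires every $c_k \ge 0$, while second-order accuracy requires
\[
\sum_k c_k = 1, \qquad \sum_k k\, c_k = -\nu, \qquad \sum_k k^2 c_k = \nu^2 .
\]
With $c_k \ge 0$, the Cauchy--Schwarz inequality gives
\[
\nu^2 = \Bigl(\sum_k k\, c_k\Bigr)^{2}
      \le \Bigl(\sum_k c_k\Bigr)\Bigl(\sum_k k^2 c_k\Bigr) = \nu^2 ,
\]
so equality must hold, forcing all of the weight onto the single index
$k = -\nu$. A second-order monotone linear scheme can therefore exist only
when $\nu$ is an integer: this is Godunov's barrier.
\]
```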

[Mathematics] is security. Certainty. Truth. Beauty. Insight. Structure. Architecture. I see mathematics, the part of human knowledge that I call mathematics, as one thing—one great, glorious thing. Whether it is differential topology, or functional analysis, or homological algebra, it is all one thing. … They are intimately interconnected, they are all facets of the same thing. That interconnection, that architecture, is secure truth and is beauty. That’s what mathematics is to me.

― Paul R. Halmos

The problem with all these methods was a lack of mathematical certainty about the quality of results, along with proofs and structured explanations of their success. This made the broader community a bit suspicious of the results. In a flux-corrected transport (FCT, Boris’ invention) commemorative volume this suspicion is noted: at conferences, questions were raised about the results implying that the solutions were faked. The breakthrough with these new methods was that good, seemingly too good to be true. Then the explanations came and made a strong connection to theory. The behavior seen in the results had a strong justification in mathematics, and the trust in the methodology grew. Acceptance came on the heels of this trust, and widespread adoption followed.

Harten and others continued to search for even better methods after introducing TVD schemes. The broad category of essentially non-oscillatory (ENO) methods was invented. It has been a broad research success, but never experienced the widespread adoption that these other methods enjoyed. Broadly speaking, the TVD methods are used in virtually every production code for solving hyperbolic conservation laws. In the physics world, many use Van Leer’s approach, and engineering broadly uses the Harten-Sweby formalism. FCT is used somewhat in the physics world, but its adoption is far less common. Part of the reason for this disparity comes down to the power of mathematical proof and the faith it gives. The failure of follow-on methods to win adoption comes from the lack of strong theory with its requisite confidence. Faith, confidence and systematic explanation are all provided by well-executed applied mathematics.

What is the TVD theory and how does it work?

(Note: WordPress’ Latex capability continues to frustrate, I cannot get them to typeset so if you can read TeX the equations will make sense)

In a nutshell, TVD is a way of extending the behavior of monotone methods (upwind for the purposes of this discussion) to high-order nonlinear methods. Upwind methods have the benefit of positive coefficients in their stencil. If we write this down for a scalar advection equation, $u_t + a u_x = 0$, we get the following form, $u_j^{n+1} = u_j^n - C_{j-1/2} \left( u_j^n - u_{j-1}^n \right) + D_{j+1/2} \left(u_{j+1}^n - u_j^n \right)$. The key for the methods is the positivity of the functions $C_{j-1/2} \ge 0$ and $D_{j+1/2} \ge 0$. For example, an upwind method will give constants for these functions, $C_{j-1/2} = a \Delta t/\Delta x = \nu$ and $D_{j+1/2} = 0$ for $a > 0$. The coefficient $\nu$ is the famous CFL (Courant-Friedrichs-Lewy) number. For the TVD methods, these functions become nonlinear functions of the solution itself, but still satisfy the inequalities. Harten had done other work connecting monotone methods to entropy-satisfying (i.e., physically relevant) solutions, which then implies that TVD methods would be a route to similar results (this would seem to be true, but definitive proofs are lacking). Still the connections are all there and close enough to provide faith in the methodology. This is where Sweby’s work comes in and provides a crucial tool for broad acceptance of this methodology.
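The incremental form above is easy to exercise numerically. Here is a minimal sketch (my own illustration, not code from any of these papers) of first-order upwind for linear advection, written with the constant positive coefficient $C_{j-1/2} = \nu$; running it confirms that the total variation of the solution never grows:

```python
import numpy as np

def total_variation(u):
    """TV(u) = sum over j of |u_{j+1} - u_j|, with periodic wraparound."""
    return np.abs(np.diff(np.append(u, u[0]))).sum()

def upwind_step(u, nu):
    """One upwind step for u_t + a u_x = 0 (a > 0, periodic grid), in the
    incremental form u_j^{n+1} = u_j - C (u_j - u_{j-1}) with the constant
    positive coefficient C = nu (the CFL number)."""
    return u - nu * (u - np.roll(u, 1))

# square-wave initial data on a periodic grid
u0 = np.zeros(100)
u0[40:60] = 1.0
nu = 0.8          # 0 <= nu <= 1 keeps the coefficient positive and stable

u = u0.copy()
for _ in range(50):
    u = upwind_step(u, nu)

tv_initial, tv_final = total_variation(u0), total_variation(u)
```

The same harness works for any scheme written in this $C$/$D$ form: as long as the coefficients remain non-negative, the total variation check will pass.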

What Sweby did was provide a wonderful narrative description of TVD methods, and a graphical manner to depict them. In the form that Sweby described, TVD methods were a nonlinear combination of classical methods: upwind, Lax-Wendroff and Beam-Warming. The limiter was drawn out of the formulation and parameterized by the ratio of local finite differences. The limiter is a way to take an upwind method and modify it with some part of the selection of second-order methods while satisfying the inequalities needed to be TVD. This technical specification took the following form, $C_{j-1/2} = \nu \left( 1 + 1/2\nu(1-\nu) \phi\left(r_{j-1/2}\right) \right)$ and $D_{j+1/2} = 1/2\nu(1-\nu) \phi\left(r_{j+1/2}\right)$ for $a > 0$ and $r_{j-1/2} = \frac{ u_{j}^{n} - u_{j-1}^{n} }{ u_{j-1}^{n} - u_{j-2}^{n}}$. This produced a beautiful and simple diagram that usefully displayed how any given method compared to others. This graphical means was probably the essential step for broad acceptance (my opinion, but for visual people it was essential, and a lot of technical folks are visual).
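
Sweby’s limiter form is also straightforward to sketch. The following Python/NumPy fragment implements a flux-limited scheme with the minmod limiter $\phi(r) = \max(0, \min(1, r))$, one common choice inside Sweby’s TVD region; the ratio convention and the small divide-by-zero guard are my own illustrative choices, not Sweby’s notation:

```python
import numpy as np

def minmod(r):
    """Sweby's minmod limiter: phi(r) = max(0, min(1, r))."""
    return np.maximum(0.0, np.minimum(1.0, r))

def tvd_step(u, nu, phi=minmod):
    """One flux-limited TVD step for u_t + a u_x = 0 (a > 0), periodic.
    phi = 0 recovers upwind; phi = 1 recovers Lax-Wendroff."""
    du = np.roll(u, -1) - u                 # u_{j+1} - u_j
    dum = u - np.roll(u, 1)                 # u_j - u_{j-1}
    # ratio of consecutive differences, guarded against zero denominators
    r = dum / np.where(np.abs(du) > 1e-14, du, 1e-14)
    # limited flux F_{j+1/2} = u_j + (1/2)(1 - nu) phi(r) (u_{j+1} - u_j)
    F = u + 0.5 * (1.0 - nu) * phi(r) * du
    return u - nu * (F - np.roll(F, 1))

n, nu = 100, 0.5
u = np.zeros(n)
u[40:60] = 1.0
tv0 = np.abs(u - np.roll(u, 1)).sum()       # initial total variation
for _ in range(int(n / nu)):
    u = tvd_step(u, nu)
```

After the run the total variation never exceeds its initial value, which is the defining TVD property, while the pulse stays far sharper than upwind leaves it.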

Beyond the power of applied mathematics, other aspects of the technical problem have contributed to the subsequent lack of progress. The biggest issue is the quantum leap in performance from first- to second-order accuracy. The second-order methods produce results that seem turbulent because first-order methods produce a truncation error that laminarizes flows. The second-order methods produce results for complex problems that have the look and feel of real flows (this may also be quantitatively true, but the jury is out). Important flows are turbulent and high energy, with very large Reynolds numbers. First-order schemes cannot produce these realistically at all. Second-order methods can, and for this reason the new schemes unleashed utility upon the World. With these methods, the solutions took on the look, feel and nature of reality. For this reason, these schemes became essential for codes.

The second reason is the robustness of these methods. First-order monotone methods like upwind are tremendously robust. These methods produce physically admissible solutions and do not fail often. Codes run problems to completion. The reason is their extremely dissipative nature. This makes them very attractive for difficult problems and almost guarantees a solution for the calculation. The same dissipation also destroys almost every structure in the solution and smears out all the details that matter. You get an answer, but an answer that is fuzzy and inaccurate. These first-order methods end up being extremely expensive when accuracy is desired. Harten’s TVD methods provided a systematic connection of the new second-order methods to the old reliable first-order methods. The new methods were almost as reliable as the first-order methods, but got rid of much of the smearing dissipation that plagued them. Having a structured and expertly produced explanation for the behavior of these methods, with clear connections to things people already knew, produced rapid adoption by practitioners.

Mathematics is the cheapest science. Unlike physics or chemistry, it does not require any expensive equipment. All one needs for mathematics is a pencil and paper.

― George Pólya

The follow-on efforts with higher than second-order methods have lacked these clear wins. It is clear that going past second-order does not provide the same sort of quantum leap in results. The clear connection and expectation of robustness is also lacking. The problems do not stop there. The essentially non-oscillatory methods select the least oscillatory local approximation, which also happens to be quite dissipative by its very nature. Quite often the high-order method is not actually threatened by oscillations at all, yet a less accurate approximation is chosen, needlessly reducing accuracy. Furthermore, the adaptive approximation selection can preferentially choose unstable approximations in an evolutionary sense, which can result in catastrophe. The tendency to produce the worst of both Worlds has doomed their success and broad adoption. Who wants dissipative and fragile? No one! No production code would make these choices, ever!
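
The stencil-selection logic being criticized here is simple to illustrate. Below is a minimal Python/NumPy sketch of the second-order ENO choice: at each cell, pick whichever one-sided difference is smaller in magnitude. Near a discontinuity this wisely avoids differencing across the jump, but on smooth data it can still discard the more accurate side, which is the needless dissipation described above (the function and variable names are mine, purely illustrative):

```python
import numpy as np

def eno2_slope(u):
    """Second-order ENO slope selection on a periodic grid: for each
    cell, choose the one-sided difference that is smaller in magnitude,
    i.e., the 'least oscillatory' two-point stencil."""
    dl = u - np.roll(u, 1)      # left (backward) difference
    dr = np.roll(u, -1) - u     # right (forward) difference
    return np.where(np.abs(dl) <= np.abs(dr), dl, dr)

# next to a step, the zero difference on the smooth side is selected,
# so the reconstruction never reaches across the jump
step = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
slopes = eno2_slope(step)
```

The cells adjacent to the jump get a zero slope, exactly the behavior that keeps ENO non-oscillatory and, at the same time, makes it dissipative.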

Recent efforts have sought to rectify this shortcoming. Weighted ENO (WENO) methods are far less intrinsically dissipative and also enhance the accuracy. These methods are still relatively dissipative compared to the best TVD methods and invoke their expensive approximations in regions of the solution where the nonlinear mechanisms are unnecessary. Other efforts have produced positivity-preserving methods that avoid the production of inherently unphysical results with high-order methods. These developments are certainly a step in the right direction. Still, the current environment of producing new legacy codes is killing any energy to steward these methods into broad adoption. The expense, overly dissipative nature and relatively small payoff all stand in the way.

What might help in making progress past second-order methods?

The first thing to note is that TVD methods are mixed in their order of accuracy. They are second-order in a very loose sense and only when one takes the most liberal norm for computations (L1 for you nerds out there). For the worst-case error, TVD methods are still first-order (L-infinity, and in multiple dimensions). This is a pretty grim picture until one also realizes that for nonlinear PDEs with general solutions, first-order accuracy is all you get anyway unless you are willing to track all discontinuities. These same conditions hold for the high-order methods we might like to adopt. The accuracy from the new methods is always quite limited, which puts a severe constraint on the efficiency of the methods, and a challenge to development and progress. The effort that it takes to get full accuracy for nonlinear problems is quite large, and if this accuracy is not realized, the effort is not worth it. We do know that some basic elements of high-order methods yield substantial benefits, but these benefits are limited (an example is the high-order edge values used in the piecewise parabolic method, PPM).
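
This mixed-order behavior can be measured directly. The sketch below (my own illustrative experiment, using a minmod-limited scheme of the kind described earlier) advects a smooth sine wave one period on two grids and computes the observed order $p = \log_2(e_{\text{coarse}}/e_{\text{fine}})$ in both norms; the L1 order comes out well above first order while the L-infinity order is dragged down by the clipping of the smooth extrema:

```python
import numpy as np

def minmod_advect(n, nu=0.5):
    """Advect sin(2 pi x) one full period with a minmod-limited TVD
    scheme on n periodic cells; return the (L1, L-infinity) errors."""
    x = np.arange(n) / n
    u = np.sin(2 * np.pi * x)
    exact = u.copy()                       # one period returns the data
    for _ in range(int(n / nu)):
        du = np.roll(u, -1) - u
        dum = u - np.roll(u, 1)
        r = dum / np.where(np.abs(du) > 1e-14, du, 1e-14)
        phi = np.maximum(0.0, np.minimum(1.0, r))   # minmod limiter
        F = u + 0.5 * (1.0 - nu) * phi * du
        u = u - nu * (F - np.roll(F, 1))
    err = np.abs(u - exact)
    return err.mean(), err.max()           # L1 (cell average), L-infinity

# observed order between two grids: p = log2(e_coarse / e_fine)
c_l1, c_li = minmod_advect(50)
f_l1, f_li = minmod_advect(100)
p_l1 = np.log2(c_l1 / f_l1)
p_li = np.log2(c_li / f_li)
```

Both errors shrink under refinement, but the two observed orders differ, a small numerical echo of the L1-versus-L-infinity point made above.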

I asked myself, what worked so well for TVD? To me there is a clear and unambiguous connection to what worked in the past. The past was defined by the combination of upwind, Lax-Wendroff, and Beam-Warming methods. These methods, along with largely ad hoc stabilization mechanisms, provided the backbone of production codes preceding the introduction of these methods. Now TVD schemes form the backbone of production codes. It would seem that new higher-order methods should preserve this sort of connection. ENO and WENO methods did not do this, which partially explains their lack of adoption. My suggestion would be a design of methods where one uses a high-order method that can be shown to be TVD, or the high-order method closest to a chosen TVD scheme. This selection would be high-order accurate by construction, but would also produce oscillations at third order. This is not the design principle that ENO methods use, where the unproven assertion is oscillations at the order of approximation. The tradeoff between these two principles is larger potential oscillations with less dissipation, and a more unambiguous connection to the backbone TVD methods.

1. Everyone is entitled to their opinion about the things they read (or watch, or listen to, or taste, or whatever). They’re also entitled to express them online.

2. Sometimes those opinions will be ones you don’t like.

3. Sometimes those opinions won’t be very nice.

4. The people expressing those opinions may be (but are not always) assholes.

5. However, if your solution to this “problem” is to vex, annoy, threaten or harass them, you are almost certainly a bigger asshole.

6. You may also be twelve.

7. You are not responsible for anyone else’s actions or karma, but you are responsible for your own.

[Bad Reviews: I Can Handle Them, and So Should You (Blog post, July 17, 2012)]

John Scalzi

My own connection to this work is a nice way of rounding out this discussion. When I started looking at modern numerical methods, I began surveying the selection of approaches. FCT was the first thing I hit upon and tried. Compared to the classical methods I was using, it was clearly better, but its lack of theory was deeply unsatisfying. FCT would occasionally do weird things. TVD methods had the theory, and this made them far more appealing to my technically immature mind. After the fact, I tried to project FCT methods onto the TVD theory. I wrote a paper documenting this effort. It was my first paper in the field. Unknowingly, I walked into a veritable mine field and complete shit show. All three of my reviewers were very well-known contributors to the field (I know it is supposed to be anonymous, but the shit show that unveiled itself unveiled the reviewers too).

The end result was that the paper was never published. This decision occurred five years after it was submitted, and I had simply moved on. My first review was from Ami Harten, who basically said this paper is awesome, publish it. He signed the review and sent me some lecture notes on the same topic. I was over the moon, and did call Ami and talk briefly. Six months later my second review came in. It was as different as possible from Ami’s. It didn’t say this exactly, but in a nutshell, it said the paper was a piece of shit. It still remains the nastiest and most visceral review I’ve ever gotten. It was technically flawless on one hand and thoroughly unprofessional in tone on the other. My third review came a year later and was largely editorial in nature. I revised the paper and resubmitted. While all this unfolded, Ami died, and the journal it was submitted to descended into chaos, partially due to the end of the Cold War and its research largess. When it emerged from chaos, I decided that publishing the work was largely pointless and not worth the effort.

Some commentary about why this shit show happened is worth giving. It is all related to the holy war between two armed camps that arose via the invention of these methods and the question of who gets the credit. The paper was attempting to bridge the FCT and TVD worlds, and stepped into the bitter fighting around previous publications. In retrospect, it is pretty clear that FCT was first, and others like Kolgan and Van Leer came after. Their methodologies and approaches were also fully independent, and the full similarity was not clear at the time. While the fullness of time sees these approaches as utterly complementary, at the time of development it was seen as a competition. It was definitely not a collaborative endeavor, and the professional disagreements were bitter. They poisoned the field, and people took sides viewing the other side with vitriolic fury. A friend and associate editor of the Journal of Computational Physics quipped that this was one of the nastiest sub-communities in the Journal, and asked why I insisted on working in this area. It is also one of the most important areas in computational physics, working on a very difficult problem. The whole field also hinges upon expert judgment and resists a firm quantitative standard of acceptance.

What an introduction to the field; it is genuinely amazing that I continue to work in it at all. If I didn’t enjoy the technical content so much, and didn’t appreciate the importance of the field, I would have run. Perhaps greater success professionally would have followed such a departure. In the long run this resistance and the rule of experts works to halt progress.

If you can’t solve a problem, then there is an easier problem you can solve: find it.

― George Pólya

Kolgan, V. P. “Application of the principle of minimum values of the derivative to the construction of finite-difference schemes for calculating discontinuous gasdynamics solutions.” TsAGI, Uchenye Zapiski 3, no. 6 (1972): 68-77.

Boris, Jay P., and David L. Book. “Flux-corrected transport. I. SHASTA, a fluid transport algorithm that works.” Journal of Computational Physics 11, no. 1 (1973): 38-69.

Van Leer, Bram. “Towards the ultimate conservative difference scheme. II. Monotonicity and conservation combined in a second-order scheme.” Journal of Computational Physics 14, no. 4 (1974): 361-370.

Van Leer, Bram. “Towards the ultimate conservative difference scheme. V. A second-order sequel to Godunov’s method.” Journal of Computational Physics 32, no. 1 (1979): 101-136.

Harten, Ami. “High resolution schemes for hyperbolic conservation laws.” Journal of Computational Physics 49, no. 3 (1983): 357-393.

Sweby, Peter K. “High resolution schemes using flux limiters for hyperbolic conservation laws.” SIAM Journal on Numerical Analysis 21, no. 5 (1984): 995-1011.

Harten, Ami, Bjorn Engquist, Stanley Osher, and Sukumar R. Chakravarthy. “Uniformly high order accurate essentially non-oscillatory schemes, III.” Journal of Computational Physics 71, no. 2 (1987): 231-303.

Harten, Ami, and Stanley Osher. “Uniformly high-order accurate nonoscillatory schemes. I.” SIAM Journal on Numerical Analysis 24, no. 2 (1987): 279-309.

Harten, Amiram, James M. Hyman, Peter D. Lax, and Barbara Keyfitz. “On finite-difference approximations and entropy conditions for shocks.” Communications on Pure and Applied Mathematics 29, no. 3 (1976): 297-322.

# 10 Better Things for Scientific Computing to focus on in 2018

What I cannot create, I do not understand.

– Richard Feynman

We are in deep danger of relying upon science and associated software we do not understand because we have so broadly stopped the active creation of knowledge. I open with one of my favorite quotes by the great physicist Richard Feynman, who also wrote about Cargo Cult Science (https://en.wikipedia.org/wiki/Cargo_cult_science). It is a bold, but warranted assertion to note that much of our science work today is taking on the character of Cargo Cult Science. We are not all the way there, but we have moved a long way toward taking on all of the characteristics of this pathology. In this assertion money is the “cargo” that pseudo-scientific processes are chasing. It is no exaggeration to say that getting funding for science has replaced the conduct and value of that science today. This is broadly true, and particularly true in scientific computing, where getting something funded has replaced funding what is needed or wise. The truth of the benefit of pursuing computer power above all else is decided upon a priori. The belief was that this sort of program could “make it rain” and produce funding because this sort of marketing had in the past. All results in the program must bow to this maxim and support its premise. All evidence to the contrary is rejected because it is politically incorrect and threatens the attainment of the cargo, the funding, the money. A large part of this utterly rotten core of modern science is the ascendency of the science manager as the apex of the enterprise. The accomplished scientist and expert is now merely a useful and necessary detail; the manager reigns as the peak of achievement.

The first principle is that you must not fool yourself — and you are the easiest person to fool.

We’ve learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature’s phenomena will agree or they’ll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven’t tried to be very careful in this kind of work. And it’s this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science.

– Richard Feynman

If one looks at the scientific computing landscape today, one sees a single force for progress: the creation of a new, more powerful supercomputer that is much faster than anything we have today. The United States, Europe and China are all pursuing this path for advancing scientific computing. It is a continuation of a path we have pursued for the last 25 years, but our future is not remotely like the last 25 years. This approach to progress can be explained simply and marketed to the naïve and untechnical. This works because our National leadership is increasingly naïve, witless and obsessively anti-intellectual, lacking any technical sophistication. We are in the midst of a tide of low-information leadership who are swayed by sweet-sounding bullshit far more easily than hard-nosed facts.

The farther backward you can look, the farther forward you are likely to see.

― Winston S. Churchill

In this putrid environment, faster computers seem an obvious benefit to science. They are a benefit and a pathway to progress; this is utterly undeniable. Unfortunately, it is an expensive and inefficient path to progress, and an incredibly bad investment in comparison to the alternatives. The numerous problems with the exascale program are subtle, nuanced, highly technical and pathological. As I’ve pointed out before, the modern age is no place for subtlety or nuance; we live in an age of brutish simplicity where bullshit reigns and facts are optional. In such an age, exascale is an exemplar: it is a brutally simple approach tailor-made for the ignorant and witless. If one is willing to cast away the cloak of ignorance and embrace subtlety and nuance, a host of investments can be described that would benefit scientific computing vastly more than the current program. If we followed a better balance of research, computing could contribute to science far more greatly and scale far greater heights than the current path provides.

Applications that matter to something big would create a great deal of this focus naturally. The demands of doing something real and consequential would breed a necessity to focus progress in an organic way. Last week I opined that such big things are simply not present today in science or society’s broader narrative. Society is doing nothing big or aspirational or challenging to drive progress forward with genuine purpose. To be more pointed, the push for exascale is not big at all; it is rather an exemplar of the lack of vision and consequence. There is a bit of a chicken-and-egg argument to all this. The bottom line is a general lack of underlying and defining purpose to our efforts in computing. Exascale is what we do when we want to market something as “feeling” big, when it is actually doing something small and inconsequential.

Those who do not move, do not notice their chains.

― Rosa Luxemburg

How can I say such a thing?

In a nutshell, computing speed is one of the least efficient and least effective ways to improve computational science. It has only been an enabler because computing speed came for free with Moore’s law for most of the last half century. That free lunch is over and past, yet we willfully ignore this reality (http://herbsutter.com/welcome-to-the-jungle/). Even with Moore’s law fully in effect, it was never the leading contributor to progress; progress was paced by numerical methods and algorithmic scaling. Moreover, computing speed cannot fix modeling that is wrong (methods and algorithms don’t fix this either). If a model is wrong, the wrong answer is simply computed much faster. Of course, we know that every model is wrong, and the utility of any model is determined via V&V. Issues associated with the use of computing, naïve code users, the loss of expertise, and understanding are simply overlooked, or worse yet made more intractable due to inattention.

Each of these advances has been mentioned before in the guise of a full blog post, but it is useful to put things together to see the wealth of unused opportunity.

80% of results come from 20% of effort/time

― Vilfredo Pareto

1. Modernizing modeling ought to be a constant and consistent emphasis in science. Computational science is no different. For some reason, the modeling advances have simply stopped. Our basic models of reality are increasingly fixed and immutable, and ever less fit for future purpose. The models of reality have become embedded in computer codes, and ultimately central to the code’s structure in numerous respects. As such we start to embed a framework for modeling whose foundation becomes invariant. We can’t change the model without developing an entirely different code. We reduce our modeling to submodels and closure of existing models while staying within a fundamental modeling framework. This is another area where progress is phenomenally risky to approach and substantially prone to failures and misguided efforts. Without the failure, the ability to learn and produce new and improved models is virtually impossible. https://wjrider.wordpress.com/2015/02/02/why-havent-models-of-reality-changed-more/, https://wjrider.wordpress.com/2015/07/03/modeling-issues-for-exascale-computation/, https://wjrider.wordpress.com/2017/07/07/good-validation-practices-are-our-greatest-opportunity-to-advance-modeling-and-simulation/
2. Modernizing methods is not happening. Since methods are one of the best ways to improve the efficient and effective solution of models, progress is harmed in a manner that cannot be easily recovered by other means. Usually when a model is decided upon, a method is used to solve the model numerically. The numerical method is only slightly less code specific and invariant than the model itself. By virtue of this character, the basic numerical method for a model becomes indistinguishable from the code. If we preserve the code base, we preserve old methods, which means no progress. We are stuck using relatively low-order methods with crude stability mechanisms. The ability to use high-order methods with enhanced accuracy and efficiency is not advancing. The research in numerical methods and the practical application of numerical methods are becoming increasingly divorced from one another. The gap has grown into a chasm, and numerical methods research is losing relevance. Part of the problem is related to the standards of success, where methods research allows success to be found on easier problems rather than keeping the problem difficulty fixed. This is yet another place where the inability to accept failure as a necessary element (or even fuel) for success is fatal. https://wjrider.wordpress.com/2016/06/14/an-essential-foundation-for-progress/, https://wjrider.wordpress.com/2016/07/25/a-more-robust-less-fragile-stability-for-numerical-methods/
3. Algorithmic scaling is the most incredible thing we could achieve in terms of computational performance. The ability to change the scaling exponent on how much work it takes to solve a problem can have a magical impact. Linear algebra is the poster child for this effect. A breakthrough in scaling can make the impossible problem possible, and even routine to solve. The classical naïve scaling for matrix inversion has the work scaling with the cube of the problem size. Even small problems quickly become utterly intractable, and almost no amount of computer power can fix this. Change the scaling to quadratic and new problems suddenly become routine; change the scaling to linear and the problems that can be tackled routinely were unimaginable before. We are stuck at linear, although some fields are starting to see sublinear algorithms. Could these breakthroughs be more common and useful? If they could, the impact on computational science would easily overwhelm the capacity of exascale. Today we aren’t even trying to make these advances. In my view, such work is generically risky and prone to failure, and failure is something that has become intolerable, thus success is sacrificed. https://wjrider.wordpress.com/2015/05/29/focusing-on-the-right-scaling-is-essential/
4. Today supercomputing is completely at odds with the commercial industry. After decades of first pacing advances in computing hardware, then riding along with increases in computing power, supercomputing has become separate. The separation occurred when Moore’s law died at the chip level (in about 2007). The supercomputing world has become increasingly desperate to continue the free lunch, and tied to an outdated model for delivering results. Basically, supercomputing is still tied to the mainframe model of computing that died in the business World long ago. Supercomputing has failed to embrace modern computing with its pervasive and multiscale nature moving all the way from mobile to cloud. https://wjrider.wordpress.com/2017/12/15/scientific-computings-future-is-mobile-adaptive-flexible-and-small/
5. Verification & validation – If the scientific computing efforts are to be real scientific endeavors, V&V is essential. Computational modeling is still modeling, and comparison with experiment is the gold standard for modeling, but with computational work the comparison has numerous technical details needing serious attention. In a very complete way V&V is the scientific method in action within the context of modeling and simulation. This energizes a top-to-bottom integration of scientific activities and essential feedback up and down this chain. The process produces actionable evidence of how progress is being made and where the bottlenecks to progress exist. The entirety of the V&V work provides a deep technical discourse on the breadth of computational science. The whole of computational science can be improved by its proper application. By weakly supporting V&V, current efforts are cutting themselves off from the integration of the full scientific enterprise and its impact on the use of computation scientifically. https://wjrider.wordpress.com/2016/12/22/verification-and-validation-with-uncertainty-quantification-is-the-scientific-method/
6. Expansive uncertainty quantification – too many uncertainties are ignored rather than considered and addressed. Uncertainty is a big part of V&V, a genuinely hot topic in computational circles, and practiced quite incompletely. Many view uncertainty quantification as only a small set of activities that address only a small piece of the uncertainty question. Too much benefit is achieved by simply ignoring a real uncertainty because the value of zero that is implicitly assumed is not challenged. This is exacerbated significantly by a half-funded and deemphasized V&V effort in scientific computing. Significant progress was made several decades ago, but the signs now point to regression. The result of this often willful ignorance is a lessening of the impact of computing and a limiting of its true benefits. https://wjrider.wordpress.com/2016/04/22/the-default-uncertainty-is-always-zero/
7. Data integration and analysis – one of the latest hot topics is big data and data analysis. The internet and sensors are creating massive amounts of data, and its use is a huge technical problem. The big data issue is looking for significant and actionable understanding from the oceans of data. A related and perhaps more difficult problem is small data, where there isn’t enough data, or not enough of the data you want. Lots of science and engineering is data limited to a degree that scientific understanding is limited. Modeling and simulation offers a vehicle to augment this data and fill in the gaps. Doing this in a manner that is credible will be a huge challenge. The ways forward with credibility use V&V and intensive uncertainty quantification. The proper use of codes and the role of calibration also become critical to success. https://wjrider.wordpress.com/2016/07/10/10-big-things-for-the-future-of-computational-science/
8. Multidisciplinary, multiscale science – one of the hot topics a quarter century ago was better multiphysics methods to replace the pervasive use of operator splitting in complex codes. This effort has utterly failed. We have made very little progress forward. Part of the issue is the inability to produce computational algorithms that are efficient enough to compete. A fully coupled method ends up being so expensive that any accuracy increases from the improved coupling are rendered ineffective. A second and perhaps more powerful reason for the lack of progress is the computer codes. Old computer codes are still being used, and most of them use operator splitting. Back in the 1990’s a big deal was made regarding replacing legacy codes with new codes. The codes developed then are still in use, and no one is replacing them. The methods in these old codes are still being used, and now we are told that the codes need to be preserved. The codes, the models, the methods and the algorithms all come along for the ride. We end up having no practical route to advancing the methods. https://wjrider.wordpress.com/2016/09/16/is-coupled-or-unsplit-always-better-than-operator-split/
9. Complete code refresh – we have produced, and now we are maintaining, a new generation of legacy codes. A code is a repository for vast stores of knowledge in modeling, numerical methods, algorithms, computer science and problem solving. When we fail to replace codes, we fail to replace knowledge. The knowledge comes directly from those who write the code and create the ability to solve useful problems with that code. Much of the methodology for problem solving is complex and problem specific. Ultimately a useful code becomes something that many people are deeply invested in. In addition, the people who originally write the code move on, taking their expertise, history and knowledge with them. The code becomes an artifact for this knowledge, but it is also a deeply imperfect reflection of the knowledge. The code usually contains some techniques that are magical and unexplained. These magic bits of code are often essential for success. If they get changed, the code ceases to be useful. The result of this process is a deep loss of expertise and knowledge that arises from the process of creating a code that can solve real problems. If a legacy code continues to be used, it also acts to block progress on all the things it contains, starting with the model and its fundamental assumptions. As a result, progress stops because even when there are research advances, they have no practical outlet. This is where we are today. https://wjrider.wordpress.com/2015/10/30/preserve-the-code-base-is-an-awful-reason-for-anything/, https://wjrider.wordpress.com/2016/01/01/are-we-really-modernizing-our-codes/, https://wjrider.wordpress.com/2016/01/14/a-response-to-criticism-are-we-modernizing-our-codes/, https://wjrider.wordpress.com/2014/03/20/legacy-code-is-terrible-in-more-ways-than-advertised/
10. Democratization of expertise – the manner in which codes are applied has a very large impact on solutions. The overall process is often called a workflow, encapsulating activities starting with problem conception, meshing, modeling choices, code input, code execution, data analysis, visualization. One of the problems that has arisen is the use of codes by non-experts. Increasingly code users are simply not sophisticated and treat codes like black boxes. Many refer to this as the democratization of the simulation capability, which is generally beneficial. On the other hand, we increasingly see calculations conducted by novices who are generally ignorant of vast swaths of the underlying science. This characteristic is keenly related to a lack of V&V focus and loose standards of acceptance for calculations. Calibration is becoming more prevalent again, and distinctions between calibration and validation are vanishing anew. The creation of broadly available simulation tools must be coupled to first rate practices and appropriate professional education. In both of these veins the current trends are completely in the wrong direction. V&V practices are in decline and recession. Professional education is systematically getting worse as the educational mission of universities is attacked, and diminished along with the role of elites in society. https://wjrider.wordpress.com/2016/12/02/we-are-ignoring-the-greatest-needs-opportunities-for-improving-computational-science/
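
The algorithmic-scaling argument in item 3 is easy to make concrete with a little arithmetic. The sketch below is illustrative only: real algorithms have different constants, and the normalization point is an arbitrary choice of mine. It shows how the work to solve a problem grows under cubic, quadratic and linear scaling:

```python
def relative_work(n, exponent, n0=1000):
    """Work to solve a problem of size n under a given scaling exponent,
    normalized so a problem of size n0 costs one unit of work."""
    return (n / n0) ** exponent

# a thousand-fold larger problem costs a billion times more work at
# cubic scaling, but only a thousand times more at linear scaling
for n in (10_000, 100_000, 1_000_000):
    print(f"n={n:>9,}: cubic {relative_work(n, 3):.0e}x, "
          f"quadratic {relative_work(n, 2):.0e}x, "
          f"linear {relative_work(n, 1):.0e}x")
```

No conceivable increase in raw machine speed closes a factor-of-a-million gap like that; only the change in exponent does, which is the point of the item above.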

One of the key aspects of this discussion is recognizing that these activities are all present to some small degree in exascale, but all of them are subcritical. The program basically starves all of these valuable activities and only supports them in a fashion that creates a “zombie-like” existence. As a result, the program is turning its back on a host of valuable avenues for progress that could make an exascale computer actually far more useful. Our present path has genuine utility, but represents an immense opportunity cost if you factor in what could have been accomplished instead with better leadership, vision and technical sophistication. The way we approach science more broadly is permeated with these inefficiencies, meaning our increasingly paltry investments in science are further undermined by our pathetic execution. At the deepest level our broader societal problems revolving around trust, expertise, scandal and taste for failure may doom any project unless they are addressed. For example, the issues related to the preservation of code bases (i.e., creating new legacy codes) are creating deep problems with advancing on the essential fronts of modeling, methods and algorithms. Everything is woven together into a tapestry whose couplings cannot be ignored. This is exactly the sort of subtlety and nuance our current time finds utterly incomprehensible.

Postscript:

It is sometimes an appropriate response to reality to go insane.

― Philip K. Dick

Healey’s First Law Of Holes: When in one, stop digging.

― Denis Healey

Last week I tried to envision a better path forward for scientific computing. Unfortunately, a true better path flows invariably through a better path for science itself and the Nation as a whole. Ultimately scientific computing, and science more broadly, is dependent on the health of society in the broadest sense. It also depends on leadership and courage, two other attributes we are lacking in almost every respect. Our society is not well; the problems we are confronting are deep, and perhaps the most serious crisis since the Civil War. I believe that historians will look back on 2016-2018, and perhaps longer, as the darkest period in American history since the Civil War. We can’t build anything great when the Nation is tearing itself apart. I hope and pray that it will be resolved before we plunge deeper into the abyss we find ourselves in. We see the forces opposed to knowledge, progress and reason emboldened and running amok. The Nation is presently moving backward and embracing a deeply disturbing and abhorrent philosophy. In such an environment science cannot flourish, it can only survive. We all hope the darkness will lift and we can again move forward toward a better future; one with purpose and meaning, where science can be a force for the betterment of society as a whole.

Everything passes, but nothing entirely goes away.

― Jenny Diski