Scientific Computing’s Future Is Mobile, Adaptive, Flexible and Small

Without deviation from the norm, progress is not possible.

― Frank Zappa

There is something seriously off about working in scientific computing today. Once upon a time it felt like working in the future, where the technology and the work were amazingly advanced and forward-looking. Over the past decade this feeling has changed dramatically. Working in scientific computing is starting to feel worn-out, old and backwards. It has lost a lot of its sheen and it’s no longer sexy and fresh. If I look back ten years, everything we had then was top of the line and right at the “bleeding” edge. Now we seem to be living in the past; the current advances driving computing are absent from our work lives. We are slaving away in a totally reactive mode. Scientific computing is staid, immobile and static, where modern computing is dynamic, mobile and adaptive. If I want to step into the modern world, I now have to leave work. Work is a glimpse into the past instead of a window to the future. It is not simply the technology, but the management systems that come along with our approach. We are being left behind, and our leadership seems oblivious to the problem.

For most of the history of computing in the 20th and into the 21st Century, scientific computing was at the forefront of technology. That is starting to change. Even today scientific computing remains exotic in terms of hardware and some aspects of software, but it also feels antiquated. We get to use cutting-edge computer chips and networking hardware that demand we live on the ragged edge technologically. This is only half the story. We also remain firmly entrenched in the “mainframe” era with corporate computing divisions that seem more “Mad Men” and less “Star Trek” than ever. The gap between the computers we use to execute our leading-edge scientific investigations and the computing in our offices or our personal lives is widening at warp speed. It has become hopelessly ironic in many ways. Worse than ironic, the current state of things is unhealthy and lessens the impact of scientific computing on today’s World.

Even worse than the irony is the price this approach is exacting on scientific computing. For example, the computing industry used to beat a path to scientific computing’s door, and now we have to basically bribe the industry to pay attention to us. A fair accounting of the role of government in computing today is some combination of niche market and pork barrel spending. Scientific computing used to be a driving force in the industry, and now lies in a cul-de-sac, or even a pocket universe, divorced from the day-to-day reality of computing. Scientific computing is now a tiny and unimportant market to an industry that dominates the modern World. In the process, scientific computing has allowed itself to become disconnected from modernity, and hopelessly imbalanced. Rather than leverage the modern World and its technological wonders, many of which are grounded in information science, it resists and fails to make the best use of the opportunity. This robs scientific computing of impact in the broader World, and diminishes the draw of new talent to the field.

It would be great to elaborate on the nature of the opportunities, and the cost of the present imbalances. If one looks at the modern computing industry and its ascension to the top of the economic food chain, two things come to mind: mobile computing – cell phones – and the Internet. Mobile computing made connectivity and access ubiquitous with massive penetration into our lives. Networks and apps began to create new social connections in the real world and lubricated communications between people in a myriad of ways. The Internet became a huge information repository, a platform for commerce, and an engine of social connection. In short order, the adoption and use of the Internet and computing in the broader human World overtook and surpassed the use by scientists and business. Where once scientists used and knew computers better than anyone, now the World is full of people for whom computing is far more important than it is for science. Science was once in the lead, and now it is behind. Worse yet, science is not adapting to this new reality.

Those who do not move, do not notice their chains.

― Rosa Luxemburg

The core of the problem with scientific computing is its failure to adapt and take advantage of the opportunity defined by this ascendency of computing. A core of science’s issue with computing is the lost sense that computers are merely a tool. Computers are a tool that may be used to do science. Instead of following this maxim, we simply focus on the antiquated model of scientific computing firmly grounded in the mainframe era. Our mindset has not evolved with the rest of the World. One of the clear consequences of this mindset is a creeping degree of gluttony and intellectual laziness with high performance computing. All problems reduce to simply creating faster computers and making problems submit to the raw power of virtually limitless computations. We have lost sight of how inefficient this approach is. A renewed focus on issues of modeling, methods and algorithms could be deeply enlivened by the constraints imposed by limited computing resources. Moreover, the benefits of solving problems more efficiently with smaller computing resources would yield innumerable benefits in the setting of big iron. This could be achieved without the very real limitations of having big iron be the sole focus of our efforts.


Scientific computing could be arranged to leverage the technology that is advancing the World today. We could look at a mobile, adaptive platform for modeling, simulation and data analysis that harnessed the best of technology. We could move through the cloud using technology in an adaptive, multiscale manner. One of the biggest challenges is letting go of the power dynamic that drives thinking today. Scientific computing has been addicted to Moore’s law for too long. The current exascale push is symptomatic of this addiction. Like any addiction it is unhealthy and causes the subject to avoid real cures for their problem. We see progress as equivalent to raw power with a single computer. The huge stunt calculation as a vehicle for science is a manifestation of this addiction. Science is done with many calculations along with an adaptive examination of problems and mindful interrogation of results. Power can also be achieved through mobility, ubiquity and flexibility. The big iron we pursue has become tantamount to progress because it’s the only route we can envision. The problem is that technology and the arc of progress are working against us instead of with us. It is past time to change our vision of what the future can be. The future needs to be different by embracing a different technological path. On one hand, we won’t be swimming against the current of computing technology, but on the other hand we will need to invest in different solutions to make it work.

Flexibility is an art of creating way outs within the cul-de-sacs!
― Mehmet Murat ildan

Mobility is power, and it has made computing ubiquitous. When the broader computing industry embraced the death of Moore’s law, it switched its attention to cell phones. Instead of simply being phones, they became mobile computers and mobile extensions of the Internet. In doing so we unleashed a torrent of creativity and connection. All of a sudden, we saw computers enable the level of social connection that the Internet always had promised, but never delivered. The mobile computing revolution has reshaped the World in a decade. In the process, the mobile market overwhelmed the entire computing industry and created economic dominance on an unparalleled scale. The killer piece of technology was the iPhone. It combined a focus on user interface along with software that enabled everything. We also need to recognize that each phone is more powerful than the fastest computer in the World 25 years ago. We have tremendous power at our fingertips.

One of the really clear messages of the recent era in computing is a change in the nature of value and power. For a long time, power was measured by hardware gains in speed, memory and capability, but now application innovation and flexibility rule. Hardware is largely a fixed and slowly changing commodity and represents a level playing field. The software in the applications and the user interface are far more important. Algorithms that direct information and attention are dominating the success in computing. Providing the basis of connection and adaptation to the needs of the users has become the medium for creating new markets. At the same time these algorithms have come under fire for how they manipulate people and data. These mobile computers have become a massive issue for society as a whole. We are creating brand new social problems and side-effects we need to effectively solve. The impact of this revolution in computing on society as a whole has been incredible.

A whole cadre of experts is fading from the field of play in computing. In taking the tack of focusing on mainframe computing, scientific computing is sidelining itself. Instead of this enormously talented group of people playing in the area that means the most to society, they are focused on a cul-de-sac grounded in old and outdated models of success. Our society would benefit by engaging these experts in making mobile computing more effective in delivering value in new innovative ways. We could be contributing to solving some of the greatest problems facing us rather than seeing our computing as a special niche serving a relatively small segment of society’s needs. In the past, scientific computing has provided innovative and dynamic solutions that ultimately made their way into general computing. A perfect example is Google. The problem that Google solved is firmly grounded in scientific computing and applied mathematics. It is easy to see how massive the impact of this solution is. Today we in scientific computing are getting further and further from relevance to society. This niche does scientific computing little good because it is swimming against a tide that is more like a tsunami. The result is a horribly expensive and marginally effective effort that will fail needlessly where it has the potential to provide phenomenal value.

You never change things by fighting the existing reality.

To change something, build a new model that makes the existing model obsolete.

― R. Buckminster Fuller

We are long past the time to make a change in scientific computing’s direction and strategy. Almost everywhere else the mainframe era died decades ago. Why is scientific computing tied to this model? Why are scientists resisting conclusions so nakedly obvious? In today’s risk-averse environment making a change to the underlying model of this branch of science is virtually impossible. Even when the change is dramatically needed and overdue by years, the resistance is strong. The status quo is safe and firmly entrenched. In a time when success can be simply asserted and largely manufactured, this unacceptable state of affairs will persist far longer than it should. Sooner or later someone will take the plunge, and success will follow them. They will have the winds of progress at their backs, easily solving most of the problems that we throw billions of dollars at with meager success.

The measure of intelligence is the ability to change.

― Albert Einstein

 


What’s going wrong and why

If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.

― Albert Einstein

A few years ago, I was challenged to examine the behavior of voids in continuum hydrocodes. A senior colleague suggested looking at problems that might allow us to understand how the absence of material would be treated in a code. The simplest version of this problem is the expansion of a gas into a void. With an ideal gas this problem has an exact solution that can be found with an exact Riemann solver. In the process, we have discovered that these problems are not solved well by existing methods. We approximate the void with a very low density and pressure material, and we have found that as the material approaches an actual void, the solutions seem to become non-convergent, and prone to other significant numerical difficulties. Even when using extremely refined meshes with many thousands of cells in one dimension, convergence is not observed for a broad class of methods. These methods have solved many difficult problems and we believe them to be robust and reliable. These problems persist for all methods tested including our fail-safe methods (e.g., first-order Godunov).
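
For reference, the limiting case of an ideal gas expanding into a true vacuum has a closed-form solution: a single rarefaction fan whose front moves at 2c_L/(γ−1). Below is a minimal sketch of that exact solution, assuming a left state at rest and standard textbook gas dynamics; the function name and default values are my own illustrative choices, not taken from any particular code.

```python
import numpy as np

def vacuum_expansion_exact(x, t, rho_L=1.0, p_L=1.0, gamma=1.4, x0=0.0):
    """Exact ideal-gas solution for a gas (left, at rest) expanding into vacuum (right).

    The solution is a single rarefaction fan: head at x0 - c_L*t and the
    vacuum front (tail) at x0 + 2*c_L/(gamma - 1)*t.  Requires t > 0.
    """
    c_L = np.sqrt(gamma * p_L / rho_L)            # left-state sound speed
    xi = (np.asarray(x, dtype=float) - x0) / t    # similarity variable x/t
    u = np.zeros_like(xi)
    rho = np.full_like(xi, rho_L)
    p = np.full_like(xi, p_L)

    fan = (xi > -c_L) & (xi < 2.0 * c_L / (gamma - 1.0))
    u[fan] = 2.0 / (gamma + 1.0) * (c_L + xi[fan])
    c = c_L - 0.5 * (gamma - 1.0) * u[fan]        # sound speed inside the fan
    rho[fan] = rho_L * (c / c_L) ** (2.0 / (gamma - 1.0))
    p[fan] = p_L * (rho[fan] / rho_L) ** gamma

    vac = xi >= 2.0 * c_L / (gamma - 1.0)         # true vacuum region
    rho[vac], p[vac], u[vac] = 0.0, 0.0, 0.0      # velocity undefined in vacuum; set to 0
    return rho, u, p

# The maximum velocity in the exact solution is the vacuum-front speed,
# u_max = 2*c_L/(gamma - 1), the value the numerical peak should approach.
```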

What is going on?

I’ll just say in passing that this post is a bit of a work-in-progress conversation with myself (or from myself to you). My hope is that it will shake loose my thinking. It is patterned on the observation that sometimes you can solve a problem by carefully explaining it to someone else.

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

― Abraham H. Maslow


One of the difficulties of this problem is the seemingly bad behavior coming from our most reliable and robust methods. When we want a guaranteed good solution to a problem, we unleash a first-order Godunov method on it, and if we use an exact Riemann solver we can expect the solution to be convergent. The results we see with void seemingly violate this principle. We are getting terrible solutions in a seemingly systematic manner. To make matters worse, the first-order Godunov method is the basis and fallback position for the more important second- or third-order methods we practically want to use. We can conclude that this problem is exposing some rather serious problems with our workhorse methods and the potential for wholesale weakness in our capability.
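
To make the discussion concrete, here is a minimal first-order Godunov sketch for the 1D Euler equations. It uses an HLL approximate Riemann solver rather than the exact solver discussed above (purely for brevity), simple outflow boundaries, and makes no claim of robustness for the extreme jumps being probed here; it illustrates the method class, not the code behind these results.

```python
import numpy as np

gamma = 1.4  # ratio of specific heats for an ideal gas

def prim_to_cons(rho, u, p):
    """Primitive (rho, u, p) to conserved (rho, rho*u, E) variables."""
    E = p / (gamma - 1.0) + 0.5 * rho * u**2
    return np.array([rho, rho * u, E])

def flux(U):
    """Euler fluxes for a single conserved state U = (rho, rho*u, E)."""
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def hll_flux(UL, UR):
    """HLL approximate Riemann flux (a stand-in for the exact solver in the text)."""
    rL, uL = UL[0], UL[1] / UL[0]
    rR, uR = UR[0], UR[1] / UR[0]
    pL = (gamma - 1.0) * (UL[2] - 0.5 * rL * uL**2)
    pR = (gamma - 1.0) * (UR[2] - 0.5 * rR * uR**2)
    cL, cR = np.sqrt(gamma * pL / rL), np.sqrt(gamma * pR / rR)
    sL = min(uL - cL, uR - cR)          # crude wave-speed estimates
    sR = max(uL + cL, uR + cR)
    if sL >= 0.0:
        return flux(UL)
    if sR <= 0.0:
        return flux(UR)
    return (sR * flux(UL) - sL * flux(UR) + sL * sR * (UR - UL)) / (sR - sL)

def godunov_step(U, dx, cfl=0.5):
    """One first-order Godunov update on a (3, N) array with outflow boundaries."""
    rho, u = U[0], U[1] / U[0]
    p = (gamma - 1.0) * (U[2] - 0.5 * rho * u**2)
    dt = cfl * dx / np.max(np.abs(u) + np.sqrt(gamma * p / rho))
    Uext = np.hstack([U[:, :1], U, U[:, -1:]])            # ghost cells
    F = np.array([hll_flux(Uext[:, i], Uext[:, i + 1])
                  for i in range(Uext.shape[1] - 1)]).T   # one flux per interface
    return U - dt / dx * (F[:, 1:] - F[:, :-1]), dt
```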

There are no facts, only interpretations.

― Friedrich Nietzsche

First-order Godunov with 1000 cells. Plotting the maximum velocity over time shows the convergence for 100:1 and 1000:1 jumps. The velocity peaks and decays to the correct solution.

Let’s look at what happens for the approximate-void problem. We approximate the void with a gas whose density and pressure are twelve orders of magnitude smaller than the “real” material. This problem has a solution that almost gives the expansion-into-vacuum solution to the Euler equations (where the head of the rarefaction and the contact discontinuity collapse into a single structure that separates material from nothing). The problem is dominated by an enormous rarefaction that takes the density down by many orders of magnitude. What we see is a solution that appears to get worse and worse under mesh refinement. In other words, it diverges under mesh refinement. Actually, the behavior we see is a bit more complex than this. At very low resolutions, the solution is behind the exact solution, and as we refine the mesh, the solution catches up to and then passes the exact solution. Then as we add more and more mesh, the solution just gets worse and worse. This is not supposed to happen. This is a very bad thing that needs focused attention.

Comparing first-order, PLM and PPM solutions for the 1000:1 jump. The high-order methods converge much faster than the first-order method.

So maybe backing away from the extreme problem is worth doing. I ran a sequence of shock tube problems varying the jump in pressure and density starting at 10:1 and slowly going up to the extreme jump that approximates an expansion into void. The shock tube is a self-similar problem, meaning that we can swap time and space through a similarity transformation. Thus, the very early time evolution on a very fine grid is essentially the same as a late time solution on a very coarse grid. What I noticed is the same pattern over and over. More importantly, the problem gets worse and worse as the jumps get larger and larger. By examining the trend as the jumps become very large, we start to see the nature of our problem. As the jump becomes larger and larger, the solution converges more and more slowly. We can start to estimate the mesh resolution needed for a good result and we can see that the situation becomes almost hopeless in the limit. I believe the solution will eventually converge given enough mesh, but the size of the mesh needed to get a convergent solution becomes completely absurd.
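
The kind of sweep described here is easy to script. The sketch below reuses the helper functions from the Godunov sketch above, sets up a shock tube whose right state is smaller than the left by a chosen jump, and records the peak velocity as a crude convergence indicator; the specific meshes, final time and jumps are illustrative choices of mine, not the ones behind the figures.

```python
import numpy as np
# Reuses gamma, prim_to_cons and godunov_step from the Godunov sketch above.

def run_shock_tube(jump, ncells, t_end=0.1, xlen=1.0):
    """Run one near-void shock tube (left state = 1, right state = 1/jump)
    and return the maximum velocity in the final solution."""
    dx = xlen / ncells
    x = (np.arange(ncells) + 0.5) * dx
    rho = np.where(x < 0.5 * xlen, 1.0, 1.0 / jump)
    p = np.where(x < 0.5 * xlen, 1.0, 1.0 / jump)
    U = prim_to_cons(rho, np.zeros(ncells), p)
    t = 0.0
    while t < t_end:                      # last step may slightly overshoot t_end
        U, dt = godunov_step(U, dx)
        t += dt
    return np.max(U[1] / U[0])

# Sweep jumps and meshes; watch how slowly the peak velocity settles as the
# jump grows (the exact vacuum limit of the peak is 2*c_L/(gamma - 1)).
for jump in (1e1, 1e3, 1e6, 1e12):
    peaks = [run_shock_tube(jump, n) for n in (100, 200, 400)]
    print(f"jump {jump:8.0e}: peak velocities {peaks}")
```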

For the large jumps of a million to a trillion, convergence is lost at 1000 cells. The solution hasn’t even reached its peak value to decay toward the correct solution.

In summary, the problem with a factor of a million jump converges with a modestly unreasonable mesh. As the jump grows in size, the convergence requires a mesh that is prohibitive for any practical work. If we are going to accurately solve this class of problems some other approach is needed. To make things worse, even when the problem converges, the rate of convergence under mesh refinement is painfully slow, and incredibly expensive as a result.

Everywhere is walking distance if you have the time.

― Steven Wright

The second issue we see is a persistent velocity glitch at the head of the rarefaction. It is fair to say that the glitch has heretofore been viewed as a cosmetic problem. This velocity peak looks like a meaningfully wrong solution to the equations locally. It produces a shock-like solution in the sense that it violates Lax’s entropy condition: the characteristics locally converge in a shock-like manner inside a rarefaction, where the characteristics should diverge locally. We might expect that this problem would hurt the physically meaningful solution. Not altogether surprisingly, the solution can also violate the second law of thermodynamics when using higher than first-order methods. Moreover, this character simply gets worse and worse as the problem gets closer to a void. A reasonable supposition is that this feature in the numerical solution is a symptom of difficulties in rarefactions. Usually this feature can be categorized as a nuisance and a relatively small contributor to error, but it may be a sign of something deeper. Perhaps this nuisance becomes a significant issue as the rarefaction becomes stronger, and ultimately dominates the numerical character of the solution. We might be well-served by removing it from the solution. One notion we might add to the treatment of the glitch is its diminishing size as the mesh is refined. Having this anomalous shock-like character allows dissipation to damp the spike and improve the solution. The counterpoint to this fix is not creating the glitch in the first place.
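
One way to see the glitch quantitatively is to check the entropy condition directly: inside the rarefaction, the characteristic speed of the associated family should increase monotonically in space, so any local decrease flags shock-like convergence. A sketch of that diagnostic, assuming the rarefaction belongs to the u − c family (the relevant family depends on how the problem is oriented), might look like this:

```python
import numpy as np

def entropy_glitch_flags(x, rho, u, p, gamma=1.4, fan=None):
    """Flag cells where the u - c characteristics locally converge.

    Inside a rarefaction of the u - c family the characteristic speed should
    increase monotonically with x; a local decrease is the shock-like,
    entropy-condition-violating behavior the velocity glitch exhibits.
    `fan` is an optional boolean mask restricting the test to the fan region.
    """
    c = np.sqrt(gamma * p / rho)
    lam = u - c                                  # characteristic speed
    dlam = np.diff(lam) / np.diff(x)             # one-sided slope of the speed
    converging = dlam < 0.0                      # compression in this family
    if fan is not None:
        converging &= fan[:-1] & fan[1:]
    return converging
```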

For the jump of 100 million we get convergence with 2000 and 4000 cells. This also shows that the curves are quite close to self-similar. In addition, the slow convergence is evident in the behavior.

At this point it’s useful to back away from the immediate problem to a broader philosophical point. The shock capturing methods are naturally focused on computing shocks. Shock waves were a big challenge for numerical methods. They remain a large challenge, and failure to treat them effectively can be fatal for a calculation. If a shock wave is not treated with care, the numerical solution can fail catastrophically, or be significantly damaged. Even when the results are not catastrophic, poor treatment of a shock can result in significant corruption of the solution that often spreads from the shock to other areas in the solution. For this reason, the shock wave and its numerical treatment has been an enduring focus of numerical methods for compressible flows. Conversely, rarefactions have largely been an afterthought. Rarefactions are benign smooth structures that do not directly threaten a calculation. A few bad things can happen in rarefactions, but they are rarely fatal to the calculation. A few have been so cosmetically problematic that major effort has ensued (the rarefaction shock). Problems in rarefactions are generally just a nuisance, and only become a focal point when the details of the solution are examined. One aspect of the details is the convergence character of the solution. Shock tube problems are rarely subjected to a full convergence analysis. The problem we focus on here is dominated by a rarefaction, thus magnifying any problems immensely. What we can conclude is that strong rarefactions are not computed with high fidelity.

The trick to forgetting the big picture is to look at everything close up.

― Chuck Palahniuk

One of the key ways of dealing with shock waves is upwind methods. A clear manner of treating these waves and getting an upwind solution is the use of a discontinuous basis to define the spatial discretization. This discontinuous basis is also used with high-order methods, and the first-order solution becomes the fallback position for the methods. This approach is very well suited to computing shocks; a discontinuous approximation for a discontinuous phenomenon. By the same token, a discontinuous basis is not well suited for a continuous phenomenon like a rarefaction. One hypothesis to explore is using different types of approximation for problems where the rarefaction dominates the solution. We may find that we can solve this class of problem far more efficiently with a continuous basis, getting asymptotically convergent solutions far sooner. What we observe is an ever slower approach to convergent behavior in the code. For this class of problems we see a consistent pattern: the solution starts out under-resolved and the velocity rises, it then overshoots the correct analytical result, and then slowly decays toward the correct solution. As the rarefaction becomes stronger and stronger, the mesh resolution needed to capture the full rise and reach the peak overshoot value becomes finer and finer. Ultimately, the mesh required to get a solution that converges becomes absurdly refined.

If this proposition is indeed correct, it implies that we need to define a hybrid approach where the basis is adaptively chosen. At discontinuous structures, we want to choose discontinuous approximations, and at continuous structures we want continuous approximations. This is almost obvious, but carrying this out in practice is difficult. Clearly the current adaptive approaches are not working well enough, as evidenced by the painful and absurd degree of mesh needed to get a reasonable solution. It would seem that the answer to this problem lies in developing a new method capable of solving extreme rarefactions on reasonable meshes. We need to have methods that can solve strong, but continuous waves with higher fidelity. In all reality, these methods might still need to compute shocks, albeit less effectively than methods using a discontinuous basis. The bottom line from attacking a challenging problem like this is the demonstration that our methods today are not sufficient to all our challenges.

Creativity consists of coming up with many ideas, not just that one great idea.

― Charles Thompson

Is the code part of the model?

Yes.

Of course, it’s not really that simple, but yes, the code is part of the model. If it isn’t, one has to provide a substantial burden of proof.

We have no idea about the ‘real’ nature of things … The function of modeling is to arrive at descriptions which are useful.

– Richard Bandler and John Grinder

Ideally, it should not be, but proving that ideal is a very high bar that is almost never met. A great deal of compelling evidence is needed to support an assertion that the code is not part of the model. The real difficulty is that the more complex the modeling problem is, the more the code is definitely and irreducibly part of the model. These complex models are the most important uses of modeling and simulation. The complex models of engineered things, or important physical systems, have many submodels each essential to successful modeling. The code is often designed quite specifically to model a class of problems. The code then becomes a clear part of the definition of the problem. Even in the simplest cases, the code includes the recipe for the numerical solution of a model. This numerical solution leaves its fingerprints all over the solution of the model. The numerical solution is imperfect and contains errors that influence the solution. For a code, there is the mesh and geometric description plus boundary conditions, not to mention the various modeling options employed. Removing the specific details of the implementation of the model in the code from consideration as part of the model becomes increasingly intractable.

The word model is used as a noun, adjective, and verb, and in each instance it has a slightly different connotation. As a noun “model” is a representation in the sense in which an architect constructs a small-scale model of a building or a physicist a large-scale model of an atom. As an adjective “model” implies a degree of perfection or idealization, as in reference to a model home, a model student, or a model husband. As a verb “to model” means to demonstrate, to reveal, to show what a thing is like.

– Russell L. Ackoff

The word model itself is deeply problematic. Model is one of those words that can mean many different things whether it’s used as a noun or a verb (I’ll note in passing that, much like the curse word “fuck,” it is so flexible as to be wonderful and confusing all at once). Its application in a scientific and engineering context is common and pervasive. As such, we need to inject some precision into how it is being used. For this reason, some discourage the use of “model” in discussion. On the other hand, models and modeling are so central to the conduct of science and engineering that they should be dealt with head on. They aren’t going away. We model our reality when we want to make sure we understand it. We engage in modeling when we have something in the Real World we want to demonstrate an understanding of. Sometimes this is for the purpose of understanding, but ultimately this gives way to manipulation, the essence of engineering. The Real World is complex and effective models are usually immune to analytical solution.

Essentially, all models are wrong, but some are useful.

– George E. P. Box, Norman R. Draper

You view the world from within a model.

― Nassim Nicholas Taleb

Computational science comes to the rescue, and opens the doors to solving these complex models via numerical approximations. It is a marvelous advance, but it brings new challenges because the solutions are imperfect. This adds a new layer of imperfection to modeling. We already should recognize that models are generically approximate versions of reality (i.e., wrong), and necessarily imperfect mathematical representations of the Real World. Solving this imperfect model, imperfectly, via an approximate method makes the modeling issue even more fraught. Invariably for any model with complexity, the numerical solution of the model and its detailed description are implemented in computer code, or “a computer code”. The details and correctness of the implementation become inseparable from the model itself. It becomes quite difficult to extract the model as any sort of pure mathematical construct; the code is part of it intimately.

Evidence of the model’s nature and correctness is produced in the basic conduct of verification and validation with uncertainty quantification. Doing a full accounting of the credibility of modeling, including the pedigree of the model, will not help to exclude the code from the model; it simply defines the extent of this connection. Properly speaking, the code is always part of the model, but the extent or magnitude of its impact can be small, or even considered minor or negligible. This evidence is contained within the full assessment of the predictive quality of the simulation including a quantitative assessment. Among these activities verification is the most important for the question at hand. Do we have evidence that the mathematical model desired is correctly solved? Do we have evidence that the numerical errors in the solution are small? Can all the aspects of the model be well described by clearly articulated mathematics?
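
A concrete piece of such verification evidence is the observed order of accuracy: errors measured against an exact or manufactured solution on systematically refined grids should decrease at the rate the method’s theory promises. A minimal sketch, with illustrative numbers rather than results from any particular code:

```python
import numpy as np

def observed_order(errors, refinement_ratio=2.0):
    """Observed order of accuracy from errors on successively refined grids.

    errors: discretization errors measured against an exact (or manufactured)
    solution, ordered coarse to fine, with a constant refinement ratio.
    Code verification checks that these observed orders approach the
    theoretical order of the method.
    """
    errors = np.asarray(errors, dtype=float)
    return np.log(errors[:-1] / errors[1:]) / np.log(refinement_ratio)

# Illustrative numbers: errors shrinking by roughly 4x per refinement
# indicate a second-order method (observed orders near 2).
print(observed_order([1.0e-2, 2.6e-3, 6.4e-4]))
```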

Any physical theory is always provisional, in the sense that it is only a hypothesis: you can never prove it. No matter how many times the results of experiments agree with some theory, you can never be sure that the next time the result will not contradict the theory.

― Stephen Hawking

A model is not the operating system for the universe. Reality is not determined by these mathematical abstractions; the mathematics is designed to describe what we observe. As such, the models are always flawed and imperfect representations to some level. Determining the flaws and the quantitative level of imperfection is difficult work requiring detailed verification and validation. A model is an abstraction and representation of the processes we believe produce observable physical effects. We theorize that the model explains how these effects are produced. Some models are not remotely this high-minded; they are nothing but crude empirical engines for reproducing what we observe. Unfortunately, as phenomena become more complex, these crude models become increasingly essential to modeling. They may not play a central role in the modeling, but still provide necessary physical effects for utility. The submodels necessary to produce realistic simulations become ever more prone to include these crude empirical engines as problems enter the engineering realm. As the reality of interest becomes more complicated, the modeling becomes elaborate and complex, a deep chain of efforts to grapple with these details.

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.

― Arthur Conan Doyle

Validation of a model occurs when we take the results of solving the model and compare them directly with observations from the Real World. A key aspect of the validation exercise is characterizing the uncertainty in both the observations and the model. When all this assessment is in hand, we can render a judgment of whether the model represents the observed reality well enough for the purposes we intend. This use is defined by a question we want to answer with the modeling. The answer needs to have a certain fidelity and certainty that provides the notion of precision to the exercise. The certainty of the observations defines a degree of agreement that can be demanded. The model’s uncertainties define the model’s precision, but they include the impact of numerical approximation. The numerical uncertainty needs to be accounted for to isolate the model. This uncertainty defines the level of approximation in the solution to the model, and a deviation from the mathematical idealization the model represents. In actual practice, we see a stunning lack of this essential step in the validation work presented. Another big part of validation is recognizing the subtle differences between calibrated results and predictive simulation. Again, calibration is rarely elaborated in validation to the degree that it should be.
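
In the spirit of common validation practice (loosely following the kind of bookkeeping found in ASME-style V&V guidance, with made-up numbers and my own function names), the accounting can be as simple as comparing the model-versus-experiment discrepancy against the combined numerical, input and experimental uncertainties:

```python
import numpy as np

def validation_check(sim, u_num, u_input, exp, u_exp):
    """Crude validation bookkeeping: comparison error versus combined uncertainty.

    sim, exp: simulation and experimental values of the quantity of interest.
    u_num, u_input, u_exp: numerical, input/parameter, and experimental
    uncertainties, treated as independent one-standard-deviation estimates.
    A comparison error much larger than the combined uncertainty points to
    model error rather than numerics or measurement.
    """
    E = sim - exp
    u_val = np.sqrt(u_num**2 + u_input**2 + u_exp**2)
    return E, u_val

# Purely illustrative numbers.
E, u_val = validation_check(sim=2.41, u_num=0.03, u_input=0.05, exp=2.30, u_exp=0.04)
print(f"comparison error {E:+.2f} vs validation uncertainty {u_val:.2f}")
```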

We should always expect the model to deviate from observations to some degree. If we are capable of producing more accurate observations of reality, we can more accurately determine how wrong the model is. In a sense, we can view this as a competitive race. If our model is quite precise, we are challenged in being able to observe nature well enough to expose its innate flaws. Conversely, if we can observe nature with extreme precision, we can define the model’s imperfections clearly. Progress can be made by using this tension to push one or the other. The modeling uncertainty is compounded by approximate numerical solution implemented in a computer code (including the correctness of the code). Verification and validation activities are a systematic manner to collect evidence so that the comparison can be made in a complete and compelling manner.

Computer codes serve two very important roles in modeling: they contain the model, including the geometry, the boundary conditions, and a host of ancillary models for complex situations, and they solve the model numerically. Both of these characteristics are essential in the conduct of modeling, but numerical solutions are far more subtle and complex. Many people using codes for modeling do not have a background sufficient to understand the subtleties of numerical methods and their impact on solutions. Moreover, the fiction that numerical methods and codes are so reliable that detailed understanding is not essential persists and grows. Our high performance computing programs work to fuel this fiction. The most obvious aspects of the numerical solution are the meshing and the time integration, with the error proportional to these details. Evidence of correctness and of the error characteristics is produced through verification. In addition, most advanced codes solve linear and nonlinear equations in an iterative manner. Iterative solutions have a finite tolerance, which can impact solutions. This is particularly true for nonlinear equation solvers, where the error tolerance that can be achieved by some popular solvers is extremely loose. This looseness can produce significant physical effects in solutions. Most verification work does not examine these aspects closely although it should. Again, the code and its capabilities and methods are extremely important, if not essential, to the model produced. In many cases fantastic modeling work is polluted by naïve numerical methods; a wonderful model can be wiped out by a terrible code.
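
The impact of a loose iteration tolerance is easy to demonstrate on a toy problem. In the sketch below (my own contrived example, not drawn from any production code), a backward-Euler step for a simple nonlinear decay equation is solved with Newton iteration; with a loose residual tolerance the iteration eventually accepts the unchanged state and the decay stalls, a distinctly physical-looking error produced purely by the solver.

```python
def implicit_step(y_old, dt, tol):
    """One backward-Euler step for dy/dt = -y**3, solved by Newton iteration
    stopped at a residual tolerance `tol`."""
    y = y_old
    for _ in range(50):
        r = y - y_old + dt * y**3            # nonlinear residual
        if abs(r) < tol:
            break
        y -= r / (1.0 + 3.0 * dt * y**2)     # Newton update
    return y

def integrate(tol, y0=1.0, dt=0.1, nsteps=100):
    y = y0
    for _ in range(nsteps):
        y = implicit_step(y, dt, tol)
    return y

# With the loose tolerance the decay eventually stalls entirely: once the
# initial residual already passes the test, no Newton update is ever taken.
print(integrate(tol=1e-2), integrate(tol=1e-12))
```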

You’ve baked a really lovely cake, but then you’ve used dog shit for frosting.

― Steve Jobs

So, when can we exclude the code? The big thing to focus on in this question is verification evidence. Code verification is necessary to be confident that the mathematical model intended is provably present in the code. It asks whether the mathematical abstraction that the model is based on is correctly solved by the code. Code verification can be completely satisfactory and successful, and the code can still be important. Code verification does not say that the numerical error is small; it says that the numerical error is ordered and the model equations desired to be solved are indeed solved. The second half of verification, solution (calculation) verification, determines the errors in solving the model. The question is how large (or small) the numerical errors in the solution of the model are. Ultimately, these errors are a strong function of the discretization and solver used in the code. The question of whether the code matters comes down to asking if another code used skillfully would produce a significantly different result. This is rarely, if ever, the case. To make matters worse, verification evidence tends to be flimsy and half-assed. Even if we could make this call and ignore the code, we rarely have evidence that this is a valid and defensible decision.
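
For solution verification, where no exact solution exists, the standard tool is Richardson extrapolation: compute a quantity of interest on three systematically refined grids, estimate the observed order, and from it an error estimate for the finest grid. A minimal sketch with illustrative numbers:

```python
import numpy as np

def richardson_estimate(f_coarse, f_medium, f_fine, r=2.0):
    """Solution verification from three systematically refined grids.

    Returns the observed order, a Richardson-extrapolated value, and an error
    estimate for the fine-grid solution (constant refinement ratio r assumed).
    """
    p = np.log(abs(f_medium - f_coarse) / abs(f_fine - f_medium)) / np.log(r)
    f_extrap = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    err_fine = abs(f_extrap - f_fine)
    return p, f_extrap, err_fine

# Illustrative values only: a quantity of interest from 100, 200 and 400 cells.
print(richardson_estimate(0.912, 0.947, 0.9575))
```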

Truth can only be found in one place: the code.

― Robert C. Martin

In closing, the code IS part of the model unless evidence can be found otherwise. This can happen more easily where the model is simple. In general, the exclusion of the code is an ideal that cannot be reached. As models become complex detaching the model from the code becomes nearly intractable, and indefensible. Evidence will almost invariably point to the code being an important contributor to the model’s picture of reality.

For the scientist a model is also a way in which the human thought processes can be amplified. This method often takes the form of models that can be programmed into computers. At no point, however, does the scientist intend to lose control of the situation because the computer does some of his thinking for him. The scientist controls the basic assumptions and the computer only derives some of the more complicated implications.

– C. West Churchman

 

 

11 Things in Computational Science that Sound Awesome, but are Actually Terrible

From the bad things and bad people, you learn the right way and right direction towards the successful life.

― Ehsan Sehgal

Computational science is an extremely powerful set of disciplines for conducting scientific investigations. The end result of computational science is usually grounded in the physical sciences and engineering, but it depends on a chain of expertise spanning much of modern science. Doing computational science well completely depends on all of these disparate disciplines working in concert. A big area of focus these days is the supercomputers being used. The predicate for acquiring these immensely expensive machines is the improvement in scientific and engineering product arising from their use. While this should be true, getting across this finish line requires a huge chain of activities to be done correctly.

Let’s take a look at all the things we need to do right. Computer engineering and computer science are closest to the machines needed for computational science. These disciplines make these exotic computers accessible and useful for domain science and engineering. A big piece of this work is computer programming and software engineering. The computer program is a way of expressing mathematics in a form the computer can operate on. Efficient and correct computer programs are a difficult endeavor all by themselves. Mathematics is the language of physics and engineering and essential for the conduct of computing. Mathematics is a middle layer of work between the computer and its practical utility. It is a deeply troubling and ironic trend that applied mathematics is disappearing from computational science. As the bridge between the computer and its practical use, it forms the basis for conducting and believing the computed results. Instead of being an area of increased focus, the applied math is disappearing into either the maw of computer programming or domain science/engineering. It is being lost as a separate contributor. Finally, we have the end result in science and engineering. Quite often we lose sight of computers and computing as mere tools that must follow specific rules for quality, reliable results. Too often the computer is treated like it is a magic wand.

Another common thread of horribleness is the increasing tendency for science and engineering to be marketed. The press release has given way to the tweet, but the sentiment is the same. Science is marketed for the masses who have no taste for the details necessary for high quality work. A deep problem is that this lack of focus and detail is creeping back into science itself. Aspects of scientific and engineering work that used to be utterly essential are becoming increasingly optional. Much of this essential intellectual labor is associated with the hidden aspects of the investigation: things related to mathematics, checking for correctness, assessment of error, preceding work, various doubts about results and alternative means of investigation. This sort of deep work has been crowded out by flashy graphics, movies and undisciplined demonstrations of vast computing power.

Some of the terrible things we discuss here are simply bad science and engineering. These terrible things would be awful with or without a computer being involved. Other things come from a lack of understanding of how to add computing to an investigation in a quality focused manner. The failure to recognize the multidisciplinary nature of computational science is often at the root of many of the awful things I will now describe.

Fake is the new real, You gotta keep a lot a shit to yourself.

― Genereux Philip

Without further ado, here are some terrible things to look out for. Every single item on the list will be accompanied by a link to a full blog post expanding on the topic.

  1. If you follow high performance computing online (institutional sites, Facebook, Twitter) you might believe that the biggest calculations on the fastest computers are the very best science. You are sold that these massive calculations have the greatest impact on the bottom line. This is absolutely not the case. These calculations are usually one-off demonstrations with little or no technical value. Almost everything of enduring value happens on the computers being used by the rank and file to do the daily work of science and engineering. These press release calculations are simply marketing. They almost never have the pedigree or hard-nosed quality work necessary for good science and engineering. – https://wjrider.wordpress.com/2016/11/17/a-single-massive-calculation-isnt-science-its-a-tech-demo/, https://wjrider.wordpress.com/2017/02/10/it-is-high-time-to-envision-a-better-hpc-future/
  2. The second thing you come across is the notion that a calculation with a larger, finer mesh is better than one with a coarser mesh. In the naïve pedestrian analysis, this would seem to be utterly axiomatic. The truth is that computational modeling is an assembly of many things all working in concert. This is another example of proof by brute force. In the best circumstances this would hold, but most modeling is hardly taking place under the best conditions. The proposition is that the fine mesh allows one to include all sorts of geometric details, so the computational world looks more like reality. This is taken as true a priori. What isn’t usually discussed is where the challenge in modeling actually lies. Is geometric detail driving uncertainty? What is the biggest challenge, and is the modeling focused there? – https://wjrider.wordpress.com/2017/07/21/the-foundations-of-verification-solution-verification/, https://wjrider.wordpress.com/2017/03/03/you-want-quality-you-cant-handle-the-quality/, https://wjrider.wordpress.com/2014/04/04/unrecognized-bias-can-govern-modeling-simulation-quality/
  3. In concert with these two horrible trends, you often see results presented as the product of a single massive calculation that magically unveils the mysteries of the universe. This is computing as a magic wand, and has very little to do with science or engineering. This simply does not happen. Real science and engineering takes hundreds or thousands of calculations to happen. There is an immense amount of background work needed to create high quality results. A great deal of modeling is associated with bounding uncertainty or bounding the knowledge we possess. A single calculation is incapable of this sort of rigor and focus. If you see a single massive calculation as the sole evidence of work, you should smell and call “bullshit”. – https://wjrider.wordpress.com/2016/11/17/a-single-massive-calculation-isnt-science-its-a-tech-demo/
  4. One of the key elements in modern computing is the complete avoidance of discussing how the equations in the code are being solved. The notion is that this detail has no importance. On the one hand, this is evidence of progress; our methods for solving equations are pretty damn good. The methods and the code itself are still immensely important details, and constitute part of the effective model. There seems to be a mentality that the methods and codes are so good that this sort of thing can be ignored. All one needs is a sufficiently fine mesh, and the results are pristine. This is almost always false. What this almost willful ignorance shows is a lack of sophistication. The methods are immensely important to the results, and we are a very long way from being able to apply the sort of ignorance of this detail that is rampant. The powers that be want you to believe that the method disappears from importance because the computers are so fast. Don’t fall for it. – https://wjrider.wordpress.com/2017/05/19/we-need-better-theory-and-understanding-of-numerical-errors/, https://wjrider.wordpress.com/2017/05/12/numerical-approximation-is-subtle-and-we-dont-do-subtle/
  5. The George Box maxim about models being wrong, but useful, is essential to keep in mind. This maxim is almost uniformly ignored in the high-performance computing bullshit machine. The politically correct view is that the super-fast computers will solve the models so accurately that we can stop doing experiments. The truth is that eventually, if we are doing everything correctly, the models will be solved with great accuracy and their incorrectness will be made evident. I strongly expect that we are already there in many cases; the models are being solved too accurately and the real answer to our challenges is building new models. Model building as an enterprise is being systematically disregarded in favor of chasing faster computers. We need far greater balance and focus on building better models worthy of the computers they are being solved on. We need to build the models that are needed for better science and engineering befitting the work we need to do. –https://wjrider.wordpress.com/2017/09/01/if-you-dont-know-uncertainty-bounding-is-the-first-step-to-estimating-it/
  6. Calculational error bars are an endangered species. We never see them in practice even though we know how to compute them. They should simply be a routine element of modern computing. They are almost never demanded by anyone, and their lack never precludes publication. It certainly never precludes a calculation being promoted as marketing for computing. If I were cynically minded, I might even say that error bars, when used, work against marketing the calculation. The implicit message in the computing marketing is that the calculations are so accurate that they are basically exact, no error at all. If you don’t see error bars or some explicit discussion of uncertainty you should see the calculation as flawed, and potentially simply bullshit. – https://wjrider.wordpress.com/2017/07/07/good-validation-practices-are-our-greatest-opportunity-to-advance-modeling-and-simulation/, https://wjrider.wordpress.com/2017/09/22/testing-the-limits-of-our-knowledge/, https://wjrider.wordpress.com/2017/04/06/validation-is-much-more-than-uncertainty-quantification/
  7. One way for a calculation to seem really super valuable is to declare that it is direct numerical simulation (DNS). Sometimes this is an utterly valid designator. The other term that follows DNS is “first principles”. Each of these terms seeks to endow the calculation with legitimacy that it may or may not deserve. One of the biggest problems with DNS is the general lack of evidence for quality and legitimacy. There is a broad spectrum of the technical World that seems to be OK with treating DNS as equivalent to (or even better than) experiments. This is tremendously dangerous to the scientific process. DNS and first-principles calculations are still based on solving a model, and models are always wrong. This doesn’t say that DNS isn’t useful, but this utility needs to be proven and bounded by uncertainty. – https://wjrider.wordpress.com/2017/11/02/how-to-properly-use-direct-numerical-simulations-dns/
  8. Most press releases are rather naked in the implicit assertion that the bigger computer gives a better answer. This is treated as being completely axiomatic. As such there is no evidence provided to underpin the assertion. Usually some colorful graphics or beautifully rendered movies accompany the calculation. Their coolness is all the proof we need. This is not science or engineering even though this mode of delivery dominates the narrative today. –https://wjrider.wordpress.com/2017/01/20/breaking-bad-priorities-intentions-and-responsibility-in-high-performance-computing/, https://wjrider.wordpress.com/2014/09/19/what-would-we-actually-do-with-an-exascale-computer/, https://wjrider.wordpress.com/2014/10/03/colorful-fluid-dynamics/
  9. Modeling is the use of mathematics to connect reality to theory and understanding. Mathematics is translated into methods and algorithms implemented in computer code. It is ironic that the mathematics that forms the bridge between the physical world and the computer is increasingly ignored by science. Applied mathematics has been a tremendous partner for physics, engineering and computing throughout the history of computational science. This partnership has waned in priority over the last thirty years. Less and less applied math is called upon; it is being replaced by computer programming or domain science and engineering. Our programs seem to think that the applied math part of the problem is basically done. Nothing could be further from the truth. – https://wjrider.wordpress.com/2014/10/16/what-is-the-point-of-applied-math/, https://wjrider.wordpress.com/2016/09/27/the-success-of-computing-depends-on-more-than-computers/
  10. A frequent way of describing a computation is to describe the mesh as defining the solution. Little else is given about the calculation, such as the equations being solved or how the equations are being approximated. Frequently, the fact that the solutions are approximated is left out. This fact is damaging to the accuracy narrative of massive computing. The designed message is that the massive computer is so powerful that the solution to the equations is effectively exact, and the equations themselves basically describe reality without error. All of this is in service of saying computing can replace experiments, or real-world observations. The entire narrative is anathema to science and engineering, doing each a great disservice. – https://wjrider.wordpress.com/2015/07/03/modeling-issues-for-exascale-computation/
  11. Computational science is often described in terms that are not consistent with the rest of science. We act like it is somehow different in a fundamental way. Computers are just tools for doing science, allowing us to solve models of reality far more generally than analytical methods can. With all of this power comes a lot of tedious detail needed to do things with quality. This quality comes from the skillful execution of the entire chain of activities described at the beginning of this Post. These details all need to be done right to get good results. One of the biggest problems in the current computing narrative is ignorance of the huge set of activities bridging a model of reality and the computer itself. The narrative wants to ignore all of this because it diminishes the sense that these computers are magical in their ability. The power isn’t magic, it is hard work, success is not a foregone conclusion, and everyone should ask for evidence, not take their word for it. – https://wjrider.wordpress.com/2016/12/22/verification-and-validation-with-uncertainty-quantification-is-the-scientific-method/

Taking the word of the marketing narrative is injurious to high quality science and engineering. The narrative seeks to defend the idea that buying these super expensive computers is worthwhile and magically produces great science and engineering, and that the path to advancing the impact of computational science flows dominantly through computing hardware. This is simply a deeply flawed and utterly naïve perspective. Great science and engineering is hard work and never a foregone conclusion. Getting high quality results depends on spanning the full range of disciplines associated with computational science adaptively as evidence and results demand. We should always ask hard questions of scientific work, and demand hard evidence of claims. Press releases and tweets are renowned for simply being cynical advertisements and lacking all rigor and substance.

One reason for elaborating upon things that are superficially great, but really terrible is cautionary. The current approach allows shitty work to be viewed as successful by receiving lots of attention. The bad habit of selling horrible low-quality work as success destroys progress and undermines accomplishing truly high-quality work. We all need to be able to recognize these horrors and strenuously reject them. If we start to effectively police ourselves perhaps this plague can be driven back, and progress can flourish.

The thing about chameleoning your way through life is that it gets to where nothing is real.

― John Green

 

 

The Piecewise Parabolic Method (PPM)

A method which can solve this problem well should be able to handle just about anything which can arise in one-dimensional pure hydrodynamic flow. PPM is such a scheme.

– P.R. Woodward

Colella, Phillip, and Paul R. Woodward. “The Piecewise Parabolic Method (PPM) for gas-dynamical simulations.” Journal of Computational Physics 54, no. 1 (1984): 174-201.

This is one of the most important methods in the early history of the revolutionary developments for solving hyperbolic PDEs in the 1980s. For a long time this was one of the best methods available to solve the Euler equations. It still outperforms most of the methods in common use today. For astrophysics, it is the method of choice, and it has also made major inroads into the weather and climate modeling communities. In spite of having over 4000 citations, I can’t help but think that this paper wasn’t as influential as it could have been. This is saying a lot, but I think this is completely true. This is partly due to its style and relative difficulty as a read. In other words, the paper is not as pedagogically effective as it could have been. The most complex and difficult to understand version of the method is presented in the paper. The paper could have used a different approach to great effect, perhaps providing a simplified version to introduce the reader and delivering the more complex approach as a specific instance. Nonetheless, the paper was a massive milestone in the field.

It was certainly clear that high-order schemes were not necessarily bringing greater accuracy so physics would have to step in to shore up the failing numerics.

– Jay Boris

Part of the problem with the paper is the concise and compact introduction to the two methods used in the accompanying review article, PPMLR and PPMDE. The LR stands for Lagrange-Remap, where the solution is solved on a Lagrangian grid and then remapped back to the original grid for an ultimately Eulerian solution. Both the Lagrangian and Eulerian grids are unevenly spaced, and this results in far more elaborate formulas. As a result it is hard to recognize the simpler core method lurking inside the pages of the paper. The DE stands for direct Eulerian, which can be very simple for the basic discretization. Unfortunately, the complication for the DE flavor of PPM comes with the Riemann solver, which is far more complex in the Eulerian frame. The Lagrangian-frame Riemann solver is very simple and easy to evaluate numerically. Not so for the Eulerian version, which has many special cases and requires some exceedingly complex evaluations of the analytical structure of the Riemann solution. Advances that occurred later greatly simplified and clarified this presentation. This is a specific difficulty of being an early adopter of methods: the clarity of presentation and understanding is dimmed by purely narrative effects. Many of these shortcomings have been addressed in the recent literature discussed below.

The development of the PPM gas dynamics scheme grew out of earlier work in the mid 1970s with Bram van Leer on the MUSCL scheme. The work of Godunov inspired essential aspects of MUSCL.

– Paul R. Woodward

The paper had a host of interesting and important sub-techniques for solving hyperbolic PDEs. Many of these “bells” and “whistles” are not part of the repertoire for most methods today. The field actually suffers to some extent from not adopting most of these strategies for attacking difficult problems. It is useful to list the special approaches along with a description and context that might make them easier to adopt more broadly (https://wjrider.wordpress.com/2016/06/14/an-essential-foundation-for-progress/, https://wjrider.wordpress.com/2017/06/30/tricks-of-the-trade-making-a-method-robust/, https://wjrider.wordpress.com/2016/08/08/the-benefits-of-using-primitive-variables/). The paper is written in such a way that these algorithms seem specifically tailored to PPM, but they are far broader in utility. Generalizing their use more broadly would serve the quality of numerical solutions immensely. To a large extent Phil Colella extended many of these techniques to piecewise linear methods that form the standard approach in production codes today.

  • Shock flattening – Shocks are known to be horrifically nonlinear and difficult, both forgiving and brutal. This technique acknowledges the issue by blending in a bit of a safe first-order method with the nonlinearly adaptive high-order method when a strong shock is encountered. The principle is to use a bit more first-order dissipation where the shock is strong because oscillations can otherwise escape; for weak shocks this is unnecessary. Rather than penalize the solution everywhere, the method is made locally more dissipative where the danger is greatest.
  • Contact steepening – contact discontinuities will smear out without limit if dissipation is applied to them. In other words, errors made in their solution are with you forever. To keep this from happening, the amount of dissipation applied at these waves is minimized. This sort of technique must be applied with great caution because at a shock wave this is exceedingly dangerous. Additionally, the method to limit the dissipation can produce a very good interface tracking method that is far simpler than the elaborate methodology using interface geometry. It is a useful pragmatic way to move interfaces with little dissipation along with relative simplicity. This basic approach is the actual interface tracking method in many production codes today although few use methods as elaborate or as high quality as that used in the original PPM.
  • Extra dissipation – Monotonicity preservation and Riemann solvers are two elaborate ways of producing dissipation while achieving high quality. For very nonlinear problems this is not enough. The paper describes several ways of adding a little bit more; one of these is the shock flattening, and another is an artificial viscosity. Rather than use the classical Von Neumann-Richtmyer approach (whose role is really played by the Riemann solver), they add a small amount of viscosity using a technique developed by Lapidus appropriate for conservation-form solvers. There are other techniques such as grid-jiggling that only really work with PPMLR and may not have any broader utility. Nonetheless, aspects of the thought process may still be useful.
  • High-order edges – One of PPM’s greatest virtues is the use of formally higher-order principles in the method. Classic PPM uses fourth-order approximations for its edge values. As a result, as the Courant number goes to zero, the method becomes formally fourth-order accurate. This is a really powerful aspect of the method, and one of the clear points where it can be generalized: we can use whatever high-order edge value we like for PPM (a sketch of the classic edge value appears after the limiter discussion below). One of the maxims to take from this approach is the power of including very high-order discretizations even within otherwise lower-order approximation methods. The impact of the high order is profoundly positive.
  • Steepened edge values – For horribly nonlinear problems, the simple use of high-order differencing is not advisable. The high-order approximation can be decomposed into several pieces, and the approximation can then be rebuilt more carefully and appropriately for complex problems. In this way, the high-order edge values are somewhat hierarchical. This is partly elaboration, but it also reflects a commitment to quality that is eminently laudable.

Generalized Monotonicity – PPM uses a parabola, and as a result the well-known limiters for linear profiles do not by themselves produce monotone results. The limiter for PPM therefore takes two steps instead of the single step needed for a linear profile. I don’t like the original presentation in the paper and recast the limiter into an equivalent algorithm that uses two applications of the median function per edge. The first step makes sure the edge value being used is bounded by the cell averages adjacent to it. The second step asks whether the parabola is monotone in the cell and, if it is not, replaces it with one that is monotone by construction (https://wjrider.wordpress.com/2016/06/07/the-marvelous-magical-median/, https://wjrider.wordpress.com/2016/06/22/a-path-to-better-limiters/, https://wjrider.wordpress.com/2015/08/06/a-simple-general-purpose-limiter/, https://wjrider.wordpress.com/2014/01/11/practical-nonlinear-stability-considerations/, https://wjrider.wordpress.com/2015/08/07/edge-or-face-values-are-the-path-to-method-variety-and-performance/).
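To make the last two items concrete, here is a minimal Python sketch of my reading of this approach: the classic fourth-order edge value followed by two median applications per edge. It is an illustration under my own naming and indexing conventions, not the authors’ code, and boundary handling is omitted.

import numpy as np

def median3(a, b, c):
    # Median of three values, the workhorse of the limiting steps.
    return sorted((a, b, c))[1]

def ppm_edge_values(a):
    # Classic fourth-order edge value from cell averages (uniform mesh, interior edges only):
    # a_{j+1/2} = 7/12 (a_j + a_{j+1}) - 1/12 (a_{j-1} + a_{j+2})
    return (7.0 / 12.0) * (a[1:-2] + a[2:-1]) - (1.0 / 12.0) * (a[:-3] + a[3:])

def limit_cell(a_m1, a_0, a_p1, a_L, a_R):
    # Two median applications per edge for a single cell with average a_0,
    # neighbor averages a_m1 and a_p1, and candidate edge values a_L and a_R.
    # Step 1: bound each edge value by the adjacent cell averages.
    a_L = median3(a_m1, a_L, a_0)
    a_R = median3(a_0, a_R, a_p1)
    # Step 2: if the parabola built from (a_L, a_0, a_R) would over- or undershoot
    # inside the cell, pull the offending edge back so the profile is monotone.
    a_L = median3(a_0, a_L, 3.0 * a_0 - 2.0 * a_R)
    a_R = median3(a_0, a_R, 3.0 * a_0 - 2.0 * a_L)
    return a_L, a_R

# Example: a local overshoot around cell averages (1, 2, 4) is pulled back.
print(limit_cell(1.0, 2.0, 4.0, 0.5, 5.0))

Written this way, swapping in a different high-order edge value, or relaxing the second step near smooth extrema, is a one-line change, which is exactly the sort of generalization discussed later in this post.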

Before launching into a systematic description of the PPM algorithm, it is worthwhile to first explain the goals and constraints that have influenced its design. These are:

  1. Directional operator splitting.
  2. Robustness for problems involving very strong shocks.
  3. Contact discontinuity steepening.
  4. Fundamental data in the form of cell averages only.
  5. Minimal dissipation.
  6. Numerical errors nevertheless dominated by dissipation, as opposed to dispersion.
  7. Preservation of signals, if possible, even if their shapes are modified, so long as they travel at roughly the right speeds.
  8. Minimal degradation of accuracy as the Courant number decreases toward 0.

– Paul R. Woodward

Over time PPM has mostly been interpreted as a monolithic method rather than as a set of basic principles. PPM is really a wonderful foundation, with the paper providing only a single instantiation of a panoply of powerful methods. This aspect has come to the fore more recently, but it would have served the community better far earlier. Some of these comments are the gift of 20/20 hindsight. A great deal of the pedagogical clarity with regard to Godunov-type methods is the result of its success, and only came into common use in the late 1980’s, if not the 1990’s. For example, the language to describe Riemann solvers with clarity and refinement hadn’t been developed by 1984. Nevertheless, the monolithic implementation of PPM has been a workhorse method for computational science. Through Paul Woodward’s efforts it is often the first real method to be applied to brand new supercomputers, and it generates the first scientific results of note on them.

The paper served as a companion to the adjacent paper that reviewed the performance of numerical methods for strong shocks. The review was as needed as it was controversial. The field of numerical methods for shock waves was set to explode into importance and creative energy. The authors Phil Colella and Paul Woodward would both play key roles in the coming revolution in methods. Woodward had already made a huge difference by spending time in Europe with Bram van Leer. Paul helped Bram implement advanced numerical methods using methodologies Paul had learned at the Livermore Labs. Bram exposed Paul to his revolutionary ideas about numerical methods chronicled in Bram’s famous series of papers (https://wjrider.wordpress.com/2014/01/11/designing-new-schemes-based-on-van-leers-ideas/, https://wjrider.wordpress.com/2014/01/06/van-leers-1977-paper-paper-iv-in-the-quest-for-the-ultimate/, https://wjrider.wordpress.com/2014/01/05/review-of-the-analysis-of-van-leers-six-schemes/). The ideas therein were immensely influential in changing how hyperbolic equations were solved.

One of the great successes in numerical methods for hyperbolic conservation laws has been the use of nonlinear hybridization techniques, known as limiters, to maintain positivity and monotonicity in the presence of discontinuities and underresolved gradients.

– Michael Sekora and Phil Colella

Bram’s ideas created a genuine successor to Godunov’s method. The methods he created were novel in producing a nonlinearly adaptive numerical method, one that adapts locally to the nature of the solution. This overcame the limitations of Godunov’s theorem regarding the accuracy of numerical methods for hyperbolic equations. Bram’s ideas were geometric in nature and reflected the approach of the physicist. Paul, being a physicist, gravitated to the same view and added a genuine dose of pragmatism. Bram also wasn’t the first person to overcome Godunov’s theorem; he may have actually been the third (or fourth). The first was most likely Jay Boris, who invented the flux-corrected transport (FCT) method in 1971. In addition, Kolgan in the Soviet Union and Ami Harten might lay claims to overcoming Godunov’s barrier theorem. Some of these different methods played a role in the comparison in the review article by Woodward and Colella. In the light of history, many of the differences in the results were due more to the approaches to systems of equations and related difficulties than to the nonlinearly adaptive principles in the methods.

The strong, fluid-dynamic shock problem had become the number one computational roadblock by the fall of 1970 so I was urged to concentrate on the problem full time, finally developing the FCT convection algorithm in the winter.

– Jay Boris

In totality, the methods developed by three or four men in the early 1970’s set the stage for revolutionary gains in method performance. At the time of the developments, the differences in the methods were fiercely debated and hotly contested. The reviews of the papers were contentious and resulted in bitter feelings. Looking back with the virtues of time and perspective, several things stand out. All the methods represented a quantum leap in performance and behavior over the methods available before them. The competition and ideas so hotly contested probably helped to spur developments, but ultimately became counter-productive as the field matured. It seems clear that the time was ripe for the breakthrough. There was a combination of computers, mathematics and applications that seeded the developments. For the same basic idea to arise independently in a short period of time means the ideas were dangling just out of reach. The foundations for the breakthrough were common and well-known.

Paul Woodward is an astrophysicist, and PPM found its most common and greatest use in his field. For a long time the nature of PPM’s description meant that the exact version of the method described in the canonical 1984 paper was the version used in other codes. Part of this results from PPM being a highly tuned, high-performance method with a delicate balance between high-resolution methodology and the various safety measures needed for difficult, highly nonlinear problems. In a manner of speaking it is a recipe that produces really great results. Imagine PPM as something akin to the Toll House chocolate chip cookie recipe. The cookies you get by following the package exactly are really, really good. At the same time, you can modify the recipe to produce something even better while staying true to the basic framework. The basic cookies will get you far, but with some modification you might just win contests or simply impress your friends. PPM is just like that.

At this point I’ve said quite little about the method itself. The core of the method is a parabolic representation of the solution locally in a cell. The method is one-dimensional in nature, applied direction by direction. The parabola is determined by the integral average in a cell and the point values of the quantity at the edges of the cell. What is not so widely appreciated is the connection of PPM to the fifth scheme in Van Leer’s 1977 paper. That method is interesting because it evolves both cell averages, like any finite volume code, and the point values at the cell boundary. It is compact and supremely accurate compared with other third-order methods. PPM is a way of getting some of the nice properties of this method from a finite volume scheme. Rather than evolve the point values on the edge, they are recovered from the finite volumes.
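For reference, the parabolic reconstruction in each cell can be written compactly; this is the standard form (my transcription, so check the paper for its exact notation), with \bar{a}_j the cell average and a_{L,j}, a_{R,j} the edge values:

a_j(\xi) = a_{L,j} + \xi\left[\Delta a_j + a_{6,j}(1-\xi)\right], \qquad \xi = \frac{x - x_{j-1/2}}{\Delta x_j} \in [0,1],

\Delta a_j = a_{R,j} - a_{L,j}, \qquad a_{6,j} = 6\left[\bar{a}_j - \tfrac{1}{2}\left(a_{L,j} + a_{R,j}\right)\right].

Integrating a_j(\xi) across the cell recovers \bar{a}_j exactly, which is what ties the parabola back to the finite volume data.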

Rather than belabor the technical details of PPM, I’ll point to the recent trends that have extended the method beyond its classical form. One of the original authors has used the parabola to represent valid extrema in the solution rather than damping them by forcing monotonicity. I did the same thing in my own work, largely paralleling Phil’s work. In addition, changing the high-order edge reconstruction has been recognized and implemented to good effect by Phil, Paul, myself and others. The connection to Riemann solvers has also been generalized. All of this reflects the true power of the method when projected onto the vast body of work that arose after the publication of this paper. Even today PPM remains one of the very best methods in existence, especially with the modifications recently introduced.

I’ve come to know both Phil and Paul personally and professionally. Both men have played a significant role in the numerical solution of hyperbolic PDEs and witnessed history being made; they helped make CFD what it is today. It’s always an interesting experience to read someone’s work and then come to know the person. A big part of a deeper appreciation is finding out the underlying truths of the paper. You start to realize that the written, published record is a poor reflection of the real story. Some of this comes through the hard work of reading and re-reading a paper, then deriving everything in it for yourself. A deeper appreciation came from expressing the same method in my own language and mathematics, and finally taking each of these expressions into conversations with the authors, who clarified most of the remaining questions. The academic literature is a scrubbed and largely white-washed reflection of reality. What we are allowed to read and see is not the truth, but an agreed upon distortion.

When the numerics fails, substitute the physics.

– Steve Zalesak

the scientists who use such algorithms must have both input to and knowledge of their design. There may come a day when we no longer hold to this view, when the design of such algorithms can be left to expert numerical analysts alone, but that day has not yet arrived.

– Steve Zalesak

Woodward, Paul, and Phillip Colella. “The numerical simulation of two-dimensional fluid flow with strong shocks.” Journal of Computational Physics 54, no. 1 (1984): 115-173.

Carpenter Jr, Richard L., Kelvin K. Droegemeier, Paul R. Woodward, and Carl E. Hane. “Application of the piecewise parabolic method (PPM) to meteorological modeling.” Monthly Weather Review 118, no. 3 (1990): 586-612.

Woodward, Paul R. “Piecewise-parabolic methods for astrophysical fluid dynamics.” In Astrophysical Radiation Hydrodynamics, pp. 245-326. Springer Netherlands, 1986.

Godunov, S. K. “A finite difference method for the computation of discontinuous solutions of the equations of fluid dynamics.” Sbornik: Mathematics 47, no. 8-9 (1959): 357-393.

Plewa, Tomasz, and Ewald Mueller. “The consistent multi-fluid advection method.” arXiv preprint astro-ph/9807241 (1998).

Van Leer, Bram. “Towards the ultimate conservative difference scheme. V. A second-order sequel to Godunov’s method.” Journal of Computational Physics 32, no. 1 (1979): 101-136.

Van Leer, Bram. “Towards the ultimate conservative difference scheme. IV. A new approach to numerical convection.” Journal of Computational Physics 23, no. 3 (1977): 276-299.

Bell, John B., Phillip Colella, and John A. Trangenstein. “Higher order Godunov methods for general systems of hyperbolic conservation laws.” Journal of Computational Physics 82, no. 2 (1989): 362-397.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit large eddy simulation: computing turbulent fluid dynamics. Cambridge University Press, 2007.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity- and extrema-preserving methods through adaptive nonlinear hybridizations.” Journal of Computational Physics 225, no. 2 (2007): 1827-1848.

Rider, William J. “Reconsidering remap methods.” International Journal for Numerical Methods in Fluids 76, no. 9 (2014): 587-610.

Kolgan, V. P. “Application of the principle of minimum values of the derivative to the construction of finite-difference schemes for calculating discontinuous gasdynamics solutions.” TsAGI, Uchenye Zapiski 3, no. 6 (1972): 68-77.

Boris, J. P. “A Fluid Transport Algorithm That Works.” Proceedings of the Seminar Course on Computing as a Language of Physics, 2-20 August 1971, International Centre for Theoretical Physics, Trieste, Italy.


We are all responsible for this mess; It is everyone’s fault

Never attribute to malevolence what is merely due to incompetence

― Arthur C. Clarke

A year ago, I sat in one of my managers’ offices seething in anger. After Trump’s election victory, my emotions shifted from despair to anger seamlessly. At that particular moment, it was anger that I felt. How could the United States possibly have elected this awful man President? Was the United States so completely broken that Donald Trump was a remotely plausible candidate, much less the victor?

Is ours a government of the people, by the people, for the people, or a kakistocracy rather, for the benefit of knaves at the cost of fools?

― Thomas Love Peacock

Apparently, the answer is yes, the United States is that broken. I said something to the effect that we too are to blame for this horrible moment in history. I knew that both of us had voted for Clinton, but felt that we played our own role in the election of our reigning moron-in-chief. Today, a year into this national nightmare, the nature of our actions leading to this unfolding national and global tragedy is taking shape. We have grown to accept outright incompetence in many things, and now we have a genuinely incompetent manager as President. Lots of incompetence is accepted daily without even blinking; I see it every single day. We have a system that increasingly renders the competent incompetent through brutish compliance with directives born of broad-based societal dysfunction.

In a hierarchy, every employee tends to rise to his level of incompetence.

― Laurence J. Peter

What does the “Peter Principle” say about the United States? The President is incompetent. Not just a little bit; he is utterly and completely unfit for the job he has. He is the living caricature of a leader, not actually one. His whole shtick is loudly and brashly sounding like what a large segment of the population thinks a leader should be. Under his leadership, our government has descended into the theatre of the absurd. He doesn’t remotely understand our system of government, economics, foreign policy, science, or really anything other than marketing himself. He is an utterly self-absorbed anti-intellectual completely lacking empathy and the basic knowledge we should expect him to have. The societal destruction wrought by this buffoon-in-chief is profound. Our most important institutions are being savaged. Divisions in society are being magnified and we stand on the brink of disaster. The worst thing is that this disaster is virtually everyone’s fault; whether you stand on the right or the left, you are to blame. The United States was in a weakened state and the Trump virus was poised to infect us. Our immune system was seriously compromised and failed to reject this harmful organism.

I love the poorly educated.

– Donald Trump

Sorry losers and haters, but my I.Q. is one of the highest -and you all know it! Please don’t feel so stupid or insecure, it’s not your fault.

– Donald Trump

Trump is making everything worse. One of the keys to understanding the damage being done to the United States is seeing the poor condition of our democracy prior to the election. A country doesn’t just lurch toward such a catastrophic decision overnight; we were already damaged. In a sense, the body politic was already weakened and ripe for infection. We have gone through a period of more than 20 years of massive dysfunction led by the dismantling of government as a force for good in society. The Republican party is committed to small government, and part of their approach is to attack it. Government is viewed as an absolute evil. Part of the impact of this is the loss of competence in governing. Any governmental incompetence supports their argument about the need to diminish it. The result has been a steady march toward dysfunction and poor performance along with deep-seated mistrust, if not outright disdain.

All of this stems from deeper wounds left in our history. The deepest wound is the Civil War and the original national sin of slavery. The perpetuation of institutional racism is one of the clearest forces driving our politics. We failed to heal the wounds of this war, and continue to wage a war against blacks, first through the scourge of Jim Crow laws, and now with the war on drugs and its mass incarceration. Our massive prison population is driven by our absurd and ineffective efforts to combat drug abuse. We actively avoid taking actions that would be effective in battling drug addiction. While it is a complete failure as a public health effort, it is a massively effective tool of racial oppression. More recent wounds were left by the combination of the Vietnam war and the Civil rights movement in the 1960’s along with Watergate and Nixon’s corruption. The Reagan revolution and the GOP attacks on the Clintons were their revenge for progress. In a very real way the country has been simmering in action and reaction for the last 50 years. Trump’s election was the culmination of this legacy and our inability to keep the past as history.

Government exists to protect us from each other. Where government has gone beyond its limits is in deciding to protect us from ourselves.

― Ronald Reagan

Part of the hardest aspect of accepting what is going on comes in understanding how Trump’s opposition led to his victory. The entire body politic is ailing. The Republican party is completely inept at leading, unable to govern. This shouldn’t come as any surprise; the entire philosophy of the right is that government is bad. When your a priori assumption is that government is inherently bad, the nature of your governance is half-hearted. A natural outgrowth of this philosophy is rampant incompetence in governance. Couple this to a natural tendency toward greed as a core value, and you have the seeds of corruption. Corruption and incompetence are an apt description of the Republican party. The second part of this toxic stew is hate and fear. The party has spent decades stoking racial and religious hatred, and using fear of crime and terrorism to build their base. The result is a governing coalition that cannot govern at all. They are utterly incompetent, and no one embodies their incompetence more than the current President.

There is a cult of ignorance in the United States, and there has always been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’

― Isaac Asimov

The Democrats are no better, other than some basic human capacity for empathy. For example, the Clintons were quite competent, but competence is something we as a nation no longer value, or even believe in. Americans chose the incompetent candidate for President over the competent one. At the same time the Democrats feed into the greedy and corrupt nature of modern governance with a fervor only exceeded by the Republicans. They are what my dad called “limousine liberals” and really cater to the rich and powerful first and foremost while appealing to some elements of compassion (it is still better than the “limousine douchebags” on the right). As a result the Democratic party ends up being only slightly less corrupt than the Republican party while offering none of the cultural red meat that drives the conservative culture warriors to the polls.

In individuals, insanity is rare; but in groups, parties, nations and epochs, it is the rule.

― Friedrich Nietzsche

The thing that sets the Democratic party back is a complete lack of unity or discipline. They are a fractious union of special interests that can barely tolerate one another. They cannot unify to help each other, and each faction is a single-issue group that can’t be bothered to form an effective coalition. The result is a party that is losing despite holding a majority of the votes. Many of the Democratic voters can’t be bothered to even vote. This losing coalition has let GOP-driven fear and hate win, along with a systematic attack on our core values as a democratic republic (vast sums of money in politics, voter rights, voter suppression, and gerrymandering). They are countered by a Republican party that is unified and supportive of its factions. The different factions work together to form a winning coalition in large part by accepting each other’s extreme views as part of their rubric of beliefs.

While both parties cater to the greedy needs of the rich and powerful, the difference between them is most clearly seen in their approach to social issues. The Republicans appeal to traditional values along with enough fear and hate to bring the voters out. They stand in the way of scary progress and the future as the guardians of the past. They are the force that defends American values, which means white people and Christian values. With the Republicans, you can be sure that the Nation will treat those we fear and hate with violence and righteous anger without regard to effectiveness. We will have a criminal justice system that exacts vengeance on the guilty, but does nothing to reform or treat criminals. The same forces provide just enough racially biased policy to make the racists in the Republican ranks happy.

The Democrats stand for a progressive and empathic future that is represented by many different groups, each with their own specific grievances. One of the biggest problems on both sides is intolerance. This might be expected on the right; after all, white supremacy is hardly a tolerant world view. The left helps the right out by being even less tolerant. The left’s factions cannot tolerate any dissent, on any topic. We hear endless whining about micro-aggressions and cultural appropriation along with demands for political correctness. They are indeed “snowflakes” who are incapable of debate and standing up for their beliefs. When they don’t like what someone has to say, they attack them and completely oppose their right to speak. The lack of tolerance on the left is one of the forces that powered Trump to the White House. It did this through a loss of any moral high ground, and the production of a divided and ineffective liberal movement. The left has science, progress, empathy and basic human decency on their side yet continues to lose. A big part of their losing strategy is the failure to support each other, and to engage in an active dialog on the issues they care so much about.

A dying culture invariably exhibits personal rudeness. Bad manners. Lack of consideration for others in minor matters. A loss of politeness, of gentle manners, is more significant than is a riot.

― Robert A. Heinlein

The biggest element in Trump’s ascension to the Presidency is our acceptance of incompetence in our leaders. We accept incompetence too easily; incompetence is promoted across society. We have lost the ability to value and reward expertise and competence. Part of this can be blamed on the current culture where marketing is more important than substance. Trump is pure marketing. His entire brand is himself, sold to people who have lost the ability to smell the con. A big part of the appeal of Trump was the incompetence of governing that permeates the Republican view.

This is where the incompetence and blame come to work. Success at work depends little on technical success because technical success can be faked. What has become essential at work is compliance with rules and control of our actions. Work is not managed; our compliance with rules is managed. Increasingly the incompetence of the government is breeding incompetence at my work. The government agency that primarily runs my Lab is a complete disaster. We have no leadership, either management or science. Both are wrought by the destructive tendency of the Republican party that makes governing impossible. They are a party of destruction, not creation. When Republicans are put in power they can’t do anything; their entire being is devoted to taking things apart. The Democrats are no better because of their devotion to compliance, regulation and compulsive rule following without thought. This tendency is paired with the liberals’ inability to tolerate any discussion or debate over a litany of politically correct talking points.

The management incompetence has been brewing for years. Our entire management construct is based on a lack of trust. The Lab itself is not to be trusted. The employees are not to be trusted. We are not trusted by the left or the right, albeit for different reasons. The net result of all of this lack of trust is competence being subservient to lack-of-trust-based compliance with oversight. We are made to comply and heel to the will of the government. This is the will of a government that is increasingly completely incompetent and unfit to run anything, much less a nuclear weapons enterprise! The management of the Lab is mostly there to launder money and drive the workforce into a state of compliance with all directives. The actual accomplishment of high quality technical work is the least important thing we do. Compliance is the main thing. We want to be managed to never ever fuck up, ever. If you are doing anything of real substance and performing at a high level, fuck ups are inevitable. The real key to the operation is the ability of technical competence to be faked. Our false confidence in the competent execution of our work is a localized harbinger of “fake news”.

Fox treats me well, it’s that Fox is the most accurate.

– Donald Trump

We have non-existent peer review and this leads to slack standards. Our agency tells us that we cannot fail (really, we effectively have to succeed 100% of the time). The way to not fail is to lower our standards, which we have done in response. We aid our lower standards by castrating the peer review we ought to depend on. We now have Labs that cannot stand to have an honest, critical peer review because of the consequences. In addition, we have adopted foolish financial incentives for executive management that compound the problems. Since executive bonuses are predicated on successful reviews, reviews have become laughable. Reviewers don’t dare raise difficult issues unless they never want to be invited back. We are now graded on a scale where everyone gets an “A” without regard to actual performance. Our excellence has become a local version of “fake news”.

At the very time that we need to raise our standards, we are allowing them to plummet lower and lower. Our reviews have become focused on spin and marketing of the work. Rather than show good work, provide challenges, and receive honest feedback, we form a message focused on “everything is great, and there is nothing to worry about”. Let’s be clear, the task of caring for nuclear weapons without testing them is incredibly challenging. To do this task correctly we need to be focused on raising our level of excellence across the board in science and engineering. Our technical standards should be higher than ever because of the difficulty and importance of this enterprise. Requiring 100% success might seem to be a way to do this, but it isn’t.

If you are succeeding 100% of the time, you are not applying yourself. When one is working at a place where you are mostly succeeding, but occasionally failing (and learning/growing), the outcomes are optimal. This is true in sports, business, science and engineering. Organizations are no different: to do the best work possible, you need to fail and be working on the edge of failure. Ideally, we should be doing our work in a mode where we succeed 70-80% of the time. Our incompetent governance and leadership do not understand how badly they are undermining the performance of this vital enterprise. So, the opposite has happened, and the people leading us in the government are too fucking stupid to realize it. Our national leadership has become more obsessed with appearances than substance. All they see is the 100% scores and they conclude everything is awesome while our technical superiority is crumbling. Greatness in America today is defined by declaring greatness and refusing to accept evidence to the contrary.

Look at the F-35 as an example of our current ability to execute a big program. This aircraft is a completely corrupt massive shit storm. It is a giant, hyper-expensive fuckup. Rather than a working aircraft, the F-35 was a delivery vehicle for pork barrel spending. God knows how much bullshitting went into the greenlighting of the program over the years. The bottom line is that the F-35 costs a huge amount of money while being a complete failure as a weapons system. My concern is that the F-35 is an excellent representative of our current technical capability. If it is, we are in deep trouble. We are expensive, corrupt and incompetent (sounds like a description of the President!). I’m very glad that we never ask our weapons labs to fly. Given our actual ability, we can guess the result.

This is the place where we get to the core of the ascent of Trump. When we lower our standards on leadership we get someone like Trump. The lowering of standards has taken place across the breadth of society. This is not simply national leadership, but corporate and social leadership. Greedy, corrupt and incompetent leaders are increasingly tolerated at all levels of society. At the Labs where I work, the leadership has to say yes to the government, no matter how moronic the direction is. If you don’t say yes, you are removed and punished. We now have leadership that is incapable of engaging in active discussion about how to succeed in our enterprise. The result is labs that simply take the money and execute whatever work they are given without regard for the wisdom of the direction. We now have the blind leading the spineless, and the blind are walking us right over the cliff. Our dysfunctional political system has finally shit the bed and put a moron in the White House. Everyone knows it, and yet a large portion of the population is completely fooled (or simply too foolish or naïve to understand how bad the situation is).

We are a paper tiger; a real opponent may simply destroy us. Our national superiority, military and technical, may already be gone. We are vastly overconfident in our outright superiority. This superiority requires our nation to continually bring its best to the table. We have almost systematically undermined our ability to apply our best to anything. We’ve already been attacked and defeated in the cyber-realm by Russia. Our society and democracy were assaulted by the Russians, and we were routed. Our incompetent governance has done virtually nothing. The seeds of our defeat have been sown for years all across our society. We are too incompetent to even realize how vulnerable we are.

I will admit that this whole line of thought might be wrong. The Labs where I work might be local hotbeds of incompetent management, and what we see locally might not be indicative of broader national trends. This seems very unlikely. What is more terrifying is the prospect that the places where I work are comparatively well managed. If this is true then it is completely plausible for us to have an incompetent President. So, the reality we have is stark incompetence across society that has set the stage for national tragedy. Our institutions and broad societal norms are under siege. Every single day of the Trump administration lessens the United States’ prestige. The World had counted on the United States for decades, but cannot any longer. We have made a decision as a nation that disqualifies us from a position of leadership. The Republican party has the greatest responsibility for this, but the Democrats are not blameless. Our institutional leadership shares the blame too. Places like the Labs where I work are being destroyed one incompetent step at a time. All of us need to fix this.

We have a walking, talking, tweeting example of our incompetence leading us, and it is everyone’s fault. We all let this happen. We are all responsible. We own this.

Ask not what your country can do for you; ask what you can do for your country.

― John F. Kennedy


How to properly use direct numerical simulations (DNS)

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

― Abraham H. Maslow

Nothing stokes the imagination for the power of computing to shape scientific discovery like direct numerical simulation (DNS). Imagine using the magic of the computer to unveil the secrets of the universe. We simply solve the mathematical equations that describe nature accurately and at immense precision, and magically truth comes out the other end. DNS also stokes the demand for computing power: the bigger the computer, the better the science and discovery. As an added bonus, the visualizations of the results are stunning, almost Hollywood-quality special effects. It provides the perfect sales pitch for the acquisition of the new supercomputer and everything that goes with it. With a faster computer, we can just turn it loose and let the understanding flow like water bursting through a dam. With the power of DNS, the secrets of the universe will simply submit to our mastery!

If only science were that easy. It is not, and this sort of thing is a marketing illusion for the naïve and foolish.

The saddest thing about DNS is the tendency for scientists’ brains to almost audibly click into the off position when it’s invoked. All one has to say is that their calculation is a DNS and almost any question or doubt leaves the room. No need to look deeper, or think about the results; we are solving the fundamental laws of physics with stunning accuracy! It must be right! They will assert, “this is a first principles calculation,” and predictive at that. Simply marvel at the truths waiting to be unveiled in the sea of bits. Add a bit of machine learning or artificial intelligence to navigate the massive dataset produced by DNS (the datasets are so fucking massive, they must have something good! Right?) and you have the recipe for the perfect bullshit sandwich. How dare some infidel cast doubt, or uncertainty, on the results! Current DNS practice is a religion within the scientific community, and brings an intellectual rot into the core of computational science. DNS reflects some of the worst wishful thinking in the field, where the desire for truth and understanding overwhelms good sense. A more damning assessment would be a tendency to submit to intellectual laziness when pressed by expediency, or difficulty in progress.

Let’s unpack this issue a bit and get to the core of the problems. First, I will submit that DNS is an unambiguously valuable scientific tool. A large body of work valuable to a broad swath of science can benefit from DNS. We can study our understanding of the universe in myriad ways and in phenomenal detail. On the other hand, DNS is never a substitute for observations. We do not know the fundamental laws of the universe with such certainty that the solutions provide an absolute truth. The laws we know are models, plain and simple. They will always be models. As models, they are approximate and incomplete by their basic nature. This is how science works: we have a theory that explains the universe, and we test that theory (i.e., model) against what we observe. If the model produces the observations with high precision, the model is confirmed. This confirmation is always tentative and subject to being tested with new or more accurate observations. Solving a model does not replace observations, ever, and some uses of DNS are masking laziness or limitations in observational (experimental) science.

To acquire knowledge, one must study;

but to acquire wisdom, one must observe.

― Marilyn Vos Savant

One place where the issue of DNS comes to a head is validation. In validation, a code (i.e., model) is compared with experimental data for the purposes of assessing the model’s ability to describe nature. In DNS, we assume that nature is so well understood that our model can describe it in detail; the leap too far is saying that the model can replace observing nature. This presumes that the model is completely and totally validated. I find this to be an utterly ludicrous prospect. All models are tentative descriptions of reality, and intrinsically limited in some regard. The George Box maxim immediately comes to mind: “all models are wrong”. This is axiomatically true, and being wrong, models cannot be used to validate. With DNS, this is suggested as a course of action violating the core principles of the scientific method for the sake of convenience. We should not allow this practice for the sake of scientific progress. It is anathema to the scientific method.

This does not say that DNS is not useful. DNS can produce scientific results that may be used in a variety of ways where experimental or observational results are not available. This is a way of overcoming a limitation of what we can tease out of nature. Realizing this limitation should always come with the proviso that it is expedient, and used in the absence of observational data. Observational evidence should always be sought, and the models should always be subjected to tests of validity. The results come from assuming the model is very good and provides value, but they cannot be used to validate the model. DNS is always second best to observation. Turbulence is a core example of this principle: we do not understand turbulence; it is an unsolved problem. DNS as a model has not yielded understanding sufficient to unveil the secrets of the universe. They are still shrouded. Part of the issue is the limitations of the model itself. In turbulence, DNS almost always utilizes an unphysical model to describe fluid dynamics, the incompressible equations, with no thermodynamics and infinitely fast acoustic waves. Being unphysical in its fundamental character, how can we possibly consider it a replacement for reality? Yet in a violation of common sense driven by frustration at the lack of progress, we do this all the time.

One of the worst aspects of the entire DNS enterprise is the tendency to do no assessment of uncertainty with its results. Quite often the results of DNS are delivered without any estimate of approximation or model uncertainty. Most often no uncertainty at all is included, estimated or even alluded to. The results of DNS are still numerical approximations with approximation error. The models, while detailed and accurate, are always approximations and idealizations of reality. This aspect of the modeling must be included if the work is to be used for high-consequence applications. If one is going to use DNS as a stand-in for experiment, this is the very least that must be done. The uncertainty assessment should also include the warning that the validation is artificial and not based on reality. If there isn’t an actual observation available to augment the DNS in the validation, the reader should be suspicious, and the smell of bullshit should alert one to deception.

Some of our models are extremely reliable, and have withstood immense scrutiny. These models are typically the subject of DNS. A couple of equations are worth discussing in depth: Schrödinger’s equation for quantum physics and molecular & atomic dynamics, and the Navier-Stokes equations for turbulence. These models are frequent topics of DNS investigations, and none of them are reality. The equations are mathematics, a logical, constructive language of science, but not actual reality. These equations are unequal in terms of their closeness to fundamentality, but our judgment should be the same. The closeness to “first principles” should be reflected in the assessment of uncertainty, which also reflects the problem being solved by the DNS. None of these equations will yield truths so fundamental as to be beyond question or free of uncertainty.

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

― Arthur C. Clarke

The general lack of uncertainty assessment becomes an even bigger problem when DNS is pressed into service as data. If we accept the faulty premise that DNS can replace experimental data, the uncertainty associated with these “measurements” must be included. This almost universally shitty practice further undermines the case for using DNS as a replacement for experiment. Of course, we are accepting far too many experimental results without their own error bars these days. Even if we grant the false premise that the model being solved by DNS is true to the actual fundamental laws, the solution is still approximate. The approximate solution is never free of numerical error. In DNS, an estimate of the magnitude of approximation error is almost universally missing from results.

Let’s be clear: even when used properly, DNS results must come with an uncertainty assessment. Even when DNS is used simply as a high-fidelity solution of a model, the uncertainty of the solution is needed to assess the utility of the results. This utility is ultimately determined by some comparison with phenomena observed in reality. We may use DNS to measure the power of a simpler model to provide consistency with the more fundamental model included in DNS. This sort of use is widespread in turbulence, material science and constitutive modeling, but the credibility of the use must always be determined with experimental data. The observational data always has primacy, and DNS should always be subservient to reality’s results.

Unfortunately, we also need to address an even more deplorable DNS practice. Sometimes people simply declare that their calculation is a DNS without any evidence to support the assertion. Usually this means the calculation is really, really, really, super fucking huge and produces some spectacular graphics with movies and color (rendered in super groovy ways). Sometimes the models being solved are themselves extremely crude or approximate. For example, the Euler equations are solved, with or without turbulence models, instead of Navier-Stokes in cases where turbulence is certainly present. This practice is so abominable as to be almost a cartoon of credibility. It is proof by overwhelming force. Claims of DNS should always be taken with a grain of salt. When the claims take the form of marketing they should be met with extreme doubt, since it is a form of bullshitting that tarnishes those working to practice scientific integrity.

The world is full of magic things, patiently waiting for our senses to grow sharper.

― W.B. Yeats

Part of doing science correctly is honesty about challenges. Progress can be made with careful consideration of the limitations of our current knowledge. Some of these limits are utterly intrinsic. We can observe reality, but various challenges limit the fidelity and certainty of what we can sense. We can model reality, but these models are always approximate. The models encode simplifications and assumptions. Progress is made by putting these two forms of understanding into tension. Do our models predict or reproduce the observations to within their certainty? If so, we need to work on improving the observations until they challenge the models. If not, the models need to be improved so that the observations are reproduced. The current use of DNS short-circuits this tension and acts to undermine progress. It wrongly puts modeling in the place of reality, which only works to derail necessary work on improving models, or on improving observation. As such, poor DNS practices are actually stalling scientific progress.

I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers. I’ll believe anything, no matter how wild and ridiculous, if there is evidence for it. The wilder and more ridiculous something is, however, the firmer and more solid the evidence will have to be.

― Isaac Asimov

Verification and Numerical Analysis are Inseparable

Tiny details imperceptible to us decide everything!

― W.G. Sebald

The standards of practice in verification of computer codes and applied calculations are generally appalling. Most of the time when I encounter work, I’m just happy to see anything at all done to verify a code. Put differently, most of the published literature accepts slipshod practice in terms of verification. In some areas like shock physics, the viewgraph norm still reigns supreme. It actually rules supreme in a far broader swath of science, but you talk about what you know. The missing element in most of the literature is quantitative analysis of results. Even when the work is better and includes detailed quantitative analysis, it usually lacks a deep connection with numerical analysis results. The typical best practice in verification only includes the comparison of the observed rate of convergence with the theoretical rate of convergence. Worse yet, that result is asymptotic, and codes are rarely used in practice on asymptotic meshes. Thus, standard practice is largely superficial, and only scratches the surface of the connections with numerical analysis.

The Devil is in the details, but so is salvation.

― Hyman G. Rickover

The generic problem is that verification rarely occurs at all, much less is practiced well, let alone with genuine excellence. The first step to take is regular, pedestrian application of standard analysis, because what masquerades as excellence today is quite threadbare. We verify the order of convergence in code verification under circumstances that usually don’t meet the conditions where it formally applies. The theoretical order of convergence only applies in the limit where the mesh is asymptotically fine. Today, the finite size of the discretization is not taken directly into account. This can be done, and I’ll show you how below. Beyond this rather great leap of faith, verification does not usually focus on the magnitude of error, numerical stability, or the nature of the problem being solved. All of these are available results through competent numerical analysis, in many cases via utterly classical techniques.

A maxim of verification that is important to emphasize is that the results are a combination of theoretical expectations, the finite resolution and the nature of the problem being solved. All of these factors should be considered in interpreting results.

Before I highlight all of the ways we might make verification a deeper and more valuable investigation, a few other points are worth making about the standards of practice. The first thing to note is the texture within verification, and its two flavors. Code verification is used to investigate the correctness of a code’s implementation. This is accomplished by solving problems with an analytical (exact or nearly-exact) solution. The key is to connect the properties of the method defined by analysis with the observed behavior in the code. The “gold standard” is verifying that the order of convergence observed matches that expected from analysis.
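As a minimal, concrete illustration of that gold standard, here is a short Python sketch: given error norms measured against an exact solution on a sequence of meshes, compute the observed order between successive refinements and compare it with the theoretical order. The mesh sizes and errors below are hypothetical placeholders, not data from any particular code.

import numpy as np

# Hypothetical code-verification data: mesh spacings and error norms
# measured against an exact (analytical) solution.
h = np.array([0.1, 0.05, 0.025, 0.0125])
err = np.array([2.1e-3, 5.4e-4, 1.37e-4, 3.45e-5])

# Observed order between successive mesh pairs: p = log(E1/E2) / log(h1/h2)
p_obs = np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
print("observed orders:", p_obs)  # here the sequence creeps toward 2, the nominal order

The point is not the arithmetic, which is trivial, but the habit of reporting the error magnitudes alongside the observed order rather than the order alone.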

Truth is only relative to those that ignore hard evidence.

― A.E. Samaan

The second flavor of verification is solution (calculation) verification. In solution verification, the objective is to estimate the error in the numerical solution of an applied problem. The error estimate is for the numerical component of the overall error, separated from modeling errors. It is an important component in the overall uncertainty estimate for a calculation; the numerical uncertainty is usually derived from the numerical error estimate. The rate or order of convergence is usually available as an auxiliary output of the process. Properly practiced, the rate of convergence provides context for the overall exercise.
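Below is a minimal sketch of one standard recipe for this: Richardson extrapolation applied to a scalar quantity of interest computed on three systematically refined meshes with a constant refinement ratio. The observed order falls out as a by-product, and the error estimate follows from the extrapolated value. The numbers are hypothetical, and production practice usually adds safety factors (as in the grid convergence index).

import numpy as np

# Hypothetical quantity of interest from coarse, medium, and fine meshes
f_coarse, f_medium, f_fine = 1.0240, 1.0150, 1.0110
r = 2.0  # constant mesh refinement ratio

# Observed order of convergence from the three solutions
p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)

# Richardson-extrapolated estimate of the exact value and the fine-mesh error
f_exact_est = f_fine + (f_fine - f_medium) / (r**p - 1.0)
err_fine = abs(f_fine - f_exact_est)

print(f"observed order p = {p:.2f}, estimated fine-mesh error = {err_fine:.2e}")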

One of the things to understand is that code verification also contains a complete accounting of the numerical error. This error can be used to compare methods with “identical” orders of accuracy on their levels of numerical error, which can be useful in making decisions about code options. By the same token, solution verification provides information about the observed order of accuracy. Because applied problems are not analytical or smooth enough, they generally can’t be expected to produce the theoretical order of convergence. The rate of convergence is then an auxiliary result of the solution verification exercise, just as the error is an auxiliary result for code verification. It contains useful information on the solution, but it is subservient to the error estimate. Conversely, the error provided in code verification is subservient to the order of accuracy. Nonetheless, the current practice simply scratches the surface of what could be done via verification and its unambiguous ties to numerical analysis.

Little details have special talents in creating big problems!

― Mehmet Murat ildan

If one looks at the fundamental (or equivalence) theorem of numerical analysis, the two aspects of the theorem are stability and consistency, which together imply convergence (https://wjrider.wordpress.com/2016/05/20/the-lax-equivalence-theorem-its-importance-and-limitations/). Verification usually uses a combination of error estimation and convergence testing to imply consistency. Stability is merely assumed. This all highlights the relatively superficial nature of the current practice. The result being tested is completely asymptotic, and the stability is merely assumed and never really strictly tested. Some methods are unconditionally stable, which might also be tested. In all cases the lack of stress testing the results of numerical analysis is short-sighted.

One of the most important results in numerical analysis is the stability of the approximation. Failures of stability are among the most horrific things to encounter in practice. Stability results should be easy and revealing to explore via verification. This also offers the ability to see what failure of a method looks like, and how sharp the stability estimates are. Tests could be devised to examine the stability of a method and confirm this rather fundamental aspect of its behavior; as a byproduct, the character of instability would be made clear should it arise. Generally, one would expect calculations to diverge under mesh refinement, with the instability manifesting itself earlier and earlier as the mesh is refined. I might suggest that stability could be examined via mesh refinement, observing the conditions where the convergence character changes.
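The simplest version of such a test needs nothing more than the amplification factor. Here is a minimal Python sketch for forward Euler on u’ = \lambda u, scanning the step size across the stability boundary; the parameter values are illustrative assumptions.

def amplification(h, lam):
    # Amplification factor of forward Euler for u' = lam*u; |G| <= 1 means stable.
    return abs(1.0 + h * lam)

lam = -1.0  # for this lambda the classical bound is h <= 2
for h in (0.5, 1.0, 1.5, 2.0, 2.1, 2.5):
    G = amplification(h, lam)
    print(f"h = {h:4.1f}  |G| = {G:.3f}  {'stable' if G <= 1.0 else 'UNSTABLE'}")

Beyond the bound the numerical solution grows without limit, and any convergence study built on such step sizes falls apart, which is exactly the kind of behavior a stability-focused verification test should be designed to expose.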

One of the most unpleasant issues with verification is the deviation of the observed rate of convergence from what is expected theoretically. No one seems to have a good answer to how close is close enough. Sometimes we can observe that we systematically get closer and closer as the mesh is refined. This is quite typical, but systematic deviations are common. As I will show, the deviations are expected and may be predicted by detailed numerical analysis. The key is to realize that the effects of finite resolution can be included in the analysis. As such, for simple problems we can predict the observed rate of convergence and its deviation from the asymptotic rate. Beyond the ability to predict the rate of convergence, this analysis provides a systematic explanation for an oft-seen result.

[Figure: The expected convergence rate for a single time step using forward Euler for a linear ODE]

This can be done very easily using classical methods of numerical analysis (see a previous blog post, https://wjrider.wordpress.com/2014/07/15/conducting-von-neumann-stability-analysis/). We can start with the knowledge that detailed numerical analysis uses an analytical solution to the equations as its basis. We can then analyze the deviations from that analytical solution and their precise character, including the finite resolution. As noted in that previous post, the order of accuracy is examined via a series expansion in the limit where the step size or mesh is vanishingly small. We also know that this limit is only approached and never actually reached in any practical calculation.

For the simple problems amenable to these classical analyses, we can derive the exact rate of convergence for a given step size (this result is limited to the ideal problem central to the analysis). The key is to use the exact solution of the model equation together with the numerical symbol to provide an error estimate. Consider the forward Euler method for ODEs, u^{n+1} = u^n + h \lambda u^n; the error is E(h) = |1 + h \lambda - \exp(h \lambda) |. We can now evaluate the error for any step size and analytically estimate the convergence rate we would observe in practice. If we employ the relatively standard practice of mesh halving for verification, we get the estimate of the rate of convergence, n(h) = \log\left[E(h)/E(h/2)\right]/\log(2). A key point to remember is that the solution with the halved time step takes twice the number of steps to reach the same time. Using this methodology, we can easily see the impact of finite resolution. For the forward Euler method, finite step sizes raise the observed rate of convergence above the theoretical value of one. This is exactly what we see in practice.
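
As a minimal sketch of this recipe, assuming \lambda = -1, a fixed final time T = 1, and NumPy (observed_rates is just an illustrative name), the error generalized to T/h steps, E(h) = |(1 + h\lambda)^{T/h} - \exp(\lambda T)|, and the halving estimate n(h) can be evaluated directly; at finite h the observed rate sits above the asymptotic value of one and drifts toward it under refinement.

```python
import numpy as np

def observed_rates(lam=-1.0, T=1.0, levels=6):
    """Observed convergence rate of forward Euler for u' = lam*u on [0, T],
    estimated by successive step halving: n(h) = log2[E(h)/E(h/2)].
    The halved step takes twice as many steps to reach the same final time."""
    hs = T / 2.0 ** np.arange(levels + 1)        # h, h/2, h/4, ...
    errors = np.abs((1.0 + hs * lam) ** (T / hs) - np.exp(lam * T))
    return hs[:-1], np.log2(errors[:-1] / errors[1:])

for h, n in zip(*observed_rates()):
    print(f"h = {h:8.5f}   observed rate = {n:6.3f}")
```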

[Figure: The expected convergence rate for ten time steps using forward Euler for a linear ODE]

When one starts to examine what we expect through analysis, a number of interesting things can be observed. If the coarsest step size is slightly unstable, the method will exhibit very large rates of convergence. Remarkably, we see this all the time. Sometimes verification produces seemingly absurdly high rates of convergence. Rather than being indicative of everything being great, this is an indication that the calculation is highly suspect. The naïve practitioner will often celebrate the absurd result as a triumph when it is actually a symptom of problems requiring greater attention. With the addition of a refined analysis, this sort of result can be recognized as pathological.
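
A minimal sketch of this pathology, reusing the halving estimate above with illustrative parameters (\lambda = -1 and a coarsest step h = 2.5, just beyond the stability limit of 2/|\lambda|): the first rate estimate comes out spectacularly, and misleadingly, large.

```python
import numpy as np

# Same halving estimate as above, but the coarsest step is chosen just beyond
# the stability limit (h = 2.5 > 2/|lam| for lam = -1). Illustrative parameters.
lam, T = -1.0, 2.5
hs = T / 2.0 ** np.arange(6)
errors = np.abs((1.0 + hs * lam) ** (T / hs) - np.exp(lam * T))
print(np.log2(errors[:-1] / errors[1:]))
# The first estimate is spuriously huge and the next erratic, before the
# sequence settles near the expected value of one.
```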

[Figure: The expected convergence rate for one hundred time steps using forward Euler for a linear ODE]

Notice that we have obtained a significant result from the analysis of perhaps the simplest numerical method in existence. Think of the untapped capacity for explaining the behavior observed in computational practice. Moreover, this result explains a serious and pernicious problem in verification: the misreading of results. Even where the verification practice is quite good, the deviation of observed convergence rates from theoretical rates is pervasive. We can now see that this behavior is completely expected. This ought to “bait the hook” for conducting more analysis and connecting it to verification results.

There is a lot more that could be done here; I’ve merely scratched the surface.

The truth of the story lies in the details.

― Paul Auster

 

Our Silence is Their Real Power

When truth is replaced by silence, the silence is a lie.

― Yevgeny Yevtushenko

We all live in a golden age for the abuse of power. Examples abound and touch everyone’s life. We see profound abuses in politics daily. Recently, the decades of abuse committed by Harvey Weinstein came to light. Apparently, it barely existed in the shadows for years and years as one of Hollywood’s worst kept secrets. Weinstein preyed on women with virtual impunity, his power and prestige acting to keep his actions in the dark. The promise and threat of his power in that industry gave him virtual license to act. The silence of the myriad of insiders who knew about the pattern of abuse allowed the crimes to continue unabated. Only after the abuse became known broadly, outside the movie industry, did it become unacceptable. As long as the abuse stayed in the shadows, its knowledge limited to industry insiders, it continued.

Cowardice rightly understood begins with selfishness and ends with shame.

― José Rizal

The power of the online world and our free press should be a boon to exposing and bringing these monsters down. People can be alerted to unacceptable behavior and demand action to remove these abominations from power. It is not working out this way. Instead, the power of information has been turned on its head, and the modern world of information has empowered them to new heights. We only need to look at the occupant of the Oval Office for proof. People in power have access to resources and talent not available to others. This power can be turned to marketing and shaping the message to allow them to retain power. Power has its privileges, and among these are access to wealth, information and (sexual) favors most of us can’t even dare to dream of. The abusers turn all of this into a machine that retains and even grows their power. The modern world of interconnection is the latest tool in their arsenal of power. The powerful have largely controlled the media forever, but this control has taken on a new character with Facebook and Twitter.

People follow leaders by choice. Without trust, at best you get compliance.

― Jesse Lyn Stoner

I see it at work in small ways. Sometimes it’s the desire of those in power to keep their poor leadership or stewardship from being open to criticism, i.e., an honest peer review. More recently, we were subjected to training on information security that was merely an excuse to be threatened by the United States Attorney General, Jeff Sessions. It was a gross waste of resources to provide a platform for the abuse of power (costing many millions of dollars to threaten people and help crush morale). Ostensibly the training was to highlight the importance of protecting sensitive and classified information. This is a topic we are already trained heavily on and acutely aware of in our daily work. Given our ongoing knowledge of the topic, the whole training served to silence critics of the administration, who will now misuse information control to hide their crimes.

“Compliance” is just a subset of “governance” and not the other way around.

― Pearl Zhu

The United States has gone through a virtual orgy of classification since 9/11. This is an extremely serious issue and its tendrils permeate this entire topic. I’ve written in the past about our problems in this regard. Our government, related organizations, and contractors are running wild classifying everything in sight. Increasingly, classification is used to hide and bury information. Quite often things are labeled “official use only” because that category is basically unregulated. There is no standard, and the tendency is to simply hide as much as possible. It is primed for wide-scale abuse. I’m fairly certain that the abuse is already happening on an enormous scale. It is quite often used to keep embarrassing, criminal or simply slipshod work out of sight and away from scrutiny. It is exactly the sort of thing the current (and past) administrations would use to hide things from view. Of course, higher levels of classification have been used similarly. The prime example is the cover-up of the slaughter of innocents by the military in Iraq, central to the whole Chelsea Manning case. It wasn’t classified to protect security; it was criminal and embarrassing, and classification was used to attempt to bury the evidence.

Those who lack the courage will always find a philosophy to justify it.

― Albert Camus

Our current President is a serial abuser of power; whether it be the legal system, women, business associates or the American people, his entire life is constructed around the abuse of power and the privileges of wealth. Many people are his enablers, and nothing enables him more than silence. Like Weinstein, his acts of sexual misconduct are many and well known, yet routinely go unpunished. Others either remain silent or ignore and excuse the abuse as being completely normal.

I better use some Tic Tacs just in case I start kissing her. You know, I’m automatically attracted to beautiful — I just start kissing them. It’s like a magnet. Just kiss. I don’t even wait. And when you’re a star, they let you do it. You can do anything…. Grab ’em by the pussy. You can do anything.

– President Donald Trump

We are all enablers. We knew what kind of monster we were electing to the highest office, and people stood by silent. Worse yet, they gave him their tacit approval. Weinstein is no different. Sessions is no different either; no sexual misconduct is known there, but ignorance and racism are a clear part of his repertoire. In addition, he has direct conflicts of interest, with vast prison stocks in his portfolio and the power to improve his holdings through his office and his direct actions. Since his boss is a walking monument to conflict of interest, nothing will happen. He will abuse the power of his office with impunity. At this point the entire Nation has forgiven their crimes through willful ignorance. The mandated threat masquerading as training is simply the latest, and by no means the biggest, violation of standards of conduct. The threat is designed to silence any documentation of violations and assist their continuation through our continued silence and tacit acceptance of their power.

The standards of conduct under the Trump Administration are headed straight to hell. The only thing they oppose is threats to their power and their ability to abuse it. They are an entire collection of champion power abusers. Like all abusers, they maintain their power through the cowering masses below them. When we are silent their power is maintained. They are moving to squash all resistance. My training was pointed at the inside of the institutions and instruments of government, where they can use “legal” threats to shut us up. They have waged an all-out assault against the news media. Anything they don’t like is labeled “fake news” and attacked. The legitimacy of facts has been destroyed, providing the foundation for their power. We are now being threatened to cut off the supply of facts on which to base resistance. This training was the act of people wanting to rule like dictators in an authoritarian manner.

I am personally quite concerned about how easily we accept this authoritarian approach to leadership. We seem all too willing and able to simply salute and accept the commands of corrupt overlords. We are threatened with extreme consequences, and those in power can do as they please with virtual impunity. For those abusing power, the set-up is perfect. They are the wolves and we, the sheep, are primed for slaughter. Recent years have witnessed an explosion in the amount of information deemed classified or sensitive. Much of this information is controlled because it is embarrassing or uncomfortable for those in power. Increasingly, information is simply hidden based on non-existent standards. This is a situation that is primed for abuse of power. People in positions of power can hide anything they don’t like. For example, something bad or embarrassing can be deemed proprietary or business-sensitive and buried from view. Here the threats come in handy to make sure that everyone keeps their mouths shut. Various abuses of power can now run free within the system without risk of exposure. Add a weakened free press and you’ve created the perfect storm.

The mantle of National security and patriotism works to complement the systematic abuse of power. One of the primary forms of abuse is financial gain. The decision making behind the flow of money is typically hidden. No one benefiting from the flow of money is too keen on the details of who got the money, and why, getting out. All one has to do is look at the AG’s finances. He benefits greatly through other people’s misery. More and longer prison sentences raise the stock prices he holds and enrich him. No one even asks the question, and the abuse of power goes unchecked. Worse yet, it becomes the “way things are done”. This takes us full circle to the whole Harvey Weinstein scandal. It is a textbook example of unchecked power, and “the way we do things”.

The real division is not between conservatives and revolutionaries but between authoritarians and libertarians.

― George Orwell

The only way to rid a system of these abuses is exposure to clear view. When people see the way that those in power abuse the system, the abusers must change their ways or lose their power. This is the exact reason why the abusers are so keen to squash any disclosures (i.e., leaks). They like being able to run their shadow empires with impunity. Without the risk of exposure, the abusers can simply take their abuses to new levels. For systems that need genuine information control and security, the threat of abuse of power is extreme. If the system is not run with the highest level of ethics and integrity, the abuse of power and the genuine need for secrecy are on a collision course. In taking my training, the threats from someone completely lacking any visible ethics or integrity were chilling. Simply put, the Attorney General is a proven racist, a white supremacist apologist, and a serial abuser of power. He has no ethical basis to issue his threats, only unbridled power. He has been given free rein purely on ideological grounds.


Democracy can exist only in the countries where people are brave! Coward nations always live under the authoritarian regimes!

― Mehmet Murat ildan

This value system is in complete and total collision with the values that the United States is supposedly based on. This value system is unfortunately consistent with the actual values in the United States today. We are struggling between who we should be and who we are. At work this runs headlong into the fundamental character of the institutions and the people they employ. We have a generally docile workforce easily cowed by power. We have installed authoritarian monsters in positions of power who are more than willing to abuse it. So the sheep bow their heads to the wolves and simply welcome the slaughter. Our institutions are similarly cowed by money. The federal government pays the bills, and with that buys what is moral and ethical. A good and logical question is where this ends. What is the point where we rise up and say “NO!”? What is the point where the actions of those in power are too much to tolerate? We aren’t there yet, and I shudder thinking of where that line is. Worse yet, I’m not entirely sure there is a line that our employees, management or the institutions themselves would enforce. This is truly a terrifying prospect.

The strategic adversary is fascism… the fascism in us all, in our heads and in our everyday behavior, the fascism that causes us to love power, to desire the very thing that dominates and exploits us.

― Michel Foucault

No matter the reasons for the training I took, the tone and approach set by those governing my workplace should be chilling to contemplate. We are creating an environment where unethical conduct and rampant abuse of power go unchecked. Abusers can thrive and run free. We can be sure that their power will grow and extend to other areas. Without the checks and balances that exposure brings, the abuse of power is free to run wild and grow. We have installed an utterly repugnant serial abuser as President. He has a history of financial, personal and sexual ethics violations. He now has more power than ever, and is actively destroying the vehicles for oversight. He has surrounded himself with people of similar morality and ethics, or parasitic enablers who trade their own morality and ethics for power (like VP Pence, who, like the religious right, makes his bed with this devil freely).

A fascist is one whose lust for money or power is combined with such an intensity of intolerance toward those of other races, parties, classes, religions, cultures, regions or nations as to make him ruthless in his use of deceit or violence to attain his ends.

― Henry A. Wallace

This sort of deal making isn’t only something the rampant misogyny of the right wing has to answer for. Those on the left have their own crimes to answer for, as the Weinstein scandal makes clear. All those people standing up for Women’s rights in the World, but tolerating his private abuse of women, are no better. All of their public stands for liberal causes are made shallow through acts of private cowardice. They are equally guilty and no better than the so-called Christians embracing Trump. Some things are not acceptable, no matter who does them or their belief system. If you are a liberal Hollywood elite and you stood by while Weinstein abused and assaulted women, your morality is for sale. If you’re an evangelical Christian who voted for Trump, you are no better. Both men are monstrous abusers and morally unacceptable. Too often we make the case that their misdeeds are acceptable because of the power they grant to one’s causes through their position. This is exactly the bargain Trump makes with the right wing, and Weinstein made with the left.

Of course, I ask myself, am I really any better? I take my paycheck with the assumption that the terms of my employment mean they own me. What is the price of my silence? I have a duty to my loved ones and to their support. This keeps me as compliant as I am. I need to ask myself what too far looks like. I’ve asked what the limits for my employers are, and I fear there is no limit; I fear they will comply with almost anything. We are rapidly approaching a moral chasm, if we haven’t already gone over the edge. Will we simply fall in, and let our nation become a kleptocracy with a wink and a nod toward our standards, ethics and morality, while standing by and letting the abusers run wild? For the greater part, I think that we are already there. It is terrifying to think about how much worse it’s going to get.

Terror is a powerful means of policy and one would have to be a hypocrite not to understand this.

― Leon Trotsky


I am 9 of 13. Resistance is Futile.

Those who deny freedom to others, deserve it not for themselves

― Abraham Lincoln

I’d like to be independent, empowered and passionate about work, and I definitely used to be. Instead I find that I’m generally disempowered, compliant and despondent these days. The actions that manage us have this effect, sending the clear message that we are not in control; we are to be controlled, and our destiny is determined by our subservience. With the National environment headed in this direction, institutions like our National Labs cannot serve their important purpose. The situation is getting steadily worse, but as I’ve seen there is always somewhere worse. By the standards of most people I still have a good job with lots of perks and benefits. Most might tell me that I’ve got it good, and I do, but I’ve never been satisfied with such mediocrity. The standard of “it could be worse” is simply an appalling way to live. The truth is that I’m in a velvet cage. This is said with the stark realization that the same forces are dragging all of us down. Just because I’m relatively fortunate doesn’t mean that the situation is tolerable. The quip that things could be worse is simply a way of accepting the intolerable.

What is going on, and how did we get here?

When you read management theory, and I do, you would think that good management would do the opposite. I certainly like the feeling of being empowered and valued; it makes me feel like coming to work and doing my best. It is good to feel a real sense of purpose and value in work. I have had this experience and it is incredible. When I am creative, my voice is heard and my actions lead to positive outcomes, work is a real joy. I have definitely experienced this, but not recently. If working well, a management system would strive to engage people in this manner. The current management approach is pretty much the complete opposite. The entire system seems to be geared toward putting us into a confined and controlled role. Our actions are limited because of fear, and the risk that something “bad” might happen. We are simply members of a collective, and our individuality is more of a threat than a resource.

Today as always, men fall into two groups: slaves and free men. Whoever does not have two-thirds of his day for himself, is a slave, whatever he may be: a statesman, a businessman, an official, or a scholar.

― Friedrich Nietzsche

Why are we managed in such an inherently destructive, short-sighted and inhumane manner?

Star Trek introduced us to the Borg, a race of semi-robotic aliens who simply absorb beings (people) into a hive where their basic humanity and individuality is lost. Everything is controlled and managed for the good of the collective. Science Fiction is an allegory for society, and the forces of depersonalized control embodied by the Borg have only intensified in our world. Even people working in my chosen profession are under the thrall of a mindless collective. Most of the time it is my maturity and experience as an adult that is called upon. My expertise and knowledge should be my most valuable commodity as a professional, yet they go unused and languish. They come into play in an almost haphazard, catch-as-catch-can manner. Most of the time it happens when I engage with someone external. It is never planned or systematic. My management is much more concerned about my being up to date on compliance training than about productively employing my talents. The end result is the loss of identity and sense of purpose, so that now I am simply the ninth member of the bottom unit of the collective, 9 of 13.

Disobedience is the true foundation of liberty. The obedient must be slaves.

― Henry David Thoreau

Increasingly, the forces that have demoralized the blue-collar working-class world and propelled White Nationalism to the forefront of American politics have worked their way to me. The forces doing this are relentless and bipartisan; both the right and the left are doing this, but in different ways. Conservatives prize control and order with a horrid authoritarian streak, naturally leading to Trump. Fear is a political tool wielded like a scalpel, enslaving the population to the security state. Generally speaking, the rise of the management class at work comes from this side of the spectrum, along with the devaluation of the rank and file. We see a tendency toward command-driven management and being told what to do. Workers are simply meaningless details, interchangeable and disposable. The management class is the heart of importance and value. The rest of us aren’t really worth much effort, simply being necessary cogs to get the work done.

The left has its own crimes to answer for. Much of the rightward movement is a reaction to the systematic over-reach of the bureaucratic state. Political correctness and the thought police also serve to undermine societal confidence and tolerance of the “elite”. Management is part of this elite, so derided today, and each subgroup within the elite has its own axe to grind. The big crime of the left is that they seem to think that every ill and danger can be regulated out of existence. Little or no thought is put into the cost of the regulation or the opportunity lost in the process. This is similar to the behavior of the right with respect to the National Security state. In the process, the individual is lost; the individual is not valued; the individual is not trusted. The value of work and the dignity of labor toward the support of the family and the good of society is not honored. Work becomes a complete waste of time. Productivity and meaning in work cease to be prioritized. Life is too precious to waste doing this.

A big part of the overall problem is the value of my time. Every single thing I do is traded against doing something else. Increasingly, my time is spent doing unproductive and useless things. Every useless thing I have to do displaces something else. Time is a valuable resource, and today my management and my institutions treat my time with flagrant disregard and outright disrespect. This is the rotten core of the problem: the disregard for the cost of making me do stupid, useless things. We engage in pointless, wasteful box-checking exercises rather than reject pointless activities. It is not the stupid things as much as the valuable things they displace. Almost all the time at work I could spend doing something more valuable, more gratifying, more satisfying, or simply something that brings me happiness and joy. I could create, solve problems, learn and find meaning and value. Instead I am enslaved to someone else’s idea of what I should do. I am saddled with numerous “terms of employment” tasks that have no value or meaning. Those saddling me always have the excuse that “it is a good idea to do this”. This sentiment is valid, except that it completely and utterly discounts what that time could be spent doing that is better.

The difference between technology and slavery is that slaves are fully aware that they are not free

― Nassim Nicholas Taleb

I spend complete days doing nothing but bullshit. It is mostly what other people think is a good idea, or worse yet some sort of ass-covering exercise. I can spend an entire day doing nothing productive at all, and yet I’m doing exactly what I am supposed to do. This is a huge problem! Managers do this almost every day. They rarely do what they need to do: actually manage the work going on and the people doing the work. They are managing our compliance and control, not the work; the work we do is a mere afterthought that increasingly does not need me, since any competent person would do. At one time work felt good and important, with a deep sense of personal value and accomplishment. Slowly and surely this sense is being undermined. We have gone on a long slow march away from being empowered and valued as contributing individuals. Today we are simply ever-replaceable cogs in a machine that cannot tolerate a hint of individuality or personality.

All over the place, from the popular culture to the propaganda system, there is constant pressure to make people feel that they are helpless, that the only role they can have is to ratify decisions and to consume.

― Noam Chomsky

Workplace education exemplifies all of these trends. My training is almost 100% compliance oriented. Nothing I am ever trained on is job related; it is all telling me what I shouldn’t do. This training is a good avatar for priorities, and my actual work is not a priority at all. All the training that develops a person is management related. For the rank and file, personal development is completely optional and hardly prioritized. We are there to get our stuff done, and the stuff we do is increasingly shit. They have lots and lots of training, and from what I see use almost none of it. It is full of the best theory that cannot be applied to the workplace. Their training would tell them to harness my passion and empower me, yet none of that ever happens. The entire system is completely oriented toward the opposite. The training system signals very clear values to all of us: the rank and file need to comply and submit; managers are the only employees worth developing, even if the development is all delusional.

The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum…

― Noam Chomsky

The management literature is full of the gospel of the value of human talent. It sounds great, and I believe in it. Management should be the art of enabling people and working to get the most out of employees. If the system were working properly this would happen. For some reason society has withdrawn its trust in people. Our systems are driven and motivated by fear. The systems are strongly motivated to make sure that people don’t fuck up. A large part of the overhead and lack of empowerment is designed to keep people from making mistakes. A big part of the issue is the punishment meted out for any fuck-ups. Our institutions are mercilessly punished for any mistakes. Honest mistakes and failures are met with negative outcomes and a lack of tolerance. The result is a system that tries to defend itself through caution, training and control of people. Our innate potential is insufficient justification for risking the reaction a fuck-up might generate. The result is an increasingly meek and subdued workforce unwilling to take risks because failure is such a grim prospect.

People get used to anything. The less you think about your oppression, the more your tolerance for it grows. After a while, people just think oppression is the normal state of things. But to become free, you have to be acutely aware of being a slave.

― Assata Shakur

One of the key things that drives the system is a complete lack of cost-benefit analysis. The cost of a given measure to control a risk is rarely accounted for. The TSA is a prime example. One asshole tries to blow up a shoe, and forever we can’t take toothpaste on a plane. It is patently absurd on the face of it. We give up freedom, we give up time and we expend enormous effort to control minuscule risks. In the process, we have made a wonder of technology and the modern world something to be hated. So much of the wonder of the modern world is being sacrificed to fear designed to control risks so small as to be ridiculous. In the process, the vast benefits of modernity are lost. The vast benefits of easy and convenient air travel are overrun by a system designed to control irrational fears. Our fears are completely out of control, and the vast opportunity cost is never considered. The result is a system that destroys our time and productivity in a disproportionate manner.

[Figure: A Transportation Security Administration (TSA) officer pats down Elliott Erwitt as he works his way through security at San Francisco International Airport in San Francisco, Wednesday, Nov. 24, 2010. (AP Photo/Jeff Chiu)]

If one is forever cautious, can one remain a human being?

― Aleksandr Solzhenitsyn

The same thing is happening to our work. Fear and risk are dominating our decision-making. Human potential, talent, productivity, and lives of value are sacrificed at the altar of fear. Caution has replaced boldness. Compliance has replaced value. Control has replaced empowerment. In the process, work has lost meaning and the ability of an individual to make a difference has disappeared. Resistance is futile; you will be assimilated.

To be given dominion over another is a hard thing; to wrest dominion over another is a wrong thing; to give dominion of yourself to another is a wicked thing.

― Toni Morrison