That Brutal Problem Broke My Code. What Do I Do?

The possession of knowledge does not kill the sense of wonder and mystery. There is always more mystery.

― Anaïs Nin

Let’s say you’ve completely bought into my advice and decide to test the hell out of your code. You found some really good problems that “go to eleven.” If you do things right, your code will eventually “break” in some way. The closer you look, the more likely it’ll be broken. Heck, your code is probably already broken, and you just don’t know it! Once it’s broken, what should you do? How do you get the code back into working order? What can you do to figure out why it’s broken? How do you live with the knowledge of the limitations of the code? Those limitations are there, but usually you don’t know them very well. Essentially, you’ve gone about turning over rocks with your code until you find something creepy-crawly underneath. You then have a mystery to solve, and ambiguity to live with in the results.

An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.

― Werner Heisenberg

I’ll deal with the last question first: how to live with the knowledge of limitations. This amounts to advice to be an adult about things. Any method or code is limited in what it can do. Some limitations are imposed by theory, some by practicality or expense, and some simply mark the limits of our knowledge and technology today. One of the keys to being an expert in a given field is a deep understanding of what can and cannot be done, and why. What better way of becoming an expert than purposefully making “mistakes”? Do you understand what the state of the art is? Do you know what challenges the state of the art? What limits are imposed by theory? What do other people do to solve problems, and what are the pros and cons of their approaches? Exploring these questions, and stepping well outside the comfort provided by success, drives a deep knowledge of all these considerations. Properly applied, the art of testing codes uses a deep knowledge of failures as fuel for learning and the acquisition of knowledge.

So how might you see your code break? In the worst case the problem induces an outright instability and the code blows up. Sometimes the blow-up happens through the production of wild solutions, or even violations of the floating-point limits of the computer (NaNs, “not a number,” appear in the output). Other problems look like corrupted data: the solution doesn’t blow up, but it is clearly very wrong. Moving down the chain of bad things, we might simply see solutions outside the bounds of what is reasonable or admissible. As we walk through this gallery of bad things, each succeeding step is subtler than the last. In approaching breakage, an analytical solution to a problem can prove invaluable because it provides an unambiguous standard that can be made as accurate as you please.
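The blow-up and out-of-bounds failures are easy to screen for automatically. Below is a minimal Python sketch of such a check; the function name and bound arguments are my own illustration, not from any particular code:

```python
import numpy as np

def check_solution(u, lo, hi):
    """Flag the two most obvious failure modes in a solution array:
    (1) blow-up, signaled by NaN or Inf values, and
    (2) inadmissible values outside the physical bounds [lo, hi]."""
    if not np.all(np.isfinite(u)):
        return "blow-up: NaN or Inf in solution"
    if u.min() < lo or u.max() > hi:
        return "inadmissible: solution outside [%g, %g]" % (lo, hi)
    return "ok"
```

Running a check like this after every output dump catches the loud failures; the subtler ones discussed next require more care.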

Next, we simply see solutions that are oscillatory or wiggly. These wiggles range from dangerous to cosmetic in character. Sometimes the wiggles interact with genuinely physical features in a model, and the imperfection becomes a real modeling problem. Then we get into the real weeds of solution problems and start to see failures that can go unnoticed without expert attention. One of the key things is the loss of accuracy in a solution. This could be the numerical level of error being wrong, or the rate of convergence falling outside the theoretical guarantees for the method (convergence rates are a function of both the method and the nature of the solution itself). Sometimes this character is associated with an overly dissipative solution, where numerical dissipation is too large to be tolerated. At this subtle level we are judging failure by a high standard, based on knowledge and expectations driven by deep theoretical understanding. These failings generally indicate you are at a good level of testing and quality.
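The convergence-rate check described above is easy to automate. Given errors measured against an exact solution on a sequence of refined meshes, the observed order follows from the standard two-level estimate; this sketch assumes a fixed refinement ratio between meshes:

```python
import numpy as np

def observed_order(errors, ratio=2.0):
    """Observed convergence rate from errors on successively refined
    meshes (each mesh finer by `ratio`), using the two-level estimate
        p_k = log(e_k / e_{k+1}) / log(ratio).
    A rate well below the method's theoretical order is a red flag."""
    e = np.asarray(errors, dtype=float)
    return np.log(e[:-1] / e[1:]) / np.log(ratio)
```

For a nominally second-order method, errors like `[1.0, 0.25, 0.0625]` on a doubling sequence give observed rates of 2; anything persistently lower warrants the expert attention the text describes.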

Once the code is broken in some way, it is time to find out why. The obvious breakage where the solution simply falls apart is the best case to deal with because the failings are so obvious. The first thing you should always do is confirm that you’re solving the problem you think you are, and you’re solving it the way you think you are. This involves examining your input and control of the code to make certain that everything is what you expect it to be. Once you’re sure about this important detail, you can move to the sleuthing. For the obvious code breakdowns you might want to examine how the solution starts to fall apart as early in the process as you can. Is the problem localized near a boundary or a certain feature? Does it happen suddenly? Is there a slow, steady build up toward disaster? The answers all point at different sources for the problem. They tell how and where to look.

One of the key things to understand with any failure is the stability of the code and its methods. You should be intimately familiar with the stability conditions for the code’s methods, and you should ensure those conditions are not being exceeded. If a stability condition is missed, or calculated incorrectly, the impact is usually immediate and catastrophic. One way to check this on the cheap is to modify the code’s stability condition to a more conservative version, usually with a smaller safety factor. If the catastrophic behavior goes away, it points a finger at the stability condition with some certainty. Either the method is wrong, or it is not coded correctly, or you don’t really understand the stability condition. It is important to figure out which of these possibilities applies. Sometimes this needs to be studied using analytical techniques to examine the stability theoretically.
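As a concrete illustration of the smaller-safety-factor diagnostic, here is a hedged sketch of a CFL-limited time-step routine for a 1-D hyperbolic solver (the function and its arguments are hypothetical, not from any specific code):

```python
import numpy as np

def stable_dt(dx, u, c, safety=0.9):
    """Explicit time step limited by the CFL condition,
        dt <= safety * dx / max(|u| + c),
    for a 1-D hyperbolic problem with velocity field u and sound
    speed c.  Dropping `safety` (say from 0.9 to 0.5) is the cheap
    diagnostic described above: if the blow-up vanishes at the more
    conservative setting, suspect the stability limit itself."""
    max_speed = np.max(np.abs(u) + c)
    return safety * dx / max_speed
```

If halving the safety factor cures the catastrophe, the next step is the harder one the text describes: deciding whether the method, its implementation, or your understanding of its stability limit is at fault.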

One of the key things to understand extremely well is the state of the art in a given field. Are there codes and methods that can solve the problem well, or without problems? Nothing can replace an excellent working knowledge of what experts in the field are doing. The fastest way to solve a problem is to understand, and potentially adopt, what the best and brightest are already doing. You also get a leg up on understanding the limits of today’s knowledge and technology, and whether you’ve kept up to the boundary of what we know. Maybe making your code functional is itself research, and if you fix it, you might have something publishable! If others can solve the problem, what do they do differently than your code? Can you modify how your code runs to replicate their techniques? If you can do this and reproduce the results others are getting, then you have a blueprint for fixing your code.

Another approach is to systematically make the problem you’re solving easier until the results are “correct,” or the catastrophic behavior is replaced with something less odious. An important part of this process is understanding more deeply how the problems are triggered in the code. What sort of condition is being exceeded, and how are the methods in the code going south? Is there something explicit that can be done to change the methodology so this doesn’t happen? Ultimately, the issue is a systematic understanding of how the code and its methods behave, their strengths and weaknesses. Once testing exposes a weakness, can you do something to get rid of it? Whether the weakness is a bug or a feature of the code is another question to answer. Through a process of successively harder problems, one can make the code better and better until you’re at the limits of knowledge.

The foundation of data gathering is built on asking questions. Never limit the number of hows, whats, wheres, whens, whys and whos, as you are conducting an investigation. A good researcher knows that there will always be more questions than answers.

― Karl Pippart III

Whether you are at the limits of knowledge takes a good deal of experience and study to judge. You need to know the field and your competition quite well. You need to be willing to borrow from others and consider their success carefully. There is little time for pride; if you want to get to the frontier of capability, you need to be brutal and focused along the path. You need to keep pushing your code with harder testing and not be satisfied with its quality. Eventually you will get to problems that cannot be overcome with what people know how to do. At that point your methodology probably needs to evolve a bit. This is really hard work, prone to risk and failure. For this reason most codes never get to this level of endeavor; it’s simply too hard on the code developers, and worse on those managing them. Today’s management of science simply doesn’t enable the level of risk and failure necessary to get to the summit of our knowledge. Management wants sure results and cannot deal with ambiguity, while striving at the frontier of knowledge is full of ambiguity and usually ends in failure.

The most beautiful experience we can have is the mysterious. It is the fundamental emotion that stands at the cradle of true art and true science.

― Albert Einstein

At the point you meet the frontier, it is time to be willing to experiment with your code (experimentation is great to do even safely within the boundaries of know-how). Often the only path forward is changing the way you solve problems. One key is to not undo all the good things you can already do in the process. Quite often one might actually solve the hard problem in some way (like a kludge), only to find out that things that used to be correct and routine for easier problems have been wrecked in the process. That is a back-to-the-drawing-board moment! For the very hard problem you may simply be seeking robustness and stability (running the problem to completion), and the measures taken to achieve this do real damage to your bread and butter. You need to be prepared to instrument and study your output in new ways. You are now an explorer and an innovator. Sometimes you need to tackle the problem from a different perspective, and challenge your underlying beliefs and philosophies.

The true measure of success is how many times you can bounce back from failure.

― Stephen Richards

At this point it’s useful to point out that the literature is really bad at documenting what we don’t know. Quite often you are rediscovering something lots of experts already know but can’t publish. This is one of the worst things about the publishing of research: we really only publish success, not failure. As a result we have a very poor idea of what we can’t do; it’s only available through inference. Occasionally the state of what can’t be done is published, but usually not. What you may not realize is that you are crafting a lens on the problem, and a perspective that will shape how you try to solve it. This process is a wonderful learning opportunity and the essence of research. For all these reasons it is very hard, and almost entirely unsupported.

Another big issue is finding general-purpose fixes for hard problems. Often the fix to a really difficult problem wrecks your ability to solve lots of other problems. Tailoring the solution to treat the difficulty without destroying the ability to solve other, easier problems is an art, and the core of the difficulty in advancing the state of the art. The skill to do this requires fairly deep theoretical knowledge of a field of study, along with an exquisite understanding of the root of the difficulties. The difficulty people don’t talk about is the willingness to attack the edge of knowledge and explicitly admit the limitations of what is currently done. This is an admission of weakness that our system doesn’t support. One clear sign that a fix isn’t general purpose is a narrow range of applicability. If it’s not general purpose and makes a mess of the existing methodology, you probably don’t really understand what’s going on.

Let’s get to a general theme in fixing problems: add some dissipation to stabilize things and get rid of worrisome features. In the process you often end up destroying the very features of the solution you most want to produce. The key is to identify the bad stuff and keep the good stuff, and this comes from deep understanding plus vigorous testing. Dissipation almost always results in a more robust code, but the dissipation needs to be selective, or the solution is arrived at in a wasteful manner. As one goes even deeper into the use of dissipation, adherence to the second law of thermodynamics rears its head, and defines a tool of immense power if wielded appropriately. A key is to use deep principles to achieve a balanced perspective on dissipation, where it is used appropriately in clearly defensible but limited ways. Even today, applying dissipation is still an art, and we struggle to bring more science and principle to its application.
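As one concrete instance of selective dissipation, the classic von Neumann–Richtmyer artificial viscosity adds a pressure-like term only in compression, leaving expansions untouched. A minimal sketch follows; the coefficient value and function interface here are illustrative choices, not a prescription:

```python
import numpy as np

def vnr_viscosity(rho, du, c_q=2.0):
    """Von Neumann-Richtmyer style artificial viscosity (sketch):
    a quadratic, pressure-like term q = c_q * rho * du**2 applied
    only where the velocity difference du across a cell indicates
    compression (du < 0).  Expansions get no dissipation, which is
    the selectivity the text calls 'keeping the good stuff'."""
    rho = np.asarray(rho, dtype=float)
    du = np.asarray(du, dtype=float)
    return np.where(du < 0.0, c_q * rho * du * du, 0.0)
```

The design choice worth noting is the switch on the sign of `du`: dissipation is applied exactly where the physics (shock compression, entropy production) defends it, and nowhere else.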

I’ve presented a personally biased view of how to engage in this sort of work. I’m sure other fields will have similar, but different, rules for fixing codes. The important thing is putting simulation codes to the sternest tests they can take, exposing their weaknesses and repairing them. One wants to do this continually until hitting the proverbial wall of our knowledge and ability. Along the way you create a better code, learn the field of endeavor, and grow your own knowledge and capability. Eventually the endeavor leads to research and the ability to push the field ahead. This is also the way experts, and masters of a given field, are created. People move from being competent practitioners to masters and leaders. This is an unabashed good for everyone, and not nearly encouraged enough. It definitely paves the way forward and produces exceptional results.

A pessimist sees the difficulty in every opportunity; an optimist sees the opportunity in every difficulty.

― Winston S. Churchill


Brutal Problems make for Swift Progress

or the alternative title “These Problems go to Eleven!”

Fortunate are those who take the first steps.

― Paulo Coelho

When thinking about problems to run with a computer code there is both a fun and a harsh way to think about them. I’ll start with the fun way, borrowing from the brilliant movie “This is Spinal Tap” and the hilarious interview with the lead guitarist from the band,

Nigel Tufnel: The numbers all go to eleven. Look, right across the board, eleven, eleven, eleven and…

Marty DiBergi: Oh, I see. And most amps go up to ten?

Nigel Tufnel: Exactly.

Marty DiBergi: Does that mean it’s louder? Is it any louder?

Nigel Tufnel: Well, it’s one louder, isn’t it? It’s not ten. You see, most blokes, you know, will be playing at ten. You’re on ten here, all the way up, all the way up, all the way up, you’re on ten on your guitar. Where can you go from there? Where?

Marty DiBergi: I don’t know.

Nigel Tufnel: Nowhere. Exactly. What we do is, if we need that extra push over the cliff, you know what we do?

Marty DiBergi: Put it up to eleven.

Nigel Tufnel: Eleven. Exactly. One louder.

Marty DiBergi: Why don’t you make ten a little louder, make that the top number and make that a little louder?

Nigel Tufnel: [pauses] These go to eleven.

To make the best progress we need to look for problems that “go to eleven.” Even if the difficulty is somewhat artificial, the best problems are willfully extreme, if not genuinely silly. They expose the mechanisms that break methods. Often these problems are simplified versions of what we know brings the code to its knees, and serve as good blueprints for removing these issues from the code’s methods. Alternatively, they provide proof that certain pathological behaviors do or don’t exist in a code. Really brutal problems that “go to eleven” aid the development of methods by clearly highlighting where improvement is needed. Usually the simpler and cleaner problems are better, because more codes and methods can run them, analysis is easier, and we can successfully experiment with remedial measures. This allows more experimentation and attempts to solve the problem using diverse approaches, which can energize rapid progress and deeper understanding.

The greater the obstacle, the more glory in overcoming it.

― Molière

I am a deep believer in the power of brutality, at least when it comes to simulation codes. We are generally too easy on our computer codes; we should endeavor to break them early and often. A method or code is never “good enough.” The best way to break our codes is to attempt really hard problems that are beyond our ability to solve today. Once the solution of these brutal problems is well enough in hand, one should find a way to make the problem a little bit harder. The testing should actually be more extreme and difficult than anything we need the codes to do. One should always be working at, or beyond, the edge of what can be done instead of safely staying within our capabilities. Today we are too prone to solving problems that are well in hand instead of pushing ourselves into the unknown. This tendency is harming progress.

Stark truth, is seldom met with open arms.

― Justin K. McFarlane Beau

Easy problems make codes look good because they don’t push the boundaries. Easy problems are important for checking the code’s ability to be correct and work when everything is going right. The codes need to continue working when everything is going wrong too. A robust code is functional under the worst of circumstances one might encounter. The brutal problems are good at exposing many of the conditions where things go wrong, and pushing the codes to be genuinely robust. These conditions almost invariably appear in real problems, and good codes can navigate the difficulties without falling apart. Over time good codes can go from falling apart to solving these brutal problems accurately. When this happens we need to come up with new problems to brutalize the codes. We should always be working on the edge or over the edge of what is possible; safe and sound is no way to make things better.

Happiness is not the absence of problems; it’s the ability to deal with them.

― Steve Maraboli

The use of brutal problems has helped drive codes forward for decades. A long time ago simple problems that we routinely solve today brought codes to their collective knees. Examples of such problems for shock physics codes are the Sedov-Taylor blast wave and Noh’s problem. Scientists came up with new ways to solve problems that brought these problems to heel. Rather than rest upon our laurels, we found new problems to bring the codes down. We still solve the older, now tamed brutal problems, but now they are easy. Once we slay one dragon, it is time to construct a bigger, more dangerous dragon to do battle with. We should always test our mettle against a worthy opponent rather than matching up with a patsy. Getting fabulous results on a super-easy problem looks great, but does little or nothing to improve us. Through this process we progress and our codes get better and better. This is how we challenge ourselves systematically always working with the premise that we can improve and get better. Brutal problems provide a concrete focus for improvement.

A challenge only becomes an obstacle when you bow to it.

― Ray A. Davis

What does this process look like concretely? Consider a problem that used to push the boundaries of our ability, such as the Sod shock tube. It used to be a problem we couldn’t solve well. Shock tube problems have the added benefit of exact solutions (Noh and Sedov-Taylor do too), so agreement and error carry little to no ambiguity. Solutions either had oscillations, or they were inaccurate and very heavily diffused. Often important features in the solution were seriously corrupted. This problem appeared in the literature at a seminal time in the development of numerical methods for shock physics and offered a standard way of testing and presenting results.
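For reference, Sod’s initial data are simple enough to state in a few lines. This sketch sets up the standard left and right states on the unit interval (the helper name, resolution, and diaphragm location default are my own choices):

```python
import numpy as np

def sod_ic(n=100, x0=0.5):
    """Sod (1978) shock-tube initial data on [0, 1]:
    left state  (rho, u, p) = (1.0,   0.0, 1.0),
    right state (rho, u, p) = (0.125, 0.0, 0.1),
    with the diaphragm at x = x0.  Returns cell-centered arrays."""
    x = (np.arange(n) + 0.5) / n          # cell centers
    left = x < x0
    rho = np.where(left, 1.0, 0.125)
    u = np.zeros(n)
    p = np.where(left, 1.0, 0.1)
    return x, rho, u, p
```

Note the jumps are roughly one order of magnitude in density and pressure, which is exactly what makes the problem gentle by modern standards.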

By the time a decade had passed after the introduction of Sod’s problem, almost all of the pathological solutions we used to see had disappeared and far better methods existed (not because of Sod’s problem per se; something was “in the air” too). The existence of a common problem on which to test and present results was a vehicle for the community in this endeavor. Today, the Sod problem is simply a “hello world” for shock physics and offers no real challenge to a serious method or code. It is difficult to distinguish between very good and merely OK solutions from its results. Being able to solve the Sod problem doesn’t really qualify you to attack truly hard problems either. It is a fine opening ante, but never the final call. The only problem with using the Sod problem today is that too many methods and codes stop their testing there and never move on to the problems that challenge our knowledge and capability now. A key to making progress is to find problems where things don’t work so well, and to focus attention on changing that.

Therefore, if we want problems to spur progress and shed light on what works, we need harder problems. For example, we could systematically increase the magnitude of the variation in initial conditions in shock tube problems until results start to show issues. This usefully produces harder problems, but really hasn’t been done (note to self: this is a really good idea). One problem of this kind comes from the National Lab community in the form of LeBlanc’s shock tube, or its more colorful colloquial name, “the shock tube from Hell.” This is a much less forgiving problem than Sod’s shock tube, and that is an understatement. Rather than jumps in pressure and density of about one order of magnitude, the jump in density is 1000-to-one and the jump in pressure is one billion-to-one. This stresses methods far more than Sod, and many simple methods completely fail. Most industrial or production-quality methods can actually solve the problem, albeit with much more resolution than Sod’s problem requires (I’ve gotten decent results on Sod’s problem with mesh sequences of 4-8-16 cells!). For the most part we can solve LeBlanc’s problem capably today, so it’s time to look for fresh challenges.
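For comparison, here is one common statement of LeBlanc’s initial data; details such as the domain length and diaphragm location vary between sources, so treat this as an assumption-laden sketch. With gamma = 5/3 and the ideal-gas relation p = (gamma - 1) * rho * e, the 1000-to-one density jump and billion-to-one pressure jump quoted above fall out directly:

```python
import numpy as np

def leblanc_ic(n=1000, gamma=5.0 / 3.0):
    """LeBlanc shock tube as often quoted (details vary by source):
    domain [0, 9] with the diaphragm at x = 3, densities
    rho_L = 1, rho_R = 1e-3 and specific internal energies
    e_L = 0.1, e_R = 1e-7, so p = (gamma - 1) * rho * e gives
    pressure ratio (1 * 0.1) / (1e-3 * 1e-7) = 1e9."""
    x = 9.0 * (np.arange(n) + 0.5) / n    # cell centers
    left = x < 3.0
    rho = np.where(left, 1.0, 1.0e-3)
    e = np.where(left, 0.1, 1.0e-7)
    p = (gamma - 1.0) * rho * e
    return x, rho, np.zeros(n), p
```

Comparing these numbers with Sod’s states makes the “shock tube from Hell” nickname self-explanatory.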

One such challenge is posed by problems with void in them, or those that dynamically produce vacuum conditions. We have found that this class of problems simply is not solved effectively with current methods. Every method I’ve looked at fails in some way, shape, or form. None of them converges to the correct solution once the problem gets close enough to vacuum or void conditions. The only methods that appear to work at all explicitly track the position of the vacuum, so my statement of dysfunction applies to shock-capturing methods. It does not seem to be a well-appreciated or well-studied problem, and thus may be ripe for the picking. Running this problem and focusing on results would provide impetus for improving this clearly poor state of affairs.

Other good problems are devised through knowing what goes wrong in practical calculations. Often those who know how to solve the problem create these tests. A good example is the issue of expansion shocks, which can be seen by taking Sod’s shock tube and adding a velocity to the initial condition. This puts a sonic point (where the characteristic speed goes to zero) in the rarefaction. We know how to remove this problem by adding an “entropy” fix to the Riemann solver, defining a class of methods that work. The test simply unveils whether the problem infests a code that may have ignored the issue. This detection is a very good side effect of a well-designed test problem.
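A Harten-style entropy fix is one widely used version of the remedy mentioned above: the modulus of a near-zero eigenvalue is smoothed so a sonic point inside the rarefaction cannot support an expansion shock. The width parameter `delta` below is a tunable assumption, not a universal value:

```python
import numpy as np

def harten_fix(lam, delta=0.1):
    """Harten-style entropy fix for a characteristic speed `lam`:
    return |lam| when it is safely away from zero, but replace it
    with the smooth parabola (lam**2 + delta**2) / (2 * delta) when
    |lam| < delta.  This keeps the dissipation strictly positive at
    a sonic point, which is what excludes expansion shocks."""
    lam = np.asarray(lam, dtype=float)
    a = np.abs(lam)
    return np.where(a >= delta, a, (lam * lam + delta * delta) / (2.0 * delta))
```

At `lam = 0` the fixed modulus is `delta / 2` rather than zero, so the scheme never loses all its dissipation exactly where an unphysical expansion shock would otherwise sit.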

Another great example is the carbuncle instability, where mesh-aligned shock waves exhibit a symmetry-breaking instability. This problem was first seen in blunt-body simulations of re-entry vehicles, but inhabits frightfully many simulations in a host of applications. The issue remains inadequately solved; some remedies exist, but none is fully satisfactory. The progress made to date has largely come through its clearer identification and the production of simplified test problems that exhibit its fundamental behavior. In my own experience, if a code doesn’t explicitly stabilize against carbuncle instabilities, the problem will be present and will manifest itself. These manifestations are often subtle and difficult to detect, masquerading as valid physical effects. A good test problem will expose the difficulty and force the code to be robust to it.

One of the best carbuncle-exposing problems simply propagates a strong shock wave in one dimension, hardly a challenge today. The wrinkle is to make the problem two- or three-dimensional, even though the extra dimensions should be ignorable. The test introduces a small perturbation and checks whether the shock wave induces unphysical growth in the resulting solution. Codes that exhibit the carbuncle instability allow the perturbations to grow and ultimately corrupt the solution. The problem is simple and elegant, but brutally effective for the purpose it was designed for. It was inspired by the successful analysis of the core issues behind the carbuncle instability.
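A sketch of that perturbation setup might look like the following. Quirk’s original odd-even test perturbs the mesh itself; this variant seeds a tiny random perturbation in the initial density instead, which serves a similar diagnostic purpose. All names, sizes, and amplitudes here are illustrative assumptions:

```python
import numpy as np

def perturbed_shock_ic(nx=800, ny=20, eps=1.0e-6, seed=0):
    """Set up the ignorable-dimension carbuncle test (sketch):
    a 1-D strong-shock problem is run on a 2-D mesh whose
    y-direction should be ignorable, with a tiny random
    perturbation of amplitude `eps` seeded in the density field.
    A carbuncle-prone scheme amplifies the perturbation into a
    corrupted solution; a stabilized scheme damps it away."""
    rng = np.random.default_rng(seed)
    rho = np.ones((nx, ny))               # uniform pre-shock density
    rho += eps * rng.standard_normal((nx, ny))
    return rho
```

The pass/fail criterion is then simply whether the y-variation of the solution stays at the seeded amplitude or grows by orders of magnitude as the shock sweeps through.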

The key idea in the modeling and simulation enterprise is identifying problems that expose weaknesses in current methods and codes. Once these weaknesses are addressed, the problems provide a codex of wisdom about what has been solved, and a check on whether such wisdom was accounted for in a new method or code. New problems working at the boundaries of our knowledge are needed to push the community and spur progress. Meaningful progress can be measured through the conversion of such challenging problems from ones that break methods and codes, to ones solved reasonably, and finally to ones solved accurately. When a problem has moved from threat to success, it is time to create a new, harder problem to continue progress.

Ultimately, the brutal problems are an admission of where the method or code is vulnerable. This reflects on the personal vulnerability of those who have professionally associated themselves with the code. That makes testing the code rigorously and brutally almost impossibly hard, especially in today’s “no failure allowed” accountability culture. It’s much easier to simply stop where everything appears to work just fine. Brutal problems spur growth, but also challenge our belief in our mastery and success. It takes strong people to continually confront the boundaries of their knowledge and capability, push them back, and confront them anew over and over.

At a deeply personal level, testing codes becomes an exercise in leaving one’s comfort zone. The comfort zone is where you know how to solve problems, and you feel like you have mastery of things. Breaking your code, fixing it, and breaking it again is a process of stepping out of the comfort zone as a principle for energizing progress. In my experience we have a culture that increasingly cannot deal with failure successfully. We must always succeed, and we are punished mercilessly for failure. Risk must be identified and avoided at all costs. The comfort zone is success, and for codes this looks like successful operation and bona fide healthy convergence, all of which easy problems provide. Brutal problems expose not just code weaknesses, but personal weaknesses and intellectual failings or gaps. Rather than chase personal excellence, we are driven to choose comfortable acquiescence to the status quo. Rather than be vulnerable and admit our limitations, we choose comfort and false confidence.

Vulnerability is the birthplace of love, belonging, joy, courage, empathy, and creativity. It is the source of hope, empathy, accountability, and authenticity. If we want greater clarity in our purpose or deeper and more meaningful spiritual lives, vulnerability is the path.

― Brené Brown

Sod, Gary A. “A survey of several finite difference methods for systems of nonlinear hyperbolic conservation laws.” Journal of Computational Physics 27, no. 1 (1978): 1-31.

Pember, R. B., and R. W. Anderson. A Comparison of Staggered-Mesh Lagrange Plus Remap and Cell-Centered Direct Eulerian Godunov Schemes for Eulerian Shock Hydrodynamics. No. UCRL-JC-139820. Lawrence Livermore National Lab., CA (US), 2000.

Gottlieb, J. J., and C. P. T. Groth. “Assessment of Riemann solvers for unsteady one-dimensional inviscid flows of perfect gases.” Journal of Computational Physics 78, no. 2 (1988): 437-458.

Munz, C.-D. “A tracking method for gas flow into vacuum based on the vacuum Riemann problem.” Mathematical Methods in the Applied Sciences 17, no. 8 (1994): 597-612.

Quirk, James J. “A contribution to the great Riemann solver debate.” In Upwind and High-Resolution Schemes, pp. 550-569. Springer Berlin Heidelberg, 1997.


Am I Productive?

Simply, in a word, NO, I’m not productive.

Most of us spend too much time on what is urgent and not enough time on what is important.

― Stephen R. Covey

A big part of productivity is doing something worthwhile and meaningful. It means attacking something important and creating innovative solutions. I do a lot of things every day, but very little of it is either worthwhile or meaningful. At the same time, I’m doing exactly what I am supposed to be doing! This means that my employer (or masters) are asking me to spend all of my valuable time doing meaningless, time-wasting things as a condition of employment. This includes trips to stupid, meaningless meetings with little or no content of value, compliance training, project planning, project reports, e-mails, jumping through hoops to get a technical paper released, and a smorgasbord of other paperwork. Much of this is foisted upon us by our governing agency, coupled with rampant institutional over-compliance or managerially driven ass covering. All of this equals no time or focus on anything that actually matters, squeezing out all the potential for innovation. Many of these actions create an environment where risk and failure are not tolerated, thus killing innovation before it can even appear.

Worse yet, the environment designed to provide “accountability” is destroying the very conditions innovative research depends upon. Thus we are completely accountable for producing nothing of value.

Every single time I do what I am supposed to do at work, my productivity is reduced. The only thing not required of me at work is actual productivity. All my training, compliance, and other work activities are focused on things that produce nothing of value. At some level we fail to respect our employees, and we end up wasting our lives by investing time in activities devoid of content and value. Basically, the entire apparatus of my work is focused on forcing me to do things that produce nothing worthwhile other than a wage to support my family. Real productivity and innovation are all on me, and increasingly a pro bono activity. The fact that actual productivity isn’t a concern for my employer is really fucked up. The bottom line is that we aren’t funded to do anything valuable, and the expectations on me are all bullshit and no substance. It’s been getting steadily worse with each passing year too. When my employer talks about efficiency, it is all about saving money, not producing anything for the money we spend. Instead of focusing on producing more or better with the funding, and unleashing the creative energy of people, we focus on penny-pinching and making the workplace more unpleasant and genuinely terrible. None of the changes make for a better, more engaging workplace; they simply continue to reduce the empowerment of employees.

One of the big questions to get to is: what is productive in the first place?

There is nothing quite so useless, as doing with great efficiency, something that should not be done at all.

― Peter F. Drucker

Productivity is creating ideas and things, whether it is science or art. Done properly, science is art. I need to be unleashed to provide value for my time. Creativity requires focus and inspiration, making the best of opportunities provided by low-hanging fruit. Science can be inspired by good mission focus producing clear and well-defined problems to solve. None of these wonderful things is the focus of efficiency or productivity initiatives today. Every single thing my employer does puts a leash on me, and undermines productivity at every turn. This leash is put into a seemingly positive form, accountability, but never asks or even allows me to be productive in the slightest.

Poorly designed and motivated projects are not productive. Putting research projects into a project management straitjacket makes everything worse. We make everything myopic and narrow in focus. This kills one of the biggest sources of innovation. Most breakthroughs aren’t totally original, but rather the adaptation of a mature idea from one field into another. It is almost never a completely original thing. Our focused and myopic management destroys the possibility of these innovations. Increasingly our projects and proposals are all written to elicit funding, not produce the best results. We produce a system that focuses on doing things that are low risk with nearly guaranteed payoff, which results in terrible outcomes where progress is incremental at best. The result is a system where I am almost definitively not productive if I do exactly what I’m supposed to do. The entire apparatus of accountability is absurd and an insult to productive work. It sounds good, but it’s completely destructive, and we keep adding more and more of it.

Why am I wasting my time? I could produce so much with a few hours of actual work. I read article after article that says I should be able to produce incredible results working only four hours a day. The truth is that we are creating systems at work that keep us from doing anything productive at all, and that I have to go to incredible lengths to get anything close to four hours of actual scientific focus time. We are destroying our ability to work effectively, all in the name of accountability. We destroy work in the process of assuring that work is getting done. It’s ironic, it’s tragic, and it’s totally unnecessary.

If you want something new, you have to stop doing something old.

― Peter F. Drucker

The key is: how do we get to the point of not wasting my time with this shit? Accountability sounds good, but underneath its execution is a deep lack of trust. The key to allowing me to be productive is to trust me. We need to realize that our systems at work are structured to deal with a lack of trust. Implicit in all the systems is a feeling that people need to be constantly checked up on, and that if they aren’t, they are fucking off. The result is an almost complete lack of empowerment, and a labyrinth of micromanagement. To be productive we need to be trusted, and we need to be empowered. We need to be chasing big important goals that we are committed to achieving. Once we accept the goals, we need to be unleashed to accomplish them, solving all sorts of problems along the way and providing innovative solutions that enrich the knowledge of humanity and society at large. This is a tried and true formula for progress that we have lost faith in, and with this lack of faith we have lost trust in our fellow citizens.

The whole system needs to be oriented toward the positive and away from the underlying premise that people cannot be trusted. This goes hand in hand with the cult of the manager today. If one looks at the current organizational philosophy, the manager is king and the apex of importance. Work is to be managed and controlled. The workers are just cogs in the machine, interchangeable and utterly replaceable. To be productive, the work itself needs to be celebrated and enabled. The people doing the work need to be the focus of the organization, with its actions enabling wonderful work to get done. The organization needs to enable and support productive work, and create an environment that fosters the best in people. Today’s organizations are centered on expecting and controlling the worst in people with the assumption that they can’t be trusted. If people are treated like they can’t be trusted, you can’t expect them to be productive. To be better and more productive, we need to start with a different premise: center and focus our work on the producers, not the managers. We need to trust and put faith in each other to solve problems, innovate and create a better future.

To be happy we need something to solve. Happiness is therefore a form of action.

― Mark Manson

Money is a terrible organizing principle


There is only one valid definition of business purpose: to create a customer.

― Peter F. Drucker

In today’s world money is the prime mover in almost every decision made. Money is the raison d’être in how we are managed, and what we perceive as correct. It has become a surrogate for what is morally correct, and technically proper. Fundamentally money is just a tool. Instead of balance and some sort of holistic attitude toward what we do, money ends up being the core of meaning. The signaling from society is clear and unambiguous: money is what matters. In the process of making this tool the center of focus, we lose the focus on reality. In terms of business and those who make decisions for business, money is all that matters. If it puts more money in the pockets of those in power (i.e., the stockholder), it is by current definition a good decision. The flow and availability of money is maximized by a short business cycle, and an utter lack of long-term perspective. In the work that I do, we define the correctness of our work by whether money is allocated for it. This attitude has led us toward some really disturbing outcomes.

What gets measured gets improved.

—Peter Drucker

Stepping away from the big picture of science for a moment is instructive in seeing how money distorts things. Medicine and medical care is a good example of the sort of abominable things that money does to decision making. The United States spends an immense amount of its aggregate wealth on health care, yet the outcomes for Americans are poor. For people with lots of resources (i.e., money) the health care is better than anywhere in the World. For the common person the health care approaches second World standards, and for the poor, third World standards. The outcomes for our citizens follow suit in terms of life expectancy. The reason for our terrible health care system is as clear as the day is long: money. More specifically, the medical system is tied to profit motive rather than responsibility and ethics, resulting in outcomes being directly linked to people’s ability to pay. The moral and ethical dimension of health care in the United States is indefensible and appalling. It is because money is the prime mover for decisions. Worse yet, the substandard medical care for most of our citizens is a drain on society and produces awful results, but provides a vast well of money for the rich and wealthy to leech off of.

Money is a tool. Period. Computers are tools too. When tools become reasons and central organizing principles we are bound to create problems. I’ve written volumes on the issues created by the lack of perspective on computers as tools as opposed to ends unto themselves. Money is similar in character. In my world these two issues are intimately linked, but the problems with money are broader. Money’s role as a tool is as a surrogate for value and worth that can be exchanged for other things of value. Money’s meaning is connected to the real world things it can be exchanged for. We have increasingly lost this sense and put ourselves in a position where value and money have become independent of each other. This independence is truly a crisis and leads to severe misallocation of resources. At work, the attitude is increasingly “do what you are paid to do,” “the customer is always right,” “we are doing what we get funded to do.” The law, training and all manner of organizational tools enforce all of this. This is shadowed by business, where the ability to make money justifies anything. We trade, destroy and carve up businesses so that stockholders can make money. All sense of morality, justice, and long-term consequence is sacrificed if money can be extracted from the system. Today’s stock market is built, and legally enforced, to create wealth in this manner. The true purpose of the stock market is to create resources for businesses to invest and grow. This purpose has been completely lost today, and the entire apparatus is in place to generate wealth without regard for the health of the business. Increasingly we have used business as the model for managing everything. To its disservice, science has followed suit and lost the sense of long-term investment by putting business practice into use to manage research.
In many respects the core religion in the United States is money and profit with its unquestioned supremacy as an organizing and managing principle.

So much of what we call management consists of making it difficult for people to work.

—Peter Drucker

For example, I noted how horribly we are managing a certain program at work, how poorly the work is suited toward the espoused outcomes. The response to this is always, “this is the program we could get funded.” Instead of doing what has value or what is needed, we construct programs to get money. Increasingly, the way we are managed pushes a deep level of accountability to the money instead of value and purpose. The workplace messaging is “only work on what you are paid to do.” Everything we do is based on the customer who is writing the checks. The vacuous and shallow end results of this management philosophy are clear. Instead of doing the best thing possible for real world outcomes, we propose what people want to hear and what is easily funded. Purpose, value and principles are all sacrificed for money. The biggest loss is the inability to deal with difficult issues or get to the heart of anything subtle. The money is increasingly uncoordinated, and nothing is tied to large objectives. In the trenches people simply work on the thing they are being paid for and learn not to ask difficult questions or think in the long term. The customer cares nothing about the career development or expertise of those they fund. With money put first, our career development and National scientific research are in free fall, whether we look at National Labs or Universities.

There is nothing quite so useless as doing with great efficiency something that should not be done at all.

—Peter Drucker

At the heart of the matter is difficulty with long-term value. The impact of short-term thinking is clear in business. Short term drives are great for making money for stockholders; the more activity in the stock market, the better. The long term health of business is always lost to the possibility of making more money in the now. By the same token, short-term thinking is terrible for value to society and leads to many businesses simply being chewed up and spit out. Unfortunately our society has adopted short term thinking for everything, including science. All activities are measured quarterly (or even monthly) against the funded plans. Organizations are driving everyone to abide by this short-term thinking. No one can use their judgment or knowledge gained to change this for values that transcend money. The result is a complete loss of long-term perspective in decision-making. We have lost the ability to care for the health and growth of careers. The defined financial path has become the only arbiter of right and wrong. All of our judgment is based on money: if it’s funded, it is right; if it isn’t funded, it’s wrong. More and more, the long-term interests aren’t funded, so our future withers right in front of us. The only ones benefiting from the short-term thinking are a small number of the wealthiest people in society. Most people and society itself are left behind, but forced to serve their own demise.

Long-range planning does not deal with the future decisions, but with the future of present decisions.

—Peter Drucker

Doing something better is relatively easy to devise, but seemingly impossible to implement in the near term. A large part of the problem is laws that favor short-term interests and profit taking over long-term investment. These laws are entirely created to maximize personal wealth creation. Instead, laws are needed to maximize the societal creation of wealth, which is invariably long-term in perspective. We could bias the system in favor of long-term investment. Part of the answer is the tax system. Currently the system of taxation is completely oriented toward the short term and wealth creation for individuals. The attitude today is that if you can make lots of money, it is correct. This perspective needs to change to something more nuanced. We need to balance this idea with value, impact and the long-term perspective. Ultimately this will require people in power to sacrifice wealth now for more wealth in the future. People need to receive a significant benefit for putting off short-term profit to take the long-term perspective. We need to overhaul how science is done. Research, a notably long-term investment, must be recovered and freed from the business ideas that are destroying the ability of science to create value. The idea that today’s business practices are correct is utterly perverse and damaging.

Rank does not confer privilege or give power. It imposes responsibility.

― Peter F. Drucker

The problem with making these changes is primarily those who benefit from the current system. A small number of the most powerful and wealthy in society are significantly advantaged. They will work steadfastly to keep the current system in place because it benefits them. Everyone else can be damned, and in many cases the powerful care little about society at large (some wealthy people seem to have adopted a more generous attitude; Bill Gates and Warren Buffett come to mind). Money having value over real world things is to their advantage. Creating a system that benefits all of society hurts them. This is true in the short term, but in the longer term it creates less overall wealth. We need a realization that the long-term effects of current attitudes and policies are a loss to everyone. A piece of this puzzle is a greater degree of responsibility for the future on the part of the rich and powerful. Our leaders need to work for the benefit of everyone, not for their accumulation of more wealth and power. Until this becomes more evident to the population as a whole, we can expect the wealthy and powerful to continue to favor a system that benefits themselves to the exclusion of everyone else.

Doing the right thing is more important than doing the thing right.

—Peter Drucker

Part of the overall puzzle is overcoming the infatuation with using business models to manage everything, including science. Business isn’t necessarily incompatible with the best interests of science, but today’s business practices are utterly orthogonal to good science.

Management is doing things right. Leadership is doing the right things.

—Peter Drucker



We need better theory and understanding of numerical errors

Only those who dare to fail greatly can ever achieve greatly.

― Robert F. Kennedy

In modeling and simulation numerical error is an extremely important yet generally poorly understood thing. For the general nonlinear problems dominating the use and utility of high performance computing, the state of affairs is quite incomplete. Numerical error has a central role in modeling and simulation, making our gaps in theory, knowledge and practice rather unsettling. Theory is strong for linear problems where solutions are well behaved and smooth (i.e., continuously differentiable, or at least many derivatives exist). Almost every problem of substance driving National investments in computing is nonlinear and rough. Thus, we have theory that largely guides practice by faith rather than rigor. We would be well served by a concerted effort to develop theoretical tools better suited to our reality.

Sometimes a clearly defined error is the only way to discover the truth.

― Benjamin Wiker

We have a fundamental existence theory for convergent solutions defined by Lax’s early work (the fundamental theorem of numerical analysis). It is quite limited, rigorously applying to linear differential equations, yet defining basic approaches to numerical approximations for models that are almost invariably nonlinear. The theorem states that when a stable approximation is consistent (approximates the differential equation properly), it will converge to the correct solution. By convergent we mean that the solution approaches the exact solution as the manner of approximation grows closer to the continuum, which is associated with small discrete steps/meshes and more computational resources. This theorem provides the basis and ultimate drive for faster, more capable computing. We apply it most of the time where it is invalid. We would be greatly served by having a theory that is freed of these limits. Today we just cobble together a set of theories, heuristics and lessons into best practices and we stumble forward.
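To make the theorem concrete where it does apply, here is a minimal sketch (my own illustrative example, not from any particular code) measuring the observed convergence rate of first-order upwind differencing for linear advection with smooth data. Because the scheme is consistent and stable, the error shrinks at roughly first order as the mesh is refined:

```python
import numpy as np

def upwind_error(n, t_final=0.5, cfl=0.5):
    # L1 error of first-order upwind for u_t + u_x = 0, periodic on [0,1].
    dx = 1.0 / n
    x = (np.arange(n) + 0.5) * dx
    u = np.sin(2 * np.pi * x)                 # smooth initial data
    steps = int(round(t_final / (cfl * dx)))
    dt = t_final / steps                      # land exactly on t_final
    for _ in range(steps):
        u = u - (dt / dx) * (u - np.roll(u, 1))   # upwind difference
    exact = np.sin(2 * np.pi * (x - t_final))
    return dx * np.abs(u - exact).sum()

errors = [upwind_error(n) for n in (64, 128, 256)]
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(2)]
```

The observed orders approach one, which is exactly the guarantee the theorem gives for this linear, smooth case, and exactly what we lose the right to expect on rough nonlinear problems.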

Part of making use of this fundamental theorem is producing a consistent approximation to the model of choice. The tools for accomplishing this are things like Taylor series, polynomials and finite elements. All of these methods depend to some degree on solutions being well behaved and nice. Most of our simulations are neither well behaved nor nice. We assume an idealized nice solution then approximate using some neighborhood of discrete values. Sometimes this is done using finite differences, or cutting the world into little control volumes (equivalent in simple cases), or creating finite elements and using variational calculus to make approximations. In all cases the underlying presumption is smooth, nice solutions, while most of the utility of approximations violates these assumptions. Reality is rarely well behaved or nice, so we have a problem. Our practice has done reasonably well and taken us far, but a better, more targeted and useful theory might truly unleash innovation and far greater utility.
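The Taylor-series machinery can be seen at work in a few lines. The central difference formula carries an O(h²) truncation error, but only because the function being differentiated is smooth; the function and step sizes below are arbitrary illustrative choices:

```python
import numpy as np

# Central difference (f(x+h) - f(x-h)) / 2h: Taylor expansion shows the
# leading error term is (h^2/6) f'''(x), i.e., second-order accuracy --
# provided f has the derivatives the expansion assumes.
f, dfdx = np.sin, np.cos
x0 = 1.0
hs = [0.1, 0.05, 0.025]
errors = [abs((f(x0 + h) - f(x0 - h)) / (2 * h) - dfdx(x0)) for h in hs]
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(2)]
```

Halving h quarters the error, giving an observed order of two. Replace the sine with something containing a kink or a jump and this clean behavior evaporates, which is the whole problem with applying smooth-solution theory to rough reality.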

The aim of science is not to open the door to infinite wisdom, but to set a limit to infinite error.

― Bertolt Brecht

We don’t really know what happens when the theory falls apart, and simply rely upon bootstrapping ourselves forward. We have gotten very far with very limited theory, moving forward largely on faith. We do have some limited theoretical tools, like conservation principles (the Lax-Wendroff theorem) and entropy solutions (converging toward solutions associated with a viscous regularization consistent with the second law of thermodynamics). What we miss is a general understanding of what guides accuracy and defines error in these cases. We cannot design methods specifically to produce accurate solutions in these circumstances, and we are guided by heuristics and experience rather than rigorous theory. A more rigorous theoretical construct would provide a springboard for productive innovation. Let’s look at a few of the tools available today to put things in focus.
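One of these tools, conservation form, can be demonstrated directly. The sketch below (an illustrative example with my own choice of grid and data) solves a Burgers' equation Riemann problem two ways: conservative flux differencing moves the shock at the speed the jump conditions demand, while an equally consistent non-conservative form gets the speed entirely wrong. That is the practical content of the Lax-Wendroff theorem:

```python
import numpy as np

def burgers_riemann(n=200, t_final=0.2, conservative=True):
    # Riemann problem for u_t + (u^2/2)_x = 0: u=2 on the left, u=0 on
    # the right. The exact shock speed is (f_L - f_R)/(u_L - u_R) = 1.
    dx = 1.0 / n
    x = (np.arange(n) + 0.5) * dx
    u = np.where(x < 0.5, 2.0, 0.0)
    dt = 0.2 * dx                        # CFL = max|u| * dt/dx = 0.4
    for _ in range(int(round(t_final / dt))):
        um = np.roll(u, 1)
        if conservative:
            u = u - (dt / dx) * (0.5 * u**2 - 0.5 * um**2)  # upwind flux form
        else:
            u = u - (dt / dx) * u * (u - um)                # u u_x form
        u[0] = 2.0                       # inflow boundary
    return x[np.argmax(u < 1.0)]         # crude shock locator

pos_cons = burgers_riemann(conservative=True)
pos_ncons = burgers_riemann(conservative=False)
```

The conservative run places the shock near 0.5 + 1.0 * 0.2 = 0.7; the non-conservative run leaves it frozen near 0.5. Both discretizations are consistent with the differential equation, which is exactly why consistency alone is not enough once solutions are rough.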

One of the first things one encounters in putting together discrete approximations in realistic circumstances is a choice. For nonlinear features leading to general and rough solutions, one can decide to track features in the solution explicitly. The archetype of this is shock tracking, where the discrete evolution of a shock wave is defined explicitly in the approximation. In essence the shock wave (or whatever wave is tracked) becomes an internal boundary condition allowing regular methods to be used everywhere else. This typically involves the direct solution of the Rankine-Hugoniot relations (i.e., the shock jump conditions, algebraic relations holding at a discontinuous wave). The problems with this approach are extreme, including unbounded complexity if all waves are tracked, or with solution geometry in multiple dimensions. This choice has been with us since the dawn of computation; the very first calculations at Los Alamos used this technique, but it rapidly becomes untenable.
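The Rankine-Hugoniot relations that tracking methods solve can be written compactly for a perfect gas. This helper encodes the standard textbook normal-shock formulas, returning the pressure and density jumps given the upstream Mach number:

```python
def normal_shock(mach, gamma=1.4):
    # Rankine-Hugoniot jump conditions for a normal shock in a perfect gas:
    # pressure and density ratios across the shock vs. upstream Mach number.
    m2 = mach * mach
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (m2 - 1.0)
    rho_ratio = (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)
    return p_ratio, rho_ratio

p2, r2 = normal_shock(2.0)   # Mach 2, gamma = 1.4: ratios 4.5 and 8/3
```

A tracking code solves relations like these at every tracked front every step, which hints at why the bookkeeping explodes once many interacting waves appear in multiple dimensions.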

To address the practical aspects of computation, shock capturing methods were developed. Shock capturing implicitly computes the shock wave on a background grid by detecting its presence and adding a physically motivated dissipation to stabilize its evolution. This concept has made virtually all of computational science possible. Even when tracking methods are utilized, the explosion of complexity is tamed by resorting to shock capturing away from the dominant features being tracked. The origin of the concept came from Von Neumann in 1944, but lacked a critical element for success, dissipation or stabilization. Richtmyer added this critical element with artificial viscosity in 1948 while working at Los Alamos on problems whose complexity was advancing beyond the capacity of shock tracking to deal with. Together Von Neumann’s finite differencing scheme and Richtmyer’s viscosity enabled shock capturing. It was a proof of principle, and its functionality was an essential springboard for others to have faith in computational science.
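The flavor of artificial viscosity can be sketched on Burgers' equation: centered differencing, unstable at a shock on its own, is stabilized by a small linear viscosity plus a quadratic, Richtmyer-style term that switches on only where the solution jumps. The coefficients and problem setup below are illustrative assumptions, not the tuned values of any production code:

```python
import numpy as np

def burgers_avisc(n=200, t_final=0.1, c_lin=1.0, c_quad=1.0):
    # Centered differencing for Burgers' equation stabilized by artificial
    # viscosity: a small linear term everywhere plus a quadratic term
    # proportional to the local jump, in the spirit of shock capturing.
    dx = 1.0 / n
    x = (np.arange(n) + 0.5) * dx
    u = np.where(x < 0.5, 2.0, 0.0)          # shock-forming Riemann data
    dt = 0.1 * dx                            # max|u| = 2 -> advective CFL 0.2
    for _ in range(int(round(t_final / dt))):
        up, um = np.roll(u, -1), np.roll(u, 1)
        conv = (up**2 - um**2) / (4.0 * dx)  # centered (u^2/2)_x
        du = up - u                          # jump at the right cell face
        nu = c_lin * dx * 2.0 + c_quad * dx * np.abs(du)
        vflux = nu * du / dx                 # viscous flux at faces
        u = u - dt * conv + (dt / dx) * (vflux - np.roll(vflux, 1))
        u[0], u[-1] = 2.0, 0.0               # hold far-field states
    return x, u

x, u = burgers_avisc()
```

The solution stays bounded near the correct states and the captured shock propagates at essentially the right speed, without the shock ever being represented explicitly; that is the whole trick.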

What one recognizes is that when dealing with shock waves, physics must be added to the discrete representation. This happens explicitly in tracking, where the shock itself becomes a discrete element, or implicitly with shock capturing, where the approximation is adapted using the physics of shocks. Of course, shock capturing is useful for more than just shocks. It can be used to stabilize the computation of any feature. The overall methodology has some additional benefits not immediately recognized by its originators. For computing turbulence without fully resolving features, shock capturing methods are essential (i.e., not DNS, but DNS can be criticized in its practice). Large eddy simulation was born out of adding the original Richtmyer-Von Neumann viscosity to weather modeling, and resulted in the creation of the Smagorinsky eddy viscosity. Other shock capturing methods developed for general purposes have provided the means for implicit Large Eddy Simulation. These methods all have the same origin, and rely upon the basic principles of shock capturing. The fact that all of this has the same origin almost certainly has a deep meaning that is lost in most of today’s dialog. We would be well served by aggressively exploring these connections in an open-minded and innovative fashion.

One of the key things about all of this capability is the realization of how heuristic it is at its core. Far too much of what we currently do in computational science is based upon heuristics and experience gained largely through trial and error. Far too little is based upon rigorous theory. The advancement of our current approaches through theory would be a great service to the field. Almost none of the current efforts are remotely associated with advancing theory. If one gets down to brass tacks about the whole drive for exascale, we see that it is predicated on the concept of convergence, whose theoretical support is extrapolated from circumstances that don’t apply. We are really on thin ice, and stunningly unaware of the issues. This lack of awareness then translates to lack of action, lack of priority, lack of emphasis and ultimately lack of money. In today’s world if no one pays for it, it doesn’t happen. Today’s science programs are designed to be funded, rather than designed to advance science. No one speaks out about how poorly thought through our science programs are; they simply are grateful for the funding.

When I was a kid, they had a saying, ‘to err is human but to really fuck it up takes a computer.’

― Benjamin R. Smith

There are a host of technologies and efforts flowing out of our current work that could all benefit from advances in the theory of numerical approximation. In addition to the development of larger computers, we see the application of adaptive mesh refinement (AMR) to provide enhanced resolution. AMR is even more highly bootstrapped and leveraged in terms of theory. By the same token, AMR’s success is predicated on best practices and experience from a wealth of applications. AMR is an exciting technology that produces stunning results. Better and more appropriate theory can turn the flashy graphics AMR produces into justifiable, credible results. A big part of moving forward is putting verification and validation into practice. Both activities are highly dependent on theory that is generally weak or non-existent. Our ability to rigorously apply modeling and simulation to important societal problems is being held back by our theoretical failings.
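AMR's heuristic character shows up already in how cells are selected for refinement. A typical criterion is nothing more than a gradient threshold, as in this sketch (the tolerance and test profile are arbitrary choices of mine, standing in for the kind of flagging rule AMR codes use in the absence of rigorous error estimates):

```python
import numpy as np

def flag_for_refinement(u, tol=0.1):
    # Gradient-based flagging heuristic: mark any cell whose jump to a
    # neighbor exceeds a tolerance times the overall solution range.
    jump = np.abs(np.diff(u))
    thresh = tol * (u.max() - u.min())
    flags = np.zeros(u.size, dtype=bool)
    flags[:-1] |= jump > thresh   # cell left of a large jump
    flags[1:] |= jump > thresh    # cell right of a large jump
    return flags

x = np.linspace(0.0, 1.0, 101)
u = np.tanh((x - 0.5) / 0.02)     # sharp front at x = 0.5
flags = flag_for_refinement(u)
```

The flags cluster tightly around the front and nowhere else, which is what makes AMR effective in practice; but nothing in this rule says anything rigorous about the error actually committed, which is the theoretical gap.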

Another area with critical importance and an utter lack of support is subgrid closure modeling, especially where it depends on the mesh scale itself. The general thinking about closure modeling is completely haphazard and heuristic. The combination of numerical modeling and closure at the mesh scale is poorly thought out, and generally lacking any theoretical support. Usually the closure models are tied directly to the mesh scale, yet numerical methods rarely produce good solutions at the smallest mesh scale, but rather over a number of mesh cells (or elements). We rarely think about how well we define or resolve solution structures and how that connects to modeling. Instead models are thought of solely geometrically in terms of scale and tied to the mesh scale. As a result we don’t have consistency between our mesh, numerical solution and the resolution-fidelity of the numerical method. Often this leaves the modeling in the code completely mesh-dependent, with no chance of mesh independence.
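The mesh dependence is easy to exhibit with the Smagorinsky model itself, written here in a one-dimensional sketch (the constant and velocity field are illustrative choices). Because the eddy viscosity is proportional to the mesh scale squared, refining the mesh under a fixed resolved field simply shrinks the model:

```python
import numpy as np

def smagorinsky_nu(u, dx, cs=0.17):
    # 1D sketch of the Smagorinsky eddy viscosity, nu_t = (Cs * dx)^2 * |du/dx|.
    # The model coefficient is tied directly to the mesh scale dx.
    dudx = np.gradient(u, dx)
    return (cs * dx) ** 2 * np.abs(dudx)

# The same resolved velocity field sampled on two meshes: halving dx
# cuts the modeled viscosity by roughly a factor of four.
nus = []
for n in (100, 200):
    dx = 1.0 / n
    x = (np.arange(n) + 0.5) * dx
    nus.append(smagorinsky_nu(np.sin(2 * np.pi * x), dx).max())
ratio = nus[0] / nus[1]
```

Nothing in the model asks whether the numerical method actually resolves structures at the scale dx; the closure and the discretization error simply coexist on the same grid, uncoordinated, which is the inconsistency described above.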

A big issue is a swath of computational science where theory is utterly inadequate, much of it involving chaotic solutions with extreme dependence on initial conditions. Turbulence is the classical problem most closely related to this issue. Our current theory and rigorous understanding are vastly inadequate to spur progress. In most cases we are let down by the physics modeling and the mathematical and numerical theory alike. In every case we have weak to non-existent rigor, leading to heuristic-filled models and numerical solvers. Extensions of any of this work are severely hampered by the lack of theory (think higher order accuracy, uncertainty quantification, optimization, …). We don’t know how any of this converges; we just act like it does and use it to justify most of our high performance computing investments. All of our efforts would be massively assisted by almost any theoretical progress. Most of the science we care about is chaotic at a very basic level, and lots of interesting things are utterly dependent on understanding this better. The amount of focus on this matter is frightfully low.

My overall view is that the lack of investment and attention to our theoretical shortcomings is a significant burden. The flipside is the loss of a massive opportunity to make some incredible advances. Instead of solving a whole new class of problems powered by deeper understanding of physics and mathematics, we are laboring under vast gaps. This lowers the effectiveness of everything we do, and every dollar we spend. While a focus on advancing theory and understanding is quite risky, the benefits are extreme. If we are not prepared to fail, we will not succeed.

Success is not built on success. Not great success. Great success is built on failure, frustration, even catastrophe.

— Sumner Redstone

Lax, Peter D., and Robert D. Richtmyer. “Survey of the stability of linear finite difference equations.” Communications on pure and applied mathematics 9, no. 2 (1956): 267-293.

Von Neumann, John. “Proposal and analysis of a new numerical method for the treatment of hydrodynamical shock problems.” The collected works of John von Neumann 6 (1944).

Richtmyer, R. D. “Proposed numerical method for calculation of shocks.” LANL Report, LA 671 (1948): 1-18.

Von Neumann, John, and Robert D. Richtmyer. “A method for the numerical calculation of hydrodynamic shocks.” Journal of Applied Physics 21, no. 3 (1950): 232-237.

Mattsson, Ann E., and William J. Rider. “Artificial viscosity: back to the basics.” International Journal for Numerical Methods in Fluids 77, no. 7 (2015): 400-417.

Richtmyer, Robert D., and Keith W. Morton. Difference Methods for Initial-Value Problems. 2nd ed. Malabar, FL: Krieger Publishing Co., 1994.

Smagorinsky, Joseph. “General circulation experiments with the primitive equations: I. The basic experiment.” Monthly weather review 91, no. 3 (1963): 99-164.

Smagorinsky, Joseph. “The beginnings of numerical weather prediction and general circulation modeling: early recollections.” Advances in Geophysics 25 (1983): 3-37.

Boris, J. P., F. F. Grinstein, E. S. Oran, and R. L. Kolbe. “New insights into large eddy simulation.” Fluid dynamics research 10, no. 4-6 (1992): 199-228.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit large eddy simulation: computing turbulent fluid dynamics. Cambridge university press, 2007.

Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39, no. 9 (2002): 821-841.




Rethinking the meaning of Trump


Nationalism is power hunger tempered by self-deception.
— George Orwell

The day after the Presidential election in November left me reeling. The decision to elect Donald Trump was incomprehensible because of his deep flaws and utter lack of preparation and qualification for the office of President. Since he has taken office, none of Trump’s actions have provided any relief from these concerns. Whether I’ve looked at his executive orders, appointments, policy directions, public statements, conduct or behavior, the conclusion is the same: Trump is unfit to be President. He is corrupt, crude, uneducated, prone to fits of anger, engages in widespread nepotism, and acts utterly un-Presidential. He has done nothing to mitigate any of the concerns I felt that fateful Wednesday when it was clear that he had been elected President. At the same time virtually all of his supporters have been unwavering in support for him. The Republican Party seems impervious to the evidence before them about the vast array of problems Trump represents, supporting him, if not enabling his manifest dysfunctions.

Over the past month, and especially the last week, my views of what Trump means have shifted. If anything my conclusions about the meaning of his reign in the White House are worse than before. Mr. Trump was elected President due to the actions of the Russian Federation and their unprecedented hacking activities and seeding of false narratives into the public consciousness. The Russians deeply favored Trump in the election for two clear reasons: their dislike and fear of Clinton, and the congruence of Trump’s tendencies with Putin’s in terms of basic philosophy. In addition, Trump’s manifest incompetence would weaken the United States’ role internationally. We have effectively lost our role as leader of the Free World, and ironically put Germany in that role. Trump’s erratic actions and lack of Presidential skills, knowledge and behavior make the United States weak, and unable to stand up against a resurgent Russia. The whole thing is actually worse than all of this because Trump represents a new direction for the United States. He represents a new commitment to authoritarian rule, diminishment of freedom, plutocracy, kleptocracy and erratic jingoism.

This gets to the core of what I’ve realized about the meaning of Trump. The reason the Republicans are not disturbed by the Russian influence on the election or the President is that they are simpatico with the Russians. The ruling philosophy of Trump and the Republicans is the same as the Russians’. They use traditional religious and Nationalist values to build support among the populace while slanting the entire government toward two roles: putting money in the hands of the wealthy and imposing authoritarian policies to control the populace. Both scapegoat minorities and fringe groups with bigoted and even violent responses. Neither the Republicans nor the Russians are interested in Democratic principles, and both act steadfastly to undermine voting rights at every turn. The party and its leader, in turn, drive strong support among the common man by defending a core traditional National identity. This gives both Putin and Trump a political base from which they can deliver benefits to the wealthy ruling class while giving the common man red meat in the oppression of minorities and non-traditional people. All of this is packaged up with strongly authoritarian leadership and lots of extra law enforcement and military focus. Both Putin and Trump promote defending the Homeland from enemies external and internal. Terrorism provides a handy and evil external threat to further drive the Nationalist tendencies.

Here is the difference between Trump and Putin. Putin is a mastermind and a truly competent leader whose main interests are power for himself, and for Russia by proxy. Trump is imbecilic and utterly incompetent, and his interests are personal greed and power. He cares nothing for the Country or its people. Whether he is a witting or unwitting pawn of Putin doesn’t matter at some level. He is Putin’s pawn, and his rule is a direct threat to our Nation’s future and place in the World. The situation we find ourselves in is far graver than simply having an idiotic narcissist as President; we have a President who is undermining our Nation through both direct and indirect actions. We have a ruling political party that acts to enable this, making a foreign power more effective in the process.

The combination of the Republican Party and its leader in the Presidency is fundamentally reshaping the United States into a corrupt and incompetent mirror of Putin’s Russia. Only time will tell how far this will go or what the long-term consequences will be. The end result will be a United States that loses its position as the sole superpower in the World. The only ones benefiting from this change are Russia and the cadre of wealthy people served by both regimes. The rest of us will suffer.

Sometimes the first duty of intelligent men is the restatement of the obvious.
— George Orwell



Numerical Approximation is Subtle, and we don’t do subtle!

We are losing the ability to understand anything that’s even vaguely complex.

― Chuck Klosterman

I get asked, “what do you do?” quite often in conversation, and I realize the truth needs to be packaged carefully for most people. One of my issues is that I advertise what I do on my body with some incredibly nerdy tattoos, including an equation that describes one form of the second law of thermodynamics. What I do is complex and highly technical, full of incredible subtlety. Even when talking with someone from a nearby technical background, the subtlety of approximating physical laws numerically in a manner suitable for computing can be daunting. For someone without a technical background it is positively alien. This character comes into play rather acutely in the design and construction of research programs, where complex, technical and subtle does not sell. This is especially true in today’s world where expertise and knowledge are regarded as suspicious, dangerous and threatening by so many. In today’s world one of the biggest insults to hurl at someone is to accuse them of being one of the “elite”. Increasingly it is clear that this isn’t just an American issue, but Worldwide in its scope. It is a clear and present threat to a better future.

I’ve written often about the sorry state of high performance computing. Our computing programs are blunt and naïve, constructed to squeeze money out of funding agencies and legislatures rather than get the job done. The brutal simplicity of the arguments used to support funding is breathtaking. Rather than construct programs to be effective and efficient, getting the best from every dollar spent, we construct programs to be marketed to the lowest common denominator. For this reason something subtle, complex and technical like numerical approximation gets no play. In today’s world subtlety is utterly objectionable and a complete buzz kill. We don’t care that it’s the right thing to do, or that its return is massively greater than simply building giant monstrosities of computing. It would take an expert from the numerical elite to explain it, and those people are untrustworthy nerds, so we will simply get the money to waste on the monstrosities instead. So here I am, an expert and one of the elite, using my knowledge and experience to make recommendations on how to be more effective and efficient. You’ve been warned.

Truth is much too complicated to allow anything but approximations.

— John Von Neumann

If we want to succeed at remaining a high performance computing superpower, we need to change our approach, and fast. Part of what is needed is a greater focus on numerical approximation. This is part of a deep need to refocus on the more valuable aspects of the scientific computing ecosystem. The first thing to recognize is that our current hardware-first focus is oriented toward the least valuable part of the ecosystem, the computer itself. A computer is necessary, but horribly insufficient for high performance computing supremacy. The real value for scientific computing is at the opposite end of the spectrum, where work is grounded in physics, engineering and applied mathematics.

Although this may seem a paradox, all exact science is dominated by the idea of approximation.

— Bertrand Russell

I’ve made this argument before and it is instructive to unpack it. The model solved via simulation is the single most important aspect of the simulation. If the model is flawed, no amount of raw computer speed, numerical accuracy, or efficient computer code can rescue the solution and make it better. The model must be changed, improved, or corrected to produce better answers. If a model is correct, the accuracy, robustness, fidelity and efficiency of its numerical solution are essential. Everything upstream of the numerical solution, toward the computer hardware, is less important. We can move down the chain of activities, all of which are necessary, and see the same effect: the further you get from the model of reality, the less efficient the measures are. This whole thing is referred to as an ecosystem these days, and every bit of it needs to be in place. What also needs to be in place is a sense of the value of each activity, with priority placed on those that have the greatest impact, or the greatest opportunity. Instead of doing this today, we are focused on the thing with the least impact, farthest from reality, while starving the most valuable parts of the ecosystem. One might argue that the hardware is a subject of opportunity, but the truth is the opposite. The environment for improving the performance of hardware is at a historical nadir; Moore’s law is dead, dead, dead. Our focus on hardware is throwing money at an opportunity that has passed into history.

I’m a physicist, and we have something called Moore’s Law, which says computer power doubles every 18 months. So every Christmas, we more or less assume that our toys and appliances are more or less twice as powerful as the previous Christmas.

― Michio Kaku

At some point, Moore’s law will break down.

— Seth Lloyd

There is one word to describe this strategy: stupid!

At the core of the argument is a strategy that favors brute force over subtleties understood mainly by experts (or the elite!). Today the brute force argument always takes the lead over anything that might require some level of explanation. In modeling and simulation the esoteric activities, such as the actual modeling and its numerical solution, are quite subtle and technical in detail compared to the raw computing power that can be understood with ease by the layperson. This is the reason the computing power gets the lead in the program, not its efficacy in improving the bottom line. As a result our high performance computing world is dominated by meaningless discussions of computing power defined by a meaningless benchmark. The political dynamic is basically a modern day “missile gap” like we had during the Cold War. It has exactly as much virtue as the original “missile gap”; it is a pure marketing and political tool with absolutely no technical or strategic validity aside from its ability to free up funding.

Each piece, or part, of the whole of nature is always merely an approximation to the complete truth, or the complete truth so far as we know it. In fact, everything we know is only some kind of approximation because we know that we do not know all the laws as yet.

— Richard P. Feynman

Once you have an entire program founded on bullshit arguments, it is hard to work your way back to technical brilliance. It is easier to double down on the bullshit and simply define everything in terms of the original fallacies. A big part of the problem is the application of modern verification and validation to the process. Both verification and validation are modern practices for accumulating evidence on the accuracy, correctness and fidelity of computational simulations. Validation is the comparison of simulation with experiments, through which the relative correctness of models is determined. Verification determines the correctness and accuracy of the numerical solution of the model. Together the two activities should help energize high quality work. In reality most programs consider them to be nuisances and box-checking exercises to be finished and ignored as soon as possible. Programs like to say they are doing V&V, but don’t want to emphasize or pay for doing it well. V&V is a mark of quality, but the programs want its approval rather than attending to its results. Even worse, if the results are poor or indicate problems, they are likely to be ignored or dismissed as inconvenient. Programs get away with this because the practice of V&V is technical and subtle, and in the modern world highly susceptible to bullshit.

Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise.

— John W. Tukey

Numerical methods for solving models are even more technical and subtle. As such they are the focus of suspicion and ignorance. For high performance computing today they are considered yesterday’s work: a largely finished, completed product now simply needing a bigger computer to do better. In a sense this notion is correct; the bigger computer will produce a better result. The issue is that using computer power as the route to improvement is inefficient under the best of circumstances. We are not living under the best of circumstances! Things are far from efficient, as we have been losing the share of computer power advances useful for modeling and simulation for decades now. Let us be clear: we receive an ever-smaller proportion of the maximum computing power as each year passes. Thirty years ago we would commonly get 10, 20 or even 50 percent of the peak performance of the cutting edge supercomputers. Today even one percent of the peak performance is exceptional, and most codes doing real application work get significantly less than that. Worse yet, this dismal performance is getting worse with every passing year. This is one element of the autopsy of Moore’s law that we have been avoiding while its corpse rots before us.

So we are prioritizing improvement in an area where the payoffs are fleeting and suboptimal. Even these improvements are harder and harder to achieve as computers become ever more parallel and memory access costs become ever more extreme. Simultaneously we are starving more efficient means of improvement of resources and emphasis. Numerical methods and algorithms are two key areas not getting any significant attention or priority. Moreover, support for these areas is actually diminishing so that support for the inefficient hardware path can be increased. Let’s not mince words: we are emphasizing a crude, naïve and inefficient route to improvement at the cost of a complex and subtle route that is far more efficient and effective.

Numerical approximations and algorithms are complex and highly technical things, poorly understood by non-experts even if they are scientists. The relative merits of one method or algorithm compared to another are difficult to articulate; the comparison is highly technical and subtle. Since progress comes from creating new methods and algorithms, this means improvements are hard to explain and articulate to non-experts. In some cases methods and algorithms can produce breakthrough results and huge speed-ups. These cases are easy to explain. More generally a new method or algorithm produces subtle improvements over the older options, such as greater robustness, flexibility or accuracy. Most of these changes are not obvious, but making this progress over time leads to enormous improvements that swamp the progress made by faster computers.

An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.

― Werner Heisenberg

The huge breakthroughs are few and far between, but provide much greater value than any hardware over similar periods of time. To get these huge breakthroughs requires continual investment in research for extended periods of time. For much of the time the research is mostly a failure, producing small or non-existent improvements, until it isn’t. Without the continual investment, the failure, and the expertise failure produces, the breakthroughs will not happen. They are mostly serendipitous and the end product of many unsuccessful ideas. Today the failures and lack of progress are not supported; we exist in a system where insufficient trust exists to support the sort of failure needed for progress. The result is the addiction to Moore’s law and its seemingly guaranteed payoff, because it frees us from subtlety.

Often a sign of expertise is noticing what doesn’t happen.

― Malcolm Gladwell

A huge aspect of expertise is the taste for subtlety. Expertise is built upon mistakes and failure, just as basic learning is. Without the trust to allow people to gloriously make professional mistakes and fail in the pursuit of knowledge, we cannot develop expertise or progress. All of this lands heavily on the most effective and difficult aspects of scientific computing, the modeling and the numerical solution of the models. Progress on these aspects is both highly rewarding in terms of improvement, and very risky, being prone to failure. To compound matters, progress is often highly subjective itself, needing great expertise to explain and be understood. In an environment where the elite are suspect and expertise is not trusted, such work goes unsupported. This is exactly what we see: the most important and effective aspects of high performance computing are being starved in favor of brutish and naïve aspects, which sell well. The price we pay for our lack of trust is an enormous waste of time, money and effort.

Wise people understand the need to consult experts; only fools are confident they know everything.

― Ken Poirot

Again, I’ll note that we still have so much to do. Numerical approximations for existing models are inadequate and desperately in need of improvement. We are burdened by theory that is insufficient and heavily challenged by our models. Our models are all flawed, and the proper conduct of science should energize us to improve them.

…all models are approximations. Essentially, all models are wrong, but some are useful. However, the approximate nature of the model must always be borne in mind… [Co-author with Norman R. Draper]

— George E.P. Box

What we still don’t get about numerical error

The fundamental law of computer science: As machines become more powerful, the efficiency of algorithms grows more important, not less.

― Nick Trefethen

Modern modeling and simulation is viewed as a transformative technology for science and engineering. Invariably the utility of modeling and simulation is grounded in the solution of models via numerical approximations. The fact that numerical approximation is the key to unlocking its potential seems largely lost in the modern perspective, and is engaged in an increasingly naïve manner. For example, much of the dialog around high performance computing is predicated on the notion of convergence. In principle, the more computing power one applies to solving a problem, the better the solution. This is applied axiomatically, and relies upon a deep mathematical result in numerical approximation. This heritage and emphasis is not considered in the conversation, to the detriment of its intellectual depth.

Where all think alike there is little danger of innovation.

― Edward Abbey

At this point, the mathematics and specifics of numerical approximation are systematically ignored by the dialog. The impact of this willful ignorance is felt across the modeling and simulation world: a general lack of progress and emphasis on numerical approximation is evident. We have produced a situation where one of the most valuable aspects of numerical modeling is not getting focused attention. People are behaving as if the major problems are all solved and not worthy of attention or resources. The most important aspect is the modeling itself; the nature and fidelity of the models define the power of the whole process. Once a model has been defined, its numerical solution is the second most important and impactful aspect of modeling and simulation work. Yet virtually all the emphasis today is on the computers themselves, based on the assumption of their utility in producing better answers, while the nature of the numerical solution depends far more on the approximation methodology than on the power of the computer.

The uncreative mind can spot wrong answers, but it takes a very creative mind to spot wrong questions.

― Anthony Jay

People act as if the numerical error is so small as not to be important on one hand, while on the other encouraging great focus on computing power whose implicit rationale is reducing numerical error. To make matters worse with this corrupt logic, the most effective way to reduce numerical error is being starved of attention and resources, having little or no priority. The truth is that numerical errors are still too large, and increasing computing power is a lousy and inefficient way to make them smaller. We are committed to a low-risk path that is also highly inefficient, because the argument is accessible to the most naïve people in the room.

What is important is seldom urgent and what is urgent is seldom important.

― Dwight D. Eisenhower

Another way of getting to the heart of the issue is the efficacy of using gains in computer power to get better solutions. Increases in computing power are a terrible way to produce better results; they are woefully inefficient. One simply needs to examine the rate of solution improvement based on scaling arguments. First, we need to recognize that practical problems converge quite slowly with the application of enhanced computational resources. For almost any problem of true real world applicability, high-order convergence (higher than first-order) is never seen. Generally we might expect solutions to improve at first-order in the inverse of the mesh size. For a three-dimensional, time-dependent problem, halving the numerical error therefore requires at least 16 times the computing power (a factor of two in each of three space dimensions plus time). Usually convergence rates are less than first order, so the situation is actually even worse. As a result we are investing an immense amount in progressing in an incredibly inefficient manner, and starving more efficient means of progress. To put more teeth on the impact of current programs: the exascale initiative wants fifty times more computing power, which will only reduce errors by slightly more than half. So we will spend huge effort and billions of dollars to make numerical errors smaller by half. What an utterly shitty return on investment! It is doubly shitty when you realize that so much more could be done to improve matters by other means.
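The scaling arithmetic above can be sketched in a few lines. This is a back-of-the-envelope illustration of my own (the function name and defaults are mine), with `dims=4` standing for three space dimensions plus time:

```python
def error_reduction_from_compute(compute_factor, order=1.0, dims=4):
    """Factor by which the error shrinks when `compute_factor` times more
    resources are spent on uniform mesh refinement, for a scheme converging
    at `order` in the mesh spacing, on a problem with `dims` mesh directions
    (3 space + 1 time for a transient 3-D calculation)."""
    refinement = compute_factor ** (1.0 / dims)  # per-direction refinement
    return refinement ** order

# Halving the error at first order takes 16x the compute,
# while a 50x (exascale-style) boost only cuts errors by roughly 2.7x.
print(error_reduction_from_compute(16.0))
print(error_reduction_from_compute(50.0))
```

With sub-first-order convergence (set `order=0.5`, say), the same 50x buys even less, which is the point of the paragraph above.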

The first thing we need to recognize for progress is the relative efficacy of different modes of investment. The most effective way to progress in modeling and simulation is better models. Better models require work on theory and experiment, with deeply innovative thinking based on inspiration and on evidence of the limitations of current theory and modeling. For existing models, and any new ones, the next step is solving the models numerically. This involves detailed and innovative numerical approximations of the models. The power of modeling and simulation with computers is predicated on the ability to solve complex models that cannot be understood analytically (or at least not without severe restrictions or assumptions). The fidelity of the numerical approximations is the single most effective way to improve results once modeling errors have been addressed. Numerical approximations can make a huge difference in the accuracy of simulations, far more effectively than computer power.

Don’t tell me about your effort. Show me your results.

― Tim Fargo

So why are we so hell bent on investing in the more inefficient manner of progressing? Because of our mindless addiction to Moore’s law, which provided improvements in computing power over the last fifty years, in effect for free, to the modeling and simulation community.

Our modeling and simulation programs are addicted to Moore’s law as surely as a crackhead is addicted to crack. Moore’s law has provided a means to progress without planning or intervention for decades; time passes and capability grows almost as if by magic. The problem we have is that Moore’s law is dead, and rather than moving on, the modeling and simulation community is attempting to raise the dead. By this analogy, the exascale program is basically designed to create zombie computers that completely suck to use. They are not built to get results or do science; they are built to get exascale performance on some sort of bullshit benchmark.

This gets to the core of the issue: our appetite for risk and failure. Improving numerical approximations is risky and depends on breakthroughs and innovative thinking. Moore’s law has sheltered the modeling and simulation community from risk and failure in computing hardware for a very long time. If you want innovation you need to accept risk and failure; innovation without risk and failure simply does not happen. We are intolerant of risk and failure as a society, and this intolerance dooms innovation, strangling it in its crib. Moore’s law allowed progress without risk, as if it came for free. The exascale program will be the funeral pyre for Moore’s law, and we are threatening the future of modeling and simulation with our unhealthy addiction to it.

If failure is not an option, then neither is success.

― Seth Godin

There is only one thing that makes a dream impossible to achieve: the fear of failure.

― Paulo Coelho

The key thing to realize about this discussion is that improving numerical approximations is risky and highly prone to failure. You can invest in improving numerical approximations for a very long time without any seeming progress, until one gets a quantum leap in performance. The issue in the modern world is the lack of predictability of such improvements. Breakthroughs cannot be predicted and cannot be relied upon to happen on a regular schedule. A breakthrough requires innovative thinking and a lot of trial and error. The ultimate quantum leap in performance is founded on many failures and false starts. If these failures are engaged in a mode where we continually learn and adapt our approach, we eventually solve problems. The problem is that this must be approached as an article of faith, and cannot be planned. Today’s management environment is completely intolerant of such things, and demands continual results. The result is squalid incrementalism and an utter lack of innovative leaps forward.

Civilizations… cannot flourish if they are beset with troublesome infections of mistaken beliefs.

― Harry G. Frankfurt

What is the payoff for methods improvement?

If we improve a method we can achieve significantly better results without a finer computational mesh. This results in a large saving in computational cost, as long as the improved method isn’t too expensive. As I mentioned before, one needs 16 times the computational resources to knock the error down by half in a 3-D time-dependent calculation. If I produce a method with half the error, it is more efficient as long as it costs less than 16 times as much. In other words, the method can use up to 16 times the computational resources and still come out ahead. This is a lot of headroom to work with!
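The headroom calculation generalizes beyond the factor-of-two case; here is a small sketch of my own (hypothetical function name) of the break-even cost for a method improvement:

```python
def breakeven_cost_multiplier(error_ratio, order=1.0, dims=4):
    """Maximum factor by which an improved method may cost more per cell and
    still beat brute-force refinement. Matching an error reduction of
    `error_ratio` by refinement alone requires refining each of `dims` mesh
    directions by error_ratio**(1/order), so the total cost grows by that
    per-direction factor raised to the `dims` power."""
    return error_ratio ** (dims / order)

# A first-order method with half the error has 16x cost headroom in 3-D + time.
print(breakeven_cost_multiplier(2.0))
# A method with one tenth the error could cost 10,000x as much and break even.
print(breakeven_cost_multiplier(10.0))
```

The second number is why a genuine methods breakthrough dwarfs any plausible hardware gain: the headroom grows as the fourth power of the error improvement.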

The most dangerous ideas are not those that challenge the status quo. The most dangerous ideas are those so embedded in the status quo, so wrapped in a cloud of inevitability, that we forget they are ideas at all.

― Jacob M. Appel

For some cases the payoff is far more extreme than these simple arguments. The archetype of this extreme payoff is the difference between first- and second-order monotone schemes. For general fluid flows, second-order monotone schemes produce results that are almost infinitely more accurate than first-order. The reason for this stunning claim is the acute difference in results that comes from the form of the truncation error, expressed via the modified equations (the equations actually solved most accurately by the numerical methods). For first-order methods there is a large viscous effect that makes all flows laminar. Second-order methods are necessary for simulating high Reynolds number turbulent flows because their dissipation doesn’t interfere directly with the fundamental physics.
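The simplest concrete instance of this viscous effect is first-order upwind differencing of linear advection; the modified-equation analysis (a standard textbook result, included here only as illustration) exhibits the artificial viscosity directly:

```latex
% Upwind differencing of u_t + a u_x = 0 (a > 0), with Courant number
% \nu = a \Delta t / \Delta x, solves to leading order the modified equation
u_t + a\,u_x = \frac{a\,\Delta x}{2}\,(1 - \nu)\,u_{xx} + O(\Delta x^2)
% The right-hand side is an O(\Delta x) artificial viscosity that
% laminarizes any flow it touches. A second-order method pushes the leading
% error to an O(\Delta x^2) dispersive term instead, removing this
% first-order dissipation from the resolved physics.
```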

As technology advances, the ingenious ideas that make progress possible vanish into the inner workings of our machines, where only experts may be aware of their existence. Numerical algorithms, being exceptionally uninteresting and incomprehensible to the public, vanish exceptionally fast.

― Nick Trefethen

We don’t generally have good tools for numerical error approximation in non-standard (or unresolved) cases. One distillation of a key problem is found in Banks, Aslam and Rider, where sub-first-order convergence is described and analyzed for solutions of a discontinuous problem for the one-way wave equation. The key result in this paper is the nature of mesh convergence for discontinuous or non-differentiable solutions. In this case we see sub-linear, fractional-order convergence. The central result is a general relationship between the convergence rate and the formal order of accuracy of the method, p: the observed rate is p/(p+1). This comes from the analysis of the solution to the modified equation including the leading-order truncation error. For nonlinear discontinuous solutions, the observed result is first-order, where a balance is established between the regularization and the self-steepening of shock waves. At present there is no theory of what this looks like analytically. Seemingly this system of equations could be analyzed as we did for the linear equations. Perhaps this might provide guidance for numerical method development. It would be worthy progress if we could analyze such systems more theoretically, providing a way to understand actual accuracy.
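The p/(p+1) behavior is easy to observe numerically. The sketch below is my own illustration (all names hypothetical, not code from the paper): it advects a square wave with first-order upwind (p = 1) and measures an L1 convergence rate near 1/2 rather than 1:

```python
import numpy as np

def upwind_l1_error(n, final_time=0.25, cfl=0.8):
    """L1 error of first-order upwind for u_t + u_x = 0 on a periodic grid
    of n cells, starting from a discontinuous square-wave profile."""
    x = (np.arange(n) + 0.5) / n          # cell centers on [0, 1)
    dx = 1.0 / n
    square = lambda y: np.where((y % 1.0 > 0.25) & (y % 1.0 < 0.75), 1.0, 0.0)
    u = square(x)
    t = 0.0
    while t < final_time - 1e-12:
        dt = min(cfl * dx, final_time - t)
        u = u - (dt / dx) * (u - np.roll(u, 1))   # upwind for wave speed +1
        t += dt
    return np.sum(np.abs(u - square(x - final_time))) * dx

e_coarse = upwind_l1_error(200)
e_fine = upwind_l1_error(400)
rate = np.log2(e_coarse / e_fine)  # observed order; expect ~ p/(p+1) = 1/2
print(rate)
```

Despite the scheme's formal first-order accuracy, the discontinuity drags the observed L1 rate down toward one half, exactly the sub-linear regime the paper analyzes.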

Another key limitation of existing theory is chaotic solutions, classically associated with turbulent or turbulent-like flows. These solutions are extremely (perhaps even infinitely) sensitive to initial conditions. It is impossible to get convergence results for point values; the only convergence is for integral measures. These measures generally converge very slowly and are highly mesh-dependent. This issue is huge in high performance computing. One area of study is measure-valued solutions, where convergence is examined statistically. This is a completely reasonable approach for convergence of general solutions to hyperbolic PDEs.
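The pointwise sensitivity is easy to demonstrate with a tiny chaotic system. The sketch below is my own illustration (using the Lorenz equations as a stand-in for turbulence-like dynamics): it perturbs an initial condition in the tenth decimal place and watches pointwise agreement get destroyed:

```python
import math

def lorenz_rk4_step(state, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One classical RK4 step of the Lorenz system."""
    def rhs(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def nudge(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = rhs(state)
    k2 = rhs(nudge(state, k1, dt / 2))
    k3 = rhs(nudge(state, k2, dt / 2))
    k4 = rhs(nudge(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-10)   # perturbation in the 10th decimal place
max_separation = 0.0
for _ in range(20000):         # integrate to t = 40
    a = lorenz_rk4_step(a)
    b = lorenz_rk4_step(b)
    max_separation = max(max_separation, math.dist(a, b))
print(max_separation)          # grows to the size of the attractor itself
```

A 1e-10 perturbation, far below any plausible numerical error, eventually produces order-one pointwise disagreement; only statistical or integral measures of the two runs remain comparable.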

The much less well-appreciated aspect comes with the practice of direct numerical simulation of turbulence (DNS, really of anything). One might think that having a DNS would mean that the solution is completely resolved and highly accurate. It is not! Indeed DNS solutions are not highly convergent even for integral measures. Generally speaking, one gets first-order accuracy or less under mesh refinement. The problem is the highly sensitive nature of the solutions and the scaling of the mesh with the Kolmogorov scale, which is a mean-squared measure of the turbulence scale. Clearly there are effects that come from scales much smaller than the Kolmogorov scale, associated with highly intermittent behavior. To fully resolve such flows would require the scale of turbulence to be described by the maximum norm of the velocity gradient instead of the RMS.

If you want something new, you have to stop doing something old

― Peter F. Drucker

When we get to the real foundational aspects of numerical error and limitations, we come to the fundamental theorem of numerical analysis, the Lax equivalence theorem. For PDEs it only applies to linear equations, and it states that for a consistent approximation, stability is equivalent to convergence. Everything is tied to this. Consistency means you are solving a valid and correct approximation of the equations; stability means the result doesn’t blow up. What is missing is the theoretical application to more general nonlinear equations, along with deeper relationships among accuracy, consistency and stability. This theorem was derived back in the early 1950s, and we probably need something more, but there is no effort or emphasis on this today. We need great effort and immensely talented people to progress. While I’m convinced that we have no limit on talent today, we lack effort, and perhaps don’t develop or encourage the talent appropriately.
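The theorem's content, that consistency alone is not enough, can be seen in a few lines. This sketch (my own, with hypothetical names) runs two consistent schemes for linear advection: stable first-order upwind converges, while forward-time centered-space (FTCS), equally consistent but unstable, amplifies roundoff into garbage:

```python
import numpy as np

def advection_error(n, scheme, final_time=0.5, cfl=0.5):
    """Max-norm error after advecting a sine wave with u_t + u_x = 0 on a
    periodic grid, using a consistent scheme that may or may not be stable."""
    x = np.arange(n) / n
    u = np.sin(2 * np.pi * x)
    dx = 1.0 / n
    dt = cfl * dx
    steps = int(round(final_time / dt))
    for _ in range(steps):
        if scheme == "ftcs":   # forward-time centered-space: unstable
            u = u - (dt / (2 * dx)) * (np.roll(u, -1) - np.roll(u, 1))
        else:                  # first-order upwind: stable for cfl <= 1
            u = u - (dt / dx) * (u - np.roll(u, 1))
    exact = np.sin(2 * np.pi * (x - steps * dt))
    return float(np.max(np.abs(u - exact)))

# Both schemes are consistent; only the stable one converges.
print(advection_error(400, "upwind"))  # small, and shrinks with refinement
print(advection_error(400, "ftcs"))    # roundoff amplified enormously
```

Refining the mesh makes the upwind error smaller and the FTCS error worse, which is the equivalence theorem in action: consistency plus stability gives convergence, and consistency without stability gives nothing.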

Beyond the issues with hardware emphasis, today’s focus on software is almost equally harmful to progress. Our programs are working steadfastly on maintaining large volumes of source code full of the ideas of the past. Instead of building on the theory, methods, algorithms and ideas of the past, we are simply worshiping them. This is the construction of a false ideology. We would do far greater homage to the work of the past if we were building on it. The theory is not done, by a long shot. Our current attitudes toward high performance computing are a travesty, embodied in a national program that makes the situation worse only to serve the interests of the willfully naïve. We are undermining the very foundation upon which the utility of computing is built. We are going to end up wasting a lot of money and getting very little value for it.

We now live in a world where counter-intuitive bullshitting is valorized, where the pose of argument is more important than the actual pursuit of truth, where clever answers take precedence over profound questions.

― Ta-Nehisi Coates

Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39, no. 9 (2002): 821-841.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit Large Eddy Simulation: Computing Turbulent Fluid Dynamics. Cambridge University Press, 2007.

Banks, Jeffrey W., T. Aslam, and William J. Rider. “On sub-linear convergence for linearly degenerate waves in capturing schemes.” Journal of Computational Physics 227, no. 14 (2008): 6985-7002.

Fjordholm, Ulrik S., Roger Käppeli, Siddhartha Mishra, and Eitan Tadmor. “Construction of approximate entropy measure-valued solutions for hyperbolic systems of conservation laws.” Foundations of Computational Mathematics (2015): 1-65.

Lax, Peter D., and Robert D. Richtmyer. “Survey of the stability of linear finite difference equations.” Communications on pure and applied mathematics 9, no. 2 (1956): 267-293.


Science is political and it always has been

There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'

― Isaac Asimov

On Saturday I participated in the March for Science in downtown Albuquerque, along with many other marches across the world. This was advertised as a non-partisan event, but to anyone there it was clearly and completely partisan and biased. Two things united the people at the march: a progressive, liberal philosophy, and opposition to conservatism and Donald Trump. The election of a wealthy paragon of vulgarity and ignorance has done wonders for uniting the left wing of politics. Of course, the left wing in the United States is really a moderate wing, made to seem liberal by the extreme views of the right. The left wing is among the greater proponents of science as an engine of knowledge and progress. The reason for this dichotomy is the right wing's embrace of ignorance, fear and bigotry as its electoral tools. The right is really the party of money and the rich, with fear, bigotry and ignorance wielded as tools to "inspire" enough of the people to vote against their best (long-term) interests. Part of this embrace is a logical opposition to virtually every principle science holds dear.

The premise that a march for science should be non-partisan is utterly wrong on the face of it; science is, and has always been, a completely political thing. The reasoning is simple and persuasive. Politics is the way human beings settle their affairs, assign priorities and make decisions; it is an essential human endeavor. Science is equally human in its composition, being a structured vehicle for societal curiosity leading to the creation of understanding and knowledge. When the political dynamic is arrayed in the manner we see today, science is absolutely and utterly political. We have two opposing views of the future: one consistent with science, favoring knowledge and progress; the other inconsistent with science, favoring fear and ignorance. In such an environment science is completely partisan and political. To expect things to be different is foolish and naïve.

The essence of Conservatism is fear: fear of what is not understood, fear of change, fear of the new, fear of what’s different, fear of the other.

― Gene Pozniak

One of the key things to understand is that science has always been a political thing, although the contrast has been turned up in recent years. The thing driving the political context is the rightward movement of the Republican Party, which has led to its embrace of extreme views including religiosity, ignorance and bigotry. Of course, these extreme views are not really the core of the GOP's soul, money is, but the cult of ignorance and anti-science is useful in propelling its political interests. The Republican Party has embraced extremism in a virulent form because it pushes its supporters toward unthinking devotion and obedience. They will support their party without regard for their own best interests. The Republican voter base hurts its own economic standing in favor of policies that empower its hatreds and bigotry while calming its fear. All forms of fact and truth have become utterly unimportant unless they support this world-view. The upshot is the rule of a political class hell-bent on establishing a ruling class in the United States composed of the wealthy. Most of the people voting for Republican candidates are simply duped by appeals to extreme fear, hate and bigotry. The Democratic Party is only marginally better, since it has been seduced by the same money, leaving voters with no one to work for them. The rejection of science by the right will ultimately be the undoing of the nation, as other nations eventually usurp the United States militarily and economically.

Increasingly, science and progress are rejected because they invariably upset the status quo. When the rich and powerful make the rules and have all the advantage, any perturbation in the balance of society must be resisted. The social structure and the system of laws are already set up in service to the ruling class. Any change is viewed as a danger. Science in all its forms is a threat to power. The only science that is favored is science that serves the needs of the powerful. Increasingly, the science supported by the ruling class is carefully controlled and obsessively applied in nature. Pure research and the acquisition of knowledge are dangerous and feared because their results cannot be controlled. Pure research has the potential to upset the order the powerful have carefully established. This order is an engine to enrich them and further tap the wealth of society for their own benefit. Science is political because it is an engine of progress, and progress is the enemy of the powerful.

The South, which is peopled with ardent and irascible beings, is becoming more irritated and alarmed.

― Alexis de Tocqueville


In a deep way the latest election is simply the culmination of forty years of political evolution (I can't help but note the irony of using evolution to describe Republicans!). Corruption, hatred and abuse of power are a common thread through this entire time. The foundation of this change was laid under Richard Nixon, whose "Southern Strategy" allowed the GOP to embrace the toxic legacy of bigotry that defines the old Confederacy as its own brand. Along with the change of the GOP into the embodiment of Confederate values came a tidal wave of ignorance ushered in by Ronald Reagan, whose generally unintellectual approach is revered by Republicans almost as if he were a demi-god. In this way Donald Trump is almost the perfect Republican: corrupt and hateful to the core like Nixon, ignorant and uneducated like Reagan, with the mantle of wealth their voters admire. We now see a political movement completely devoted to fear, hatred and ignorance as its brand. They wear these values with pride and attack the elite, whose values of progress, love and knowledge are viewed as weakness. Through this lens it is no wonder that science is rejected.

Ignorance is a progressive thinker’s enemy; acceptance, awareness, accountability and action are the tools of the informed; of people who want change and do all they can to achieve it.

― Carlos Wallace

The assault on science is based on its incompatibility with the values of modern extreme conservatism. Again and again, knowledge and progress confront the conservative mind with truths it does not want to hear. The best way to avoid this problem is to kill the knowledge before it is produced. We can find example after example of science being silenced because it is likely to produce results that do not match their view of the world. Among the key engines of the ignorance of conservatism is its alliance with extreme religious views. Historically, religion and science are frequently at odds because faith and truth are often incompatible. This isn't necessarily true of all religious faith, but rather of that stemming from a fundamentalist approach, which is usually grounded in old and antiquated notions (i.e., classically conservative and opposed to anything looking like progress). Fervent religious belief cannot deal with truths that do not align with its dictums. The best way to avoid this problem is to get rid of the truth. When the government is controlled by extremists, this translates to reducing and controlling science to avoid such truths.

Every gun that is made, every warship launched, every rocket fired signifies in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed. This world in arms is not spending money alone. It is spending the sweat of its laborers, the genius of its scientists, the hopes of its children. This is not a way of life at all in any true sense. Under the clouds of war, it is humanity hanging on a cross of iron.

― Dwight D. Eisenhower

With this background we can see how all of this imprints onto science rather clearly. Fear and money both like science that leads to national security and defense, so the right supports activities that either provide people with protection or allow us to kill more effectively and efficiently. The problem is that the right only supports this kind of work in its most naked and applied sense. The right does not support the deeper research that forms the foundation allowing us to develop technology. As a result, our ability to be the best at killing people is at risk in the long run. Eventually the foundation of science used to create all our weapons will run out, and we will no longer be the top dogs. The basic research used for weapons work today is largely a relic of the 1960s and 1970s. The wholesale diminishment of societal support for research from the 1980s onward will start to hurt us more obviously. In addition, we have poisoned the research environment in a fairly bipartisan way, leading to a huge drop in the effectiveness and efficiency of the fewer research dollars spent.

The right wing has become the enemy of intellectualism. Increasingly, it has formed open opposition to educating citizens in anything beyond a rote, traditional form of learning. It has systematically undermined a system that teaches people to think for themselves. Science is a form of high thinking that the right opposes vehemently, and as such it is to be diminished as a threat to their agenda. At the apex of the educational system are universities, the factories for the elite they hate so much. The right wing has been engaged in an all-out assault on universities, in part because they view them as the center of left-wing views. Attacking and contracting science is part of this assault. In addition to the systematic attack on universities is an increasing categorization of certain research as unlawful because its results will almost certainly oppose right-wing views. Examples include drug research (marijuana in particular), anything sexual, climate research, health effects of firearms, evolution, and the list grows. The deepest wounds to science are more subtle. They have created an environment that poisons intellectual approaches and undermines the education of the population, because educated, intellectual people naturally oppose their ideas.

It is a well-known fact that reality has a liberal bias.

― Stephen Colbert

Let's get to the core of the science that the right opposes. One of the areas where science rubs the right wing the wrong way could broadly be characterized as ecological and environmental research. The archetype of this is climate research and the concept of climate change. In general, research in Earth sciences leads to conclusions that upset the status quo, hurting the ability of traditional industry to make money (with the exception of geological work associated with energy and mining). Much of the ecological research has informed us how human activity is damaging the environment. Industry does not want to adopt practices that preserve the environment, primarily due to greed. Moreover, religious extremism opposes ecological research because it contradicts the dictums of their faith as chosen people who may exploit the Earth to the limits of their desire. Climate change is the single greatest research threat to the conservative worldview. The fact that mankind is a threat to the Earth is blasphemous to right-wing extremists, impacting either their greed or their religious conviction. The denial of climate change is based primarily on religious faith and greed, the pillars of modern right-wing extremism.

An area where science and public policy come into great conflict is public health. Americans love their firearms and possess them in great numbers. Americans also die in great numbers at the end of their firearms. While gun violence captures the attention and imagination, most gun deaths are self-inflicted or accidental. It is a public health issue of great consequence. Science should be studying it, and yet federal funding for such study is not allowed. The reason is the gun lobby (i.e., the NRA), which owns the right wing. They don't want the facts known and have made government-funded research on the topic effectively illegal. They are worried that knowledge of the level of the public health threat will result in a desire for regulation, and a public outcry. Instead of working with facts and knowledge, we simply suppress them.

Another huge area of intrusion of politics into science is drugs, especially illegal ones. We have made research into certain drugs and their medical benefits illegal (i.e., marijuana). Several things are obvious to even the most casual observer: marijuana is not a particularly dangerous drug (definitely not "Schedule 1"), and it has medical benefits. The right wing (and a lot of the left) is opposed to adding science to the discussion. This is yet another example where facts and truth are excluded from the dialog. We have a number of purely political reasons for this. A huge one is the war on drugs, which is supported by the law enforcement and prison lobbies (and the people enriched by these "industries"). These lobbies work to damage society as a whole, and enable the implicit bigotry in how laws are enforced and people are imprisoned, ignoring the damage to society at large.

The impact of the drug laws has enabled the Jim Crow laws of the past to be enforced by a new mechanism. Again, the impact of the illegal nature of these drugs is ignored because their illegality serves conservative political interests. Their illegal nature is a boon to the criminal cartels that use them to generate huge incomes. Instead of cutting off the cartels' source of money and defusing criminal violence, we keep it fully charged because law enforcement wants power, not a solution to the problem. A perverse fact of modern life is the greater threat to public health posed by legal drugs (i.e., the opioid crisis), and illegal drugs like marijuana could mitigate that legal drug crisis. We maintain a ridiculous perspective on these drugs by suppressing research. Other drugs like alcohol are legal while their public health impacts are obviously severe. Without scientific research we lack the ability to understand the benefits and harms of any of these drugs.

Fundamentalism isn’t about religion, it’s about power.

― Salman Rushdie

Medicine is not immune from these political aspects, especially as the money associated with it becomes enormous. Increasingly, the whole of medical research runs into issues associated with morality. This is true at both the beginning and the end of life. Genetics is fraught with moral implications that naturally drive a political response. We already see huge issues on the left with the response to GMOs. The left-wing response to GMOs is reactionary and Luddite in character. At the same time, the right's view of GMOs is cavalier and geared toward money motives above all else. Properly managed, GMOs could be a boon for mankind if we can balance the risks and benefits. The increasingly destructive political dialog without compromise assures no good outcomes.

We are living in modern times throughout the world and yet are dominated by medieval minds.

― Eqbal Ahmad

In no place does politics become more poisonous than with sex (at least in the United States). Whether we are talking about reproductive rights, public health, or sexual education, science is ignored in favor of religious moralization delivered through politics. We have the power to make a huge difference in people's lives by giving them control over their reproduction, yet this progress is undermined by the right wing. Ultimately the right's position is to criminalize sex by using reproduction as a punishment. This works to destroy people's potential for economic advancement and burden the world with poor, unwanted children. Sex education for children is another example where ignorance is promoted as the societal response. Science could make the world far better and more prosperous, and the right wing stands in the way. It has everything to do with sex and nothing to do with reproduction.

The ludicrous idea of abstinence-based education is the right's approach, even though it is utterly and completely ineffective and actually damages people. Children are then spared the knowledge of their sexuality and their reproductive rights through an educational system that suppresses information. Rather than teach our children about sex in a way that honors their intelligence and arms them to deal with life, we send them out ignorant. This ignorance is yet another denial of reality and science by the right. The right wing does not want to seem to endorse basic human instincts around reproduction and sex for pleasure. The result is to create more problems: more STDs, more unwanted children, and more abortions. It gets worse, because we also assure that a new generation of people will reach sexual maturity without the knowledge that could make their lives better. We know how to teach people to take charge of their reproductive decisions, sexual health and pleasure. Our government denies them this knowledge, driven largely by antiquated moral and religious ideals, which only serve to deliver the right its voters.

Take a stand for science, knowledge and progress by embracing politics. In today's world nothing is more dangerous than truth and facts. We are governed by people whose fundamental principles and philosophy are utterly opposed to the foundational principles of science. Conflict between their approach to governance and science is inevitable. We are increasingly ruled by people who are completely terrified of reality. Science is one of the key ways humans understand their reality. Science is a completely human thing and political to its core. We are rapidly moving toward a world where it is radical simply to speak facts and the truth. Before long, doing science may be an act of dissidence against the ruling class. As I noted above, in many cases it already is.

If by a “Liberal” they mean someone who looks ahead and not behind, someone who welcomes new ideas without rigid reactions, someone who cares about the welfare of the people-their health, their housing, their schools, their jobs, their civil rights and their civil liberties-someone who believes we can break through the stalemate and suspicions that grip us in our policies abroad, if that is what they mean by a “Liberal”, then I’m proud to say I’m a “Liberal.”

― John F. Kennedy

Mission Focused Research Is Better Research

Great research depends on taking big risks with a large chance of failure, and mission focus is a clear way to get there. Failure is the key to learning, and research is fundamentally learning. We must not fail early by shying away from the harder problems, and devotion to a mission provides the drive to keep the focus on results that provide value to that mission.

There is only one thing that makes a dream impossible to achieve: the fear of failure.

― Paulo Coelho

For a lot of people working at a national lab there are two divergent paths for work: the research path, which leads to lots of publishing, deep technical work and strong external connections, or the mission path, which leads to internal focus and technical shallowness. The research path is for the more talented and intellectual people who can compete in that difficult world. For the less talented, creative or intelligent people, the mission world offers greater security at the price of intellectual impoverishment. Those who fail at the research focus can fall back onto the mission work and be employed comfortably after such failure. This perspective is a cynical truth for those who work at the labs, and it represents a false dichotomy. If properly harnessed, the mission focus can empower and energize better research, but it must be mindfully approached.

The measure of greatness in a scientific idea is the extent to which it stimulates thought and opens up new lines of research.

― Paul A.M. Dirac

As I stated, I believe the dichotomy of mission versus research is false. The mission imposes limitations and constraints on research. In a nutshell, the mission imposes a fixed problem to solve, and one must adapt the solution to impact this mission. Conversely, pure research is unconstrained by a mission, which encourages people to change the problem to fit a solution. The fixed-problem, adaptive-solution mindset is much better for engaging innovation and producing breakthrough results. It also means a great amount of risk and lots of failure. Pure research can chase unique results, but the utility of those results is often highly suspect. This sort of research entails less risk and less failure as well. If the results necessarily impact the mission, the utility is obvious. The difficulty is noting the broader aspects of research applicability that the mission application might hide.

Examples of great mission-focused research abound, and our modern world is testimony to the breakthrough nature of Cold War defense-focused research. The shape of the modern world is a testament to the power of mission-focused work to succeed. Ubiquitous aspects of modernity such as the Internet, cell phones and GPS all owe their existence to Cold War research focused on some completely different mission. All of these technologies were created through a steadfast focus on utility that drove innovation as a mode of problem solving. This model for creating value has fallen into disrepair due to its uncertainty and risk. Risk is something we have lost the capacity to withstand; as a result, the failure necessary to learn and succeed with research never happens.

Failure is a greater teacher than success

― Clarissa Pinkola Estés

Mission-focused research falls prey to concerns over risk. In many mission organizations there is a fear of taking on too much risk in adopting research results into mission delivery. The thought is that the research might not pan out and the mission will suffer as a result. This is both shortsighted and foolhardy. The truth is vastly different from this fear-based reaction, and the only thing that suffers from shying away from research in mission-based work is the quality of the mission-based work itself. Doing research causes people to work with deep knowledge and understanding of their area of endeavor. Research is basically the process of learning taken to the extreme of discovery. In the process of getting to discovery one becomes an expert in what is known and capable of doing exceptional work. Today too much mission-focused work is technically shallow and risk-averse. It is over-managed and under-led in pursuit of the false belief that risk and failure are bad things.

There is a key tension to maintain in harnessing this engine of knowledge. The successful delivery of value and success to the mission work must take priority. Those conducting the research should have a deep commitment to the mission and its success. Ultimately, success at the mission work must supersede the research objectives. Better still, the research objectives should be guided by the mission needs. In this sense the mission acts to constrain the research and shape its direction and focus. This sort of dynamic must be carefully and adroitly managed if it is to be achieved. Unconstrained research without mission focus is quite tempting and much simpler to manage. It is also less successful at producing real value for society. Almost every breakthrough of great significance was the result of results-focused work, although many of the breakthroughs had far greater reach beyond their intended use.

An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.

― Werner Heisenberg

In my own experience, the drive to connect mission and research can provide powerful incentives for personal growth. For much of my early career the topic of turbulence was utterly terrifying, and I avoided it like the plague. It seemed like a deep, complex and ultimately unsolvable problem that I was afraid of. As I became deeply engaged with a mission organization at Los Alamos, it became clear to me that I had to understand it. Turbulence is ubiquitous in highly energetic systems governed by the equations of fluid dynamics. The modeling of turbulence is almost always done using dissipative techniques, which end up destroying most of the fidelity of the numerical methods used to compute the underlying, ostensibly non-turbulent flow. Those high-fidelity numerical methods were my focus at the time. Of course these energy-rich flows are naturally turbulent. I came to the conclusion that I had to tackle understanding turbulence.

One winter break my laptop broke, leaving me without the ability to work on my computer codes over the break (those were the days!). So I went back to my office and grabbed seven books on turbulence that had been languishing on my bookshelves, unread due to my overwhelming fear of the topic. I started to read these books cover to cover, one by one, and learn about turbulence. I've included some of these references below for your edification. The best and most eye-opening was Uriel Frisch's "Turbulence: The Legacy of A. N. Kolmogorov". In the end, the mist began to clear and turbulence began to lose its fearful nature. Like most things one fears, the lack of knowledge of a thing gives it power, and turbulence was no different. Turbulence is actually kind of a sad thing: it's not well understood and very little progress is being made.

The main point is that the mission focus energized me to attack the topic despite my fear of it. The result was a deeply rewarding and successful research path resulting in many highly cited papers and a book. All of a sudden the topic that had terrified me was understood, and I could actually conduct research in it. All of this happened because I took contributing to the mission as an imperative. I did not have the option of turning my back on the topic because of my discomfort with it. I also learned a valuable lesson about fearsome technical topics: most of them are fearsome because we don't know what we are doing and we overelaborate the theory. Today the best things we know about turbulence are simple and old, discovered by Kolmogorov as he evaded the Nazis in 1941.

People who don’t take risks generally make about two big mistakes a year. People who do take risks generally make about two big mistakes a year.

― Peter F. Drucker

In today's world we have allowed a system to come into power that funds useless research. We have created vast swaths of safe research topics that ultimately produce rafts of papers, but little or no real utility for society. A big driver behind this mentality is the need to fund "sure things" that can't fail. This pushes research into the realm of squalid incrementalism. Incremental research is safe and almost never prone to the risk of failure. It is also an actual waste of money, producing the appearance and guise of success without actual achievement. Our unremittingly fearful society, including its worry over the appearance of scandal, has driven us to this horrible point. Research has become cowardly and uninspired so that it never fails. Being mission-focused is too hard and too risky because the mission is too important to ever fail at. The true attitude should be that the mission is too important not to fail at!

The main reason of fear of failure lies in people’s being unaware of the impact of misfortune on achieving success

― Sunday Adelaja

The current sorry state of high performance computing is a direct result of the current milieu, where mission focus is neglected in favor of carefully managed projects with sure things as targets. Project management is not leadership, and without leadership we will continue to steadfastly underachieve. For example, we have utterly eviscerated applied mathematics by pushing a product-oriented approach that demands the delivery of results in software. Producing software in the conduct of applied mathematics used to be a necessary side activity; now it is the core of the value and the work. Today software is the main thing produced, and actual mathematics is often virtually absent. Actual mathematical research is difficult, failure-prone and hard to measure. Software, on the other hand, is tangible and manageable. It is still hard to do, but ultimately software is only as valuable as what it contains, and increasingly our software is full of someone else's old ideas. We are collectively stewarding other people's old intellectual content, not producing our own, nor progressing in our knowledge.

This trend would be bad enough on its own, but it is the tip of a proverbial iceberg of underachievement. The second pillar of underachievement in high performance computing is, ironically, a devotion to computer hardware. Again, computer hardware is tangible and easy to measure. To a naïve person (or congressman), our ability to do things with computers should be a one-to-one match with the raw power of those computers. Nothing could be further from the truth: computing is a completely multi-disciplinary field depending on a huge swath of science for its success. The computer hardware is actually one of the least important components of our modeling and simulation competence. Instead of producing a program that strives for true success in modeling and simulation based on real mission value, we have constructed programs that are intellectually vacuous because they are easier to fund and explain to unsophisticated people. The hardware program more naturally lends itself to management and simple metrics of success. It can be sold to uninformed people. Its current form is the abdication of leadership and antithetical to the concept of mission focus. Our approach to high performance computing is only likely to achieve supremacy for the Chinese in the field.


Success is stumbling from failure to failure with no loss of enthusiasm.

― Winston S. Churchill

What is gained by this mission focus? Mission-focused research means the problem being solved is fixed and unwavering, and the results and knowledge must contribute to the solution of this problem. This forces the research to adapt itself to the needs of the problem rather than the problem to the research. The result of this model is the tendency to confront difficult, thorny issues rather than shirk them. At the same time, this form of research can also lead to failure and to risk manifesting itself. This tendency is the rub, and it leads to people shying away from it. We are societally incapable of supporting failure as a viable outcome. The result is the utter and complete inability to do anything hard. This all stems from a false sense of the connection between risk, failure and achievement.

If a single characteristic is contributing to a societal feeling that we have lost greatness, it is that we cannot accept failure. Without failure, great things cannot be achieved. Failure is the vehicle of achievement and learning whether we are talking about individuals, organizations or nations. The inability to accept failure as a possible outcome is the tacit acceptance of not wanting to do anything that matters, or anything great. The road to greatness is paved with many failures and the unerring drive to learn and grow from these failures. For the complex missions we are charged with, the commitment to mission focus in research means accepting failure as a necessary outcome of endeavor. This is the hard message that our spineless politicians and managers cannot give us. The inability to grasp this core truth is utter societal cowardice. True leadership would provide us the necessary support and encouragement to be courageous and steadfast. Instead we succumb to fear and the false belief that achievement can be managed and had without risk.

Research is about learning at a fundamental, deep level, and learning is powered by failure. Without failure you cannot effectively learn, and without learning you cannot do research. Failure is one of the core attributes of risk. Without the risk of failing there is a certainty of achieving less. This lower achievement has become the socially acceptable norm for work. Acting in a risky way is a sure path to being punished, and we are being conditioned not to risk and not to fail. For this reason mission-focused research is shunned. The conditions that mission-focused research produces are no longer acceptable, and our effective social contract with the rest of society has destroyed it.

If we are to successfully do great things again as people, as organizations, as laboratories and as a nation, the irony is that we need to fail a lot more. One way to assure the sort of failure we need is mission-focused research where providing value to a difficult mission is the primal goal of research. Better research is founded on devotion to meaningful outcomes, taking big risks and tolerating lots of failure.

Only those who dare to fail greatly can ever achieve greatly.

― Robert F. Kennedy

