Toward a More Useful and Impactful Scientific Computing in 2018?

The purpose of life is not to be happy. It is to be useful, to be honorable, to be compassionate, to have it make some difference that you have lived and lived well.

― Ralph Waldo Emerson

It would really be great to be starting 2018 feeling good about the work I do. Useful work that impacts important things would go a long way toward achieving this. I’ve put some thought into considering what might constitute work having these properties. This has two parts: what work would be useful and impactful in general, and what would be important to contribute to. A necessary subtext to this conversation is the conclusion that most of the work we are doing in scientific computing today is neither useful nor impactful, and nothing important is at stake. This alone is a rather bold assertion. Simply put, as a Nation and society we are not doing anything aspirational, nothing big. This shows up in the lack of substance in the work we are paid to pursue. More deeply, I believe that if we did something big and aspirational, the utility and impact of our work would simply sort itself out as part of a natural order.

The march of science in the 20th Century was deeply impacted by international events: several World Wars and a Cold (non) War that spurred National interests in supporting science and technology. The twin projects of the atom bomb and the nuclear arms race, along with space exploration, drove the creation of much of the science and technology we have today. These conflicts steeled resolve, provided purpose and granted the resources needed for success. They were important enough that efforts were earnest. Risks were taken because risk is necessary for achievement. Today we don’t take risks because nothing important is at stake. We can basically fake results and market progress where little or none exists. Since nothing is really that essential, bullshit reigns supreme.

There is only one thing that makes a dream impossible to achieve: the fear of failure.

― Paulo Coelho

One of the keys to these conflicts was the presence of a worthy adversary to steel ourselves for the push forward. Both Nazi Germany and Soviet Russia were worthy enemies whose competence meant putting our best foot forward. Both in reality and rhetorically, we lack such an adversary today to push us. We needed to fully commit and faithfully execute our endeavors to achieve victory against these enemies. These opponents had the clear capacity to destroy the United States and the West if the resistance was not real. Ironically, the Soviets were ultimately defeated by bullshit. The Strategic Defense Initiative, or Star Wars, bankrupted the Soviets. It was complete bullshit and never had a chance to succeed. This was a brutal harbinger of today’s World where reality is optional, and marketing is the coin of the realm. Today American power seems unassailable. This is partially true and partially over-confidence. We are not on our game at all, and far too much of our power is based on bullshit. As a result, we can basically just pretend to try, and not actually execute anything with substance and competence. This is where we are today; we are doing nothing important, and wasting lots of time and money in the process.

How do you defeat terrorism? Don’t be terrorized.

― Salman Rushdie

Again, I freely admit that this is a bold assertion. In scientific computing, we have a National exascale program that underpins National security and economic interests. It contributes to all of these things in massive ways, at least rhetorically. This support for National goals is pure marketing, or less generously, absolute bullshit. It is simply trotting out a bunch of tired sales pitches for scientific computing that lack any soul and increasingly lack substance. The Nation has no large objectives to support; the entire system is drifting along on auto-pilot. It is brimming with over-confidence and a feeling of superiority that only needs a worthy opponent to expose our largess. We have no enemies that are remotely worthy. We have created some chicken-shit paper tigers like Iran, North Korea and the amorphous and largely toothless Islamic fundamentalism. None of these enemies is even the remotest threat to the United States, or the West in general. If they were a worthy threat, then we would be in awful shape, far worse than we actually are. Terrorism is only as much of a threat as we make it. We have stoked fear and let ourselves be terrorized because it is useful for the defense-intelligence industrial complex. It has put trillions of dollars into their coffers, and done little or nothing to build a future. We could simply defeat these enemies by refusing to be terrorized. Some courage and resilience as a Nation would be sufficient to render these pathetic enemies utterly impotent. The greatest damage and threat from these enemies is our response to them, not the actual carnage. Our “leaders” are using them to spread fear among the populace to further their own agendas.

The result of the current model is a research establishment that only goes through the motions and does little or nothing. We make lots of noise and produce little substance. Our Nation deeply needs a purpose that is greater. There are plenty of worthier National goals. If war-making is needed, Russia and China are still worthy adversaries. For some reason, we have chosen to capitulate to Putin’s Russia simply because they are an ally against the non-viable threat of Islamic fundamentalism. This is a completely insane choice that is only rhetorically useful. If we want peaceful goals, there are challenges aplenty. Climate change and weather are worthy problems to tackle, requiring both scientific understanding and societal transformation to conquer. Creating clean and renewable energy that does not create horrible environmental side-effects remains unsolved. Solving the international needs for food and prosperity for mankind is always there. Scientific exploration, and particularly space, remains an unconquered frontier. Medicine and genetics offer new vistas for scientific exploration. All of these areas could transform the Nation in broad ways socially and economically. All of these could meet broad societal needs. More to the point of my post, all need scientific computing in one form or another to fully succeed. Computing always works best as a useful tool employed to help achieve objectives in the real World. The real-World problems provide constraints and objectives that spur innovation and keep the enterprise honest.

Reality is that which, when you stop believing in it, doesn’t go away.

― Philip K. Dick

Instead our scientific computing is being applied as a shallow marketing ploy to shore up a vacuous program. Nothing really important or impactful is at stake. The applications for computing are mostly make-believe and amount to nothing of significance. The marketing will tell you otherwise, but the lack of gravity for the work is clear and poisons it. The result of this lack of gravity is phony goals and objectives that have the look and feel of impact, but contribute nothing toward an objective reality. This lack of contribution comes from the deeper malaise of purpose as a Nation, and science’s role as an engine of progress. With little or nothing at stake the tools used for success suffer, and scientific computing is no different. The standards of success simply are not real, and lack teeth. Even stockpile stewardship is drifting into the realm of bullshit. It started as a worthy program, but over time it has been allowed to lose its substance. Political and financial goals have replaced science and fact, with the goals of the program losing connection to objective reality.

Scientific computing came to maturity as an important supporting player for large enterprises. It was originally born in the Cold War as a key tool for science and engineering supporting defense science. Scientific computing spread from this base toward more general science, and more recently into broad use by business and society as a whole. The kernel from which computing sprang was an interwoven set of large National objectives providing the technical foundation that powers our economy today. Computing was a key contributing player in these endeavors. These endeavors also supported a broad phalanx of other technologies and scientific explorations that formed the basis of modernizing the world. Such over-arching goals are breathtakingly missing today. We are left with a World lacking any vision of a better future and limitless progress.

If we could marshal our efforts into some worthy efforts, what would we work on?

We would still be chasing faster computers, but the faster computers would not be the primary focus. We would focus on using computing to solve problems that were important. We would focus on making computers that were useful first and foremost. We would want computers that were faster as long as they enabled progress on problem solving. As a result, efforts would be streamlined toward utility. We would not throw vast amounts of effort into making computers faster just to make them faster (this is what is happening today; there is no rhyme or reason to exascale other than “faster is better, duh!”). Utility means that we would honestly look at what is limiting problem solving and put our efforts into removing those limits. The effects of this dose of reality on our current efforts would be stunning; we would see a wholesale change in our emphasis and focus away from hardware. Computing hardware would take its proper role as an important tool for scientific computing and no longer be the driving force. The fact that hardware is a driving force for scientific computing is one of the clearest indicators of how unhealthy the field is today.

Thinking something does not make it true. Wanting something does not make it real.

― Michelle Hodkin

If scientific computing were taking its role in a healthy National enterprise, the focus would be entirely different. Invariably we would see a very strong emphasis on modeling. In almost every serious endeavor using computing to get real design and analysis results, the physical modeling is the greatest limiting factor. A faster computer is always welcome, but a faster computer never fixes a faulty model. This maxim seems to be utterly and completely ignored in the current scientific computing narrative. The most effective way to improve modeling is also different from the current emphasis. Better numerical methods and algorithms provide faster and more accurate solutions to models than computing hardware does. This is another area where progress is completely stalled.

The current computing focus is only on porting old codes to new computers, a process that keeps old models, methods and algorithms in place. This is one of the most corrosive elements in the current mix. The porting of old codes is the utter abdication of intellectual ownership. These old codes are scientific dinosaurs that freeze antiquated models, methods and algorithms in place while squashing progress. Worse yet, the skillsets necessary for improving the most valuable and important parts of modeling and simulation are allowed to languish. This is worse than simply choosing a less efficient road; this is going backwards. When we need to turn our attention to serious real work, our scientists will not be ready. These choices are dooming an entire generation that could have been making breakthroughs to simply become caretakers. To be proper stewards of our science we need to write new codes containing new models using new methods and algorithms. Porting codes turns our scientists into mindless monks simply transcribing sacred texts without any depth of understanding. It is a recipe for transforming our science into magic. It is the recipe for defeat and the passage away from the greatness we once had.

Without Your Opponent, You are no Victor.

― Anajo Black

Saying “NO!” is the key to success

Things which matter most must never be at the mercy of things which matter least.

― Johann Wolfgang von Goethe

My work day is full of useless bullshit. There is so much bullshit that it has choked out the room for inspiration and value. We are not so much managed as controlled. This control comes from a fundamental distrust of each other to a degree that any independent ideas are viewed as dangerous. This realization has come upon me in the past few years. It has also occurred to me that this could simply be a mid-life crisis manifesting itself, but the evidence might indicate that it is something more significant (look at the bigger picture of the constant crisis my Nation is in). My mid-life attitudes are simply much less tolerant of time-wasting activities with little or no redeeming value. You realize that your time and energy are limited, so why waste them on useless things?

You and everyone you know are going to be dead soon. And in the short amount of time between here and there, you have a limited amount of fucks to give. Very few, in fact. And if you go around giving a fuck about everything and everyone without conscious thought or choice—well, then you’re going to get fucked.

― Mark Manson

I read a book that had a big impact on my thinking, “The Subtle Art of Not Giving a Fuck” by Mark Manson. In a nutshell, the book says that you have a finite number of fucks to give in life and you should optimize your life by mindfully not giving a fuck about unimportant things. This gives you the time and energy to actually give a fuck about things that actually matter. The book isn’t about not caring; it is about caring about the right things and dismissing the wrong things. What I realized is that increasingly my work isn’t competing for my fucks; it just assumes that I will spend my limited fucks on complete bullshit out of duty. It is actually extremely disrespectful of me and my limited time and effort. One conclusion is that the “bosses” (the Lab, the Department of Energy) do not give enough of a fuck about me to treat my limited time and energy with respect and make sure my fucks actually matter.

Maturity is what happens when one learns to only give a fuck about what’s truly fuckworthy.

― Mark Manson

I’ve realized recently that a sense of being inspired has departed from work. I’ve felt this building for years with the feeling that my work is useful and important ebbing away. I’ve been blessed for much of my career with work that felt important and useful where an important component of the product was my own added creativity. The work included a distinct element of my own talents and ideas in whatever was produced.

Superficially speaking, the elements of inspiration seem to be present: work with meaning and importance along with a sense of substantial freedom. As I implied, these elements are superficial; the reality is that each of these pieces has eroded away, and it is useful to explore how this has happened. The job I have would be a dream to most people, but conditions are degrading. It isn’t just my job; most Americans are experiencing worsening conditions. The exception is the top of the management class, the executives. This is a mirror to broader societal inequalities logically expressed in the working environment. The key is recognizing that my job used to be much better, and that is something worth exploring in some depth.

At one level, I should be in the midst of a glorious time to be working in computational science and high-performance computing. We have a massive National program focused on achieving “exascale” or at the very least a great advance in computing power. Looking more closely, we can see deep problems that produce an inspiration gap. On the one hand, we have the technical objectives for the program being obsessively hardware-focused for progress. We have been on this hardware path for 25 years producing progress, but no transformation in science has actually occurred (the powers that be will say it has, but the truth is that it hasn’t). Our computations are still not predictive, and the hardware is not the limiting aspect of computational science. Worse yet, the opportunity for massive hardware advances has passed, and advancing now is fraught with difficulties and roadblocks and will be immensely costly. Aside from hardware, the program is largely focused on low-level software and on porting old codes, methods and models (note: the things being ported and not invested in are the actual science!). It is not focused on the more limiting aspects of predictive modeling because they are subtle and risky to work on. They cannot be managed like a construction project using off-the-shelf management practices better suited for low-wage workers, and unsuitable for scientists. The hardware path is superficial, easy to explain to the novice and managed as a project similarly to building a bridge or road.

This gets to the second problem with the current programs, how they are managed. Science cannot be managed like a big construction project, at least not successfully. The result of this management model is a stifling level of micromanagement. Our management model is defined by overwhelming suspicion and lack of trust resulting in massive inefficiency. The reporting requirements for this mode of management are massive and without value except to bean-counters. At the same time, there is no appetite for risk, and no capacity to tolerate failure. As a result, the entire program loses an ability to inspire, or reach for greatness.

If the Apollo Program had been managed in this fashion, we would have never made it to the Moon while spending vastly greater sums of money. If we had managed the Manhattan Project in this way, we would have failed to create the atomic bomb. Without risk, there is no reward. A huge amount of resources and effort is wasted. We do not lack money as much as we lack vision, inspiration and competent management. This is not to say that the United States does not have an issue investing in science and technology; we do. The current level of commitment to science and technology will assure that some other nation becomes the global leader in science and technology. A compounding issue to the lack of investment is how appallingly inefficient our investment is because of how science is managed today. A complementary compounding element is the lack of trust in the scientists and engineers. Without trust, no one will take any risk, and without taking risks nothing great will ever be achieved. If we don’t solve these problems, we will not produce greatness, plain and simple; we will create decline and decay into mediocrity.

But until a person can say deeply and honestly, “I am what I am today because of the choices I made yesterday,” that person cannot say, “I choose otherwise.

― Stephen R. Covey

None of these problems suddenly appeared. They are the consequence of decades of evolution toward the current completely dysfunctional management approach. Once-great Laboratories have been brought to heel with a combination of constraints, regulations and money. There is more than enough money and people to accomplish massive things. The problem is that the constraints and regulatory environment have destroyed any chance for achievement. With each passing year our scientific programs sound more expansive, but are less capable of achieving anything of substance. Our management approach is undermining achievement at every turn. The focus of the management is not producing results, but producing the appearance of success without regard for reality. The workforce must be compliant and never make any mistakes. The best way to avoid mistakes is low-balling results. You always aim low to avoid the possibility of failing. Each year we aim a little lower, and achieve a little less. This has produced a steady erosion of capability, much like an interest-bearing account, but in reverse.

If we look at work, it might seem that an inspired workforce would be a benefit worth creating. People would work hard and create wonderful things because of the depth of their commitment to a deeper purpose. An employer would benefit mightily from such an environment, and the employees could flourish, brimming with satisfaction and growth. With all these benefits, we should expect the workplace to naturally create the conditions for inspiration. Yet this is not happening; the conditions are the complete opposite. The reason is that inspired employees are not entirely controlled. Creative people do things that are unexpected and unplanned. The job of managing a workplace like this is much harder. In addition, mistakes and bad things happen too. Failure and mistakes are an inevitable consequence of hard-working, inspired people. This is the thing that our workplaces cannot tolerate. The lack of control and unintended consequences are unacceptable. Fundamentally this stems from a complete lack of trust. Our employers do not trust their employees at all. In turn, the employees do not trust the workplace. It is a vicious cycle that drags inspiration under and smothers it. The entire environment is overflowing with micromanagement, control, suspicion and doubt.

In the end that was the choice you made, and it doesn’t matter how hard it was to make it. It matters that you did.

― Cassandra Clare

How do we change it?

One clear way of changing this is giving the employees more control over their work. It has become very clear to me that we have little or no power to make choices at work. One of the clearest ways of making a choice is being given the option to say “NO”. Many articles are written about the power of saying NO to things because it makes your “YES” more powerful. The problem is that we can’t say NO to so many things. I can’t begin to elaborate on all the functionally useless things that I don’t have the option of skipping. I spend a great deal of effort on mandatory meetings, training, and reporting that has no value whatsoever. None of it is optional, and most of it is completely useless. Each of these useless activities drains away energy from something useful. All of the useless things I do are related to a deep lack of trust in me and my fellow scientists.

Let’s take the endless reporting and tracking of work as a key example. There is nothing wrong with planning a project and getting updates on progress. This is not what is happening today. We are seeing a system that does not trust its employees and needs to continually look over their shoulders. A big part of the problem is that the employees are completely uninspired because the programs they work on are terrible. The people see very little of themselves in the work, or much purpose and meaning in the work. Rather than make the work something deeper and more collaborative, the employers increase the micromanagement and control. A big part of the lack of trust is the reporting. Somehow the whole concept of quarterly progress used for business has become part of science, creating immense damage. Lately quarterly progress isn’t enough, and we’ve moved to monthly reporting. All of this says, “we don’t trust you,” “we need to watch you closely” and “don’t fuck up”.

The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum….

― Noam Chomsky

If we can’t say NO to all this useless stuff, we can’t say YES to things either. My work and time budget is completely filled with non-optional things that I should say NO to. They are largely useless and produce no value. Because I can’t say NO, I can’t say YES to something better. My employer is sending a message to me with very clear emphasis: we don’t trust you to make decisions. Your ideas are not worth working on. You are expected to implement other people’s ideas no matter how bad they are. You have no ability to steer the ideas to be better. Your expertise has absolutely no value. A huge part of this problem is the ascendancy of the management class as the core of organizational value. We are living in the era of the manager; the employee is a cog and not valued. Organizations voice platitudes toward the employees, but they are hollow. The actions of the organization spell out its true intent. Employees are not to be trusted; they are to be controlled and they need to do what they are told to do. Inspired employees would do things that are not intended, and take organizations in new directions, focused on new things. This would mean losing control and changing plans. More importantly, the value of the organization would move away from the managers and toward the employees. Managers are much happier with employees that are “seen and not heard”.

If something is not a “hell, YEAH!”, then it’s a “no!”

― James Altucher

What should I be saying YES to?

If I could say YES then I might be able to put my focus into useful, inspired and risky endeavors. I could produce work that might go in directions that I can’t anticipate or predict. These risky ideas might be complete failures. From failure I could learn invaluable lessons, and grow my knowledge and expertise. Being risky, these ideas might also produce something amazing and create something of real value. None of these outcomes is a sure thing. All of these characteristics are unthinkable today. Our managers want a sure thing and cannot deal with unpredictable outcomes. The biggest thing our managers cannot tolerate is failure. Failure is impossible to take and leads to career-limiting consequences. For this reason, inspired risks are impossible to support. As a result, I can’t say NO to anything, no matter how stupid and useless it is. In the process, I see work as an increasingly frustrating waste of my time.

Action expresses priorities.

― Mahatma Gandhi

We all have limits defined by our personal time and effort. Naturally we have 24 hours a day, 7 days a week and 365 days a year, along with our own personal energy budget. If we are managed well, we can expand our abilities and create more. We can be more efficient and work more effectively. If one looks honestly at how we are managed, expanding our abilities and personal growth has almost no priority. Creating an inspiring and exciting place to work is equally low on the list. Given the pathetic level of support for creation and inspiration, attention naturally turns elsewhere. Everyone needs a level of balance in their lives, and we obviously gravitate toward places where a difference can be made.

As Mark Manson writes, we only have so many fucks to give, and my work is doing precious little to deserve mine. I have always focused on personal growth, and increasingly my work resists personal growth instead of resonating with it. It has become quite obvious that being the best “me” is not remotely a priority. The priority at work is to be compliant, take no risks, fail at nothing and help produce marketing material for success and achievement. We aren’t doing great work anymore, but pretend we are. My work could simply be awesome, but that would require giving me the freedom to set priorities, take risks, fail often, learn continually and actually produce wonderful things. If this happened the results would speak for themselves and the marketing would take care of itself. When the Labs I’ve worked at were actually great, this is how it happened. The Labs were great because they achieved great things. The Labs said NO to a lot of things, so they could say YES to the right things. Today, we simply don’t have this freedom.

We are our choices.

― Jean-Paul Sartre

If we could say NO to the bullshit, and give our limited fucks a powerful YES, we might be able to achieve great things. Our Labs could stop trying to convince everyone that they were doing great things and actually do great things. The missing element at work today is trust. If the trust were there we could produce inspiring work that would generate genuine pride and accomplishment. Computing is a wonderful example of these principles in action. Scientific computing became a force in science and engineering by contributing to genuine endeavors with massive societal goals. Computing helped win the Cold War and put a man on the Moon. Weather and climate have been modeled successfully. More broadly, computers have reshaped business and now society massively. All of these endeavors had computing contributing to solutions. Computing focused on computers was not the endeavor itself like it is today. The modern computing emphasis was originally part of a bigger program of using science to support the nuclear stockpile without testing. It was part of a focused scientific enterprise and objective. Today it is a goal unto itself, and not moored to anything larger. If we want to progress and advance science, we should focus on great things for society, not superficially put our effort into mere tools.

Most of us spend too much time on what is urgent and not enough time on what is important.

― Stephen R. Covey

Say no to everything, so you can say yes to the one thing.

― Richie Norton

Verification and Validation’s Biggest Hurdle Is Honesty

Better to get hurt by the truth than comforted with a lie.

― Khaled Hosseini

Being honest about one’s shortcomings is incredibly difficult. This is true whether one is looking at oneself, or looking at a computer model. It’s even harder to let someone else be honest with you. This difficulty is the core of many problems with verification and validation (V&V). If done correctly, V&V is a form of radical honesty that many simply cannot tolerate. The reasons are easy to see if our reward systems are considered. Computer modelers desire to get great results on the problems they want to solve, and they are rated on their ability to get seemingly high-quality answers. As a result, there is significant friction with honest V&V assessments, which provide uncertainty and doubt about the quality of results. The tension between good results and honesty will always favor the results. Thus V&V is done poorly to preserve the ability of modelers to believe their results are better than they really are. If we want V&V to be done well, an additional level of emphasis needs to be placed on honesty.

If you do not tell the truth about yourself you cannot tell it about other people.

― Virginia Woolf

V&V is about assessing capability. It is not about getting great answers. This distinction is essential to recognize. V&V is about collecting highly credible evidence about the nature of modeling capability. By its very nature, the credibility of the evidence means that the results are whatever the results happen to be. If the results are good, the evidence will show this persuasively. If the results are poor, the evidence will indicate the quality. The utility of V&V is in providing a path to improvement along with the evidence to support that path; the improved results would then themselves be supported by V&V assessments. This entire process is predicated on the honesty of those conducting the work, but the management of these efforts is a problem. Management is continually trying to promote great results for modeling. Unless the results are actually great, this promotion pushes toward lower-quality V&V. In the process, honesty and evidence are typically sacrificed.
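To make this kind of quantitative evidence concrete, here is a minimal sketch (in Python, with placeholder error values rather than results from any real calculation) of one standard verification artifact: the observed order of convergence computed from errors on a sequence of refined meshes, assuming the error behaves like C·h^p.

```python
# Minimal sketch: observed order of convergence from a mesh sequence.
# The error values below are placeholders for illustration only; in practice
# they come from comparing computed solutions against an exact or
# manufactured solution.
import math

def observed_order(e_coarse, e_fine, refinement_ratio=2.0):
    """Observed convergence rate p, assuming error ~ C * h**p."""
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

# Hypothetical L1 errors on meshes refined by a factor of two (coarse to fine).
errors = [4.0e-2, 1.1e-2, 2.9e-3]

for e_c, e_f in zip(errors, errors[1:]):
    print(f"observed order ~ {observed_order(e_c, e_f):.2f}")
```

Evidence of this sort is either consistent with the method’s theoretical order or it is not; that is exactly the kind of result that cannot be spun.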


The ASME V&V Standards Committee in Computational Modeling and Simulation provides procedures for assessing and quantifying the accuracy and credibility of computational modeling and simulation through its subcommittees: V&V-10 (Verification and Validation in Computational Solid Mechanics), V&V-20 (Verification and Validation in Computational Fluid Dynamics and Heat Transfer), V&V-30 (Verification and Validation in Computational Simulation of Nuclear System Thermal Fluids Behavior) and V&V-40 (Verification and Validation in Computational Modeling of Medical Devices).

If we want to do V&V properly, something in this value system needs to change. Fundamentally, honesty and a true understanding of the basis of computational modeling must surpass the desire to show great capability. The trends in the management of science are firmly arrayed against honestly assessing capability. The prevalence of management by press release and marketing-based sales pitches for science funding both promote a basic lack of honesty and undermine disclosure of problems. V&V provides firm evidence of what we know, and what we don’t know. The quantitative and qualitative aspects of V&V can produce exceptionally useful evidence of where modeling needs to improve. These characteristics conflict directly with the narrative that modeling has already brought reality to heel. Program after program is sold on the basis that modeling can produce predictions of what will be seen in reality. Computational modeling is seen as an alternative to expensive and dangerous experiments and testing. It can provide reduced costs and cycle times for engineering. All of this can be a real benefit, but the degree of current mastery is seriously oversold.

Doing V&V properly can unmask this deception (I do mean deception, even if the deceivers are largely innocent of outright graft). The deception is more the product of massive amounts of wishful thinking, and harmful groupthink focused on showing good results rather than honest results. Sometimes this means willfully ignoring evidence that does not support the mastery. In other cases, the results are based on heavy-handed calibrations, and the modeling is far from predictive. In the naïve view, the non-predictive modeling will be presented as predictions and hailed as great achievements. Those who manage modeling are largely responsible for this state of affairs. They reward the results that show how good the models are and punish honest assessment. Since V&V is the vehicle for honest assessment, it suffers. Modelers will either avoid V&V entirely, or thwart any effort to apply it properly. Usually the results are given without any firm breakdown of uncertainties, and the modelers simply assert that the “agreement is good” or the “agreement is excellent” without any evidentiary basis save plots that display data points and simulation values being “close”.

If you truly have faith in your convictions, then your convictions should be able to stand criticism and testing.

― DaShanne Stokes

This situation can be made better by changing the narrative about what constitutes good results. If we value knowledge and evidence of mastery as objectives instead of predictive power, we tilt the scales toward honesty. One of the clearest invitations to hedge toward dishonesty is the demand for “predictive modeling”. Predictive modeling has become a mantra and sales pitch instead of an objective. Vast sums of money are allotted to purchase computers, and place modeling software on these computers with the promise of prediction. We are told that we can predict how our nuclear weapons work so that we don’t have to test them. The new computer that is a little bit faster is the key to doing this (faster computers always help, but are never the linchpin). We can predict the effects of human activity on climate to be proactive about stemming its effects. We can predict weather and hurricanes with increasing precision. We can predict all sorts of consequences and effect better designs of our products. All of these predictive capabilities are real, and all have been massively oversold. We have lost our ability to look at challenges as good things and muster the will to overcome them. We need to tilt ourselves to be honest about how predictive we are, and understand where our efforts can make modeling better. Just as important, we need to unveil the real limits on our ability to predict.

A large part of the conduct of V&V is unmasking the detailed nature of uncertainty. Some of this uncertainty comes from our lack of knowledge of nature, or flaws in our fundamental models. Other uncertainty is simply intrinsic to our reality: phenomena that are variable even with seemingly identical starting points. Separating these types of uncertainty, and defining their magnitude, would greatly serve science. For the uncertainties that we can reduce through greater knowledge, we can array efforts to effect this reduction. This must be coupled to the opportunity for experiment and theory to improve matters. On the other hand, if uncertainty is irreducible, it is important to factor it into decisions and accommodate its presence. By ignoring uncertainty through the default practice of assuming ZERO uncertainty, we become powerless to assert any authority over it, or to practically react to it.
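As a sketch of how this separation is often handled in practice, the toy example below uses nested (double-loop) sampling: the outer loop varies epistemic parameters that better knowledge could pin down, while the inner loop samples the irreducible, aleatory variability. The model function, ranges and distributions are stand-ins invented for illustration, not any particular physics.

```python
# Toy double-loop sampling to keep epistemic and aleatory uncertainty separate.
# Everything here (the model, ranges, distributions) is a made-up illustration.
import random
import statistics

def model(epistemic_coeff, aleatory_load):
    # Hypothetical response standing in for a full simulation.
    return epistemic_coeff * aleatory_load ** 2

outer_means = []
for _ in range(200):                           # epistemic loop: reducible lack of knowledge
    coeff = random.uniform(0.8, 1.2)           # interval-valued model parameter
    inner = [model(coeff, random.gauss(1.0, 0.1))   # aleatory loop: intrinsic variability
             for _ in range(500)]
    outer_means.append(statistics.mean(inner))

# The spread across outer_means reflects what better knowledge could remove;
# the spread within each inner sample reflects what it cannot.
print(min(outer_means), max(outer_means))
```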

In the conduct of predictive science, we should look to uncertainty as one of our primary outcomes. When V&V is conducted with high professional standards, uncertainty is unveiled and its magnitude estimated. With our highly over-promised mantra of predictive modeling enabled by high performance computing, uncertainty is almost always viewed negatively. This creates an environment where willful or casual ignorance of uncertainty is tolerated and even encouraged. Incomplete and haphazard V&V practice becomes accepted because it serves the narrative of predictive science. The truth and the actual uncertainty are treated as bad news, and greeted with scorn instead of praise. It is simply so much easier to accept the comfort that the modeling has achieved a level of mastery. This comfort is usually offered without evidence.

The trouble with most of us is that we’d rather be ruined by praise than saved by criticism.

― Norman Vincent Peale

Somehow a different narrative and value system needs to be promoted for science to flourish. A starting point would be a recognition of the value of highly professional V&V work and the desire for completeness and disclosure. A second element of the value system would be valuing progress in science. In keeping with the value on progress would be a recognition that detailed knowledge of uncertainty provides direct and useful evidence to steer science productively. We can also use uncertainty to act proactively in making decisions based on actual predictive power. Furthermore, we may choose not to use modeling at all if the uncertainties are too large to inform decisions. The general support for the march forward of scientific knowledge and capability is greatly aided by V&V. If we have a firm accounting of our current state of knowledge and capability, we can mindfully choose where to put emphasis on progress.

This last point gets at the problems with implementing a more professional V&V practice. If V&V finds that uncertainties are too large, the rational choice may be to not use modeling at all. This runs the risk of being politically incorrect. Programs are sold on predictive modeling, and the money might look like a waste! We might find that the uncertainties from numerical error are much smaller than other uncertainties, and the new super expensive, super-fast computer will not help make things any better. In other cases, we might find out that the model is not converging toward a (correct) solution. Again, the computer is not going to help. Actual V&V is likely to produce results that require changing programs and investments in reaction. Current management often looks at this as a negative and worries that the feedback will reflect poorly on previous investments. There is a deep-seated lack of trust between the source of the money and the work. The lack of trust is driving a lack of honesty in science. Any money spent on fruitless endeavors is viewed as a potential scandal. The money will simply be withdrawn instead of redirected more productively. No one trusts the scientific process to work effectively. The result is an unwillingness to engage in a frank and accurate dialog about how predictive we actually are.

It’s discouraging to think how many people are shocked by honesty and how few by deceit.

― Noël Coward

It wouldn’t be too much of a stretch to say that technical matters are a minor aspect of improving V&V. This does not make light of, nor minimize, the immense technical challenges in conducting V&V. The problem is that the current culture of science is utterly toxic for technical progress. We need a couple of elements to change in the culture of science to make progress. The first one is trust. The lack of trust is pervasive and utterly incapacitating. Because of the underlying lack of trust, scientists and engineers cannot provide honest results or honest feedback on results. They do not feel safe and secure doing either. This is a core element surrounding the issues with peer review. In an environment where there is compromised trust, peer review cannot flourish because honesty is fatal.

Nothing in this world is harder than speaking the truth, nothing easier than flattery.

― Fyodor Dostoyevsky

The second is a value on honesty. Today’s World is full of examples where honesty is punished rather than rewarded. Speaking truth to power is a great way to get fired. Those of us who want to be honest are left in a precarious position: choose safety and security while compromising our core principles, or stay true to our principles and risk everything. Over time, the forces of compromised integrity, marketing and bullshit over substance wear us down. Today the liars and charlatans are winning. Being someone of integrity is painful and overwhelmingly difficult. The system seems to be stacked against honest discourse and disclosure. Of course, honesty and trust are completely coupled. Both need to be supported and rewarded. V&V is simply one area where these trends play out and distort the work.

It is both jarring and hopeful that the elements holding science back are evident in the wider world. The current political discourse is full of issues that are tied to trust and honesty. The degree to which we lack trust and honesty in the public sphere is completely disheartening. The entire system seems to be spiraling out of control. It does not seem that the system can continue on this path much longer. Perhaps we have hit bottom and things will get better. How much worse can things get? The time for things to start getting better has already passed. This is true in the broader public World as well as in science. In both cases trust for each other, and a spirit of honesty, would go a long way toward providing a foundation for progress. The forces of stagnation and opposition to progress have won too much ground.

Integrity is telling myself the truth. And honesty is telling the truth to other people.

― Spencer Johnson


Nothing is so difficult as not deceiving oneself.

― Ludwig Wittgenstein


Scientific Computing’s Future Is Mobile, Adaptive, Flexible and Small

Without deviation from the norm, progress is not possible.

― Frank Zappa

There is something seriously off about working on scientific computing today. Once upon a time it felt like working in the future, where the technology and the work were amazingly advanced and forward-looking. Over the past decade this feeling has changed dramatically. Working in scientific computing is starting to feel worn-out, old and backwards. It has lost a lot of its sheen, and it’s no longer sexy and fresh. If I look back 10 years, everything we had then was top of the line and right at the “bleeding” edge. Now we seem to be living in the past; the current advances driving computing are absent from our work lives. We are slaving away in a totally reactive mode. Scientific computing is staid, immobile and static, where modern computing is dynamic, mobile and adaptive. If I want to step into the modern world, I now have to leave work. Work is a glimpse into the past instead of a window to the future. It is not simply the technology, but the management systems that come along with our approach. We are being left behind, and our leadership seems oblivious to the problem.

For most of the history of computing in the 20th and into the 21st Century, scientific computing was at the forefront of technology. That is starting to change. Even today scientific computing remains exotic in terms of hardware and some aspects of software, but it also feels antiquated and antique. We get to use cutting-edge computer chips and networking hardware that demand we live on the ragged edge technologically. This is only half the story. We also remain firmly entrenched in the “mainframe” era with corporate computing divisions that seem more “Mad Men” and less “Star Trek” than ever. The computers we use to execute our leading-edge scientific investigations and the computing in our offices or our personal lives are diverging at warp speed. It has become hopelessly ironic in many ways. Worse than ironic, the current state of things is unhealthy and lessens the impact of scientific computing on today’s World.

Even worse than the irony is the price this approach is exacting on scientific computing. For example, the computing industry used to beat a path to scientific computing’s door, and now we have to basically bribe the industry to pay attention to us. A fair accounting of the role of government in computing today is some combination of a purely niche market and pork-barrel spending. Scientific computing used to be a driving force in the industry, and now lies in a cul-de-sac, or even a pocket universe, divorced from the day-to-day reality of computing. Scientific computing is now a tiny and unimportant market to an industry that dominates the modern World. In the process, scientific computing has allowed itself to become disconnected from modernity, and hopelessly imbalanced. Rather than leverage the modern World and its technological wonders, many of which are grounded in information science, it resists and fails to make best use of the opportunity. This robs scientific computing of impact in the broader World, and diminishes the draw of new talent to the field.

It would be great to elaborate on the nature of the opportunities, and the cost of the present imbalances. If one looks at the modern computing industry and its ascension to the top of the economic food chain, two things come to mind: mobile computing – cell phones – and the Internet. Mobile computing made connectivity and access ubiquitous with massive penetration into our lives. Networks and apps began to create new social connections in the real world and lubricated communications between people in a myriad of ways. The Internet became a huge repository of information and commerce, but also an engine of social connection. In short order, the adoption and use of the Internet and computing in the broader human World overtook and surpassed the use by scientists and business. Where once scientists used and knew computers better than anyone, now the World is full of people for whom computing is far more important than it is for science. Scientists were once in the lead, and now they are behind. Worse yet, science is not adapting to this new reality.

Those who do not move, do not notice their chains.

― Rosa Luxemburg

The core of the problem with scientific computing is its failure to adapt and take advantage of the opportunity defined by this ascendancy of computing. A core of science’s issue with computing is the lost sense that computers are merely a tool. Computers are a tool that may be used to do science. Instead of following this maxim, we simply focus on the older antiquated model of scientific computing firmly grounded in the mainframe era. Our mindset has not evolved with the rest of the World. One of the clear consequences of the mindset is a creeping degree of gluttony and intellectual laziness with high performance computing. All problems reduce to simply creating faster computers and making problems submit to the raw power of virtually limitless computations. We have lost sight of the inefficiency of this approach. A renewed focus on issues of modeling, methods and algorithms could be deeply enlivened by the constraints imposed by limited computing resources. Moreover, solving problems more efficiently with smaller computing resources would yield innumerable benefits in the setting of big iron as well. This could be achieved without the very real limitations of having big iron be the sole focus of our efforts.


Scientific computing could be arranged to leverage the technology that is advancing the World today. We could look at a mobile, adaptive platform for modeling, simulation and data analysis that harnessed the best of technology. We could move through the cloud using technology in an adaptive, multiscale manner. One of the biggest challenges is letting go of the power dynamic that drives thinking today. Scientific computing has been addicted to Moore’s law for too long. The current exascale push is symptomatic of this addiction. Like any addiction it is unhealthy and causes the subject to avoid real cures for their problem. We see progress as equivalent to raw power with a single computer. The huge stunt calculation as a vehicle for science is a manifestation of this addiction. Science is done with many calculations along with an adaptive examination of problems or mindful interrogation of results. Power can also be achieved through mobility, ubiquity and flexibility. The big iron we pursue has become tantamount to progress because it’s the only route we can envision. The problem is that technology, and the arc of progress is working against us instead of with us. It is past time to change our vision of what the future can be. The future needs to be different by embracing a different technological path. On one hand, we won’t be swimming against the current of computing technology, but on the other hand we will need to invest in different solutions to make it work.

Flexibility is an art of creating way outs within the cul-de-sacs!
― Mehmet Murat ildan

Mobility is power, and it has made computing ubiquitous. When the broader computing industry embraced the death of Moore’s law, it switched its attention to cell phones. Instead of simply being phones, they became mobile computers and mobile extensions of the Internet. In doing so we unleashed a torrent of creativity and connection. All of a sudden, we saw computers enable the level of social connection that the Internet always had promised, but never delivered. The mobile computing revolution has reshaped the World in a decade. In the process, the mobile market overwhelmed the entire computing industry and created economic dominance on an unparalleled scale. The killer piece of technology was the iPhone. It combined a focus on user interface along with software that enabled everything. We also need to recognize that each phone is more powerful than the fastest computer in the World 25 years ago. We have tremendous power at our fingertips.

One of the really clear messages of the recent era in computing is a change in the nature of value and power. For a long time, power was measured by hardware gains in speed, memory and capability, but now application innovation and flexibility rule. Hardware is largely a fixed and slowly changing commodity and represents a level playing field. The software in the applications and the user interface are far more important. Algorithms that direct information and attention are dominating the success in computing. Providing the basis of connection and adaption to the needs of the users has become the medium for creating new markets. At the same time these algorithms have come under fire for how they manipulate people and data. These mobile computers have become a massive issue for society as a whole. We are creating brand new social problems and side-effects we need to effectively solve. The impact of this revolution in computing on society as a whole has been incredible.

A whole cadre of experts is fading from the field of play in computing. In taking the tack of focusing on mainframe computing, scientific computing is sidelining itself. Instead of this enormously talented group of people playing in the area that means the most to society, they are focused on a cul-de-sac grounded in old and outdated models of success. Our society would benefit by engaging these experts in making mobile computing more effective in delivering value in new, innovative ways. We could be contributing to solving some of the greatest problems facing us rather than seeing our computing as a special niche serving a relatively small segment of society’s needs. In the past, scientific computing has provided innovative and dynamic solutions that ultimately made their way into general computing. A perfect example is Google. The problem that Google solved is firmly grounded in scientific computing and applied mathematics. It is easy to see how massive the impact of this solution is. Today we in scientific computing are getting further and further from relevance to society. This niche does scientific computing little good because it is swimming against a tide that is more like a tsunami. The result is a horribly expensive and marginally effective effort that will fail needlessly where it has the potential to provide phenomenal value.

You never change things by fighting the existing reality.

To change something, build a new model that makes the existing model obsolete.

― R. Buckminster Fuller

We are long past the time to make a change in scientific computing’s direction and strategy. Almost everywhere else the mainframe era died decades ago. Why is scientific computing tied to this model? Why are scientists resisting conclusions so nakedly obvious? In today’s risk-averse environment, making a change to the underlying model of this branch of science is virtually impossible. Even when the change is dramatically needed and overdue by years, the resistance is strong. The status quo is safe and firmly entrenched. In a time when success can be simply asserted and largely manufactured, this unacceptable state of affairs will persist far longer than it should. Sooner or later someone will take the plunge, and success will follow them. They will have the winds of progress at their backs, easily solving most of the problems that we throw billions of dollars at with meager success.

The measure of intelligence is the ability to change.

― Albert Einstein


What’s going wrong and why

If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.

― Albert Einstein

A few years ago, I was challenged to examine the behavior of void in continuum hydrocodes. A senior colleague suggested looking at problems that might allow us to understand how the absence of material would be treated in a code. The simplest version of this problem would solve the expansion of a real gas into a void. With an ideal gas this problem has an exact solution that can be found with an exact Riemann solver. In the process, we have discovered that these problems are not solved well by existing methods. We approximate the void with a very low density and pressure material, and we have found that as this material approaches an actual void, the solutions seem to become non-convergent, and prone to other significant numerical difficulties. Even when using extremely refined meshes with many thousands of cells in one dimension, convergence is not observed for a broad class of methods. These methods have solved many difficult problems and we believe them to be robust and reliable. These problems persist for all methods tested, including our fail-safe methods (e.g., first-order Godunov).
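For reference, the limit this problem approaches, an ideal gas expanding into a true vacuum, has a closed-form self-similar solution consisting of a single rarefaction fan that connects the undisturbed gas to the empty region. A minimal sketch of that exact solution, with an illustrative left state and ratio of specific heats, is:

```python
# Exact self-similar solution for an ideal gas (rho0, p0, u=0) expanding
# rightward into vacuum. The left state and gamma below are illustrative.
import math

def vacuum_expansion(xi, rho0=1.0, p0=1.0, gamma=1.4):
    """Return (rho, u, p) at similarity variable xi = x/t."""
    c0 = math.sqrt(gamma * p0 / rho0)          # sound speed of the undisturbed gas
    if xi <= -c0:                              # ahead of the rarefaction head
        return rho0, 0.0, p0
    if xi >= 2.0 * c0 / (gamma - 1.0):         # beyond the vacuum front
        return 0.0, 0.0, 0.0
    u = 2.0 / (gamma + 1.0) * (c0 + xi)        # inside the rarefaction fan
    c = c0 - 0.5 * (gamma - 1.0) * u
    rho = rho0 * (c / c0) ** (2.0 / (gamma - 1.0))
    return rho, u, rho * c * c / gamma         # p = rho * c**2 / gamma

for xi in (-2.0, -0.5, 0.0, 1.0, 2.0, 6.0):
    print(xi, vacuum_expansion(xi))
```

The front of the expansion moves at 2·c0/(γ−1), which is the feature the low-density approximation to the void has to reproduce.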

What is going on?

I’ll just say in passing that this post is a bit of a work-in-progress conversation with myself (or from myself to you). My hope is that it will shake loose my thinking. It is patterned on the observation that sometimes you can solve a problem by carefully explaining it to someone else.

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

― Abraham H. Maslow


One of the difficulties of this problem is the seemingly bad behavior coming from our most reliable and robust methods. When we want a guaranteed good solution to a problem, we unleash a first-order Godunov method on it, and if we use an exact Riemann solver we can expect the solution to be convergent. The results we see with void seemingly violate this principle. We are getting terrible solutions in a seemingly systematic manner. To make matters worse, the first-order Godunov method is the basis, and the fallback position, for the more important second- or third-order methods we practically want to use. We can conclude that this problem is exposing some rather serious problems with our workhorse methods and the potential for wholesale weakness in our capability.
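For anyone who wants to poke at this behavior directly, here is a minimal sketch of a first-order Godunov-type finite-volume solver for the 1D Euler equations set up on a near-void shock tube. It uses an HLL approximate Riemann solver rather than the exact solver discussed above, the mesh size, jump and final time are arbitrary illustrative choices, and it omits the positivity safeguards a production code would need.

```python
# Minimal first-order finite-volume sketch for the 1D Euler equations with an
# HLL approximate Riemann solver, on a shock tube whose right state is "jump"
# times lower in density and pressure than the left state.
import numpy as np

GAMMA = 1.4

def primitives(U):
    """Conserved [rho, rho*u, E] -> (rho, u, p)."""
    rho = U[0]
    u = U[1] / rho
    p = (GAMMA - 1.0) * (U[2] - 0.5 * rho * u * u)
    return rho, u, p

def euler_flux(U):
    rho, u, p = primitives(U)
    return np.array([rho * u, rho * u * u + p, (U[2] + p) * u])

def hll_flux(UL, UR):
    rhoL, uL, pL = primitives(UL)
    rhoR, uR, pR = primitives(UR)
    cL, cR = np.sqrt(GAMMA * pL / rhoL), np.sqrt(GAMMA * pR / rhoR)
    sL = min(uL - cL, uR - cR)
    sR = max(uL + cL, uR + cR)
    if sL >= 0.0:
        return euler_flux(UL)
    if sR <= 0.0:
        return euler_flux(UR)
    FL, FR = euler_flux(UL), euler_flux(UR)
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

def solve_near_void(n_cells=400, jump=1.0e3, t_final=0.07, cfl=0.5):
    dx = 1.0 / n_cells
    xc = (np.arange(n_cells) + 0.5) * dx
    left = xc < 0.5
    U = np.zeros((3, n_cells))
    U[0] = np.where(left, 1.0, 1.0 / jump)                  # density
    U[2] = np.where(left, 1.0, 1.0 / jump) / (GAMMA - 1.0)  # energy, zero velocity
    t = 0.0
    while t < t_final:
        rho, u = U[0], U[1] / U[0]
        p = (GAMMA - 1.0) * (U[2] - 0.5 * rho * u * u)
        c = np.sqrt(GAMMA * p / rho)
        dt = min(cfl * dx / np.max(np.abs(u) + c), t_final - t)
        F = np.zeros((3, n_cells + 1))
        for i in range(1, n_cells):
            F[:, i] = hll_flux(U[:, i - 1], U[:, i])
        F[:, 0] = euler_flux(U[:, 0])      # crude zero-gradient boundaries
        F[:, -1] = euler_flux(U[:, -1])
        U = U - dt / dx * (F[:, 1:] - F[:, :-1])
        t += dt
    return xc, U

xc, U = solve_near_void()
print("peak velocity:", np.max(U[1] / U[0]))
```

Sweeping the jump parameter upward while refining n_cells, and tracking the peak velocity against the exact solution, is the kind of experiment described in the next paragraphs.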

There are no facts, only interpretations.

― Friedrich Nietzsche


First-order Godunov with 1000 cells. Plotting the maximum velocity over time shows the convergence for 100:1 and 1000:1 jumps. The velocity peaks and decays to the correct solution.

Let’s look at what happens for the approximate-void problem. We approximate the void with a gas whose density and pressure are twelve orders of magnitude smaller than the “real” material. This problem has a solution very close to the expansion-into-vacuum solution of the Euler equations (where the head of the rarefaction and the contact discontinuity collapse into a single structure that separates material from nothing). The problem is dominated by an enormous rarefaction that takes the density down by many orders of magnitude. What we see is a solution that appears to get worse and worse under mesh refinement. In other words, it diverges under mesh refinement. Actually, the behavior we see is a bit more complex than this. At very low resolutions, the solution lags the exact solution, and as we refine the mesh, the solution catches up to and then passes the exact solution. Then as we add more and more mesh, the solution just gets worse and worse. This is not supposed to happen. This is a very bad thing that needs focused attention.


Comparing first-order, PLM, and PPM solutions for the 1000:1 jump. The high-order methods converge much faster than the first-order method.

So maybe backing away from the extreme problem is worth doing. I ran a sequence of shock tube problems varying the jump in pressure and density starting at 10:1 and slowly going up to the extreme jump that approximates an expansion into void. The shock tube is a self-similar problem, meaning that we can swap time and space through a similarity transformation. Thus, the very early time evolution on a very fine grid is essentially the same as a late time solution on a very coarse grid. What I noticed is the same pattern over and over. More importantly, the problem gets worse and worse as the jumps get larger and larger. By examining the trend as the jumps become very large, we start to see the nature of our problem. As the jump becomes larger and larger, the solution converges more and more slowly. We can start to estimate the mesh resolution needed for a good result and we can see that the situation becomes almost hopeless in the limit. I believe the solution will eventually converge given enough mesh, but the size of the mesh needed to get a convergent solution becomes completely absurd.
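Here is a sketch of the jump-and-resolution sweep just described, reusing the hypothetical `godunov_void()` helper from the sketch above; the specific jumps and mesh sizes are illustrative, not the ones from the actual study.

```python
# Sweep jump sizes and mesh resolutions, recording the peak velocity overshoot.
# godunov_void() is the sketch defined earlier in this post.
for jump in (1.0e1, 1.0e3, 1.0e6, 1.0e12):
    peaks = []
    for ncells in (100, 400, 1600):
        history = godunov_void(ncells=ncells, jump=jump)
        peaks.append(history[:, 1].max())
    # For small jumps the peak settles quickly with refinement; as the jump
    # grows, ever finer meshes are needed before the overshoot begins to decay.
    print(f"jump {jump:.0e}: peak velocity vs. mesh {peaks}")
```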


For large jumps of a million to a trillion, convergence is lost at 1000 cells. The solution has not even reached the peak value from which it would decay toward the correct solution.

In summary, the problem with a factor-of-a-million jump converges on a modestly unreasonable mesh. As the jump grows in size, convergence requires a mesh that is prohibitive for any practical work. If we are going to accurately solve this class of problems, some other approach is needed. To make things worse, even when the problem converges, the rate of convergence under mesh refinement is painfully slow, and incredibly expensive as a result.

Everywhere is walking distance if you have the time.

― Steven Wright

The second issue we see is a persistent velocity glitch at the head of the rarefaction. It is fair to say that the glitch has heretofore been viewed as a cosmetic problem. This velocity peak looks like a meaningfully wrong solution to the equations locally. It produces a shock-like feature in the sense that it violates Lax’s entropy condition: the characteristics locally converge in a shock-like manner inside a rarefaction, where they should diverge. We might expect that this problem would hurt the physically meaningful solution. Not altogether surprisingly, the solution can also violate the second law of thermodynamics when using higher than first-order methods. Moreover, this character simply gets worse and worse as the problem gets closer to a void. A reasonable supposition is that this feature in the numerical solution is a symptom of difficulties in rarefactions. Usually this feature can be categorized as a nuisance and a relatively small contributor to error, but it may be a sign of something deeper. Perhaps this nuisance becomes a significant issue as the rarefaction becomes stronger, and ultimately dominates the numerical character of the solution. We might be well served by removing it from the solution. One observation worth adding is that the glitch diminishes in size as the mesh is refined. Its anomalous shock-like character allows dissipation to damp the spike and improve the solution. The counterpoint to this fix is to avoid creating the glitch in the first place.
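One way to make the entropy-condition argument concrete is to scan the computed rarefaction for interfaces where the relevant characteristic family is locally converging instead of spreading. This is only a sketch: the arrays are assumed to come from a solver such as the one above, and the choice of wave family (u - c, for a left-facing rarefaction) and the tolerance are mine.

```python
import numpy as np

def entropy_glitch_cells(rho, u, p, gamma=1.4, tol=1.0e-12):
    """Flag interfaces inside a left-facing rarefaction where the u - c
    characteristics locally converge (shock-like, violating Lax's entropy
    condition) instead of diverging as they should."""
    c = np.sqrt(gamma * p / rho)
    lam = u - c                       # characteristic speed of the wave family
    converging = np.diff(lam) < -tol  # speed decreasing from left to right
    return np.where(converging)[0]    # indices of suspect interfaces
```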


For the jump of 100 million we get convergence with 2000 and 4000 cells. This also shows that the curves are quite close to self-similar. In addition, the slow convergence is evident in the behavior.

At this point it’s useful to back away from the immediate problem to a broader philosophical point. Shock capturing methods are naturally focused on computing shocks. Shock waves were a big challenge for numerical methods. They remain a large challenge, and failure to treat them effectively can be fatal for a calculation. If a shock wave is not treated with care, the numerical solution can fail catastrophically or be significantly damaged. Even when the results are not catastrophic, poor treatment of a shock can result in significant corruption of the solution that often spreads from the shock to other areas in the solution. For this reason, the shock wave and its numerical treatment have been an enduring focus of numerical methods for compressible flows. Conversely, rarefactions have largely been an afterthought. Rarefactions are benign smooth structures that do not directly threaten a calculation. A few bad things can happen in rarefactions, but they are rarely fatal to the calculation. A few have been so cosmetically problematic that major effort has ensued (the rarefaction shock). Problems in rarefactions are generally just a nuisance, and only become a focal point when the details of the solution are examined. One aspect of the details is the convergence character of the solution. Shock tube problems are rarely subjected to a full convergence analysis. The problem we focus on here is dominated by a rarefaction, thus magnifying any problems immensely. What we can conclude is that strong rarefactions are not computed with high fidelity.

The trick to forgetting the big picture is to look at everything close up.

― Chuck Palahniuk

One of the key ways of dealing with shock waves is upwind methods. A clear manner of treating these waves and getting an upwind solution is the use of a discontinuous basis to define the spatial discretization. This discontinuous basis is also used with high-order methods, and the first-order solution becomes the fallback position for the methods. This approach is very well suited to computing shocks: a discontinuous approximation for a discontinuous phenomenon. By the same token, a discontinuous basis is not well suited to a continuous phenomenon like a rarefaction. One hypothesis to explore is using different types of approximations for problems where the rarefaction dominates the solution. We may find that we can solve this class of problem far more efficiently with a continuous basis, reaching asymptotically convergent solutions far sooner. What we observe is an ever slower approach to convergent behavior in the code. For this class of problems we see a consistent pattern: the solution starts out under-resolved and the velocity rises; it then overshoots the correct analytical result, and then slowly decays toward the correct solution. As the rarefaction becomes stronger and stronger, the mesh resolution needed to capture the full rise and the peak overshoot becomes finer and finer. Ultimately, the mesh required to get a convergent solution becomes absurdly refined.

If this proposition is indeed correct, it implies that we need to define a hybrid approach where the basis is adaptively chosen. At discontinuous structures, we want to choose discontinuous approximations, and at continuous structures we want continuous approximations. This is almost obvious, but carrying it out in practice is difficult. Clearly the current adaptive approaches are not working well enough, as evidenced by the painful and absurd degree of mesh needed to get a reasonable solution. It would seem that the answer to this problem lies in developing a new method capable of solving extreme rarefactions on reasonable meshes. We need methods that can solve strong, but continuous, waves with higher fidelity. In all reality, these methods would still need to compute shocks, albeit less effectively than methods using a discontinuous basis. The bottom line from attacking a challenging problem like this is the demonstration that our methods today are not sufficient for all our challenges.

Creativity consists of coming up with many ideas, not just that one great idea.

― Charles Thompson

Is the code part of the model?


Of course, it’s not really that simple, but yes, the code is part of the model. If one claims it isn’t, one has to meet a substantial burden of proof.

We have no idea about the ‘real’ nature of things … The function of modeling is to arrive at descriptions which are useful.

– Richard Bandler and John Grinder

Ideally, it should not be, but proving that ideal is a very high bar that is almost never met. A great deal of compelling evidence is needed to support an assertion that the code is not part of the model. The real difficulty is that the more complex the modeling problem is, the more the code is definitely and irreducibly part of the model. These complex models are the most important uses of modeling and simulation. Complex models of engineered things, or of important physical systems, have many submodels, each essential to successful modeling. The code is often designed quite specifically to model a class of problems. The code then becomes a clear part of the definition of the problem. Even in the simplest cases, the code includes the recipe for the numerical solution of a model. This numerical solution leaves its fingerprints all over the solution of the model. The numerical solution is imperfect and contains errors that influence the solution. For a code, there is the mesh and geometric description plus boundary conditions, not to mention the various modeling options employed. Removing the specific details of the implementation of the model in the code from consideration as part of the model becomes increasingly intractable.

The word model is used as a noun, adjective, and verb, and in each instance it has a slightly different connotation. As a noun “model” is a representation in the sense in which an architect constructs a small-scale model of a building or a physicist a large-scale model of an atom. As an adjective “model” implies a degree of perfection or idealization, as in reference to a model home, a model student, or a model husband. As a verb “to model” means to demonstrate, to reveal, to show what a thing is like.

– Russell L. Ackoff

The word model itself is deeply problematic. Model is one of those words that can mean many different things whether it’s used as a noun or a verb (I’ll note in passing that, much like the curse word “fuck,” it is so flexible as to be wonderful and confusing all at once). Its application in a scientific and engineering context is common and pervasive. As such, we need to inject some precision into how it is being used. For this reason, some discourage the use of “model” in discussion. On the other hand, models and modeling are so central to the conduct of science and engineering that they should be dealt with head on. They aren’t going away. We model our reality when we want to make sure we understand it. We engage in modeling when we have something in the Real World we want to demonstrate an understanding of. Sometimes this is for the purpose of understanding, but ultimately this gives way to manipulation, the essence of engineering. The Real World is complex, and effective models are usually immune to analytical solution.

Essentially, all models are wrong, but some are useful.

– George E. P. Box, Norman R. Draper

You view the world from within a model.

― Nassim Nicholas Taleb

Computational science comes to the rescue and opens the door to solving these complex models via numerical approximations. It is a marvelous advance, but it brings new challenges because the solutions are imperfect. This adds a new layer of imperfection to modeling. We should already recognize that models are generically approximate versions of reality (i.e., wrong), and necessarily imperfect mathematical representations of the Real World. Solving this imperfect model imperfectly, via an approximate method, makes the modeling issue even more fraught. Invariably, for any model with complexity, the numerical solution of the model and its detailed description are implemented in computer code, or “a computer code.” The details and correctness of the implementation become inseparable from the model itself. It becomes quite difficult to extract the model as any sort of pure mathematical construct; the code is intimately part of it.

Evidence of the model’s nature and correctness is produced in the basic conduct of verification and validation with uncertainty quantification. Doing a full accounting of the credibility of modeling, including the pedigree of the model, will not help to exclude the code from the model; it simply defines the extent of this connection. Properly speaking, the code is always part of the model, but the extent or magnitude of its impact can be small, or even considered minor or negligible. This evidence is contained within the full assessment of the predictive quality of the simulation, including a quantitative assessment. Among these activities, verification is the most important for the question at hand. Do we have evidence that the desired mathematical model is correctly solved? Do we have evidence that the numerical errors in the solution are small? Can all aspects of the model be well described by clearly articulated mathematics?

Any physical theory is always provisional, in the sense that it is only a hypothesis: you can never prove it. No matter how many times the results of experiments agree with some theory, you can never be sure that the next time the result will not contradict the theory.

― Stephen Hawking

A model is not the operating system for the universe. Reality is not determined by these mathematical abstractions; the mathematics is designed to describe what we observe. As such, the models are always flawed and imperfect representations at some level. Determining the flaws and the quantitative level of imperfection is difficult work requiring detailed verification and validation. A model is an abstraction and representation of the processes we believe produce observable physical effects. We theorize that the model explains how these effects are produced. Some models are not remotely this high-minded; they are nothing but crude empirical engines for reproducing what we observe. Unfortunately, as phenomena become more complex, these crude models become increasingly essential to modeling. They may not play a central role in the modeling, but they still provide physical effects necessary for utility. The submodels needed to produce realistic simulations become ever more prone to include these crude empirical engines as problems enter the engineering realm. As the reality of interest becomes more complicated, the modeling becomes elaborate and complex, a deep chain of efforts to grapple with these details.

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.

― Arthur Conan Doyle

Validation of a model occurs when we take the results of solving the model and compare them directly with observations from the Real World. A key aspect of the validation exercise is characterizing the uncertainty in both the observations and the model. When all this assessment is in hand, we can render a judgment on whether the model represents the observed reality well enough for the purposes we intend. This use is defined by a question we want to answer with the modeling. The answer needs to have a certain fidelity and certainty, which provides the notion of precision to the exercise. The certainty of the observations defines the degree of agreement that can be demanded. The model’s uncertainties define the model’s precision, but they include the impact of numerical approximation. The numerical uncertainty needs to be accounted for to isolate the model. This uncertainty defines the level of approximation in the solution of the model, and the deviation from the mathematical idealization the model represents. In practice, we see a stunning lack of this essential step in the validation work presented. Another big part of validation is recognizing the subtle differences between calibrated results and predictive simulation. Again, calibration is rarely elaborated in validation to the degree that it should be.

We should always expect the model to deviate from observations to some degree. If we are capable of producing more accurate observations of reality, we can more accurately determine how wrong the model is. In a sense, we can view this as a competitive race. If our model is quite precise, we are challenged to observe nature well enough to expose its innate flaws. Conversely, if we can observe nature with extreme precision, we can define the model’s imperfections clearly. Progress can be made by using this tension to push one or the other. The modeling uncertainty is compounded by the approximate numerical solution implemented in a computer code (including the correctness of the code). Verification and validation activities are a systematic way to collect evidence so that the comparison can be made in a complete and compelling manner.

Computer codes serve two very important roles in modeling: they contain the model, including geometry, boundary conditions, and a host of ancillary models for complex situations, and they solve the model numerically. Both of these characteristics are essential in the conduct of modeling, but numerical solutions are far more subtle and complex. Many people using codes for modeling do not have a background sufficient to understand the subtleties of numerical methods and their impact on solutions. Moreover, the fiction that numerical methods and codes are so reliable that detailed understanding is not essential persists and grows. Our high performance computing programs work to fuel this fiction. The most obvious aspects of the numerical solution are the meshing and the time integration, to which the error is proportional. Evidence of correctness and error characteristics is produced through verification. In addition, most advanced codes solve linear and nonlinear equations in an iterative manner. Iterative solutions have a finite tolerance, which can impact solutions. This is particularly true for nonlinear equation solvers, where the error tolerance that can be achieved by some popular solvers is extremely loose. This looseness can produce significant physical effects in solutions. Most verification work does not examine these aspects closely, although it should. Again, the code and its capabilities and methods are extremely important, if not essential, to the model produced. In many cases fantastic modeling work is polluted by naïve numerical methods; a wonderful model can be wiped out by a terrible code.
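A toy illustration of the iterative-tolerance point, not taken from any particular code: the same nonlinear solve stopped at a loose versus a tight residual tolerance returns noticeably different “converged” values, and in a time-dependent code that difference is committed every step.

```python
import numpy as np

def fixed_point_solve(g, x0, tol):
    """Picard (fixed-point) iteration, the kind of linearly converging solver
    where a loose stopping tolerance leaves a visible residual error."""
    x = x0
    while abs(g(x) - x) > tol:
        x = g(x)
    return x

g = np.cos                                  # arbitrary stand-in nonlinear update
loose = fixed_point_solve(g, 1.0, tol=1.0e-2)
tight = fixed_point_solve(g, 1.0, tol=1.0e-12)
print(loose, tight, abs(loose - tight))     # the loose answer differs in the third digit
```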

You’ve baked a really lovely cake, but then you’ve used dog shit for frosting.

― Steve Jobs

So, when can we exclude the code? The big thing to focus on in this question is verification evidence. Code verification is necessary to be confident that the intended mathematical model is provably present in the code. It asks whether the mathematical abstraction the model is based on is correctly solved by the code. Code verification can be completely satisfactory and successful, and the code can still be important. Code verification does not say that the numerical error is small; it says that the numerical error is ordered and that the model equations we want to solve are indeed the ones solved. The second half of verification, solution (calculation) verification, determines the errors in solving the model. The question is how large (or small) the numerical errors in the solution of the model are. Ultimately, these errors are a strong function of the discretization and solver used in the code. The question of whether the code matters comes down to asking whether another code used skillfully would produce a significantly different result. This is rarely, if ever, the case. To make matters worse, verification evidence tends to be flimsy and half-assed. Even if we could make this call and ignore the code, we rarely have evidence that this is a valid and defensible decision.
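For concreteness, here is a minimal sketch of the code-verification arithmetic being referred to: run the same problem on two meshes, measure the error against an exact solution, and check that the error drops at the expected rate. The names run_simulation and exact_solution are placeholders for whatever the code under test actually provides.

```python
import numpy as np

def l1_error(ncells):
    # run_simulation and exact_solution are hypothetical hooks into the code
    # under test; they return cell centers plus the computed and exact fields.
    x, q_numerical = run_simulation(ncells)
    return np.mean(np.abs(q_numerical - exact_solution(x)))

e_coarse, e_fine = l1_error(100), l1_error(200)
observed_order = np.log(e_coarse / e_fine) / np.log(2.0)
# Evidence that the error is "ordered": the observed order should approach the
# method's design order as the mesh is refined.
print(f"observed order of accuracy ~ {observed_order:.2f}")
```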

Truth can only be found in one place: the code.

― Robert C. Martin

In closing, the code IS part of the model unless evidence can be found otherwise. This can happen more easily where the model is simple. In general, the exclusion of the code is an ideal that cannot be reached. As models become complex, detaching the model from the code becomes nearly intractable, and indefensible. Evidence will almost invariably point to the code being an important contributor to the model’s picture of reality.

For the scientist a model is also a way in which the human thought processes can be amplified. This method often takes the form of models that can be programmed into computers. At no point, however, does the scientist intend to lose control of the situation because the computer does some of his thinking for him. The scientist controls the basic assumptions and the computer only derives some of the more complicated implications.

– C. West Churchman



11 Things in Computational Science that Sound Awesome, but are Actually Terrible

From the bad things and bad people, you learn the right way and right direction towards the successful life.

― Ehsan Sehgal

Computational science is an extremely powerful set of disciplines for conducting scientific investigations. The end result of computational science is usually grounded in the physical sciences and engineering, but it depends on a chain of expertise spanning much of modern science. Doing computational science well completely depends on all of these disparate disciplines working in concert. A big area of focus these days is the supercomputers being used. The predicate for acquiring these immensely expensive machines is the improvement in scientific and engineering products arising from their use. While this should be true, getting across this finish line requires a huge chain of activities to be done correctly.

Let’s take a look at all the things we need to do right. Computer engineering and computer science are closest to the machines needed for computational science. These disciplines make these exotic computers accessible and useful for domain science and engineering. A big piece of this work is computer programming and software engineering. The computer program is a way of expressing mathematics in a form the computer can operate on. Efficient and correct computer programs are a difficult endeavor all by themselves. Mathematics is the language of physics and engineering and essential for the conduct of computing. Mathematics is a middle layer of work between the computer and its practical utility. It is a deeply troubling and ironic trend that applied mathematics is disappearing from computational science. As the bridge between the computer and its practical use, it forms the basis for conducting and believing the computed results. Instead of being an area of increased focus, applied math is disappearing into either the maw of computer programming or domain science and engineering. It is being lost as a separate contributor. Finally, we have the end result in science and engineering. Quite often we lose sight of the computer as a mere tool that must follow specific rules to give quality, reliable results. Too often the computer is treated like a magic wand.

Another common thread in this horribleness is the increasing tendency for science and engineering to be marketed. The press release has given way to the tweet, but the sentiment is the same. Science is marketed for the masses, who have no taste for the details necessary for high-quality work. A deep problem is that this lack of focus and detail is creeping back into science itself. Aspects of scientific and engineering work that used to be utterly essential are becoming increasingly optional. Much of this essential intellectual labor is associated with the hidden aspects of the investigation: things related to mathematics, checking for correctness, assessment of error, preceding work, various doubts about results, and alternative means of investigation. This sort of deep work has been crowded out by flashy graphics, movies, and undisciplined demonstrations of vast computing power.

Some of the terrible things we discuss here are simply bad science and engineering. These terrible things would be awful with or without a computer being involved. Other things come from a lack of understanding of how to add computing to an investigation in a quality-focused manner. The failure to recognize the multidisciplinary nature of computational science is at the root of many of the awful things I will now describe.

Fake is the new real, You gotta keep a lot a shit to yourself.

― Genereux Philip

Without further ado, here are some terrible things to look out for. Each item on the list is expanded upon in a full blog post of its own.

  1. If you follow high-performance computing online (institutional sites, Facebook, Twitter) you might believe that the biggest calculations on the fastest computers are the very best science. You are sold the idea that these massive calculations have the greatest impact on the bottom line. This is absolutely not the case. These calculations are usually one-off demonstrations with little or no technical value. Almost everything of enduring value happens on the computers being used by the rank and file to do the daily work of science and engineering. These press-release calculations are simply marketing. They almost never have the pedigree or hard-nosed quality work necessary for good science and engineering.
  2. The second thing you come across is the notion that a calculation with a finer mesh is better than one with a coarser mesh. In a naïve, pedestrian analysis this would seem to be utterly axiomatic. The truth is that computational modeling is an assembly of many things all working in concert. This is another example of proof by brute force. In the best circumstances the claim would hold, but most modeling is hardly taking place under the best conditions. The proposition is that the fine mesh allows one to include all sorts of geometric details, so the computational world looks more like reality. This is taken to be true a priori. What isn’t usually discussed is where the challenge in the modeling actually is. Is geometric detail driving the uncertainty? What is the biggest challenge, and is the modeling focused there?
  3. In concert with these two horrible trends, you often see results presented as the product of a single massive calculation that magically unveils the mysteries of the universe. This is computing as a magic wand, and it has very little to do with science or engineering. It simply does not happen this way. Real science and engineering take hundreds or thousands of calculations. There is an immense amount of background work needed to create high-quality results. A great deal of modeling is associated with bounding uncertainty or bounding the knowledge we possess. A single calculation is incapable of this sort of rigor and focus. If you see a single massive calculation presented as the sole evidence of work, you should smell and call “bullshit.”
  4. One of the key elements in modern computing is the complete avoidance of discussing how the equations in the code are being solved. The notion is that this detail has no importance. On the one hand, this is evidence of progress; our methods for solving equations are pretty damn good. The methods and the code itself are still immensely important details, and they constitute part of the effective model. There seems to be a mentality that the methods and codes are so good that this sort of thing can be ignored: all one needs is a sufficiently fine mesh, and the results are pristine. This is almost always false. What this almost willful ignorance shows is a lack of sophistication. The methods are immensely important to the results, and we are a very long way from being able to ignore this detail in the way that is now rampant. The powers that be want you to believe that the method disappears from importance because the computers are so fast. Don’t fall for it.
  5. The George Box maxim about models being wrong, but useful, is essential to keep in mind. This maxim is almost uniformly ignored by the high-performance computing bullshit machine. The politically correct view is that the super-fast computers will solve the models so accurately that we can stop doing experiments. The truth is that eventually, if we are doing everything correctly, the models will be solved with great accuracy and their incorrectness will be made evident. I strongly suspect that we are already there in many cases; the models are being solved too accurately, and the real answer to our challenges is building new models. Model building as an enterprise is being systematically disregarded in favor of chasing faster computers. We need far greater balance and a focus on building better models worthy of the computers they are being solved on. We need to build the models that are needed for better science and engineering befitting the work we need to do.
  6. Calculational error bars are an endangered species. We never see them in practice even though we know how to compute them. They should simply be a routine element of modern computing. They are almost never demanded by anyone, and their absence never precludes publication. It certainly never precludes a calculation being promoted as marketing for computing. If I were cynically minded, I might even say that error bars, when present, get in the way of marketing the calculation. The implicit message in the computing marketing is that the calculations are so accurate that they are basically exact, with no error at all. If you don’t see error bars or some explicit discussion of uncertainty, you should see the calculation as flawed, and potentially as simple bullshit. (A minimal sketch of how such error bars can be estimated follows this list.)
  7. One way for a calculation to seem really super valuable is to declare that it is direct numerical simulation (DNS). Sometimes this is an utterly valid designator. The other term that follows DNS is “first principles.” Each of these terms seeks to endow the calculation with a legitimacy that it may or may not deserve. One of the biggest problems with DNS is the general lack of evidence for quality and legitimacy. A broad swath of the technical world seems to be OK with treating DNS as equivalent to (or even better than) experiments. This is tremendously dangerous to the scientific process. DNS and first-principles calculations are still based on solving a model, and models are always wrong. This doesn’t mean that DNS isn’t useful, but its utility needs to be proven and bounded by uncertainty.
  8. Most press releases are rather naked in the implicit assertion that the bigger computer gives a better answer. This is treated as being completely axiomatic. As such, no evidence is provided to underpin the assertion. Usually some colorful graphics or beautifully rendered color movies accompany the calculation. Their coolness is all the proof we need. This is not science or engineering, even though this mode of delivery dominates the narrative today.
  9. Modeling is the use of mathematics to connect reality to theory and understanding. Mathematics is translated into methods and algorithms implemented in computer code. It is ironic that the mathematics forming the bridge between the physical world and the computer is increasingly ignored by science. Applied mathematics has been a tremendous partner for physics, engineering, and computing throughout the history of computational science. This partnership has waned in priority over the last thirty years. Less and less applied math is called upon; it is being replaced by computer programming or domain science and engineering. Our programs seem to think that the applied math part of the problem is basically done. Nothing could be further from the truth.
  10. A frequent way of describing a computation is to let the mesh define the solution. Little else is given about the calculation, such as the equations being solved or how the equations are being approximated. Frequently, the fact that the solutions are approximate is left out. This fact is damaging to the accuracy narrative of massive computing. The designed message is that the massive computer is so powerful that the solution to the equations is effectively exact, and that the equations themselves basically describe reality without error. All of this is in service of saying computing can replace experiments, or real-world observations. The entire narrative is anathema to science and engineering, doing each a great disservice.
  11. Computational science is often described in terms that are not consistent with the rest of science. We act like it is somehow different in a fundamental way. Computers are just tools for doing science, allowing us to solve models of reality far more generally than analytical methods. With all of this power comes a lot of tedious detail needed to do things with quality. This quality comes from the skillful execution of the entire chain of activities described at the beginning of this post. These details all need to be done right to get good results. One of the biggest problems in the current computing narrative is ignorance of the huge set of activities bridging a model of reality and the computer itself. The narrative wants to ignore all of this because it diminishes the sense that these computers are magical in their ability. The power isn’t magic, it is hard work; success is not a foregone conclusion, and everyone should ask for evidence, not take their word for it.
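As promised in item 6, here is a minimal sketch of how a calculation error bar can be attached to a quantity of interest using Richardson extrapolation and Roache’s grid convergence index (GCI). The three input values are hypothetical results from meshes refined by a factor of two, and the safety factor is the conventional choice for a three-grid study.

```python
import numpy as np

def gci_error_bar(f_coarse, f_medium, f_fine, r=2.0, safety=1.25):
    """Richardson extrapolation plus a GCI-style error bar for the fine-grid value."""
    # Observed convergence order from the three-grid sequence.
    p = np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)
    # Extrapolated estimate of the mesh-converged value.
    f_extrap = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    # Conservative numerical-error bound on the fine-grid result.
    gci = safety * abs(f_fine - f_medium) / (r**p - 1.0)
    return f_extrap, gci, p

f_extrap, err, p = gci_error_bar(0.9710, 0.9925, 0.9981)   # made-up results
print(f"fine-grid value 0.9981 +/- {err:.4f}, observed order {p:.2f}")
```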

Taking the marketing narrative at its word is injurious to high-quality science and engineering. The narrative seeks to defend the idea that buying these super expensive computers is worthwhile and magically produces great science and engineering, and that the path to advancing the impact of computational science flows dominantly through computing hardware. This is simply a deeply flawed and utterly naïve perspective. Great science and engineering is hard work and never a foregone conclusion. Getting high-quality results depends on spanning the full range of disciplines associated with computational science, adaptively, as evidence and results demand. We should always ask hard questions of scientific work and demand hard evidence of claims. Press releases and tweets are renowned for being cynical advertisements lacking all rigor and substance.

One reason for elaborating upon things that are superficially great, but really terrible, is cautionary. The current approach allows shitty work to be viewed as successful by receiving lots of attention. The bad habit of selling horrible low-quality work as success destroys progress and undermines accomplishing truly high-quality work. We all need to be able to recognize these horrors and strenuously reject them. If we start to effectively police ourselves, perhaps this plague can be driven back and progress can flourish.

The thing about chameleoning your way through life is that it gets to where nothing is real.

― John Green



The Piecewise Parabolic Method (PPM)

A method which can solve this problem well should be able to handle just about anything which can arise in one-dimensional pure hydrodynamic flow. PPM is such a scheme.

– P.R. Woodward

Colella, Phillip, and Paul R. Woodward. “The piecewise parabolic method (PPM) for gas-dynamical simulations.” Journal of Computational Physics 54, no. 1 (1984): 174-201.

This is one of the most important methods in the early history of the revolutionary developments in solving hyperbolic PDEs in the 1980s. For a long time this was one of the best methods available for solving the Euler equations, and it still outperforms most of the methods in common use today. For astrophysics it is the method of choice, and it also made major inroads into the weather and climate modeling communities. In spite of having over 4000 citations, I can’t help but think that this paper wasn’t as influential as it could have been. This is saying a lot, but I think it is completely true. This is partly due to its style and its relative difficulty as a read. In other words, the paper is not as pedagogically effective as it could have been. The most complex and difficult-to-understand version of the method is presented in the paper. The paper could have used a different approach to great effect, perhaps providing a simplified version to introduce the reader and delivering the more complex approach as a specific instance. Nonetheless, the paper was a massive milestone in the field.

It was certainly clear that high-order schemes were not necessarily bringing greater accuracy so physics would have to step in to shore up the failing numerics.

– Jay Boris

Part of the problem with the paper is the concise and compact introduction to the two methods used in the accompanying review article, PPMLR and PPMDE. The LR stands for Lagrange-remap, where the solution is computed on a Lagrangian grid and then remapped back to the original grid for an ultimately Eulerian solution. Both the Lagrangian and Eulerian grids are unevenly spaced, and this results in far more elaborate formulas. As a result it is hard to recognize the simpler core method lurking inside the pages of the paper. The DE stands for direct Eulerian, which can be very simple for the basic discretization. Unfortunately, the complication for the DE flavor of PPM comes with the Riemann solver, which is far more complex in the Eulerian frame. The Lagrangian-frame Riemann solver is very simple and easy to evaluate numerically. Not so for the Eulerian version, which has many special cases and requires some exceedingly complex evaluations of the analytical structure of the Riemann solution. Advances that occurred later greatly simplified and clarified this presentation. This is a specific difficulty of being an early adopter of methods: the clarity of presentation and understanding is dimmed by purely narrative effects. Many of these shortcomings have been addressed in the recent literature discussed below.

The development of the PPM gas dynamics scheme grew out of earlier work in the mid 1970s with Bram van Leer on the MUSCL scheme. The work of Godunov inspired essential aspects of MUSCL.

– Paul R. Woodward

The paper had a host of interesting and important sub-techniques for solving hyperbolic PDEs. Many of these “bells” and “whistles” are not part of the repertoire of most methods today. The field actually suffers to some extent from not adopting most of these strategies for attacking difficult problems. It is useful to list the special approaches along with a description and context that might make them easier to adopt more broadly. The paper is written in such a way that these algorithms seem specifically tailored to PPM, but they are far broader in utility. Generalizing their use would serve the quality of numerical solutions immensely. To a large extent Phil Colella extended many of these techniques to the piecewise linear methods that form the standard approach in production codes today.

  • Shock flattening – Shocks are known to be horrifically nonlinear and difficult, both forgiving and brutal. This technique acknowledges the issue by blending in a bit of the safe first-order method with the nonlinearly adaptive high-order method when a strong shock is encountered. The principle is to use a bit more first-order dissipation when the shock is strong, because oscillations can otherwise escape. For weak shocks this is unnecessary. Rather than penalize the solution everywhere, the method is made locally more dissipative where the danger is greatest.
  • Contact steepening – Contact discontinuities will smear out without limit if dissipation is applied to them. In other words, errors made in their solution are with you forever. To keep this from happening, the amount of dissipation applied at these waves is minimized. This sort of technique must be applied with great caution because at a shock wave it is exceedingly dangerous. Additionally, the method used to limit the dissipation can produce a very good interface tracking method that is far simpler than the elaborate methodology using interface geometry. It is a useful, pragmatic way to move interfaces with little dissipation and relative simplicity. This basic approach is the actual interface tracking method in many production codes today, although few use methods as elaborate or as high quality as the one in the original PPM.
  • Extra dissipation – Monotonicity preservation and Riemann solvers are two elaborate ways of producing dissipation while achieving high quality. For very nonlinear problems this is not enough. The paper describes several ways of adding a little bit more: one of these is the shock flattening, and another is an artificial viscosity. Rather than use the classical von Neumann-Richtmyer approach (which really is more like the Riemann solver), they add a small amount of viscosity using a technique developed by Lapidus that is appropriate for conservation-form solvers. There are other techniques, such as grid jiggling, that only really work with PPMLR and may not have any broader utility. Nonetheless, aspects of the thought process may be useful.
  • High-order edges – One of PPM’s greatest virtues is the use of formally higher-order principles in the method. Classic PPM uses fourth-order approximations for its edge values. As a result, as the Courant number goes to zero, the method becomes formally fourth-order accurate. This is a really powerful aspect of the method. It is also one of the clear points where the method can be generalized: we can use whatever high-order edge value we like for PPM. One of the maxims to take from this approach is the power of including very high-order discretizations even within otherwise lower-order approximation methods. The impact of the high order is profoundly positive.
  • Steepened edge values – For horrible nonlinear problems, the simple use of high-order differencing is not advisable. The high-order approximation can be decomposed into several pieces, and the approximation can be built more carefully and appropriately for complex problems. In this way, the high-order edge values are a bit hierarchical. This is partially elaboration, but it also reflects a commitment to quality that is eminently laudable.

Generalized monotonicity – PPM uses a parabola, and as a result the well-known limiters for linear profiles do not provide monotone results. The limiter for PPM therefore takes two steps instead of the single step needed for a linear profile. I don’t like the original presentation in the paper and have recast the limiter into an equivalent algorithm that uses two applications of the median function per edge. The first step makes sure the edge value being used is bounded by the cell averages adjacent to it. The second step asks whether the parabola is monotone in the cell and, if it is not, limits it to one that is by construction. A sketch of this two-median form is given below.
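Here is a sketch, in my own notation rather than the exact form of the 1984 paper, of the pieces discussed above: fourth-order edge values, the two-median rewrite of the monotonicity limiter, and the parabola built from a cell average and its two limited edge values. The periodic indexing is simply a convenience for the sketch.

```python
import numpy as np

def median3(a, b, c):
    # Elementwise median of three values.
    return np.maximum(np.minimum(a, b), np.minimum(np.maximum(a, b), c))

def ppm_reconstruct(q):
    """Limited PPM parabolas from an array of cell averages q (1D, periodic)."""
    qm1, qp1, qp2 = np.roll(q, 1), np.roll(q, -1), np.roll(q, -2)
    # Fourth-order interpolation of the value at the right edge of each cell.
    qR = (7.0 * (q + qp1) - (qm1 + qp2)) / 12.0
    qL = np.roll(qR, 1)                      # left edge = right edge of the neighbor
    # Step 1: bound each edge value by the adjacent cell averages.
    qL = median3(q, qL, qm1)
    qR = median3(q, qR, qp1)
    # Step 2: make sure the parabola is monotone within the cell.
    qL = median3(q, qL, 3.0 * q - 2.0 * qR)
    qR = median3(q, qR, 3.0 * q - 2.0 * qL)
    # Parabola in cell i, xi in [0, 1]: q(xi) = qL + xi*(qR - qL + q6*(1 - xi)),
    # with q6 = 6*(q - (qL + qR)/2), the form used by Colella and Woodward.
    q6 = 6.0 * q - 3.0 * (qL + qR)
    return qL, qR, q6
```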

Before launching into a systematic description of the PPM algorithm, it is worthwhile to first explain the goals and constraints that have influenced its design. These are:

  1. Directional operator splitting.
  2. Robustness for problems involving very strong shocks.
  3. Contact discontinuity steepening.
  4. Fundamental data in the form of cell averages only.
  5. Minimal dissipation.
  6. Numerical errors nevertheless dominated by dissipation, as opposed to dispersion.
  7. Preservation of signals, if possible, even if their shapes are modified, so long as they travel at roughly the right speeds.
  8. Minimal degradation of accuracy as the Courant number decreases toward 0.

– Paul R. Woodward

Over time PPM has mostly been interpreted monolithically, as opposed to as a set of basic principles. PPM is really a wonderful foundation, with the paper providing only a single instantiation of a panoply of powerful methods. This aspect has come to the fore more recently, but it would have served the community better far earlier. Some of these comments are the gift of 20/20 hindsight. A great deal of the pedagogical clarity with regard to Godunov-type methods is the result of their success, and only came into common use in the late 1980s, if not the 1990s. For example, the language to describe Riemann solvers with clarity and refinement hadn’t been developed by 1984. Nevertheless, the monolithic implementation of PPM has been a workhorse method for computational science. Through Paul Woodward’s efforts it is often the first real method to be applied to brand new supercomputers, and it generates the first scientific results of note on them.

The paper served as a companion to the adjacent paper that reviewed the performance of numerical methods for strong shocks. The review was as needed as it was controversial. The field of numerical methods for shock waves was set to explode in importance and creative energy. The authors, Phil Colella and Paul Woodward, would both play key roles in the coming revolution in methods. Woodward had already made a huge difference by spending time in Europe with Bram van Leer. Paul helped Bram implement advanced numerical methods using methodologies Paul learned at the Livermore labs. Bram exposed Paul to his revolutionary ideas about numerical methods, chronicled in Bram’s famous series of papers. The ideas therein were immensely influential in changing how hyperbolic equations are solved.

One of the great successes in numerical methods for hyperbolic conservation laws has been the use of nonlinear hybridization techniques, known as limiters, to maintain positivity and monotonicity in the presence of discontinuities and underresolved gradients.

– Michael Sekora and Phil Colella

Bram’s ideas created a genuine successor to Godunov’s method. The methods he created were novel in producing a nonlinearly adaptive numerical method, one that would adapt locally to the nature of the solution. This overcame the limitations of Godunov’s theorem regarding the accuracy of numerical methods for hyperbolic equations. Bram’s ideas were geometric in nature, and reflected the approach of the physicist. Paul, being a physicist, gravitated to the same view and added a genuine dose of pragmatism. Bram also wasn’t the first person to overcome Godunov’s theorem. He may have actually been the third (or fourth). The first was most likely Jay Boris, who invented the flux-corrected transport (FCT) method in 1971. In addition, Kolgan in the Soviet Union and Ami Harten might lay claims to overcoming Godunov’s barrier theorem. Some of these different methods played a role in the comparison in the review article by Woodward and Colella. In the light of history, many of the differences in the results were due more to the approaches to systems of equations and related difficulties than to the nonlinearly adaptive principles in the methods.

The strong, fluid-dynamic shock problem had become the number one computational roadblock by the fall of 1970 so I was urged to concentrate on the problem full time, finally developing the FCT convection algorithm in the winter.

– Jay Boris

In totality, the methods developed by three or four men in the early 1970s set the stage for revolutionary gains in method performance. At the time, the differences in the methods were fiercely debated and hotly contested. The reviews of the papers were contentious and resulted in bitter feelings. Looking back with the virtues of time and perspective, several things stand out. All the methods represented a quantum leap in performance and behavior over the methods available before. The competition and the hotly contested ideas probably helped to spur developments, but they ultimately became counterproductive as the field matured. It seems clear that the time was ripe for the breakthrough. There was a combination of computers, mathematics, and applications that seeded the developments. For the same basic idea to arise independently in a short period of time means the ideas were dangling just out of reach. The foundations for the breakthrough were common and well known.

Paul Woodward is an astrophysicist, and PPM found its most common and greatest use in his field. For a long time the nature of PPM’s description meant that the precise version of the method described in the canonical 1984 paper was the version used in other codes. Part of this results from PPM being a highly tuned, high-performance method with a delicate balance between high-resolution methodology and the various safety measures needed for difficult, highly nonlinear problems. In a manner of speaking, it is a recipe that produces really great results. Imagine PPM as something akin to the Toll House chocolate chip cookie recipe. The cookies you get by following the package exactly are really, really good. At the same time, you can modify the recipe to produce something even better while staying true to the basic framework. The basic cookies will get you far, but with some modification you might just win contests or simply impress your friends. PPM is just like that.

At this point I’ve said rather little about the method itself. The core of the method is a parabolic representation of the solution locally in a cell. The method is totally one-dimensional in nature. This parabola is determined by the integral average in a cell and the point values of the quantity at the edges of the cell. What is not so widely appreciated is the connection of PPM to the fifth scheme in Van Leer’s 1977 paper. That scheme is interesting because it evolves both cell averages, like any finite volume code, and the point values at the cell boundaries. It is compact and supremely accurate compared with other third-order methods. PPM is a way of getting some of the nice properties of that scheme from a finite volume method: rather than evolving the point values on the edges, they are recovered from the finite volumes.

Rather than belabor the technical details of PPM, I’ll point to the recent trends that have extended the method beyond its classical form. One of the original authors has used the parabola to represent valid extrema in the solution rather than damping them by forcing monotonicity. I did the same thing in my own work, largely paralleling Phil’s. In addition, the change in the high-order edge reconstruction has been recognized and implemented to good effect by Phil, Paul, myself, and others. The connection to Riemann solvers has also been generalized. All of this reflects the true power of the method when projected onto the vast body of work that arose after the publication of this paper. Even today PPM remains one of the very best methods in existence, especially with the modifications recently introduced.

Over the years I’ve come to know both Phil and Paul personally and professionally. In the numerical solution of hyperbolic PDEs both men have played significant roles and witnessed history being made; they helped make CFD what it is today. It’s always an interesting experience to read someone’s work and then come to know the person. A big part of a deeper appreciation is finding out the underlying truths of the paper. You start to realize that the written, published record is a poor reflection of the real story. Some of this comes through the hard work of reading and re-reading a paper, then deriving everything in it for yourself. A deeper appreciation came from expressing the same method in my own language and mathematics, and finally taking each of these expressions into conversations with the authors, who clarified most of the remaining questions. The academic literature is a scrubbed and largely whitewashed reflection of reality. What we are allowed to read and see is not the truth, but an agreed-upon distortion.

When the numerics fails, substitute the physics.

– Steve Zalesak

the scientists who use such algorithms must have both input to and knowledge of their design. There may come a day when we no longer hold to this view, when the design of such algorithms can be left to expert numerical analysts alone, but that day has not yet arrived.

– Steve Zalesak

Woodward, Paul, and Phillip Colella. “The numerical simulation of two-dimensional fluid flow with strong shocks.” Journal of Computational Physics 54, no. 1 (1984): 115-173.

Carpenter Jr., Richard L., Kelvin K. Droegemeier, Paul R. Woodward, and Carl E. Hane. “Application of the piecewise parabolic method (PPM) to meteorological modeling.” Monthly Weather Review 118, no. 3 (1990): 586-612.

Woodward, Paul R. “Piecewise-parabolic methods for astrophysical fluid dynamics.” In Astrophysical Radiation Hydrodynamics, pp. 245-326. Springer Netherlands, 1986.

Godunov, S. K. “A finite difference method for the computation of discontinuous solutions of the equations of fluid dynamics.” Sbornik: Mathematics 47, no. 8-9 (1959): 357-393.

Plewa, Tomasz, and Ewald Mueller. “The consistent multi-fluid advection method.” arXiv preprint astro-ph/9807241 (1998).

Van Leer, Bram. “Towards the ultimate conservative difference scheme. V. A second-order sequel to Godunov’s method.” Journal of Computational Physics 32, no. 1 (1979): 101-136.

Van Leer, Bram. “Towards the ultimate conservative difference scheme. IV. A new approach to numerical convection.” Journal of Computational Physics 23, no. 3 (1977): 276-299.

Bell, John B., Phillip Colella, and John A. Trangenstein. “Higher order Godunov methods for general systems of hyperbolic conservation laws.” Journal of Computational Physics 82, no. 2 (1989): 362-397.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit Large Eddy Simulation: Computing Turbulent Fluid Dynamics. Cambridge University Press, 2007.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity- and extrema-preserving methods through adaptive nonlinear hybridizations.” Journal of Computational Physics 225, no. 2 (2007): 1827-1848.

Rider, William J. “Reconsidering remap methods.” International Journal for Numerical Methods in Fluids 76, no. 9 (2014): 587-610.

Kolgan, V. P. “Application of the principle of minimum values of the derivative to the construction of finite-difference schemes for calculating discontinuous gasdynamics solutions.” TsAGI, Uchenye Zapiski 3, no. 6 (1972): 68-77.

Boris, J. P. “A fluid transport algorithm that works.” Proceedings of the Seminar Course on Computing as a Language of Physics, 2-20 August 1971, International Centre for Theoretical Physics, Trieste, Italy.



We are all responsible for this mess; it is everyone’s fault

Never attribute to malevolence what is merely due to incompetence

― Arthur C. Clarke

A year ago, I sat in one of my managers’ offices seething in anger. After Trump’s election victory, my emotions shifted seamlessly from despair to anger. At that particular moment, it was anger that I felt. How could the United States possibly have elected this awful man President? Was the United States so completely broken that Donald Trump was a remotely plausible candidate, much less victor?

Is ours a government of the people, by the people, for the people, or a kakistocracy rather, for the benefit of knaves at the cost of fools?

― Thomas Love Peacock

Apparently, the answer is yes, the United States is that broken. I said something to the effect that we too are to blame for this horrible moment in history. I knew that both of us had voted for Clinton, but felt that we played our own role in the election of our reigning moron-in-chief. Today, a year into this national nightmare, the nature of our actions leading to this unfolding national and global tragedy is taking shape. We have grown to accept outright incompetence in many things, and now we have a genuinely incompetent manager as President. Lots of incompetence is accepted daily without even blinking; I see it every single day. We have a system that increasingly renders the competent incompetent through brutish compliance with directives born of broad-based societal dysfunction.

In a hierarchy, every employee tends to rise to his level of incompetence.

― Laurence J. Peter

What does the “Peter Principle” say about the United States? The President is incompetent. Not just a little bit; he is utterly and completely unfit for the job he has. He is the living caricature of a leader, not actually one. His whole shtick is loudly and brashly sounding like what a large segment of the population thinks a leader should be. Under his leadership, our government has descended into the theatre of the absurd. He doesn’t remotely understand our system of government, economics, foreign policy, science, or really anything other than marketing himself. He is an utterly self-absorbed anti-intellectual, completely lacking empathy and the basic knowledge we should expect him to have. The societal destruction wrought by this buffoon-in-chief is profound. Our most important institutions are being savaged. Divisions in society are being magnified, and we stand on the brink of disaster. The worst thing is that this disaster is virtually everyone’s fault; whether you stand on the right or the left, you are to blame. The United States was in a weakened state and the Trump virus was poised to infect us. Our immune system was seriously compromised and failed to reject this harmful organism.

I love the poorly educated.

– Donald Trump

Sorry losers and haters, but my I.Q. is one of the highest -and you all know it! Please don’t feel so stupid or insecure, it’s not your fault.

– Donald Trump

Trump is making everything worse. One of the keys to understanding the damage being done to the United States is seeing the poor condition of our democracy prior to the election. A country doesn’t just lurch toward such a catastrophic decision overnight; we were already damaged. In a sense, the body politic was already weakened and ripe for infection. We have gone through a period of more than 20 years of massive dysfunction led by the dismantling of government as a force for good in society. The Republican party is committed to small government, and part of their approach is to attack it. Government is viewed as an absolute evil. Part of the impact of this is the loss of competence in governing. Any governmental incompetence supports their argument about the need to diminish it. The result has been a steady march toward dysfunction and poor performance, along with deep-seated mistrust, if not outright disdain.

All of this stems from deeper wounds left in our history. The deepest wound is the Civil War and the original national sin of slavery. The perpetuation of institutional racism is one of the clearest forces driving our politics. We failed to heal the wounds of this war, and continue to wage a war against blacks, first through the scourge of Jim Crow laws, and now with the war on drugs and its mass incarceration. Our massive prison population is driven by our absurd and ineffective efforts to combat drug abuse. We actively avoid taking actions that would be effective in battling drug addiction. While it is a complete failure as a public health effort, it is a massively effective tool of racial oppression. More recent wounds were left by the combination of the Vietnam War and the civil rights movement in the 1960s, along with Watergate and Nixon’s corruption. The Reagan revolution and the GOP attacks on the Clintons were their revenge for progress. In a very real way the country has been simmering in action and reaction for the last 50 years. Trump’s election was the culmination of this legacy and our inability to keep the past as history.

Government exists to protect us from each other. Where government has gone beyond its limits is in deciding to protect us from ourselves.

― Ronald Reagan

One of the hardest aspects of accepting what is going on is understanding how Trump’s opposition led to his victory. The entire body politic is ailing. The Republican party is completely inept at leading, unable to govern. This shouldn’t come as any surprise; the entire philosophy of the right is that government is bad. When your a priori assumption is that government is inherently bad, the nature of your governance is half-hearted. A natural outgrowth of this philosophy is rampant incompetence in governance. Couple this to a natural tendency toward greed as a core value, and you have the seeds of corruption. Corruption and incompetence are an apt description of the Republican party. The second part of this toxic stew is hate and fear. The party has spent decades stoking racial and religious hatred, and using fear of crime and terrorism to build their base. The result is a governing coalition that cannot govern at all. They are utterly incompetent, and no one embodies their incompetence more than the current President.

There is a cult of ignorance in the United States, and there has always been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’

― Isaac Asimov

The Democrats are no better, aside from some basic human capacity for empathy. For example, the Clintons were quite competent, but competence is something we as a nation don’t need any more, or even believe in. Americans chose the incompetent candidate for President over the competent one. At the same time, the Democrats feed into the greedy and corrupt nature of modern governance with a fervor exceeded only by the Republicans. They are what my dad called “limousine liberals,” catering to the rich and powerful first and foremost while appealing to some elements of compassion (it is still better than the “limousine douchebags” on the right). As a result, the Democratic party ends up being only slightly less corrupt than the Republican party while offering none of the cultural red meat that drives the conservative culture warriors to the polls.

In individuals, insanity is rare; but in groups, parties, nations and epochs, it is the rule.

― Friedrich Nietzsche

The thing that sets the Democratic party back is a complete lack of unity or discipline. They are a fractious union of special interests that can barely tolerate one another. They cannot unify to help each other, and each faction is a single-issue group that can’t be bothered to form an effective coalition. The result is a party that is losing despite holding a majority of the votes. Many Democratic voters can’t be bothered to vote at all. This losing coalition has let GOP-driven fear and hate win, along with a systematic attack on our core values as a democratic republic (vast sums of money in politics, assaults on voting rights, voter suppression, and gerrymandering). They are countered by a Republican party that is unified and supportive of its factions. The different factions work together to form a winning coalition, in large part through accepting each other’s extreme views as part of their rubric of beliefs.

While both parties cater to the greedy needs of the rich and powerful, the difference between them shows most clearly in their approach to social issues. The Republicans appeal to traditional values, along with enough fear and hate to bring the voters out. They stand in the way of scary progress and the future as the guardians of the past. They are the force that defends American values, which means white people and Christian values. With the Republicans, you can be sure that the Nation will treat those we fear and hate with violence and righteous anger, without regard to effectiveness. We will have a criminal justice system that exacts vengeance on the guilty but does nothing to reform or treat criminals. The same forces provide just enough racially biased policy to keep the racists in the Republican ranks happy.

The Democrats stand for a progressive and empathic future, represented by many different groups, each with its own specific grievances. One of the biggest problems on both sides is intolerance. This might be expected on the right; after all, white supremacy is hardly a tolerant world view. The left helps the right out by being even less tolerant. The left’s factions cannot tolerate any dissent, on any topic. We hear endless whining about micro-aggressions and cultural appropriation, along with demands for political correctness. They are indeed “snowflakes” who are incapable of debate and standing up for their beliefs. When they don’t like what someone has to say, they attack them and completely oppose the right to speak. The lack of tolerance on the left is one of the forces that powered Trump to the White House. It did this through a loss of any moral high ground, and the production of a divided and ineffective liberal movement. The left has science, progress, empathy and basic human decency on its side, yet continues to lose. A big part of their losing strategy is the failure to support each other and to engage in an active dialog on the issues they care so much about.

A dying culture invariably exhibits personal rudeness. Bad manners. Lack of consideration for others in minor matters. A loss of politeness, of gentle manners, is more significant than is a riot.

― Robert A. Heinlein

The biggest element in Trump’s ascension to the Presidency is our acceptance of incompetence in our leaders. We accept incompetence too easily; incompetence is promoted across society. We have lost the ability to value and reward expertise and competence. Part of this can be blamed on the current culture where marketing is more important than substance. Trump is pure marketing. His entire brand is himself, sold to people who have lost the ability to smell the con. A big part of the appeal of Trump was the incompetence of governing that permeates the Republican view.

This is where the incompetence and blame come to work. Success at work depends little on technical success, because technical success can be faked. What has become essential at work is compliance with rules and control of our actions. Work is not managed; our compliance with rules is managed. Increasingly, the incompetence of the government is breeding incompetence at my work. The government agency that primarily runs my Lab is a complete disaster. We have no leadership in either management or science. Both are wrought by the destructive tendency of the Republican party that makes governing impossible. They are a party of destruction, not creation. When Republicans are put in power they can’t do anything; their entire being is devoted to taking things apart. The Democrats are no better because of their devotion to compliance, regulation and compulsive rule-following without thought. This tendency is paired with liberals’ inability to tolerate any discussion or debate over a litany of politically correct talking points.

The management incompetence has been brewing for years. Our entire management construct is based on a lack of trust. The Lab itself is not to be trusted. The employees are not to be trusted. We are not trusted by the left or the right, albeit for different reasons. The net result of all this lack of trust is competence being subservient to mistrust-driven compliance with oversight. We are made to comply and heel to the will of the government. This is the will of a government that is increasingly incompetent and unfit to run anything, much less a nuclear weapons enterprise! The management of the Lab is mostly there to launder money and drive the workforce into a state of compliance with all directives. The actual accomplishment of high-quality technical work is the least important thing we do. Compliance is the main thing. We are managed to never, ever fuck up. If you are doing anything of real substance and performing at a high level, fuck-ups are inevitable. The real key to the operation is that technical competence can be faked. Our false confidence in the competent execution of our work is a localized harbinger of “fake news”.

Fox treats me well, it’s that Fox is the most accurate.

– Donald Trump

We have non-existent peer review, and this leads to slack standards. Our agency tells us that we cannot fail (really, we effectively have to succeed 100% of the time). The way to not fail is to lower our standards, which we have done in response. We aid our lowered standards by castrating the peer review we ought to depend on. We now have Labs that cannot stand an honest, critical peer review because of the consequences. In addition, we have adopted foolish financial incentives for executive management that compound the problems. Since executive bonuses are predicated on successful reviews, the reviews have become laughable. Reviewers don’t dare raise difficult issues unless they never want to be invited back. We are now graded on a scale where everyone gets an “A” without regard to actual performance. Our excellence has become a local version of “fake news”.

At the very time that we need to raise our standards, we are allowing them to plummet lower and lower. Our reviews have become focused on spin and marketing of the work. Rather than showing good work, raising challenges, and receiving honest feedback, we craft a message of “everything is great, and there is nothing to worry about.” Let’s be clear, the task of caring for nuclear weapons without testing them is incredibly challenging. To do this task correctly we need to be focused on raising our level of excellence across the board in science and engineering. Our technical standards should be higher than ever because of the difficulty and importance of this enterprise. Requiring 100% success might seem to be a way to do this, but it isn’t.

If you are succeeding 100% of the time, you are not applying yourself. When you are working at a place where you are mostly succeeding but occasionally failing (and learning and growing), the outcomes are optimal. This is true in sports, business, science and engineering. Organizations are no different: to do the best work possible, you need to fail and be working at the edge of failure. Ideally, we should be doing our work in a mode where we succeed 70-80% of the time. Our incompetent governance and leadership do not understand how badly they are undermining the performance of this vital enterprise. So, the opposite has happened, and the people leading us in the government are too fucking stupid to realize it. Our national leadership has become more obsessed with appearances than substance. All they see are the 100% scores, and they conclude everything is awesome while our technical superiority crumbles. Greatness in America today is defined by declaring greatness and refusing to accept evidence to the contrary.

Look at the F-35 as an example of our current ability to execute a big program. This aircraft is a completely corrupt, massive shit storm. It is a giant, hyper-expensive fuckup. Rather than a working aircraft, the F-35 was a delivery vehicle for pork-barrel spending. God knows how much bullshitting went into the greenlighting of the program over the years. The bottom line is that the F-35 costs a huge amount of money while being a complete failure as a weapons system. My concern is that the F-35 is an excellent representative of our current technical capability. If it is, we are in deep trouble. We are expensive, corrupt and incompetent (sounds like a description of the President!). I’m very glad that we never ask our weapons labs to fly. Given our actual ability, we can guess the result.

This is where we get to the core of the ascent of Trump. When we lower our standards for leadership, we get someone like Trump. The lowering of standards has taken place across the breadth of society. This is not simply national leadership, but corporate and social leadership. Greedy, corrupt and incompetent leaders are increasingly tolerated at all levels of society. At the Labs where I work, the leadership has to say yes to the government, no matter how moronic the direction is. If you don’t say yes, you are removed and punished. We now have leadership that is incapable of engaging in active discussion about how to succeed in our enterprise. The result is labs that simply take the money and execute whatever work they are given, without regard for the wisdom of the direction. We now have the blind leading the spineless, and the blind are walking us right over the cliff. Our dysfunctional political system has finally shit the bed and put a moron in the White House. Everyone knows it, and yet a large portion of the population is completely fooled (or simply too foolish or naïve to understand how bad the situation is).

We are a paper tiger; a real opponent may simply destroy us. Our national superiority, military and technical, may already be gone. We are vastly confident of our outright superiority, but this superiority requires our nation to continually bring its best to the table. We have almost systematically undermined our ability to apply our best to anything. We’ve already been attacked and defeated in the cyber-realm by Russia. Our society and democracy were assaulted by the Russians, and we were routed. Our incompetent governance has done virtually nothing in response. The seeds of our defeat have been sown for years all across our society. We are too incompetent to even realize how vulnerable we are.

I will admit that this whole line of thought might be wrong. The Labs where I work might be local hotbeds of incompetent management, and what we see locally might not be indicative of broader national trends. That seems very unlikely. What is more terrifying is the prospect that the places where I work are comparatively well managed. If that is true, then it is completely plausible for us to have an incompetent President. So, the reality we have is stark incompetence across society that has set the stage for national tragedy. Our institutions and broad societal norms are under siege. Every single day of the Trump administration lessens the United States’ prestige. The world had counted on the United States for decades, but cannot any longer. We have made a decision as a nation that disqualifies us from a position of leadership. The Republican party has the greatest responsibility for this, but the Democrats are not blameless. Our institutional leadership shares the blame too. Places like the Labs where I work are being destroyed one incompetent step at a time. All of us need to fix this.

We have a walking, talking, tweeting example of our incompetence leading us, and it is everyone’s fault. We all let this happen. We are all responsible. We own this.

Ask not what your country can do for you; ask what you can do for your country.

― John F. Kennedy






How to properly use direct numerical simulations (DNS)

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

― Abraham H. Maslow

Nothing stokes the imagination for the power of computing to shape scientific discovery like direct numerical simulation (DNS). Imagine using the magic of the computer to unveil the secrets of the universe. We simply solve the mathematical equations that describe nature at immense precision, and magically truth comes out the other end. DNS also stokes the demand for computing power: the bigger the computer, the better the science and discovery. As an added bonus, the visualizations of the results are stunning, with almost Hollywood-quality special-effects appeal. It provides the perfect sales pitch for the acquisition of the new supercomputer and everything that goes with it. With a faster computer, we can just turn it loose and let the understanding flow like water bursting through a dam. With the power of DNS, the secrets of the universe will simply submit to our mastery!

If only science were that easy. It is not, and this sort of thing is a marketing illusion for the naïve and foolish.

The saddest thing about DNS is the tendency for scientists’ brains to almost audibly click into the off position when it is invoked. All one has to say is that their calculation is a DNS, and almost any question or doubt leaves the room. No need to look deeper or think about the results; we are solving the fundamental laws of physics with stunning accuracy! It must be right! They will assert that “this is a first-principles calculation,” and predictive at that. Simply marvel at the truths waiting to be unveiled in the sea of bits. Add a bit of machine learning or artificial intelligence to navigate the massive datasets produced by DNS (the datasets are so fucking massive, they must have something good in them! Right?) and you have the recipe for the perfect bullshit sandwich. How dare some infidel cast doubt or uncertainty on the results! Current DNS practice is a religion within the scientific community, and it brings an intellectual rot into the core of computational science. DNS reflects some of the worst wishful thinking in the field, where the desire for truth and understanding overwhelms good sense. A more damning assessment would be a tendency to submit to intellectual laziness when pressed by expediency or slow progress.

Let’s unpack this issue a bit and get to the core of the problems. First, I will submit that DNS is an unambiguously valuable scientific tool. A broad swath of science can benefit from DNS. We can probe our understanding of the universe in myriad ways and in phenomenal detail. On the other hand, DNS is never a substitute for observations. We do not know the fundamental laws of the universe with such certainty that their solutions provide absolute truth. The laws we know are models, plain and simple. They will always be models. As models, they are approximate and incomplete by their basic nature. This is how science works: we have a theory that explains the universe, and we test that theory (i.e., model) against what we observe. If the model reproduces the observations with high precision, the model is confirmed. This confirmation is always tentative and subject to testing against new or more accurate observations. Solving a model does not replace observations, ever, and some uses of DNS mask laziness or limitations in observational (experimental) science.

To acquire knowledge, one must study;

but to acquire wisdom, one must observe.

― Marilyn Vos Savant

One place where the issue of DNS comes to a head is validation. In validation, a code (i.e., model) is compared with experimental data for the purpose of assessing the model’s ability to describe nature. In DNS, we assume that nature is so well understood that our model can describe it in detail; the leap too far is saying that the model can replace observing nature. This presumes that the model is completely and totally validated. I find this to be an utterly ludicrous prospect. All models are tentative descriptions of reality, and intrinsically limited in some regard. The George Box maxim immediately comes to mind: “all models are wrong.” This is axiomatically true, and being wrong, models cannot be used to validate. With DNS, this is suggested as a course of action, violating the core principles of the scientific method for the sake of convenience. We should not allow this practice, for the sake of scientific progress. It is anathema to the scientific method.
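
As a minimal sketch of what a proper validation comparison involves, consider a single simulated quantity of interest compared against a measured one, each carrying its own uncertainty. The numbers below are hypothetical placeholders, not results from any real calculation or experiment; the point is only that the comparison is simulation-versus-observation, with both uncertainties in the budget.

```python
import numpy as np

# Hypothetical validation comparison for one quantity of interest.
# All numbers are placeholders, not real data.
y_exp, u_exp = 0.87, 0.05   # measured value and its (1-sigma) uncertainty
y_sim, u_sim = 0.91, 0.02   # simulated value and its estimated numerical/model uncertainty

E = y_sim - y_exp                        # model-observation discrepancy
u_val = np.sqrt(u_exp**2 + u_sim**2)     # combined (root-sum-square) uncertainty

# Crude consistency check: agreement only within the combined uncertainty band.
print(f"E = {E:+.3f}, u_val = {u_val:.3f}, consistent at 2-sigma: {abs(E) <= 2.0 * u_val}")
```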

This does not mean that DNS is not useful. DNS can produce scientific results that may be used in a variety of ways where experimental or observational results are not available. This is a way of overcoming limits on what we can tease out of nature. Using DNS this way should always come with the proviso that it is an expedient, used in the absence of observational data. Observational evidence should always be sought, and the models should always be subjected to tests of validity. The results come from assuming the model is very good and provides value, but they cannot be used to validate the model. DNS is always second best to observation. Turbulence is a core example of this principle: we do not understand turbulence; it is an unsolved problem. DNS has not yielded understanding sufficient to unveil the secrets of the universe; they are still shrouded. Part of the issue is the limitations of the model itself. Turbulence DNS almost always utilizes an unphysical model of fluid dynamics, one lacking thermodynamics and carrying infinitely fast acoustic waves. Being unphysical in its fundamental character, how can we possibly consider it a replacement for reality? Yet in a violation of common sense, driven by frustration with the lack of progress, we do this all the time.

One of the worst aspects of the entire DNS enterprise is the tendency to do no assessment of uncertainty in its results. Quite often, DNS results are delivered without any estimate of the uncertainty in the approximation or in the model. Most often no uncertainty at all is included, estimated, or even alluded to. DNS results are still numerical approximations with approximation error. The models, while detailed and accurate, are always approximations and idealizations of reality. These uncertainties must be quantified if the results are to be used for high-consequence work. If one is going to use DNS as a stand-in for experiment, this is the very least that must be done. The uncertainty assessment should also include the warning that any such validation is artificial and not based on reality. If there isn’t an actual observation available to augment the DNS in the validation, the reader should be suspicious, and the smell of bullshit should alert one to deception.
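
As a sketch of the minimum that should travel with a DNS result used this way, consider a simple uncertainty budget combining the main contributions. The individual values are hypothetical placeholders; the point is that none of them is zero and all of them should be reported.

```python
import numpy as np

# Hypothetical uncertainty budget for a single DNS-derived quantity.
# Each contribution is a placeholder value, not a real estimate.
u_numerical = 0.02   # discretization / iterative / statistical-sampling error
u_model     = 0.05   # judged adequacy of the governing equations and their idealizations
u_inputs    = 0.03   # uncertainty in physical inputs (viscosity, initial/boundary conditions)

# Root-sum-square combination, assuming the contributions are independent.
u_total = np.sqrt(u_numerical**2 + u_model**2 + u_inputs**2)
print(f"uncertainty to report alongside the DNS result: +/- {u_total:.3f}")
```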

Some of our models are extremely reliable and have withstood immense scrutiny. These models are typically the subjects of DNS. A couple of equations are worth discussing in depth: Schrödinger’s equation for quantum physics, molecular and atomic dynamics, and the Navier-Stokes equations for turbulence. These models are frequent topics of DNS investigations, and none of them is reality. The equations are mathematics, a logical, constructive language of science, but not actual reality. These equations are unequal in their closeness to fundamentality, but our judgment should be the same. The closeness to “first principles” should be reflected in the assessment of uncertainty, which also reflects the problem being solved by the DNS. None of these equations will yield truths so fundamental as to be beyond question or free of uncertainty.

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

― Arthur C. Clarke

Another massive problem with DNS is the general lack of uncertainty assessment. It is extremely uncommon to see any sort of uncertainty assessment accompanying DNS. If we accept the faulty premise that DNS can replace experimental data, the uncertainty associated with these “measurements” must be included. This almost universally shitty practice further undermines the case for using DNS as a replacement for experiment. Of course, we are accepting far too many experimental results without their own error bars these days. Even if we accept the false premise that the model being solved by DNS is true to the actual fundamental laws, the solution is still approximate. The approximate solution is never free of numerical error. In DNS, an estimate of the magnitude of the approximation error is almost universally missing from the results.
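
One standard way to put a number on that approximation error is solution verification by Richardson extrapolation on a sequence of grids. Below is a minimal sketch under the assumptions that the solutions are in the asymptotic range and the refinement ratio is constant; the three values are hypothetical placeholders, not output from any real code.

```python
import numpy as np

# Hypothetical results for one quantity of interest on three systematically
# refined grids (fine, medium, coarse) with a constant refinement ratio r.
f1, f2, f3 = 1.021, 1.034, 1.085
r = 2.0

# Observed order of accuracy inferred from the three solutions.
p = np.log(abs(f3 - f2) / abs(f2 - f1)) / np.log(r)

# Richardson-extrapolated estimate of the grid-converged value.
f_star = f1 + (f1 - f2) / (r**p - 1.0)

# Error estimate for the fine-grid result -- the bar a DNS result should carry.
err = abs(f_star - f1)
print(f"observed order p = {p:.2f}, fine-grid error estimate = {err:.4f}")
```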

Let’s be clear: even when used properly, DNS results must come with an uncertainty assessment. Even when DNS is used simply as a high-fidelity solution of a model, the uncertainty of that solution is needed to assess the utility of the results. This utility is ultimately determined by some comparison with observations of phenomena seen in reality. We may use DNS to measure how well a simpler model remains consistent with the more fundamental model embodied in the DNS. This sort of use is widespread in turbulence, material science, and constitutive modeling, but the credibility of the use must always be determined with experimental data. The observational data always has primacy, and DNS should always be subservient to reality’s results.
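
A sketch of that kind of use, with the bookkeeping kept honest, might look like the following: a DNS profile and a reduced model are compared to each other, and the mismatch is reported as model-to-model consistency rather than validation. Both profiles below are synthetic placeholders, not output from any real calculation.

```python
import numpy as np

# Synthetic stand-ins for a DNS profile and a cheaper reduced-model profile
# of the same quantity of interest at a set of locations.
x = np.linspace(0.0, 1.0, 11)
dns_profile = np.tanh(5.0 * (x - 0.5))
reduced_model = np.clip(10.0 * (x - 0.5), -1.0, 1.0)

# Consistency measure between the two models (RMS mismatch). This quantifies
# how well the reduced model tracks the DNS -- it is not a validation of either.
rms_mismatch = np.sqrt(np.mean((reduced_model - dns_profile) ** 2))
print(f"model-to-model RMS mismatch: {rms_mismatch:.3f}")

# Credibility still requires comparing both models against observations;
# the number above cannot stand in for that step.
```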

Unfortunately, we also need to address an even more deplorable DNS practice. Sometimes people simply declare that their calculation is a DNS without any evidence to support the assertion. Usually this means the calculation is really, really, really, super fucking huge and produces some spectacular graphics, with movies and color (rendered in super groovy ways). Sometimes the models being solved are themselves extremely crude or approximate; for example, the Euler equations are solved, with or without turbulence models, instead of the Navier-Stokes equations in cases where turbulence is certainly present. This practice is so abominable as to be almost a cartoon of credibility. It is proof by overwhelming force. Claims of DNS should always be taken with a grain of salt. When the claims take the form of marketing, they should be met with extreme doubt, since this is a form of bullshitting that tarnishes those working to practice scientific integrity.
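
One cheap sanity check on a “this is DNS” claim for turbulent flow is whether the grid even resolves the Kolmogorov length scale, eta = (nu^3/epsilon)^(1/4). The sketch below uses hypothetical values for viscosity, dissipation rate, domain size, and grid count; it is a back-of-the-envelope screen, not a definitive test, and an Euler-equation “DNS” fails it by construction since its viscosity is zero.

```python
# Back-of-the-envelope check: does the grid spacing resolve the Kolmogorov scale?
# All inputs are hypothetical placeholders.
nu = 1.0e-5      # kinematic viscosity [m^2/s]
epsilon = 0.1    # mean dissipation rate [m^2/s^3]
L = 1.0          # domain size [m]
N = 1024         # grid points per direction

eta = (nu**3 / epsilon) ** 0.25   # Kolmogorov length scale
dx = L / N                        # grid spacing

# A common rule of thumb asks for dx on the order of eta (roughly dx <= 2*eta).
print(f"eta = {eta:.3e} m, dx = {dx:.3e} m, plausibly DNS-resolved: {dx <= 2.0 * eta}")
```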

The world is full of magic things, patiently waiting for our senses to grow sharper.

― W.B. Yeats

Part of doing science correctly is honesty about challenges. Progress can be made with careful consideration of the limitations of our current knowledge. Some of these limits are utterly intrinsic. We can observe reality, but various challenges limit the fidelity and certainty of what we can sense. We can model reality, but these models are always approximate. The models encode simplifications and assumptions. Progress is made by putting these two forms of understanding into tension. Do our models predict or reproduce the observations to within their certainty? If so, we need to work on improving the observations until they challenge the models. If not, the models need to be improved so that the observations are reproduced. The current use of DNS short-circuits this tension and undermines progress. It wrongly puts modeling in the place of reality, which only derails the necessary work of improving models or improving observations. As such, poor DNS practices are actually stalling scientific progress.

I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers. I’ll believe anything, no matter how wild and ridiculous, if there is evidence for it. The wilder and more ridiculous something is, however, the firmer and more solid the evidence will have to be.

― Isaac Asimov