The Culture of Computation

We seldom realize, for example, that our most private thoughts and emotions are not actually our own. For we think in terms of languages and images which we did not invent, but which were given to us by our society.

― Alan W. Watts

Culture pervades our lives as people and shapes how we connect to our World, Nation, Community, Jobs, and personal lives. Much of what we do is deeply influenced by the web of cultures our lives are embedded within. All of this highlights the importance of understanding how culture influences computation, as culture often defines what is comfortable and automatic. In many cases culture comprises the permanent habits of our social constructs, and often defines practices that impede progress. Accepted cultural practices are usually done without thinking and applied almost mindlessly. If these practices are wrong, they are difficult to dislodge or improve upon.

The imagination is the goal of history. I see culture as an effort to literally realize our collective dreams.

― Terence McKenna

Culture is a powerful thing. It defines so much about the collective activity of groups of people. Culture defines a set of beliefs, practices and habits that are naturally accepted and reinforced by the collective action of the people. Some cultures are driven by biology or fundamental human needs, but most are constructs to help regulate the structures that our collective actions are organized around. The fundamental values, moral code and behaviors of people are heavily defined by culture. If the culture is positive, the effect is resonant and amplifies the actions of people toward much greater achievements. If the culture is negative, the effect can undo and overwhelm much of the best that people are capable of. Invariably cultures are a mixture of positive and negative. Cultures persist for extremely long times and outlive those who set the cultural tone for groups. Cultures are set, or change slowly, unless the group is subjected to an existential crisis. When a crisis is successfully navigated, the culture that arose in its resolution is enshrined, and tends to persist without change until a new crisis is engaged.

Every culture has its southerners — people who work as little as they can, preferring to dance, drink, sing, brawl, kill their unfaithful spouses; who have livelier gestures, more lustrous eyes, more colorful garments, more fancifully decorated vehicles, a wonderful sense of rhythm, and charm, charm, charm; unambitious, no, lazy, ignorant, superstitious, uninhibited people, never on time, conspicuously poorer (how could it be otherwise, say the northerners); who for all their poverty and squalor lead enviable lives — envied, that is, by work-driven, sensually inhibited, less corruptly governed northerners. We are superior to them, say the northerners, clearly superior. We do not shirk our duties or tell lies as a matter of course, we work hard, we are punctual, we keep reliable accounts. But they have more fun than we do … They caution[ed] themselves as people do who know they are part of a superior culture: we mustn’t let ourselves go, mustn’t descend to the level of the … jungle, street, bush, bog, hills, outback (take your pick). For if you start dancing on tables, fanning yourself, feeling sleepy when you pick up a book, developing a sense of rhythm, making love whenever you feel like it — then you know. The south has got you.

― Susan Sontag

We see all sorts of examples of the persistence of culture. The United States is still defined by the North-South divide that fractured the country during the Civil War. The same friction and hate that defined that war 150 years ago dominate our politics today. The culture of slavery persists in systematic racism and oppression. The white and black divide remains unhealed even though none of the people who enslaved or who were enslaved are still alive, with many generations having passed. The United States is still defined by the Anglo-Saxon Protestant beliefs of the founding fathers. Their culture is dominant even after being overwhelmed in numbers of people and centuries of history. The dominant culture was formed in the crucible of history by the originating crisis for the Nation, the Revolutionary War. Companies and Laboratories are shaped by their original cultures, and these habits and practices persist long after their originators have left, retired or died.

There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’

― Isaac Asimov

We all exist within a broad array of cultures, all the way from our family to the entirety of humanity. Our culture is set by our biology, history and arc through life. This web of cultures connects together and runs much of our lives. We all have free will, but the decision to go against the culture tends to carry high costs to us personally. There are a number of things that influence culture including events, technology and new modes of engagement. Some events are part of the natural world, such as disasters (earthquakes, floods, hurricanes, famines, …). These events can stress people and existing cultures, providing the sorts of crises that shape the future to be more resilient to future disasters. Human events such as wars, trade, and general political events both show the impact of culture in causing or navigating events, and produce crises that shape cultural responses and evolution. We can continue down this line of thinking to ever-smaller cultures: organizations and businesses are influenced by crises induced by the larger systems (natural or political). This web of culture continues to smaller and smaller scales, all the way to communities (towns, regions, schools, families), each having a culture shaped heavily by other cultures or events. In every case a crisis is almost invariably necessary to induce change; cultures are resistant to change unless something painful provides direct evidence of the incapacity of the existing culture to succeed.

Men build too many walls and not enough bridges.

― Joseph Fort Newton

The culture emerging in the World today is deeply stressing many subcultures. A combination of demographic changes, ethnic conflict, technology and economic systems are all spiraling toward crisis. People across the World sense the depth of the impending changes to the structure of society. In many cases the combination of demographic and economic changes is stressing large populations of people to an extent that they mount a wholesale revolt against existing cultures and systems. When this population is large enough it becomes a movement, and starts driving other populations toward crisis. These movements ultimately create an environment where other events are triggered, such as wars. These in turn are a crisis that ultimately must be resolved, and induce enough pain that people willingly overthrow existing cultures and embrace new cultures that enable successful resolution. We may be spiraling toward this cascade of crises that is almost necessary for our cultures to adapt to the reality of today.

One of the most effective ways to learn about oneself is by taking seriously the cultures of others. It forces you to pay attention to those details of life which differentiate them from you.

― Edward T. Hall

Before plunging into the specifics of the culture of computation, we should discuss the culture of the broader scientific community. This culture exists within the broader network of cultures in society with give-and-take between them. In the past science has provided deep challenges to prevailing culture, and induced changes in societal culture. Today the changes in mainstream societal culture are challenging science. One key aspect of today’s culture wars is a lack of support for expertise. One of the key rifts in society is mistrust of the elite and educated. The broader society is attacking and undermining educational institutions across the board. Scientific laboratories are similar in makeup and similarly under assault. Much of this broader assault is related to a general lack of trust. Some of this is a reaction to the surplus of trust granted science in the wake of its massive contributions to the resolution of World War 2 and the Cold War. These successes are waning in memory, and science now struggles to retain a distinguished societal role.

I work in a National Laboratory, and I have worked at a National Laboratory for my entire career. These Labs have strong cultures shaped by their history and work. Both Los Alamos and Sandia were born in the crucible of World War 2 and the Manhattan Project’s pursuit of an atomic bomb. The genesis of the third weapons Lab, Lawrence Livermore, was also present, albeit in an unexpressed form. During that war Los Alamos contained the origins of all three Labs. Los Alamos of course was the core of this, revolving around Oppenheimer’s scientists pursuing the nuclear explosive part of the bomb. Sandia was contained within the engineering portion of Los Alamos that remained under military control. These identities are still evident in the Labs’ cultures today. At Los Alamos there is a scientific identity and habit that colors all engagements. Conversely the engineering character of Sandia is evident, as is the shadow of General Groves’ priorities and approach on how the institution works today. Lawrence Livermore’s genesis was contained within a deep controversy associated with the follow-on to the atomic bomb, the hydrogen bomb. Many at Los Alamos opposed the hydrogen bomb, but Edward Teller was committed to it and ultimately created a Laboratory to pursue his vision. This adversarial, political and controversial nature still defines that Laboratory today.

The first step – especially for young people with energy and drive and talent, but not money – the first step to controlling your world is to control your culture. To model and demonstrate the kind of world you demand to live in. To write the books. Make the music. Shoot the films. Paint the art.

― Chuck Palahniuk

Each of these identities firmly expresses itself in the scientific-technical cultures of the Labs. Los Alamos has a core identity as an experimental physics Laboratory. Engineering and computational approaches are also taken at Los Alamos, but the experimental approach is the most natural and favored by the culture. Livermore is more favorable toward a theoretical-computational approach within its basic culture. Experimental approaches are also seriously engaged, but computation in particular is more naturally supported by its culture. Sandia is an engineering culture and, borrowing from Los Alamos, a predominantly test-based culture, testing being a complement to experiments. As such theory, design and computation are a hard sell and culturally discouraged. None of these approaches is the “right” approach, but each culture favors a certain approach over the others.

These characters graft themselves onto how computation is accomplished at each Lab. The computational side of things is favored at Lawrence Livermore, yielding better support from the institution. This comes in the form of support for research and prestige for those doing computation. At the same time the validation of computations suffers a bit relative to the other Labs, as does the rigor of computed results. Los Alamos was the birthplace of all three Labs and of computational work, but always puts computation in a subservient role compared to experiments. This leads to a mighty struggle between validation and calibration. Often calibration wins out so that computed results are sufficiently close to experiment. Sandia excels at process and rigor in the conduct of calculations, but struggles at other aspects (at least in a relative sense). The whole verification and validation approach to simulation quality comes from Sandia, reflecting that rigor. At the same time institutional support and emphasis are weaker, leading to long-term effects.

All this texture is useful to think about because it manifests itself in every place computational science is done today. The scientific culture of any institution is reflected in its emphasis and approach to the conduct of science. The culture produces a natural set of priorities that define investments and acceptable quality. We can speak volumes about how computational work should be done, but how keenly the message lands depends on preconceived notions about these aspects. For example, some places are more prone to focus on computing hardware as an investment. In terms of the competition for resources, the purchase of hardware is a priority, and a typical route for enhancement. This becomes important when trying to move into new “hot” areas. If the opportunity falls in line with the culture, investments flow; if it is out of line, the institution will miss it.

Computational science is a relatively new area of endeavor. It is at most 70 years old as practiced at Los Alamos; it is a new area of focus in most places. Sometimes it is newly practiced at an institution and added to the repertoire as an innovative way of doing work. In all these cases the computational work adopts the basic culture of the institution it exists within. It then differentiates based on the local conditions, usually dominated by whatever the first acknowledged success is. One of the key aspects of a culture is its origin stories or mythological achievements. Origins are almost invariably fraught situations with elements of crisis. These stories pervade the culture and define what success looks like and how investments in the future are focused.

Where I work at Sandia, the origin story is dominated by early success with massively parallel computers. The greatest success was the delivery of a computer, Red Storm. As a result the culture is obsessed with computer hardware. The path to glory and success runs through hardware; a focus on hardware is culturally accepted and natural for the organization. It is a strong predisposition. At Lawrence Livermore the early stages of the Laboratory were full of danger and uncertainty. Early in the history of the Lab there was a huge breakthrough in weapons design. It used computational modeling, and the lead person in the work went on to huge professional success (Lab Director). This early success became a blueprint for others and an expected myth to be repeated. A computational study and focus was always expected and accepted by the Lab. At Los Alamos all roads culturally lead to the Manhattan Project. The success in that endeavor has defined the Laboratory ever since. The manner of operation and approach to science adopted then is the blueprint for success at that Laboratory. The multitude of crises starting with the end of the Cold War, spying, fires, and scandal have all weakened the prevailing culture, and undermined the future.

In each case the myths and legends of past success provide the basis for the culture and the means of understanding why a place is what it is. Old myths and legends have to be replaced to change the culture, and this can only happen in a crisis of sufficient magnitude to challenge the existing culture. We can’t usually manage to think about what culture arises from the resolution of a crisis; we are too busy surviving to make the best use of the opportunity.

Without culture, and the relative freedom it implies, society, even when perfect, is but a jungle. This is why any authentic creation is a gift to the future.

― Albert Camus

 

Credibility and Confidence

Leadership is fundamentally about credibility.

― Rick Crossland

Under the best of circumstances we would like to confidently project credibility for the modeling and simulation we do. Under the worst of circumstances we would have confidence in modeling and simulation without credibility. This is common. Quite often the confidence is the product of arrogance or ignorance instead of humility and knowledge. This always manifests itself in a lack of questioning in the execution of work. Both of these issues are profoundly difficult to deal with and potentially fatal to meaningful impact of modeling and simulation. These issues are seen quite frequently. Environments with weak peer review allow confidence without credibility to persist. The biggest part of the problem is a lack of pragmatic acceptance of modeling and simulation’s intrinsic limitations. Instead we have inflated promises and expectations delivered by overconfidence and personality rather than hard-nosed technical work.

When confidence and credibility are both in evidence, modeling and simulation is empowered to be impactful. It will be used appropriately with deference to what is and is not possible and known. When modeling and simulation is executed with excellence and professionalism along with hard-nosed assessment of uncertainties, using comprehensive verification and validation, the confidence is well grounded in evidence. If someone questions a simulation’s result, answers can be provided with well-vetted evidence. This produces confidence in the results because questions are engaged actively. In addition the limits of the credibility are well established, and can be confidently explained. Ultimately, credibility is a deeply evidence-based exercise. Properly executed and delivered, the degree of credibility depends on honest assessment and complete articulation of the basis and limits of the modeling.

When you distort the truth, you weaken your credibility.

― Frank Sonnenberg

One of the dangers of hard-nosed assessment is the tendency for those engaged in it to lose confidence in the work. Those who aggressively pursue credibility assessment tend to be cynics and doubters. They are prone to pessimism. They usually project doubt and focus on limitations of the modeling instead of confidence where it may be used. One of the hardest tricks of credibility assessment is pairing excellence in the execution of the work with an appropriate projection of confidence. The result is a mixed message where confidence is projected without credibility, and credibility is projected without confidence. Neither serves the purpose of progress in the impact of modeling and simulation.

One of the major sins of over-confidence is flawed or unexamined assumptions. This can be articulated as “unknown knowns” in the famously incomplete taxonomy forwarded by Donald Rumsfeld in his infamous quote. He didn’t state this part of the issue even though it was the fatal flaw in the logic of the Iraq war in the aftermath of 9/11. There were basic assumptions about Hussein’s regime in Iraq that were utterly false, and these skewed the intelligence assessment leading to war. They only looked at information that supported the conclusions they had already drawn or wanted to be true. The same faulty assumptions are always present in modeling. Far too many simulation professionals ignore the foundational and unfounded assumptions in their work. In many cases assumptions are employed without thought or question. They are assumptions that the community has made for as long as anyone can remember and simply cannot be questioned. This can include anything from the equations solved, to the various modeling paradigms applied as a matter of course. Usually these are unquestioned and completely unexamined for validity in most credibility assessments.

This is an immensely tricky thing to execute. The standard assumptions are essential to managing complexity and making progress. That said, it is a remarkably difficult and important task to detect when the assumptions become limiting. More succinctly put, the limitations of the standard assumptions need to be thought-through and tested. Usually these assumptions can only be tested through removing everything else from the field and doing very hard work. It is so much easier to simply stay the course and make standard assumptions. In many cases the models have been significantly calibrated to match existing data, and new experiments or significantly more accurate measurements are needed to overturn or expose modeling limitations. Moreover the standard assumptions are usually unquestioned by peers. Questions are often met with ridicule. A deeply questioning assessment requires bravery and fortitude usually completely lacking from working scientists and utterly unsupported by our institutions.

Another manner for all of this to unfold is unwarranted confidence. Often this is couched in the form of arrogant perspectives where the proof of credibility is driven by personality. This proof by authority is incredibly common and difficult to dislodge. In many cases personal relationships with consumers of simulations are used to provide confidence. People are entrusted with the credibility and learn how to give their customer what they want. Credibility by personality is cheap and requires so much less work, plus it doesn’t raise any pesky doubts. This circumstance creates an equilibrium that is often immune to scientific examination. It is easier to bullshit the consumers of modeling and simulation results than level with them about the true quality of the work.

The credibility of the teller is the ultimate test of the truth of a proposition.

― Neil Postman

More often than not honest and technically deep peer review is avoided like the plague. If it is imposed on those practicing this form of credibility, the defense of simulations takes the personal form of attacking the peer reviewers themselves. This sort of confidence is a cancer on quality and undermines any progress. It is a systematic threat to excellence in simulation, and must be controlled. It is dangerous because it is effective in providing support for modeling and simulation along with the appearance of real World impact.

One of the biggest threats to credibility is the loss of confidence that honesty generates. Engaging deeply and honestly in assessment of credibility is excellent at undermining confidence. Almost invariably the accumulation of evidence regarding credibility endows the recipients of this knowledge with doubt. These doubts are healthy, and often the most confident people are utterly ignorant of the shortcomings. The accumulation of evidence regarding credibility should benefit the confidence in how simulation is used. This is a problem when those selling simulation oversell what it can do. The promise of simulation has been touted widely as transformative. The problem with modeling and simulation is its tangency to reality. The credibility of simulations is grounded by reality, but the uncertainty comes both from the modeling and from the measurement and observational uncertainty in our knowledge of reality.

The dynamic tension between confidence and credibility should be deeply examined. When confidence is present without evidence, people should be deeply suspicious. A strong culture of (independent) peer review is an antidote to this. Too often these days the peer review is heavily polluted by implicit conflicts of interest. The honesty of peer review is hampered by an unwillingness to deal with problems, particularly with respect to modification of the expectations. Invariably modeling and simulation has been oversold and any assessment will provide bad news. In today’s World we see a lot of bad news rejected, or repackaged (spun) to sound like good news. We are in the midst of a broader crisis of credibility with respect to information (i.e., fake news), so the issues with modeling and simulation shouldn’t be too surprising. We would all be well served by a different perspective and approach to this. The starting point is a re-centering of expectations, but so much money has been spent using grossly inflated claims.

Belief gives knowledge credibility.

― Steven Redhead

So what should we expect from modeling and simulation?

Modeling and simulation is a part of the scientific process and subject to its limits and rules. There is nothing magic about simulation that unleashes modeling from its normal limitations. The difference that simulation makes is the ability to remove the limitations of analytical model solution. Far more elaborate and accurate modeling decisions are available, but they carry other difficulties due to the approximate nature of numerical solutions. The intellectual tug-of-war is the balance between modeling flexibility, nonlinearity and generality and the effects of numerical solution. The bottom line is the necessity of assessing the uncertainties that arise from these realities. Nothing releases the modeling from its fundamental connection to validity grounded in real world observations. One of the key things to recognize is that models are limited and approximate in and of themselves. Models are wrong, and under a sufficiently resolved examination will be invalid. For this reason an infinitely powerful computer will ultimately be useless because the model will become invalid at some resolution. Ultimately progress in modeling and simulation is based on improving the model. This fact is ignored by computational science today and will result in wasting valuable time, effort and money chasing quality that is impossible to achieve.

Bullshit is a greater enemy of the truth than lies are.

― Harry Frankfurt

In principle the issue of credibility and confidence in modeling and simulation should be based on evidence. Ideally this evidence should be quantitative with key indicators of its quality included. Ideally, the presence of the evidence should bolster credibility. Instead, paradoxically, evidence associated with the credibility of modeling and simulation seems to undermine credibility. This is a strong indicator that claims about the predictive power of modeling and simulation have been overstated. This is a nice way of saying that this is usually a sign that the quality is actually complete bullshit! We can move a long way toward better practice by simply recalibrating our expectations about what we can and can’t predict. We should be in a state where greater knowledge about the quality, errors and uncertainty in modeling and simulation work improves our confidence.

If you can’t dazzle them with brilliance, baffle them with bullshit!

― W.C. Fields

Part of the issue is the tendency for the consumers of modeling and simulation work not to demand evidence to support confidence. This evidence should always be present and available for scrutiny. If claims of predictive power are made without evidence, the default condition should be suspicion. The various sources of error and uncertainty should be drawn out and quantified. There should be estimates based on concrete evidence for the value of uncertainty from all sources. Any uncertainty that is declared to be zero or negligible must have very specific evidence to support this assertion. Even more importantly, any claims of this nature should receive focused and heavy scrutiny because they are likely to be based on wishful thinking, and often lack any evidentiary basis.

One of the issues of increasing gravity in this entire enterprise is the consumption of results from modeling and simulation by people unqualified to judge the quality of the work. The whole enterprise is judged to be extremely technical and complex. This inhibits those using the results from asking key questions regarding the quality of the work. With the people producing modeling and simulation results largely driven by money rather than technical excellence, we have the recipe for disaster. Increasingly, false confidence accompanies results and snows the naïve consumers into accepting the work. Often the consumers of computational results don’t know what questions to ask. We are left with quality being determined more by flashy graphics and claims about massive computer use than any evidence of prediction. This whole cycle perpetuates an attitude that starts to allow viewing reality as more of a video game and less like a valid scientific enterprise. Overinflated claims of capability are met with money to provide more flashy graphics and quality without evidence. We are left with a field that has vastly over-promised and set itself up for disaster.

We now live in a world where counter-intuitive bullshitting is valorized, where the pose of argument is more important than the actual pursuit of truth, where clever answers take precedence over profound questions.

― Ta-Nehisi Coates

A Good Question Makes Everything Better

Judge a man by his questions rather than by his answers.

― Voltaire

In thinking about what makes work good for me, I explored an element of my creative process revolving around answering questions. If one doesn’t have the right question, the work isn’t framed correctly and progress will stall. A thing to consider in this frame of reference is: what makes a good question? This itself is an excellent question! The quality of the question makes a great difference in framing the whole scientific enterprise, and can either lead to bad places of “knowledge cul-de-sacs” or open stunning vistas of understanding. Where you end up depends on the quality of the question you answer. Success depends far more on asking the right question than answering the question originally put to you (or you put to yourself).

Truth, like gold, is to be obtained not by its growth, but by washing away from it all that is not gold.

― Leo Tolstoy

A great question is an achievement in itself although rarely viewed as such. More often than not little of the process of work goes into asking the right question. Often the questions we ask are highly dependent upon foundational assumptions that are never questioned. While assumptions about existing knowledge are essential, finding the weak or invalid assumptions is often the key to progress. These assumptions are wonderful for simplifying work, but also inhibit progress. Challenging assumptions is one of the most valuable things to do. Heretical ideas are fundamental to progress; all orthodoxy began as heresy. If the existing assumptions hold up under the fire of intense scrutiny they gain greater credibility and value. If they fall, new horizons are opened up to active exploration.

If we have no heretics we must invent them, for heresy is essential to health and growth.

― Yevgeny Zamyatin

It goes without saying that important questions are good ones. Defining importance is tricky business. There are plenty of important questions that lead nowhere (“what’s the meaning of life?”) or that we simply can’t answer using existing knowledge (“is faster than light travel possible?”). On the other hand we might do well to break these questions down to something more manageable that might be attacked: “is the second law of thermodynamics responsible for life?” or “what do subatomic particles tell us about the speed of light?” Part of the key to good scientific progress is threading the proverbial needle of important, worthy and possible to answer. When we manage to ask an important, but manageable question, we serve progress well. Easy questions are not valuable, but are attractive due to their lack of risk and susceptibility to management and planning. Sometimes the hardest part of the process is asking the question, and a well-defined and chosen problem can be amenable to trivial resolution. It turns out to be an immensely difficult task with lots of hard work to get to that point.

I have benefited mightily from asking some really great questions in the past. These questions have led to the best research, and most satisfying professional work, I’ve done. I would love to recapture this spirit of work again, with good questioning work feeling almost quaint in today’s highly over-managed climate. One simple question occurred in my study of efficient methods for solving the equations of incompressible flow. I was using a pressure projection scheme, which involves solving a Poisson equation at least once, if not more than once, per time step. The most efficient way to do this involved using the multigrid method because its algorithmic scaling is linear in the number of unknowns. The Poisson equation involves solving a large sparse system of linear equations, and the solution of linear equations generally scales with powers of the number of equations. Multigrid methods have the best scaling thought to be possible (I’d love to see this assumption challenged and sublinear methods discovered; I think they might well be possible).

As problems with incompressible flows become more challenging, such as involving large density jumps, the multigrid method begins to become fragile. Sometimes the optimal scaling breaks down, or the method fails altogether. I encountered these problems, but found that other methods like conjugate gradient could still solve the problems. The issue is that the conjugate gradient method is less efficient in its scaling than multigrid. As a result, as problems become larger, the proportion of the solution time spent solving linear equations grows ever larger (the same thing is happening now to multigrid because of the cost of communication on modern computers). I posed the question of whether I could get the best of both methods, the efficiency along with the robustness. Others were working on the same class of problems, and all of us found the solution: combine the two methods, effectively using a multigrid method to precondition the conjugate gradient method. It worked like a charm; it was both simple and stunningly effective. This approach has become so standard now that people don’t even think about it; it’s just the status quo.
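
The combination is easy to sketch in a few lines. Below is a minimal illustration of multigrid-preconditioned conjugate gradient on a 1D Poisson problem; this is not the incompressible-flow solver described above, and the two-grid V-cycle, weighted-Jacobi smoother, grid size and tolerance are all illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson_1d(n):
    """Standard 1D Poisson operator on n interior points, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2

def two_grid_vcycle(A, b, n, nu=3, omega=2.0 / 3.0):
    """One two-grid V-cycle: weighted-Jacobi smoothing plus a coarse-grid correction."""
    b = np.asarray(b).ravel()
    D = A.diagonal()
    x = np.zeros_like(b)
    for _ in range(nu):                                     # pre-smoothing
        x += omega * (b - A @ x) / D
    r = b - A @ x
    rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])     # full-weighting restriction
    ec = spla.spsolve(poisson_1d(rc.size).tocsc(), rc)      # exact coarse-grid solve
    e = np.zeros(n)                                         # linear-interpolation prolongation
    e[1:-1:2] = ec
    e[0:-2:2] += 0.5 * ec
    e[2::2] += 0.5 * ec
    x += e
    for _ in range(nu):                                     # post-smoothing
        x += omega * (b - A @ x) / D
    return x

n = 255                                  # odd, so the coarse grid nests cleanly
A = poisson_1d(n)
b = np.ones(n)
# Wrap the V-cycle as a preconditioner: one algorithm living inside the other.
M = spla.LinearOperator((n, n), matvec=lambda r: two_grid_vcycle(A, r, n))
x, info = spla.cg(A, b, M=M, atol=1e-10)
print("converged" if info == 0 else "failed",
      "| residual =", np.linalg.norm(b - A @ x))
```

When multigrid alone works well the preconditioned iteration converges in a handful of steps; when the problem gets nastier, the conjugate gradient outer loop supplies the robustness, which is exactly the division of labor described in the next paragraphs.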

At this point it is useful to back up and discuss a key aspect of the question-making process essential to refining a question into something productive. My original question was much different; “how can I fix multigrid?” was the starting point. I was working from the premise that multigrid was optimal and fast for easier problems, and conjugate gradient was robust, but slower. A key part of the process was reframing the question. The question I ended up attacking was “can I get the positive attributes of both algorithms?” This changed the entire approach to solving the problem. At first, I tried switching between the two methods depending on the nature of the linear problem. This was difficult to achieve because the issues with the linear system are not apparent upon inspection.

The key was moving from considering the algorithms as wholly different options to combining them. The solution involved putting one algorithm inside the other. As it turns out the most reasonable and powerful way to do this is to make multigrid a preconditioner for conjugate gradient. The success of the method is fully dependent on the characteristics of both algorithms. When multigrid is effective by itself, the conjugate gradient method is effectively innocuous. When multigrid breaks down, the conjugate gradient method picks up the pieces, and delivers robustness along with the linear scaling of multigrid. A key aspect of the whole development was embracing an assault on a philosophical constraint in solving linear systems. At the outset of this work these two methods were viewed as competitors. One worked on one or the other, and the two communities did not collaborate, or even talk to each other. They did not like each other. They had different meetings, or different sessions at the same meeting. Changing the question allows progress, and is predicated on changing assumptions. Ultimately, the results win and the former feud fades into memory. In the process I helped create something wonderful and useful, and learned a huge amount of numerical (and analytical) linear algebra.

The second great question I’ll point to involved the study of modeling turbulent flows with what has become known as implicit large eddy simulation. Starting in the early 1990’s there was a stunning proposition that certain numerical methods seem to automatically (auto-magically) model aspects of turbulent flows. While working at Los Alamos and learning all about a broad class of nonlinearly stable methods, the claim that they could model turbulence caught my eye (I digested it, but fled in terror from turbulence!). Fast forward a few years, combine this observation with a newfound interest in modeling turbulence, and a question begins to form. In learning about turbulence I digested a huge amount of theory regarding the physics and our approaches to modeling it. I found large eddy simulation to be extremely interesting although aspects of the modeling were distressing. The models that worked well performed poorly on the structural details of turbulence, and the models that matched the structure of turbulence were generally unstable. Numerical methods for solving large eddy simulation were generally based on principles vastly different from those I worked on, which were useful for solving Los Alamos’ problems.

Having the methods I worked on, in codes that solve our problems, also model turbulence is tremendously attractive. The problem is the seemingly magical nature of this modeling. Being magical does not endow the modeling with confidence. The question that we constructed a research program around was “can we explain the magical capability of numerical methods with nonlinear stability to model turbulence?” We started from the observation that a broad class of methods seemed to provide effective turbulence modeling (or at least capture the universal inertial range physics). Basically the aspects of turbulence associated with the large-scale hyperbolic parts of the physics were captured. We found that it is useful to think of this as physics-capturing, an extension of shock-capturing. The explanation is technical, but astoundingly simple.

Upon study of the origins of large eddy simulation we discovered that the origins of the method were the same as shock capturing methods. Once the method was developed it evolved into its own subfield with its own distinct philosophy and underlying assumptions. These assumptions had become limiting and predicated on a certain point of view. Shock capturing had also evolved in a different direction. Each field focused on different foundational principles and philosophy, becoming significantly differentiated. For the most part they spoke different scientific languages. It was important to realize that their origins were identical, with the first shock capturing method being precisely the first subgrid model for large eddy simulation. A big part of our research was bridging the divides that had developed over almost five decades and learning to translate from one language to the other.

We performed basic numerical analysis of nonlinearly stable schemes using a technique, modified equation analysis, that produces the nonlinear truncation error. A nonlinear analysis is vital here. The core property of the methods empirically known to be successful in capturing the physics of turbulence is conservation (control volume schemes). It turns out that the nonlinear truncation error for a control volume method for a quadratic nonlinearity produces the fundamental scaling seen in turbulent flows (and shocks for that matter). This truncation error can be destabilizing for certain flow configurations, effectively being anti-dissipative. The nonlinear stability method keeps the anti-dissipative terms under control, producing physically relevant solutions (e.g., entropy solutions).
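
For readers unfamiliar with the technique, the flavor of modified equation analysis can be seen on a much simpler textbook case than the nonlinear finite-volume analysis described here: first-order upwind differencing of linear advection, u_t + a u_x = 0 with a > 0. The scheme (u_j^{n+1} - u_j^n)/\Delta t + a (u_j^n - u_{j-1}^n)/h = 0, when Taylor expanded, is found to solve the modified equation u_t + a u_x = \frac{a h}{2}(1 - \nu) u_{xx} + higher-order terms, with \nu = a \Delta t / h; the discretization silently adds a dissipative term to the PDE it nominally solves. The nonlinear version of the same procedure, applied to a conservative discretization of a quadratic flux, is what exposes the scale-similarity-like term discussed next.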

A key observation makes this process more reasoned and connected to the traditional large eddy simulation community. The control volume term matches the large eddy simulation models that produce good structural simulations of turbulence (the so-called scale similarity model). The scale similarity model is unstable with classical numerical methods. Nonlinear stability fixes this problem with aplomb. We use as much scale similarity as possible without producing unphysical or unstable results. This helps explain why a disparate set of principles used to produce nonlinear stability provides effective turbulence modeling. Our analysis also shows why some methods are ineffective for turbulence modeling. If the dissipative stabilizing effects are too large and competitive with the scale similarity term, the nonlinear stability is ineffective as a turbulence model.
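
For reference, the scale-similarity model mentioned here is usually written (in its standard Bardina form, quoted from the literature rather than derived from the analysis above) as \tau_{ij} \approx \overline{\bar{u}_i \bar{u}_j} - \bar{\bar{u}}_i \bar{\bar{u}}_j, where the bar denotes the filter; it reconstructs the subgrid stress from the resolved field itself, which is why it reproduces turbulent structure well but supplies no guaranteed dissipation and hence no stability on its own.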

It is dangerous to be right in matters on which the established authorities are wrong.

― Voltaire

I should spend some time on some bad questions as examples of what shouldn’t be pursued. One prime example is offered as a seemingly wonderful question: the existence of solutions to the incompressible Navier-Stokes equations. The impetus for this question is the bigger question of whether we can explain, predict or understand fluid turbulence. This problem is touted as a fundamental building block in that noble endeavor. The problem is the almost axiomatic belief that turbulence is contained within this model. The key term is incompressible, which renders the equations unphysical on several key accounts: it gives the system infinite speed of propagation, and divorces the equations from thermodynamics. Both features sever the ties of the equations to the physical universe. The arguing point is whether these two aspects disqualify it from addressing turbulence. I believe the answer is yes.

In my opinion this question should have been rejected long ago based on the available evidence. Given that our turbulence theory is predicated on the existence of singularities in ideal flows, and the clear absence of such singularities in the incompressible Navier-Stokes equations, we should reject the notion that turbulence is contained in them. Despite this evidence, the notion that turbulence is contained whole cloth in these unphysical equations remains unabated. It is treated as axiomatic. This is an example of an assumption that has outlived its usefulness. It will eventually be tossed out, and progress will bloom along the path of its departure. One of the key things missing from turbulence is a connection to thermodynamics. Thermodynamics is such a powerful scientific concept, and for it to be so absent from turbulence is a huge gap. Turbulence is a fundamental dissipative process and the second law is grounded on dissipation. The two should be joined into a coherent whole, allowing unity and understanding to reign where confusion is supreme today.

Another poorly crafted question revolves around the current efforts for exascale-class computers for scientific computing. There is little doubt that an exascale computer would be useful for scientific computing. A better question is: what is the most beneficial way to push scientific computing forward? How can we make scientific computing more impactful in the real world? Can the revolution of mobile computing be brought to science? How can we make computing (really modeling and simulation) more effective in impacting scientific progress? Our current direction is an example of crafting an obvious question, with an obvious answer, but failing to ask a more cutting and discerning question. The consequence of our unquestioning approach to science will be wasted money and stunted progress.

Trust is equal parts character and competence… You can look at any leadership failure, and it’s always a failure of one or the other.

― Stephen M.R. Covey

This gets at a core issue with how science is managed today. Science has never been managed more tightly, nor been more structurally mismanaged. The tight management of science, as exemplified by the exascale computing efforts, is driven by an overwhelming lack of trust in those doing science. Rather than ask people open-ended questions subject to refinement through learning, we ask scientists to work on narrowly defined programs with preconceived outcomes. The reality is that any breakthrough, or progress for that matter, will take a form not envisioned at the outset of the work. Any work that pushes mankind forward will take a form not foreseeable. By managing so tightly and constraining work, we are predestining the outcomes to be stunted and generally unworthy of the effort put into them.

Whether you’re on a sports team, in an office or a member of a family, if you can’t trust one another there’s going to be trouble.

― Stephen M.R. Covey

This is seeded by an overwhelming lack of trust in people and science. Trust is a powerful concept and its departure from science has been disruptive and expensive. Today’s scientists are every bit as talented and capable as those of past generations, but society has withdrawn its faith in science. Science was once seen as a noble endeavor that embodied the best in humanity, but generally not so today. Progress in the state of human knowledge produced vast benefits for everyone and created the foundation for a better future. There was a sense of an endless frontier constantly pushing out and providing wonder and potential for everyone. This view was a bit naïve and overlooked the maxim that human endeavors in science are neither good nor bad, producing outcomes dependent upon the manner of their use. For a variety of reasons, some embedded within the scientific community, the view of society changed and the empowering trust was withdrawn. It has been replaced with suspicion and stultifying oversight.

When I take a look at the emphasis in currently funded work, we see narrow vistas. There is a generally myopic and tactical view of everything. Long-term prospects, career development and broad objectives are obscured by management discipline and formality. Any sense of investment in the long term is counter to the current climate. Nothing speaks more clearly to the overwhelming myopia than the attitude toward learning and personal development. It is only upon realizing that learning and research are essentially the same thing that it starts to become clear how deeply we are hurting the scientific community. We have embraced a culture that is largely unquestioning with a well-scripted orthodoxy. Questions are seen as heresy against the established powers and punished. For most, learning is the acquisition of existing knowledge and skills. Research is learning new knowledge and skills. Generally speaking, those who have achieved mastery of their fields execute research. Since learning and deep career development are so hamstrung by our lack of trust, fewer people actually achieve the sort of mastery needed for research. The consequences for society are profound because we can expect progress to be thwarted.

Curiosity is more important than knowledge.

― Albert Einstein

One clear way to energize learning and research is encouraging questioning. After encouraging a questioning attitude and approach to conducting work, we need to teach people to ask good questions, and to go back and refine questions as better understanding becomes available. We need to identify and overcome assumptions, subjecting them to unyielding scrutiny. The learning, research and development environment is equivalent to a questioning environment. By creating an unquestioning environment we short-circuit everything leading to progress, and ultimately cause much of the creative engine of humanity to stall. We would be well served by embracing the fundamental character of humanity as a creative, progressive and questioning species. These characteristics are parts of the best that people have to offer and allow each of us to contribute to the arc of history productively.

Curiosity is the engine of achievement.

― Ken Robinson

Brandt, Achi. “Multi-level adaptive solutions to boundary-value problems.” Mathematics of Computation 31, no. 138 (1977): 333-390.

Briggs, William L., Van Emden Henson, and Steve F. McCormick. A multigrid tutorial. Society for Industrial and Applied Mathematics, 2000.

Kershaw, David S. “The incomplete Cholesky—conjugate gradient method for the iterative solution of systems of linear equations.” Journal of Computational Physics 26, no. 1 (1978): 43-65.

Melson, N. Duane, T. A. Manteuffel, and S. F. McCormick. “The Sixth Copper Mountain Conference on Multigrid Methods, part 1.” (1993).

Puckett, Elbridge Gerry, Ann S. Almgren, John B. Bell, Daniel L. Marcus, and William J. Rider. “A high-order projection method for tracking fluid interfaces in variable density incompressible flows.” Journal of Computational Physics 130, no. 2 (1997): 269-282.

Boris, J. P., F. F. Grinstein, E. S. Oran, and R. L. Kolbe. “New insights into large eddy simulation.” Fluid Dynamics Research 10, no. 4-6 (1992): 199-228.

Porter, David H., Paul R. Woodward, and Annick Pouquet. “Inertial range structures in decaying compressible turbulent flows.” Physics of Fluids 10, no. 1 (1998): 237-245.

Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39, no. 9 (2002): 821-841.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit large eddy simulation: computing turbulent fluid dynamics. Cambridge University Press, 2007.

Fefferman, Charles L. “Existence and smoothness of the Navier-Stokes equation.” The millennium prize problems (2006): 57-67.

Question, Analyze, Understand and Create … Repeat

We don’t receive wisdom; we must discover it for ourselves.

― Marcel Proust

Work is best when you start with a good question, analyze and learn until you discover and understand an answer to the question (questions often have many answers). Then you use this understanding to create something wonderful so that you can find a new and better question to answer. This virtuous cycle leads to the best work and provides the foundation for excellence. It is precisely the recipe for the best work experiences I’ve had; it built my expertise and is definitely how I’d prefer to keep doing work.

I’m on vacation this week (San Francisco is an amazing city!) and it is the perfect opportunity to think deeply about life and work. Work is an extremely important part of life, and I’ve concluded that some key things determine whether or not it is really good. The same things determine your ability to achieve excellence. What I’ve observed is a process that takes place leading up to my happiness and satisfaction. More importantly, it leads to great work, productivity and excellence. The elements of this successful recipe are founded on attacking a question that needs to be answered. This question can either come from something larger than myself, or from simple innate personal curiosity. At the end of the process the question has been refined and answered, yielding new understanding, knowledge, learning and tools to create something better. For me, the act of creation is the ultimate in job satisfaction. This is a virtuous cycle that leads to deep knowledge and the ability to repeat this process with an even better question using what has been learned and created.

Our real discoveries come from chaos, from going to the place that looks wrong and stupid and foolish.

― Chuck Palahniuk

The largest portion and most important part of this process is the analysis that allows us to answer the question. Often the question needs to be broken down into a series of simpler questions, some of which are amenable to easier solution. This process is hierarchical and cyclical. Sometimes the process forces us to step back and requires us to ask an even better or more proper question. In a sense this is the process working in full, with the better and more proper question being an act of creation and understanding. The analysis requires deep work and often study, research and educating oneself. A new question will force one to take the knowledge one has and combine it with new techniques, producing enhanced capabilities. This process is on-the-job education; it fuels personal growth, and personal growth fuels excellence. When you are answering a completely new question, you are doing research and helping to push the frontiers of science forward. When you are answering an old question, you are learning, and you might answer the question in a new way yielding new understanding. At worst, you are growing as a person and professional.

This is an utterly noble endeavor and embodies the best of mankind. At times you are simply pushing yourself forward into areas others know very well already, but to you it is fresh and new. This is OK and even essential to get to the place where your work is unique. An underappreciated aspect of this sort of learning is that the path you take offers the potential to learn things in new ways. Your path is likely to be different from anyone else’s and grafts your own experience and understanding onto the topic anew. This is immensely valuable and can unveil new paths and depth to existing knowledge. Today this sort of thing is wholly unsupported and underappreciated. We need to make a new commitment to use this path to excellence.

The real voyage of discovery consists not in seeking new landscapes, but in having new eyes.

― Marcel Proust

Sometimes the question being answered has been well studied and one is simply discovering knowledge others have already mastered. This is important growth for a professional, getting to the point where the frontier of knowledge exists. This is a necessary element in getting to research, which doesn’t happen automatically. One needs to climb up the mountain of human knowledge before getting to the apex. This is the process of education as a professional and an immensely exciting calling. The mastery of a topic requires many essential elements be mastered, drawing together knowledge from diverse sources. Often the best research draws together rather pedestrian bits of knowledge from diverse fields in novel manners heretofore unseen. When we don’t support this sort of endeavor, we smother important avenues of discovery and deny our society the most important discoveries. Charting new paths to knowledge is either a wondrous personal journey and/or an alternative way to understand.

Discovery consists of looking at the same thing as everyone else and thinking something different.

― Albert Szent-Györgyi

Ultimately the elements are drawn together and allow the question to be answered productively. This often produces a new kernel of understanding. This knowledge can often be harnessed to produce the wherewithal for something new. The understanding will allow a new and unique act of creation. Sometimes you are creating something that others already know about, but for you it is new. That is enough for excellence; it is the engine of personal excellence. If you complete this cycle often enough eventually the creation will be genuinely original and new. The deep and powerful educational elements of this process lead to outstanding professionals well before one gets to genuinely new and unique research. It is essential to realize that very few creations are completely original, with most discoveries being the combination of elements that are well known in other applications. In many cases the analysis and study of the answer to the original question itself creates something new and wonderful of many forms.

What is wanted is not the will to believe, but the will to find out, which is the exact opposite.

― Bertrand Russell

Once this creation is available, new questions can be posed and solved. These creations allow new questions to be asked and answered. This is the way of progress, where technology and knowledge build the bridge to something better. If we support excellence and a process like this, we will progress. Without support for this process, we simply stagnate and wither away. The choice is simple: either embrace excellence by loosening control, or chain people to mediocrity.

Science is the process that takes us from confusion to understanding…

― Brian Greene

The Foundations of Verification: Solution Verification

A very great deal more truth can become known than can be proven.

― Richard Feynman

Solution verification involves examining error and results without knowledge of the exact solution. This makes it a more difficult task than code verification, where an exact solution is known, removing a major uncertainty. A secondary issue associated with not knowing the exact solution is the implication for the nature of the solution itself. With an exact solution, a mathematical structure exists allowing the solution to be achieved analytically. Furthermore, exact solutions are limited to relatively simple models that often cannot model reality. Thus, the modeling approach to which solution verification is applied is necessarily more complex. All of these factors are confounding and produce a more perilous environment in which to conduct verification. The key product of solution verification is an estimate of numerical error, and the secondary product is the rate of convergence. Both of these quantities are important to consider in the analysis.

The way to cope with this generally more hostile analysis environment involves improved analysis methods. One of the key elements in the analysis is contending with the lack of certainty about the solution, its nature and its mathematical character. For this reason the knowledge and guarantees about the results are missing. For instance, we don’t know what order of convergence to reasonably expect from the analysis and cannot use this to screen our results. Generally speaking, if the verification result shows convergence at the theoretical rate for the method, we can be sure we are solving a relatively simple “easy” problem. Usually the applied problems that modeling & simulation are attacking are mathematically difficult. Philosophically, the whole reason for modeling & simulation is solving problems that are beyond our analytical grasp. In a deep sense the complex and difficult character of problems is unavoidable for the practical use of modeling with computers. When we’ve successfully attacked the problem of verification for a problem without an exact solution, the same analysis methodology can improve our code verification practice.

It is important to understand solution verification within the broader context of computational modeling. Solution verification contributes to the overall enterprise of analysis and uncertainty quantification. The most classical investigation will involve comparing the modeled results with observations in the real World (ideally an experiment). There are many elements to the uncertainty in this case including the model parameters, the constitutive properties, the experimental measurements and the numerical solution. Solution verification is the process for examining and estimating the numerical error and specifying its uncertainty. Sometimes this is applied in the use of computational modeling for purposes of decision-making or scenario testing where no real World data exists. In this case the numerical error is an important element in the overall lack of certainty about the results. If the numerical error is well behaved it will be a bias from the exact continuum solution to the model. This bias is important to understand in terms of how it might skew the results and any advice.

There are two ways to do great mathematics. The first is to be smarter than everybody else. The second way is to be stupider than everybody else — but persistent.

― Raoul Bott

When one lays out the mathematical framework for solution verification, the immediate impression is that the added difficulty compared to code verification is the lack of direct knowledge of the precise solution. The full solution to the problem is inferred from the inaccurate numerical solutions. The equation to solve is the following, S_0 = S_k + C h_k^a, where the new unknown is the ostensible estimate of the exact solution S_0, that is, the solution where h=0. The solutions used to determine this estimate are S_k, the solutions found with mesh sizes h_k. We notice that we have three unknowns, S_0, C and a, meaning the well-determined problem requires three pieces of data, S_k. As we will discuss, this problem can be solved in a variety of ways including under-, fully and over-determined forms.

One of the key issues to recognize in solving this problem is its complexity arising from the general nonlinearity of determining the model. The solution to this coupled system of nonlinear equations is generally subtle, and necessarily solved numerically. As such, the solution can have its own errors, requiring some care and verification. The system of equations admits a simple analytical solution in the special case where the discrete solutions use a sequence of meshes with a constant refinement ratio r = h_k/h_{k+1}. In this case we can write the rate in closed form, a = \log (E_{1,2}/E_{2,3}) / \log (r), where E_{j,k} = S_j - S_k. More generally we need to attack this with a coupled nonlinear solve. If we deal with an over-determined version of the problem we will use a nonlinear least squares solver (or this is the knee-jerk response). As we discuss next, thinking about this decision opens the door to some more interesting and robust choices.
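To make the constant-ratio case concrete, here is a minimal Mathematica sketch; the function name ThreeGridEstimate and the convention that s1 is the coarsest-grid solution are choices made for this illustration, not a standard.

(* Three-grid estimate assuming a constant refinement ratio r = h1/h2 = h2/h3 > 1,    *)
(* with s1, s2, s3 the solutions on the coarse, medium and fine grids (h1 > h2 > h3). *)
ThreeGridEstimate[s1_, s2_, s3_, r_] := Module[{a, s0},
  a = Log[(s1 - s2)/(s2 - s3)]/Log[r];  (* observed convergence rate *)
  s0 = s3 - (s2 - s3)/(r^a - 1);        (* extrapolated h -> 0 solution estimate *)
  {s0, a}]
(* e.g., ThreeGridEstimate[0.3277, 0.3487, 0.3585, 2] with the forward Euler data tabulated below *)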

The general over-determined version of the solution verification equation (i.e., more than three grids) is amenable to solution via a nonlinear least squares method. This is not the only choice, and consideration of this opens the door to other choices. The solution to the over-determined problem is not unique, and the solution carries the imprint of the method of solution. As such, the choice of least squares implies a number of assumptions that the typical practitioner doesn’t even know they are making. For example, one may choose to solve the over-determined problem in a different norm than the two norm (i.e., least squares). One may choose to solve a constrained problem instead of an unconstrained problem. In addition, one could consider solving an under-determined problem by adding either constraints or regularization. A classical example of regularization is the Tikhonov method, where a penalty is added to make the problem well determined. A popular recent approach uses a similar regularization, but in the one norm (compressed sensing, LASSO, …).
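As an illustration (and not the implementation from the Robust Verification work discussed below), a single constrained fit in a chosen norm takes only a few lines of Mathematica; pNormFit, p and aMax are names invented for this sketch, and the data are the forward Euler values tabulated later in this post.

(* Fit S(h) = s0 + c h^a to (h, S) pairs by minimizing the p-norm of the residuals, *)
(* subject to an expert-judgment upper bound aMax on the convergence rate.          *)
solData = {{0.20, 0.3277}, {0.10, 0.3487}, {0.05, 0.3585}, {0.02, 0.3642}, {0.01, 0.3660}};
pNormFit[p_, aMax_] := NMinimize[
  {Total[(Abs[#[[2]] - (s0 + c #[[1]]^a)]^p) & /@ solData],  (* p = 2 is least squares, p = 1 resists outliers *)
   0 < a <= aMax},
  {s0, c, a}]
(* pNormFit[2, 3] or pNormFit[1, 3] *)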

There are several practical issues related to this whole thread of discussion. One often encountered and extremely problematic issue is absurdly high convergence rates. After one has been doing verification, or watching others do it, for a while, the analysis will sometimes produce an extremely high convergence rate. For example, a second order method used to solve a problem will produce a sequence that yields a seemingly 15th order solution (this example is given later). This is ridiculous and results in woeful estimates of numerical error. A result like this usually indicates a solution on a tremendously under-resolved mesh, and a generally unreliable simulation. This is one of those things that analysts should be mindful of. Constrained solution of the nonlinear equations can mitigate this possibility and exclude it a priori. This general approach, including the solution with other norms, constraints and other aspects, is explored in the paper on Robust Verification. The key concept is that the solution to the error estimation problem is not unique and is highly dependent upon assumptions. Different assumptions lead to different results and can be harnessed to make the analysis more robust and impervious to issues that might derail it.

The techniques discussed in that paper were originally devised to deal with the all-too-common case where only one or two different grids are used and the error estimation problem is under-determined. The approach taken to solve this problem involves adding constraints to the solution based on expert knowledge and judgment. The overall approach was then generalized when it was realized that the under-, fully- and over-determined cases should all be treated consistently. The verification problem is solved repeatedly using different assumptions, resulting in a natural variation in the results and providing an uncertainty in the error estimate and the rate of convergence. If the data is self-consistent with a well-defined solution, the uncertainty in the error will itself be small and the convergence rate will also be certain. Conversely, if the data is conflicting or opposes expert expectations, the uncertainty will be large. This entire methodology produces a more robust numerical uncertainty that adapts to the data, and avoids using fixed safety factors. It turns out that this expert judgment is usually called into action in verification anyway, but in an ad hoc manner and only when the issues are serious. Robust verification adds the expert judgment from the outset so that more subtle issues are subject to the same treatment.

Instead of solving the verification equation once using a nonlinear least squares approach, robust verification solves the problem in a multitude of ways. This involves solving the verification problem using other error norms in a constrained minimization framework. The data is also reused in different subsets and weightings. One standard assumption is that the solutions on the finer grids (smaller h) are closer to the exact solution, and this data is given more prominence in the solution. The end result of the analysis is a multitude of estimates of the numerical error and convergence rate. These results are then subjected to robust statistical examination using median statistics. We report the median of the estimates as the error and convergence rate. The median deviation is used to place an uncertainty on this estimate. One of the key benefits of this estimation is its lack of susceptibility to corruption by outliers in the analysis. Outliers are further suppressed in the analysis by the use of expert judgment as constraints. For example, absurdly large convergence rates are removed by the constraints if the rate of convergence is constrained to be below a given value.
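A sketch of that final pooling step, assuming a list of rate (or error) estimates has already been generated from the different norms, constraints and data subsets; robustSummary is a name invented for this illustration.

(* Median and median absolute deviation over a pool of estimates; both resist outliers. *)
robustSummary[estimates_] := Module[{med},
  med = Median[estimates];
  {med, Median[Abs[estimates - med]]}]
(* e.g., robustSummary[{1.02, 1.05, 0.98, 1.04, 15.8}] -- the absurd 15.8 barely moves the result *)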

Before moving to examples of solution verification we will show how robust verification can be used for code verification work. Since the error is known, the only uncertainty in the analysis is the rate of convergence. We can immediately notice that this technique removes a crucial ambiguity in the analysis. In standard code verification analysis, the rate of convergence is never exactly the formal order, and expert judgment is used to determine whether the result is close enough. With robust verification, the convergence rate has an uncertainty, and the question of whether the exact value is included in the uncertainty band can be asked. Before showing the results for this application of robust verification, we need to note that the exact rate of convergence is only the asymptotic rate in the limit of h = 0. For a finite step size the rate of convergence should deviate from this value, and for simple cases the deviation can be derived using a modified version of classical numerical analysis.

Our first example of solution verification will repeat our examination of simple ODE integrators, but disregard our knowledge of the exact solution. It is a useful example because we can examine the efficacy of solution verification with a precise knowledge of the true errors. We can use the data from our code verification study to good effect here. Here is the raw data used for the forward Euler study.

h          Solution, t=1   Error, t=1
0.20       0.3277          0.0402
0.10       0.3487          0.0192
0.05       0.3585          0.0094
0.02       0.3642          0.0037
0.01       0.3660          0.0018
estimate   0.3678±0.0002

For the code verification part of the example, the estimated error is E = 0.2030 h^{1.0245\pm0.0124}. The error bars do not reach the theoretical convergence rate of one. The data is consistent with the rate being above one (and this is analytically expected). Using this same data for solution verification yields the following model, S(h) = 0.3678 \pm 0.0002 - 0.2080 h^{1.0386 \pm 0.0207}. Close examination shows that this estimate is quite close to the exact solution (within 0.0001) and within the error bars. If we use the standard technique of simply least squares fitting the data we get the following model, S(h) = 0.3677 - 0.2239 h^{1.0717}. The error estimate here is 0.0017, which ends up being rather over-generous when the standard safety factor of 1.25 is applied. Using the robust verification technique we get a better estimate of the exact solution, the actual convergence rate and a tighter error bound.
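For reference, the “standard technique” fit quoted above can be reproduced, at least approximately, with a one-line least squares fit on the tabulated data; the starting values are guesses chosen for this sketch.

feData = {{0.20, 0.3277}, {0.10, 0.3487}, {0.05, 0.3585}, {0.02, 0.3642}, {0.01, 0.3660}};
FindFit[feData, s0 - c h^a, {{s0, 0.37}, {c, 0.2}, {a, 1}}, h]
(* should land near S(h) = 0.3677 - 0.2239 h^1.07 quoted above; the exact value is Exp[-1] = 0.3679 *)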

Supposing is good, but finding out is better.

― Mark Twain

It is also useful to look at a pathological case where the rate of convergence is absurd and standard analysis would be prone to missing it. The case we have at our fingertips involved very coarse grid solutions to large eddy simulation in a complex geometry relevant to heat transfer and fluid flow in nuclear reactors. Early calculations were used to estimate the mesh required for well-resolved calculations. As we found out, this is a perilous enterprise. A couple of codes (one production and one research) were enlisted in this study using some initial grids that were known to be inadequate. One of the codes was relatively well trusted for this class of applications and produced three solutions that to all appearances were reasonable. One of the key parameters is the pressure drop through the test section. Using grids of 664K, 1224K and 1934K elements we got pressure drops of 31.8 kPa, 24.6 kPa and 24.4 kPa respectively. Using a standard curve fit against the effective mesh resolution gave an estimate of 24.3 kPa±0.0080 kPa for the resolved pressure drop and a convergence rate of 15.84. This is an absurd result and needs to simply be rejected immediately. Using the robust verification methodology on the same data set gives a pressure drop of 16.1 kPa±13.5 kPa with a convergence rate of 1.23, which is reasonable. Subsequent calculations on refined grids produced results that were remarkably close to this estimate, confirming the power of the technique even when given data that was substantially corrupted.

Our final example is a simple case of validation using the classical phenomenon of vortex shedding over a cylinder at a relatively small Reynolds number. This is part of a reasonable effort to validate a research code before using it on more serious problems. The key experimental value to examine is the Strouhal number, defined as St = f \ell/U, the shedding frequency f normalized by the cylinder size \ell and the velocity U, which has the experimental value 0.164\pm 0.005 for a flow at Reynolds number 100 (the Reynolds number is the non-dimensional ratio of inertial to viscous forces in a flow).

∆t      RMS h          St
0.002   0.054111988    0.110474853
0.002   0.023801688    0.152492294
0.002   0.010786082    0.164777976
0.002   0.005264375    0.165127187

When we apply the robust verification methodology to this data we find that the code produces a Strouhal number slightly larger than the experimental value, St(h) = 0.1657\pm 0.0013 + C h^{1.8486\pm 0.1476}. Including the error bars recovers the experimental value. This can be regarded as a modest success for the code, supporting its consideration for more complex flows.
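A sketch of the corresponding fit, using the (RMS h, St) pairs from the table above; the model form mirrors the result quoted here, and the starting values are guesses of mine, so the fit should only be expected to land in the neighborhood of the quoted numbers.

stData = {{0.054111988, 0.110474853}, {0.023801688, 0.152492294},
   {0.010786082, 0.164777976}, {0.005264375, 0.165127187}};
FindFit[stData, st0 + c h^a, {{st0, 0.165}, {c, -10}, {a, 2}}, h]
(* should recover a Strouhal number near 0.166 with a rate somewhat below two *)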

The foundation of data gathering is built on asking questions. Never limit the number of hows, whats, wheres, whens, whys and whos, as you are conducting an investigation. A good researcher knows that there will always be more questions than answers.

― Karl Pippart III

Rider, William, Walt Witkowski, James R. Kamm, and Tim Wildey. “Robust verification analysis.” Journal of Computational Physics 307 (2016): 146-163.

 

The Foundations of Verification: Code Verification

A very great deal more truth can become known than can be proven.

― Richard Feynman

In modeling and simulation, verification is a set of activities broadly supporting quality. Verification consists of two modes of practice: code verification, where the mathematical correctness of the computer code is assessed, and solution (calculation) verification, where the numerical error (uncertainty) is estimated. Both activities are closely linked to each other and they are utterly complementary in nature. To a large extent the methodology used for both types of verification is similar, but the differences between the two are important to maintain.

Modeling and simulation is an activity where continuous mathematics is converted to discrete computable quantities. This process involves approximation of the continuous mathematics and in almost every non-pathological circumstance is inexact. The core of modeling and simulation is the solution of (partial) differential equations using approximation methods. Code verification is a means of assuring that the approximations used to make the discrete solution of differential equations tractable on a computer are correct. A key aspect of code verification is determining that the discrete approximation of the differential equation is consistent with the continuous version of the differential equation.

Consistency demands that the order of approximation of the differential equation be at least one. In other words, the discrete equations produce solutions that satisfy the original continuous equations plus terms proportional to the size of the discretization. This character may be examined by solving problems with an exact analytical solution (or a problem with very well controlled and characterized errors) using several discretization sizes, computing the errors, and determining the order of approximation. The combination of consistency and stability of the approximation means the approximation converges to the correct solution of the continuous differential equation.

We will examine both the nature of different types of problems to determine code verification and the methods of determining the order of approximation. One of the key aspects of code verification is the congruence of the theoretical order of accuracy for a method, and the observed order of accuracy. It is important to note that the theoretical order of convergence also depends upon the problem being solved. The problem must possess enough regularity to support the convergence rate expected. At this point it is important to point out that code verification produces both an order of approximation and an observed error in solution. Both of these quantities are important. For code verification, the order of approximation is the primary quantity of interest. It depends on both the nature of the approximation method and the problem being solved. If the problem being solved is insufficiently regular and smooth, the order of accuracy will not match the theoretical expectations of the method.

The second form of verification is solution verification. This is quite similar to code verification, but its aim is the estimation of approximation errors in a calculation. When one runs a problem without an analytical solution, the estimation of errors is more intricate. One looks at a series of solutions and computes the solution that is indicated by the sequence. Essentially one asks what solution the approximation appears to converge toward. If the sequence of solutions converges, the error in the solution can be inferred. As with code verification, the order of convergence and the error are products of the analysis. In contrast to code verification, the error estimate is the primary quantity of interest, and the order of convergence is secondary.


The approach, procedure and methodology for both forms of verification are utterly complementary. Much of the mathematics and flow of work are shared in all verification, but details, pitfalls and key tips differ. In this post the broader themes of commonality are examined along with distinctions and a general rubric for each type of verification is discussed.

Code verification

Science replaces private prejudice with public, verifiable evidence.

― Richard Dawkins

When one conducts a code verification study there is a basic flow of activities and practices. One selects a code to target and a problem to solve. Several key bits of information should be focused upon before the problem is solved. What is the order of accuracy for the method in the code being examined, and what is the order of accuracy that the problem being solved can expose? In addition, the nature of the analytical solution to the problem should be carefully considered. For example, what form does the solution take? Closed form? Series expansion? Numerical evaluation? Some of these forms of analytical solution have errors that must be controlled and assessed before the code’s method may be assessed. By the same token, are there auxiliary aspects of the code’s solution that might pollute results? Solution of linear systems of equations? Stability issues? Computer roundoff or parallel computing issues? In each case these details could pollute results if not carefully excluded from consideration.

Next one needs to produce a solution on a sequence of meshes. For simple verification using a single discretization parameter, only two discretizations are needed (two equations to solve for two unknowns). For code verification the model for error is simple, generally a power law, E = A h^a, where the error is proportional to the discretization parameter h raised to the power (order) a, with A a constant of proportionality. The order a is the target of the study, and one looks at its congruence with the expected theoretical order for the method on the problem being solved. It is almost always advisable to use more than the minimum number of meshes to assure that one simply isn’t examining anomalous behavior from the code.
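For the minimal two-grid case the algebra reduces to a one-liner; a small sketch, assuming the errors against the exact solution have already been computed.

(* Observed order of accuracy from errors e1, e2 measured on meshes h1 > h2. *)
ObservedOrder[e1_, h1_, e2_, h2_] := Log[e1/e2]/Log[h1/h2]
(* e.g., ObservedOrder[0.0402, 0.20, 0.0192, 0.10] gives roughly 1.07 for the forward Euler data shown earlier *)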

One of the problems with code verification is the rarity of the observed order of convergence exactly matching the expected order of convergence. The question of how close is close enough haunts investigations. Invariably the observed order will deviate from the expected order by some amount. The question for the practitioner is how close is acceptable? Generally this question is given little attention. There are more advanced verification techniques that can put this issue to rest by producing uncertainties on the observed order, but the standard techniques simply produce a single result. Usually this results in rules of thumb that are applied in broad strokes, but undermine the credibility of the whole enterprise. Often the criterion is that the observed order should be within a tenth of the theoretically expected result.

Another key caveat comes up when the solution is discontinuous. In this case the observed order is either reduced to (at most) one for nonlinear discontinuities, or weakly tied to the theoretical order of convergence for linear ones. For the wave equation this result was studied by Banks, Aslam and Rider and admits an analytical and firmly determined result. In both cases the issue of inexact congruence with the expected rate of convergence remains. In addition, problems involving systems of equations will have multiple features, each having a separate order of convergence, and the rates will combine within a solution. Ultimately, in an asymptotic sense, the lowest order of convergence will dominate as h \rightarrow 0. This is quite difficult to achieve practically.

The last major issue that comes up in code verification (and solution verification too) is the nature of the discrete mesh and its connection to the asymptotic range of convergence. All of the theoretical results apply when the discretization parameter is small in a broad mathematical sense. This is quite problem specific and generally ill defined. It can generally be assessed by examining the congruence of the numerical derivatives of the solution with the analytical derivatives. When these quantities are in close agreement, the solution can be considered to be asymptotic. Again these definitions are loose and generally applied with a large degree of professional or expert judgment.

It is useful to examine these issues through a concrete problem in code verification. The example I’ll use is a simple ordinary differential equation integrator for a linear equation, u_t = - u, coded up in Mathematica. We could solve this problem in a spreadsheet (like MS Excel), python, or a standard programming language. The example will look at two first order methods, the forward Euler method, u^{n+1} = u^n - h u^n, and the backward Euler method, u^{n+1} = u^n - h u^{n+1}. Both of these methods produce leading first order errors in an asymptotic sense, E = C h + O(h^2). If h is large enough, the higher order terms will pollute the error and produce deviations from pure first-order behavior. Let’s look at this example and the concrete analysis from verification. This will be instructive in getting to similar problems encountered in general code verification.
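To see where this comes from, consider a single forward Euler step for u_t = -u: the scheme advances the solution by the factor (1 - h), while the exact solution advances by e^{-h} = 1 - h + h^2/2 - \cdots. The one-step (local) error is therefore roughly h^2 u^n / 2, and accumulating it over the T/h steps needed to reach a fixed time T gives the global error E = C h + O(h^2) quoted above. This is a standard result, sketched here only to set expectations for the experiment below.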

Here is the code

ForwardEuler[h_, T_, a_] := (
  uo = 1;
  t = 0.0;
  While[t < T,
   (* integration: explicit update for u' = a u *)
   t = t + h;
   un = uo + a h uo;
   Print["t= ", t, " u(t) = ", un, " err = ", Abs[un - Exp[a t]]];
   uo = un
   ];
  )

BackwardEuler[h_, T_, a_] := (
  uo = 1;
  t = 0.0;
  While[t < T,
   (* integration: implicit update for u' = a u *)
   t = t + h;
   un = uo/(1 - a h);
   Print["t= ", t, " u(t) = ", un, " err = ", Abs[un - Exp[a t]]];
   uo = un
   ];
  )

Let’s look at the forward Euler integrator for several different choices of h, different end times for the solution and different numbers of discrete steps. We will do the same thing for the backward Euler method, which differs in being unconditionally stable with respect to step size; for this simple ODE the forward Euler method is only stable up to a step size of h = 2. We can solve the problem (calling the integrators with a = -1) to three stopping times, T=1.0, T=10.0 and T=100.0. The analytical solution is always u(T) = e^{-T}. We can solve this problem using a set of step sizes, h=1.0, 0.5, 0.25, down to 0.03125.

I can give results for various pairs of step sizes with both integrators, and see some common pathologies that we must deal with. Even solving such a simple problem with simple methods can prove difficult and prone to heavy interpretation (arguably the simplest problem with the simplest methods). Very different results are achieved when the problem is run to different stopping times. We see the impact of accumulated error (since I’m using Mathematica, aspects of round-off error are pushed aside). In other settings round-off error would be another complication. Furthermore, the backward Euler method for multiple equations would involve a linear (or nonlinear) solution that itself has an error tolerance that may significantly impact verification results. We see good results for T=1.0 and a systematic deviation for longer ending times. To get acceptable verification results would require much smaller step sizes (for longer calculations!). This shows how easy it is to scratch the surface of really complex behavior in verification that might mask correctly implemented methods. What isn’t so well appreciated is that this behavior is expected and amenable to analysis through standard methods extended to look for it.

h        FE T=1   FE T=10   FE T=100   BE T=1   BE T=10   BE T=100
1        1.64     0.03      ~0         0.79     1.87      16.99
0.5      1.20     0.33      4e-07      0.88     1.54      11.78
0.25     1.08     0.65      0.002      0.93     1.30      7.17
0.125    1.04     0.83      0.05       0.96     1.16      4.07
0.0625   1.02     0.92      0.27       0.98     1.08      2.40
0.03125  1.01     0.96      0.55       0.99     1.04      1.63

Computed order of convergence for forward Euler (FE) and backward Euler (BE) methods for various stopping times and step sizes.
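The entries in the table above can be checked without re-running the integrators, since for this scalar problem both schemes have closed forms for the solution after n steps; FEOrder and BEOrder below are names invented for this sketch.

(* Final-time errors for forward and backward Euler on u' = -u, u(0) = 1, via their closed forms. *)
FEError[h_, T_] := Abs[(1 - h)^Round[T/h] - Exp[-T]]
BEError[h_, T_] := Abs[(1/(1 + h))^Round[T/h] - Exp[-T]]
(* Observed order between step sizes h and h/2. *)
FEOrder[h_, T_] := Log[2, FEError[h, T]/FEError[h/2, T]]
BEOrder[h_, T_] := Log[2, BEError[h, T]/BEError[h/2, T]]
(* e.g., FEOrder[0.125, 1.0] and BEOrder[0.125, 1.0] should land near the 1.04 and 0.96 entries above *)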

Types of Code Verification Problems and Associated Data

 Don’t give people what they want, give them what they need.

― Joss Whedon

The problem types are categorized by the difficulty of providing a solution coupled with the quality of the solution that can be obtained. These two concepts go hand-in-hand. A simple closed form solution is easy to obtain and evaluate. Conversely, a numerical solution of partial differential equations is difficult and carries a number of serious issues regarding its quality and trustworthiness. These issues are addressed by an increased level of scrutiny on the evidence provided by associated data. Each benchmark is not necessarily analytical in nature, and the solutions are each constructed by different means with different expected levels of quality and accompanying data. This necessitates differences in the level of required documentation and accompanying supporting material to assure the user of its quality.

Next, we provide a list of types of benchmarks along with an archetypical example of each. This is intended to be instructive to the experienced reader, who may recognize the example. The list is roughly ordered in increasing level of difficulty and need for greater supporting material.

  • Closed form analytical solution (usually algebraic in nature). Example: Incompressible, unsteady, 2-D, laminar flow over an oscillating plate (Stokes oscillating plate) given in Panton, R. L. (1984). Incompressible Flow, New York, John Wiley, pp. 266-272.
  • Analytical solution with significantly complex numerical evaluation
    • Series solution. Example: Numerous classical problems, in H. Lamb’s book, “Hydrodynamics,” Dover, 1932. Classical separation of variables solution to heat conduction. Example: Incompressible, unsteady, axisymmetric 2-D, laminar flow in a circular tube impulsively started (Szymanski flow), given in White, F. M. (1991). Viscous Fluid Flow, New York, McGraw Hill, pp. 133-134.
    • Nonlinear algebraic solution. Example: The Riemann shock tube problem, J. Gottlieb, C. Groth, “Assessment of Riemann solvers for unsteady one-dimensional inviscid flows of perfect gases,” Journal of Computational Physics, 78(2), pp. 437-458, 1988.
    • A similarity solution requiring a numerical solution of nonlinear ordinary differential equations.
    • Manufactured Solution. Example: Incompressible, steady, 2-D, turbulent, wall-bounded flow with two turbulence models (makes no difference to me), given in Eça, L., M. Hoekstra, A. Hay and D. Pelletier (2007). “On the construction of manufactured solutions for one and two-equation eddy-viscosity models.” International Journal for Numerical Methods in Fluids. 54(2), 119-154.
  • Highly accurate numerical solution (not analytical). Example: Incompressible, steady, 2-D, laminar stagnation flow on a flat plate (Hiemenz flow), given in White, F. M. (1991). Viscous Fluid Flow, New York, McGraw Hill. pp. 152-157.
  • Numerical benchmark with an accurate numerical solution. Example: Incompressible, steady, 2-D, laminar flow in a driven cavity (with the singularities removed), given in Prabhakar, V. and J. N. Reddy (2006). “Spectral/hp Penalty Least-Squares Finite Element Formulation for the Steady Incompressible Navier-Stokes Equations.” Journal of Computational Physics. 215(1), 274-297.
  • Code-to-code comparison data. Example: Incompressible, steady, 2-D, laminar flow over a back-step, given in Gartling, D. K. (1990). “A Test Problem for Outflow Boundary Conditions-Flow Over a Backward-Facing Step.” International Journal for Numerical Methods in Fluids. 11, 953-967.

Below is a list of the different types of data associated with verification problems defined above. Depending on the nature of the test problem only a subset of these data are necessary. This will be provided below the list of data types. As noted above, benchmarks with well-defined closed form analytical solutions require relatively less data than a benchmark associated with the approximate numerical solution of PDEs.

  • Detailed technical description of the problem (report or paper)
  • Analysis of the mathematics of the problem (report or paper)
  • Computer analysis of solution (input file)
  • Computer solution of the mathematical solution
  • Computer implementation of the numerical solution
  • Error analysis of the “exact” numerical solution
  • Derivation of the source term and software implementation or input
  • Computer implementation of the source term (manufactured solution)
  • Grids for numerical solution
  • Convergence and error estimation of approximate numerical solution
  • Uncertainty and sensitivity study of numerical solution
  • Description and analysis of computational methods
  • Numerical analysis theory associated with convergence
  • Code description/manuals
  • Input files for problems and auxiliary software
  • Patch test description, Derivation, input and analysis
  • Unusual boundary conditions (inflow, piston, etc.)
  • Physics restrictions (boundary layer theory, inviscid, …)
  • Software quality documents
  • Scripts and auxiliary software for verification
  • Source code
  • Metric descriptions
  • Verification results including code version, date, etc.
  • Numerical sensitivity studies
  • Feature coverage in verification

Below, we briefly describe the characteristics of each type of benchmark documentation (these could be called artifacts or meta-data) associated with code verification benchmarks. These artifacts take a number of concrete forms such as a written document, computer code, mathematical solution in document or software form, input files for executable codes, input to automatic computer analysis, and output from software quality systems, among others.

  • Detailed technical description of the benchmark (report or paper): This can include a technical paper in a journal or conference proceeding describing the benchmark and its solution. Another form would be an informal or formal report from an institution providing the same information.
  • Analysis of the mathematics (report or paper): For any solution that is closed form, or requiring a semi-analytical solution, the mathematics must be described in detail. This can be included in the paper (report) discussed previously or in a separate document.
  • Computer analysis of solution (input file): If the mathematics or solution is accomplished using a computerized analysis, the program used and the input to the program should be included. Some sort of written documentation such as a manual for the software ideally accompanies this artifact.
  • Computer solution of the mathematical solution: The actual computerized solution of the mathematical problem should be included in whatever form the computerized solution takes. This should include any error analysis completed with this solution.
  • Computer implementation of the numerical solution: The analytical solution should be implemented in a computational form to allow the comparison with the numerical solution. This should include some sort of error analysis in the form of a report.
  • Derivation of the source term and software implementation or input: In the case of the method of manufactured solutions, the source term used to drive the numerical method must be derived through a well-defined numerical procedure. This should be documented through a document, and numerical tools used for the derivation and implementation.
  • Computer implementation of the source term (manufactured solution): The source term should be included in a form amenable to direct use in a computer code. The language for the computer code should be clearly defined as well as the compiler and computer system used.
  • Grids for numerical solution: If a solution is computed using another simulation code all relevant details on the numerical grid(s) used must be included. This could be direct grid files, or input files to well-defined grid generation software.
  • Convergence and error estimation of numerical solution: The numerical solution must include a convergence study and error estimate. These should be detailed in an appropriately peer-reviewed document.
  • Uncertainty and sensitivity study of numerical solution: The various modeling options in the code used to provide the numerical solution must be examined vis-a-vis the uncertainty and sensitivity of the solution to these choices. This study should be used to justify the methodology used for the baseline solution.
  • Description and analysis of computational methods: The methods used by the code used for the baseline solution must be completely described and analyzed. This can take the form of a complete bibliography of readily available literature.
  • Numerical analysis theory associated with convergence: The nature of the convergence and the magnitude of error in the numerical solution must be described and demonstrated. This can take the form of a complete bibliography of readily available literature.
  • Code description/manuals: The code manual and complete description must be included with the analysis and description.
  • Input files for benchmarks and auxiliary software: The input file used to produce the solution must be included. Any auxiliary software used to produce or analyze the solution must be fully described or included.
  • Unusual boundary conditions (inflow, piston, outflow, Robin, symmetry, …): Should the benchmark require unusual or involved boundary or initial conditions, these must be described in additional detail including the nature of implementation.
  • Physics restrictions (boundary layer theory, inviscid, parabolized Navier-Stokes, …): If the solution requires the solution of a reduced or restricted set of equations, this must be fully described. Examples are boundary layer theory, truly inviscid flow, or various asymptotic limits.
  • Software quality documents: For non-commercial software used to produce solutions, the software quality pedigree should be clearly established by documenting the software quality and the steps taken to assure the maintenance of that quality.
  • Scripts and auxiliary software for verification: Auxiliary software or scripts used to determine the verification or compute error estimates for a software used to produce solution should be included.
  • Source code: If possible the actual source code for the software along with instructions for producing an executable (makefile, scripts) should be included with all other documentation.
  • Metric descriptions: A full mathematical or computational description of the metrics used in error analysis and evaluation of the solution implementation or numerical solution.
  • Verification results including code version, date, and other identifying characteristics: The verification basis for the code used to produce the baseline solution must be included. This includes any documentation of verification, peer-review, code version, date completed and error estimates.
  • Feature coverage in verification: The code features covered by verification benchmarks must be documented. Any gaps where the feature used for the baseline solution are not verified must be explicitly documented.

Below are the necessary data requirements for each category of benchmark, again arranged in order of increasing level of documentation required. For completeness, each data type would be expected to be available to describe a benchmark of a given type.

  • Common elements for all types of benchmarks (it is notable that proper verification using an analytical solution results in the most compact set of data requirements; manufactured solutions are similar).
  1. Paper or report
  2. Mathematical analysis
  3. Computerized solution and input
  4. Error and uncertainty analysis
  5. Computer implementation of the evaluation of the solution
  6. Restrictions
  7. Boundary or initial conditions
  • Closed form analytical solution
  1. Paper or report
  2. Mathematical analysis
  3. Computerized solution and input
  4. Error and uncertainty analysis
  5. Computer implementation of the evaluation of the solution
  6. Restrictions
  7. Boundary or initial conditions
  • Manufactured Solution
  1. Paper or report
  2. Mathematical analysis
  3. Computational solution and input
  4. Error and uncertainty analysis
  5. Computer implementation of the evaluation of the solution
  6. Derivation and implementation of the source term
  7. Restrictions
  8. Boundary or initial conditions
  • Numerical solution with analytical solution
  • Series solution, Nonlinear algebraic solution, Nonlinear ODE solution
  1. Paper or report
  2. Mathematical analysis
  3. Computerized solution and input
  4. Error and uncertainty analysis
  5. Computer implementation of the evaluation of the solution
  6. Input files
  7. Source code
  8. Source code SQA
  9. Method description and manual
  10. Restrictions
  11. Boundary or initial conditions
  • Highly accurate numerical solution (not analytical), numerical benchmarks or code-to-code comparisons.
  1. Paper or report
  2. Mathematical analysis
  3. Computational solution and input
  4. Error and uncertainty analysis for the solution
  5. Computer implementation of the evaluation of the solution
  6. Input files
  7. Grids
  8. Source code
  9. Source code SQA
  10. Method description and manual
  11. Method analysis
  12. Method verification analysis and coverage
  13. Restrictions
  14. Boundary or initial conditions

The use of direct numerical simulation requires a similar or even higher level of documentation than analytical solutions. This coincides with the discussion of the last type of verification benchmark, where a complex numerical method with significant approximations is utilized to produce the solution. As a numerically computed benchmark, the burden of proof is much larger. Code verification is best served by exact analytical solutions because of the relative ease in assuring benchmark solution accuracy. Nonetheless, code-to-code comparison remains a common practice due to its inherent simplicity. It also appeals to those who have a vested interest in the solutions produced by a certain computer code. The credibility of the comparison is predicated on the credibility of the code producing the “benchmark” used as the surrogate for truth. Therefore the documentation of the benchmark must provide the basis for the credibility.

The use of DNS as a surrogate for experimental data has received significant attention. This practice violates the fundamental definition of validation we have adopted because no observation of the physical world is used to define the data. This practice also raises other difficulties, which we will elaborate upon. First, the DNS code itself requires that its verification basis be further augmented by a validation basis for its application. This includes all the activities that would define a validation study, including experimental uncertainty analysis and both numerical and physical (equation-based) error analysis. Most commonly, the DNS serves to provide validation data, but the DNS contains approximation errors that must be estimated as part of the “error bars” for the data. Furthermore, the code must have documented credibility beyond the details of the calculation used as data. This level of documentation again takes the form of the last form of verification benchmark introduced above because of the nature of DNS codes. For this reason we include DNS as a member of this family of benchmarks.

There are two ways to do great mathematics. The first is to be smarter than everybody else. The second way is to be stupider than everybody else — but persistent.

― Raoul Bott

Banks, Jeffrey W., T. Aslam, and William J. Rider. “On sub-linear convergence for linearly degenerate waves in capturing schemes.” Journal of Computational Physics 227, no. 14 (2008): 6985-7002.

Greenough, J. A., and W. J. Rider. “A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov.” Journal of Computational Physics 196, no. 1 (2004): 259-281.

Kamm, James R., Jerry S. Brock, Scott T. Brandon, David L. Cotrell, Bryan Johnson, Patrick Knupp, W. Rider, T. Trucano, and V. Gregory Weirs. Enhanced verification test suite for physics simulation codes. No. LLNL-TR-411291. Lawrence Livermore National Laboratory (LLNL), Livermore, CA, 2008.

Rider, William J., James R. Kamm, and V. Gregory Weirs. “Verification, Validation and Uncertainty Quantification Workflow in CASL.” Albuquerque, NM: Sandia National Laboratories (2010).

Rider, William J., James R. Kamm, and V. Gregory Weirs. “Procedures for Calculation Verification.” Simulation Credibility (2016): 31.

How’s? What’s? Why’s?

He who has a why to live for can bear almost any how.
― Friedrich Nietzsche

At work we often justify the research we do by declaring that it is mission-relevant, or mission-focused. The work is automatically important and necessary if it supports our mission. Defining the mission is then essential to this conversation. Currently in my work, the discussion of what the mission is focuses on high performance computing. The pregnant question is whether my work’s mission is high performance computing.

I unilaterally reject this as a mission.

High performance computing is a “how” and so is “modeling and simulation” for that matter. Both are tools to conduct science and engineering specialized to a purpose. Neither is a viable mission or reason in and of itself. Missions are better defined as “what’s” like nuclear weapons, economic competitiveness or scientific investigation. High performance computing is how modeling and simulation is done, which in turn is how aspects of nuclear weapons work, science or industrial work get done, but certainly not all of any of these. We still haven’t gotten to why we do these things. Why we fund high performance computing for modeling and simulation to support the nuclear weapons stockpile is an intricate question worth some further exploration.

A knee-jerk response is “National Security,” which avoids a deeper discussion. The defense of a Nation State is associated with the ability of the citizens of that Nation to achieve a degree of access to resources that raises their access to a good life. With more resources the citizens can aspire toward a better, easier, more fulfilled life. In essence the security of a Nation can allow people to exist higher on Maslow’s hierarchy of needs. In the United States this is commonly expressed as “freedom”. Freedom is a rather superficial thing when used as a slogan. The needs of the citizens begin with having food and shelter, which then allow them to aspire toward a sense of personal safety. Societal safety is one means of achieving this (note that safety and security are pretty low on the hierarchy). With these in hand, the sense of community can be pursued and then a sense of an esteemed self. Finally we get to the peak and the ability to pursue one’s full personal potential.

At the lowest part of the hierarchy is subsistence, the need for basic resources to survive. If one exists at this level, life isn’t very good, but its achievement is necessary for a better life. Gradually one moves up the hierarchy, requiring greater access to resources and greater ease of maintaining the lower positions on the hierarchy. A vibrant National Security should allow this to happen; the richer a Nation becomes, the higher on the hierarchy of needs its citizens reside. It is with some recognition of irony that my efforts and the Nation are stuck at such a low level on the hierarchy. Efforts toward bolstering the community the Nation forms seem to be too difficult to achieve today. We seem to be regressing from being a community or achieving personal fulfillment. We are stuck trying to be safe and secure. The question is whether those in the Nation can effectively provide the basis for existing high on the hierarchy of needs without being there themselves.

My observation about my work is that the people doing the work to support National Security are moving to lower and lower levels of the hierarchy by being isolated from the “why’s” of their work, and pushed into a subsistence existence focused on the “how’s”. Increasingly the work is even divorced from the “what’s” and the “why” is never even considered. As a result people simply do what they are told without considering what it is for, or why they are doing it. The result is a decline in the quality and applicability of the foundational work, which should adapt to the needs of its use and be inspired by the underlying reasons. This issue is rampant in high performance computing, where its utility for modeling and simulation is intellectually threadbare, and those working in computing barely consider what any of their work will be used for.

We are seeing our scientific community pushed to ever lower rungs of Maslow’s pyramid. Part of this is the pervasive distrust of experts and education in the United States and perhaps the entirety of the West. These problems are harbingers of decline and hardly support the expansion and vibrancy of democracy or freedom.


Good Validation Practices are our Greatest Opportunity to Advance Modeling and Simulation

It doesn’t matter how beautiful your theory is … If it doesn’t agree with experiment, it’s wrong.

― Richard Feynman

It is an oft-stated maxim that we should grasp the lowest hanging fruit. In real life this fruit often hides in plain sight, with modeling and simulation being a prime example in my mind. Even a casual observer could see that the emphasis today is focused on computing speed and power as the path to the future. At the same time one can also see that the push for faster computers is foolhardy and hardly comes at an opportune time. Moore’s law is dead, and may be dead at all scales of computation. It may be the highest hanging fruit, pursued at great cost, while lower hanging fruit rots away without serious attention, or even with conscious neglect. Perhaps nothing typifies this issue more than the state of validation in modeling and simulation.

Validation can be simply stated, but is immensely complex to do correctly. Simply put, validation is the comparison of observations with modeling and simulation results with the intent of understanding the fitness of the model for its intended purpose. More correctly, it is an assessment of modeling correctness, which demands observational data to ground the comparison in reality. It involves a deep understanding of experimental and observational science including inherent error and uncertainty. It also involves an equally deep understanding of the errors and uncertainty of the model. It must be couched in the proper philosophical context, including understanding what a model is. Each of these endeavors is in itself a complex and difficult professional activity, and validation is the synthesis of all of it. Being so complex and difficult, it is rarely done correctly, and its value is grossly underappreciated. A large part of the reason for this state of affairs is the tendency to completely accept genuinely shoddy validation. I used to give a talk on the validation horrors in the published literature, and finding targets for critique basically comes down to looking at almost any paper that does validation. The hard part is finding examples where the validation is done well.


One of the greatest tenets of modeling is the quote by George Box, “all models are wrong, but some are useful.” We have failed to recognize one of the most important, but poorly appreciated, maxims of modeling and simulation, a corollary to Box’s observation: no amount of computer speed, algorithmic efficiency or accuracy of approximation can make a bad model better. If the model is wrong, solving it faster or more accurately or more efficiently will not improve it. Questions that should immediately come to mind are “what is useful?”, “what is bad?” and “what is better?” In a deep sense these questions are completely answered by a comprehensive validation assessment of the simulation of a model. One needs to define what is bad and what is better. Both concepts depend deeply upon deciding what one wants from a model. What is its point and purpose, and most likely what question is it designed to answer? A question to start things off is “what is a model?”

“What is a model?”

A model is virtually everything associated with a simulation including the code itself, the input to the code, the computer used for the computation, and the analysis of the results. Together all these elements comprise the model. At the core of the model and the code are the theoretical equations being used to simulate the real World. More often than not, this is a system of differential equations or something more complex (like integro-differential equations, for example). These equations are then solved using methods, approximations and algorithms, all of which leave their imprint on the results. Putting all of this together involves creating a computer code, creating a discrete description of the World and computing the result. Each of these steps constitutes a part of the model. Once the computation has been completed, the results need to be analyzed and conclusions drawn out of the mountain of numbers produced by the computer. All of these comprise the model we are validating. To separate one thing from another requires good disciplined work and lots of rigor. Usually this discipline is lacking and rigor is replaced by assumptions and slothful practices. In very many cases we are watching willful ignorance in action, or simple negligence. We know how to do validation; we simply don’t demand that people practice it. People are often comforted by not knowing and don’t want to actually understand the depth of their structural ignorance.

Science is not about making predictions or performing experiments. Science is about explaining.

― Bill Gaede

Observing and understanding are two different things.

― Mary E. Pearson

To conduct a validation assessment you need observations to compare to. This is an absolute necessity; if you have no observational data, you have no validation. Once the data is at hand, you need to understand how good it is. This means understanding how uncertain the data is. This uncertainty can come from three major aspects of the process: errors in measurement, errors in statistics, and errors in interpretation. In the order mentioned, each of these categories becomes more difficult to assess and less commonly assessed in practice. Most commonly assessed is measurement error, that is, the uncertainty in the value of a measured quantity. This is a function of the measurement technology or the inference of the quantity from other data. The second aspect is associated with the statistical nature of the measurement. Is the observation or experiment repeatable? If it is not, how much might the measured value differ due to changes in the system being observed? How typical are the measured values? In many cases this issue is ignored in a willfully ignorant manner. Finally, the hardest part is observational bias, often defined by the question, “how do we know that we are measuring what we think we are?” Is there something systematic in our observed system that we have not accounted for that might be changing our observations? This may come from some sort of problem in calibrating measurements, or from looking at the observed system in a manner that is inconsistent. These all lead to potential bias and distortion of the measurements.

The intrinsic benefit of this approach is a systematic investigation of the ability of the model to produce the features of reality. Ultimately the model needs to produce the features of reality that we care about, and can measure. It is good to balance this combination in the process of validation: the ability to reproduce the aspects of reality necessary to conduct engineering and science, but also general observations. A really good confidence builder is the ability of the model to produce proper results on the things we care about as well as those we don’t. One of the core issues is the high probability that many of the things we care about in a model cannot be observed, and the model acts as an inference device for science. In this case the observations act to provide confidence that the model’s inferences can be trusted. One of the keys to the whole enterprise is understanding the uncertainty intrinsic to these inferences, and good validation provides essential information for this.

One of the things few people recognize is the inability of other means to provide remediation from problems with the model. If a model is flawed there is no amount of computer power that can rectify its shortcomings. A computer of infinite speed would (should) only make the problems more apparent. This obvious outcome only becomes available with a complete, rigorous and focused validation of the model. Slipshod validation practices simply allow the wrong model to be propagated without necessary feedback. It is bad science plain and simple. No numerical method or algorithm in the code could provide relief either. The leadership in high performance computing is utterly oblivious to this. As a result almost no effort whatsoever is being put into validation, and models are being propagated forward without any thought regarding their validity. No serious effort exists to put the models to the test either. If our leadership is remotely competent this is an act of willful ignorance, i.e., negligence. While our models today are wonderful in many regards, they are far from perfect (remember what George Box said!). A well-structured scientific and engineering enterprise would make this evident, and employ means to improving them. These new models would open broad new vistas of utility in science and engineering. A lack of recognition of this opportunity makes modeling and simulation self-limiting in its impact.

A prime example where our modeling and simulation are deficient is reproducing the variability seen in the real World. In many cases the experimental practice is equally deficient. For most phenomena, events and engineered products of genuine interest and challenge, the exact same response cannot be reproduced. There are variations in the response because of small differences in the system being studied, coming from external conditions (boundary conditions), from the state of the system (initial conditions), or simply from a degree of heterogeneity in the system itself. In many cases the degree of variation in response is very large and terribly important. In engineered systems this leads to the application of large and expensive safety factors along with the risk of disaster. The severity depends to some extent on the nature of the response being sought: the more localized the response, the greater the tendency to be variable, while global, integrated responses can be far more reliably reproduced.
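
A toy illustration of that last point, with an entirely made-up "system" standing in for uncontrolled initial conditions (nothing here comes from a real model), is to build an ensemble of responses and compare the scatter of a peak value against the scatter of a domain average:

```python
import numpy as np

rng = np.random.default_rng(1)

def response(n_cells=200, n_modes=20):
    """Toy system response: a smooth base profile plus random small-scale
    perturbations standing in for uncontrolled initial conditions."""
    x = np.linspace(0.0, 1.0, n_cells)
    field = np.sin(np.pi * x)
    for k in range(1, n_modes + 1):
        amp = rng.normal(scale=0.05 / k)
        phase = rng.uniform(0.0, 2.0 * np.pi)
        field += amp * np.sin(2.0 * np.pi * k * x + phase)
    return field

ensemble = np.array([response() for _ in range(500)])
peak = ensemble.max(axis=1)    # a localized, worst-case style response
mean = ensemble.mean(axis=1)   # a global, integrated response

print("peak: coefficient of variation = %.2f%%" % (100 * peak.std() / peak.mean()))
print("mean: coefficient of variation = %.2f%%" % (100 * mean.std() / abs(mean.mean())))
```

The integrated response averages the fluctuations away and barely varies across the ensemble, while the localized peak varies by several percent, which is exactly the pattern that drives safety factors in engineered systems.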

Our scientific and engineering attention is being drawn increasingly to the local responses for significant events, and their importance is growing. These are often worst-case conditions that we strive to avoid. At the same time our models are ill-suited to address these responses; they cannot effectively simulate these sorts of features. Our models are almost without exception mean-field models producing the average system, with far more homogeneous properties and responses than seen in reality. As such, the extremes in response are removed a priori. By the same token our observational and experimental practices are not arrayed to unveil this increasingly essential aspect of reality. The ability of modeling and simulation to impact the real World suffers, and its impact is limited by this failure to progress.

…if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.

― Richard Feynman

One of the greatest issues in validation is "negligible" errors and uncertainties. In many cases these errors are negligible by assertion and no evidence is given. A standing suggestion is that any negligible error or uncertainty be given a numerical value along with evidence for that value. If this cannot be done, the assertion is most likely specious, or at least poorly thought through. If you know it is small, then you should know how small and why. More likely the claim is based on some combination of laziness and wishful thinking. In other cases this practice is an act of negligence, and worse yet it is simply willful ignorance on the part of practitioners. This is an equal opportunity issue for computational modeling and experiments. Often (almost always!) numerical errors are completely ignored in validation. The most brazen violators will simply assert that the errors are small or that the calculation is converged without offering any evidence beyond authority.

The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.

― Daniel J. Boorstin

Similarly, in experiments, measurements will be offered without any estimate of measurement error, often with nothing more than an assertion that the error is too small to be concerned about. Experimental or observational results are also highly prone to ignore variability in outcomes and treat each case as a well-determined result, even when the physics of the problem is strongly dependent on the details of the initial conditions (or the prevailing models strongly imply this!). Similar sins are committed with modeling uncertainties, where an incomplete assessment of uncertainty is made with no accounting of the incompleteness and its impact. To make matters worse, other obvious sources of uncertainty are ignored. The result of these patterns of conduct is an almost universal under-estimate of uncertainty from both modeling and observations. This under-estimate results in modeling and simulation being applied in a manner that is non-conservative from a decision-making perspective.

The result of these rather sloppy practices is a severely limited capacity to offer a proper assessment of model validation. Complete uncertainty estimates can produce the definitive results that offer feedback on modeling. If uncertainties can be driven small enough, we can drive improvement in the underlying science and engineering. For example, very precise and well-controlled experiments with small uncertainties can produce evidence that models must be improved. Exceptionally small modeling uncertainty could produce a similar effect in pushing experiments. Too often the work is conducted with a strong confirmation bias that takes the possibility of model incorrectness off the table. The result is a stagnant situation where models are not improving and shoddy professional practice is accepted. All of this stems from a lack of understanding of, or priority for, proper validation assessment.
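
A minimal sketch of why the quality of the uncertainties matters (this is a crude interval-style check, not any standard validation metric, and the numbers are invented): a model-data comparison only gives feedback when the stated uncertainties are honest.

```python
import math

def consistent(sim, u_sim, obs, u_obs, k=2.0):
    """Crude consistency check: is |sim - obs| within k combined standard
    uncertainties?  The verdict is only as meaningful as the completeness
    of u_sim and u_obs."""
    u_comb = math.sqrt(u_sim ** 2 + u_obs ** 2)
    return abs(sim - obs) <= k * u_comb, u_comb

# With small, honest uncertainties the 6% discrepancy is resolved and the
# model is flagged as needing improvement.
print(consistent(sim=1.06, u_sim=0.01, obs=1.00, u_obs=0.02))
# With sloppy, inflated uncertainties the same discrepancy is swallowed
# and the feedback on the model never appears.
print(consistent(sim=1.06, u_sim=0.03, obs=1.00, u_obs=0.05))
```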

Confidence is ignorance. If you’re feeling cocky, it’s because there’s something you don’t know.

― Eoin Colfer

A mature realization for scientists is that validation is never complete. Models are validated, not codes. The model is a broad set of simulation features, including the model equations and the code, but also a huge swath of other things. Validation is simply an assessment of all those things: it asks whether the model and the data are consistent with each other given the uncertainties in each, and it is predicated on the completeness of the uncertainty estimation. In the grand scheme of things one wants to drive the uncertainties down in either the model or the observations of reality. The big scientific endeavor is locating the source of error in the model: is it in how the model is solved, or are the model equations flawed? A flawed theoretical model can be a major scientific result requiring a deep theoretical response. Repairing these flaws can open new doors of understanding and drive our knowledge forward in miraculous ways. We need to adopt practices that allow us to identify the problems for which new models are needed. Current modeling and simulation practice removes this outcome as a possibility at the outset.

A man is responsible for his ignorance.

― Milan Kundera

Rider, William J. A Rogue’s Gallery of V&V Practice. No. SAND2009-4667C. Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States), 2009.

Rider, William J. What Makes A Calculation Good? Or Bad?. No. SAND2011-7666C. Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States), 2011.

Rider, William J. What is verification and validation (and what is not!)?. No. SAND2010-1954C. Sandia National Laboratories, 2010.

Tricks of the trade: Making a method robust

Rigor alone is paralytic death, but imagination alone is insanity.

― Gregory Bateson

Solving hyperbolic conservation laws provides a rich vein of knowledge to mine and utilize more generally. Most canonically this involves solving the equations of gas dynamics, but the lessons apply to a host of other equations such as magnetohydrodynamics, multi-material hydrodynamics, oil-water flow, solid mechanics, and on and on. Gas dynamics is commonly visible in the real world, and includes lots of nonlinearity, structure and complexity to contend with: compressible flow structures such as shock waves, rarefactions and contact discontinuities. In more than one dimension you have shear waves, and instabilities that ultimately lead to turbulent flows. With all of these complications to deal with, these flows present numerous challenges to robust, reliable and accurate simulation. As such, gas dynamics provides a great springboard for robust methods in many fields as well as a proving ground for much of applied math, engineering and physics. It provides a wonderful canvas for the science and art of modeling and simulation to be improved in the vain, but beneficial, attempt at perfection.

Getting a basic working code can come through a variety of ways using some basic methods that provide lots of fundamental functionality. Some of the common options are TVD methods (ala Sweby for example), high order Godunov methods (ala Colella et al), FCT methods (ala Zalesak), or even WENO (Shu and company). For some of these methods the tricks leading to robustness make it into print (Colella, Zalesak come to mind in this regard). All of these will give a passable to even a great solution to most problems, but still all of them can be pushed over the edge with a problem that’s hard enough. So what defines a robust calculation? Let’s say you’ve taken my advice and brutalized your code https://wjrider.wordpress.com/2017/06/09/brutal-problems-make-for-swift-progress/ and now you need to fix your method https://wjrider.wordpress.com/2017/06/16/that-brutal-problem-broke-my-code-what-do-i-do/. This post will provide you with a bunch of ideas about techniques, or recipes to get you across that finish line.

You can’t really know what you are made of until you are tested.

― O.R. Melling

Getting a robust code starts with choosing a strong foundation to work from. Each of the methods above would, to a significant degree, define this. Opinions differ on which of the available options are best, so I won't be prescriptive about it (I prefer high-order Godunov methods, in the interest of transparency). For typical academic problems, this foundation can be drawn from a wide range of available methods, but these methods often are not up to the job in "real" codes. There are a lot more things to add to a method to get you all the way to a production code. These are more than just bells and whistles; the techniques discussed here can be the difference between success and failure. Usually these tricks of the trade are found through hard-fought battles and failures. Each failure offers the opportunity to produce something better and avoid problems. The best recipes produce reliable results for the host of problems you ask the code to solve. A great method won't fall apart when you ask it to do something new either.

The methods discussed above all share some common traits. First and foremost is reliance upon a close-to-bulletproof first-order method as the ground state for the higher-order method. This is the first step in building robust methods: start with a first-order method that is very reliable and almost guaranteed to give a physically admissible solution. This is easier said than done for general cases. We know theoretically that the lowest-dissipation method with all the necessary characteristics is Godunov's method (see Osher's work from the mid-1980s). At the other end of the useful first-order spectrum is the Lax-Friedrichs method, the most dissipative stable method. In a sense these methods give us our bookends; any method we use as a foundation will be somewhere between the two. Still, coming up with a good foundational first-order method is itself an art. The key is choosing a Riemann solver that provides a reliable solution under even pathological circumstances (or, in lieu of a Riemann solver, a dissipation that is super reliable).
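
As a concrete, minimal sketch of such a foundation (for the scalar inviscid Burgers equation rather than full gas dynamics, and with my own function names), here is a first-order finite-volume update using a local Lax-Friedrichs (Rusanov) flux, which sits toward the dissipative, nearly unbreakable end of the spectrum described above:

```python
import numpy as np

def flux(u):
    """Inviscid Burgers flux f(u) = u^2 / 2."""
    return 0.5 * u * u

def rusanov_flux(uL, uR):
    """Local Lax-Friedrichs (Rusanov) numerical flux: a central flux plus
    dissipation proportional to the largest local wavespeed.  Dissipative,
    but very hard to break, which is the point of a first-order bookend."""
    a = np.maximum(np.abs(uL), np.abs(uR))
    return 0.5 * (flux(uL) + flux(uR)) - 0.5 * a * (uR - uL)

def first_order_step(u, dx, cfl=0.9):
    """One forward-Euler step of the first-order finite-volume scheme."""
    amax = np.abs(u).max()
    dt = cfl * dx / max(amax, 1e-14)
    uL, uR = u[:-1], u[1:]                    # interior interfaces
    F = rusanov_flux(uL, uR)
    unew = u.copy()
    unew[1:-1] -= dt / dx * (F[1:] - F[:-1])  # update interior cells only
    return unew, dt

# Shock-forming initial data: a right-moving step.
x = np.linspace(0.0, 1.0, 201)
u = np.where(x < 0.5, 1.0, 0.0)
t, dx = 0.0, x[1] - x[0]
while t < 0.2:
    u, dt = first_order_step(u, dx)
    t += dt
```

The dissipation scales with the largest local wavespeed, which is exactly what makes the scheme forgiving; everything that follows is about adding accuracy without giving up that safety net.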

You cannot build a dream on a foundation of sand. To weather the test of storms, it must be cemented in the heart with uncompromising conviction.

― T.F. Hodge

Without further ado let's get to the techniques that one ought to use. The broadest category of techniques involves adding smart dissipation to methods. This acknowledges that the methods we are using already have a lot of dissipative mechanisms built into them. As a result the added dissipation needs to be selective as hell. The starting point is a careful statement of where the methods already have dissipation. Usually it lies in two distinct places, the most obvious being Riemann solvers or artificial viscosity. The Riemann solver adds an upwind bias to the approximation, which has an implicit dissipative content. The second major source of dissipation is the discretization itself, which can include biases that provide implicit dissipation. For sufficiently complex or nonlinear problems the structural dissipation in the methods is not enough for nonlinear stability. One of the simplest remedies is the addition of another dissipative form. For Godunov methods the Lapidus viscosity can be useful because it works at shocks and adds a multidimensional character. Additional viscosity can be added through the Riemann solvers (via so-called entropy fixes, or by selecting larger wavespeeds, since dissipation is proportional to the wavespeed). It is important that the dissipation be mechanistically different from the base viscosity, meaning that hyperviscosity (https://wjrider.wordpress.com/2016/03/24/hyperviscosity-is-a-useful-and-important-computational-tool/) can really be useful to augment dissipation. A general principle is to provide multiple alternative routes to nonlinear stability that support each other effectively.
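
One concrete form of the "larger wavespeeds" idea, assuming a Roe-type solver whose dissipation scales with the absolute wavespeed, is a Harten-style entropy fix that keeps the wavespeed away from zero; the parameter delta and its scaling are illustrative and problem-dependent.

```python
def entropy_fixed_speed(lam, delta):
    """Harten-style entropy fix: keep |lambda| away from zero so the
    dissipation in an upwind/Roe-type flux never vanishes, for example at
    a sonic point inside a rarefaction.  Choosing delta is part of the art."""
    alam = abs(lam)
    if alam >= delta:
        return alam
    return 0.5 * (alam * alam / delta + delta)

# Near a sonic point the raw |lambda| goes to zero; the fix floors it.
for lam in (1.0, 0.3, 0.05, 0.0):
    print(lam, entropy_fixed_speed(lam, delta=0.1))
```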

The building blocks that form the foundation of your great and successful future, are the actions you take today

― Topsy Gift

The second source of dissipation is the fundamental discretization, which provides it implicitly. One of the key aspects of modern discretizations is the limiter, which provides nonlinear stability by effectively adapting the discretization to the solution. These limiters come in various forms, but they all provide the means for the method to choose a favorable discretization for the nature of the solution (https://wjrider.wordpress.com/2016/06/22/a-path-to-better-limiters/, https://wjrider.wordpress.com/2016/06/14/an-essential-foundation-for-progress/, https://wjrider.wordpress.com/2016/06/03/nonlinear-methods-a-key-to-modern-modeling-and-simulation/ ). One of the ways for additional dissipation to enter the method is through a deliberate choice of different limiters. One can bias the adaptive selection of discretization toward more dissipative methods when the solution calls for more care. These choices are important when solutions have shock waves, complex nonlinear structures, oscillations, or structural anomalies. For example, the minmod-based method is the most dissipative second-order, monotonicity-preserving method. It can serve as a safety net that is less dissipative than the first-order method, although its safety is still predicated on a bulletproof first-order method as a foundation.
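
A sketch of how that biasing might look in practice (the shock sensor is left as a user-supplied flag, the names are mine, and this is not a transcription of any particular code): compute a sharper slope in smooth regions and fall back to minmod where the sensor fires.

```python
import numpy as np

def minmod(a, b):
    """Two-argument minmod: the most dissipative second-order,
    monotonicity-preserving slope choice."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def minmod3(a, b, c):
    """Three-argument minmod used by the monotonized-central (MC) slope."""
    same = (np.sign(a) == np.sign(b)) & (np.sign(b) == np.sign(c)) & (np.sign(a) != 0)
    return np.where(same,
                    np.sign(a) * np.minimum(np.abs(a),
                                            np.minimum(np.abs(b), np.abs(c))), 0.0)

def limited_slopes(u, shock_flag):
    """Per-cell limited slopes: a sharper MC slope in smooth regions,
    falling back to minmod wherever the (user-supplied) sensor fires."""
    dl = u[1:-1] - u[:-2]   # backward difference
    dr = u[2:] - u[1:-1]    # forward difference
    mc = minmod3(0.5 * (dl + dr), 2.0 * dl, 2.0 * dr)
    mm = minmod(dl, dr)
    return np.where(shock_flag[1:-1], mm, mc)
```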

In all but the most ideal circumstances, the added dissipation is not sufficient to produce a robust method. Very strong nonlinear events that confound classical analysis can still produce problems. Oscillations are very difficult to remove from the solutions and can pose distinct threats to the stability of the solution. A common, rather extreme way to deal with these issues is floors and ceilings for various variables. One of the worst things that can happen is a physical quantity moving outside its physically admissible bounds. The simplest example is a density going negative. Density is a positive definite quantity and needs to stay that way for the solution to be physical; it behooves robustness to make sure this does not happen, because when it does it is usually catastrophic for the code. This is a simple case, and in general quantities should lie within reasonable bounds. When quantities fall outside reasonable bounds the code's solutions are usually compromised. It makes sense to guard explicitly against this wherever an out-of-bounds quantity would generate a catastrophic effect. For example, sound speeds involve density and pressure plus a square root operation; a negative value would be a disaster.
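
A minimal sketch of such a guard, with floor values that are purely illustrative (in practice they must be scaled to the problem's units and regime):

```python
import numpy as np

GAMMA = 1.4
RHO_FLOOR = 1.0e-10   # illustrative; must be scaled to the problem
P_FLOOR = 1.0e-12

def safe_sound_speed(rho, p):
    """Ideal-gas sound speed with explicit floors so the square root never
    sees a non-positive argument.  The floor is a last-ditch guard, not a
    substitute for fixing the approximation that produced the bad state."""
    rho = np.maximum(rho, RHO_FLOOR)
    p = np.maximum(p, P_FLOOR)
    return np.sqrt(GAMMA * p / rho)
```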

One can go a long way toward robustness by using methods that more formally produce bounded approximations. The general case of positivity-preserving, or more broadly bound-preserving, approximation has been pursued actively. I described the implementation of methods of this sort earlier (https://wjrider.wordpress.com/2015/08/06/a-simple-general-purpose-limiter/ ). These methods can go a very long way toward giving the robust character one desires, but the other means discussed above are still necessary. A large production code with massive meshes and long run times will access huge swaths of phase space, and as the physical complexity of problems increases, almost anything that can happen will. It is foolish to assume that bad states will not occur. One also must contend with people using the code in ways the developers never envisioned, putting the solver into situations where it must survive even though it was not designed for them. As a consequence it would be foolish to completely remove the sorts of checks that avert disaster (this could be done, but only with rigorous testing far beyond what most people ever do).
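
In the spirit of that kind of bound-preserving limiter (a sketch of the general idea, in the manner of Zhang-Shu style rescaling, not a transcription of the linked post), one can shrink a high-order reconstruction toward the cell average just far enough that it respects prescribed bounds:

```python
import numpy as np

def rescale_to_bounds(ubar, u_face, umin, umax):
    """Shrink face reconstructions toward the cell average ubar just enough
    that they stay inside [umin, umax].  theta = 1 leaves the high-order
    values alone; theta = 0 collapses to the (admissible) cell average."""
    hi = u_face.max(axis=-1)
    lo = u_face.min(axis=-1)
    eps = 1.0e-14
    theta = np.minimum(1.0,
                       np.minimum((umax - ubar) / np.maximum(hi - ubar, eps),
                                  (ubar - umin) / np.maximum(ubar - lo, eps)))
    theta = np.clip(theta, 0.0, 1.0)
    return ubar[..., None] + theta[..., None] * (u_face - ubar[..., None])
```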

What to do concretely is another question with multiple options. One can institute a floating-point trap that locally avoids the possibility of taking the square root of a negative value. This can be done in a variety of ways with differing benefits and pitfalls. One simple approach is to take the square root of the absolute value, or to take the max of the density and some small floor value. This does little to address the core reason the approximations produced an unphysical value, and there is little control on the magnitude of the density (the value can be very small), which has rather unpleasant side effects. A better approach gets closer to the root of the problem, which almost without fail is the inappropriate application of high-order approximations. One remedy is to replace the high-order approximation with a physically admissible low-order approximation. This relies upon the guarantees associated with the low-order (first-order) approximation as a general safety net for the computational method. The reality is that the first-order method can also go bad, so the floating-point trap may well be necessary even there.
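
A sketch of that fallback logic, with illustrative thresholds and my own function names: test the high-order candidate state, keep it where it is admissible, fall back to the first-order update elsewhere, and keep a floor as the last line of defense.

```python
import numpy as np

def admissible(rho, p, rho_min=1.0e-10, p_min=1.0e-12):
    """Physical admissibility test for a gas-dynamic state."""
    return (rho > rho_min) & (p > p_min)

def select_update(rho_hi, p_hi, rho_lo, p_lo):
    """Keep the high-order update where it produced an admissible state;
    fall back to the first-order safety-net update elsewhere.  A floor is
    still applied at the end because the low-order state can, rarely, go
    bad as well."""
    ok = admissible(rho_hi, p_hi)
    rho = np.where(ok, rho_hi, rho_lo)
    p = np.where(ok, p_hi, p_lo)
    return np.maximum(rho, 1.0e-10), np.maximum(p, 1.0e-12)
```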

A basic part of the deterministic solution to many problems is the ability to maintain symmetry. The physical world almost invariably breaks symmetry, but it is arguable that numerical solutions to the PDEs should not (I could provide the alternative argument vigorously too). If you want to maintain such symmetry, the code must be carefully designed to do this. A big source of symmetry breaking is upwind approximations, especially if one chooses a bias where zero isn't carefully and symmetrically treated. One approach is the use of smoothed operators that I discussed at length (https://wjrider.wordpress.com/2017/03/24/smoothed-operators/, https://wjrider.wordpress.com/2017/03/29/how-useful-are-smoothed-operators/, https://wjrider.wordpress.com/2017/04/04/results-using-smoothed-operators-in-actual-code/ ). More generally, the use of "if" tests in the code will break symmetry. Another key area for symmetry breaking is the solution of linear systems by methods that are not symmetry preserving. This means numerical linear algebra needs to be carefully approached.
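
The flavor of the smoothed-operator idea (a sketch of the general trick, with a user-chosen smoothing scale eps; see the linked posts for the full treatment) is to replace hard, branch-based operations with smooth surrogates so the code never has to pick a side at exactly zero:

```python
import numpy as np

def smooth_abs(x, eps):
    """Smooth |x|: essentially exact away from zero, rounded within ~eps of it."""
    return np.sqrt(x * x + eps * eps)

def smooth_max(a, b, eps):
    """Smooth max(a, b) built from smooth_abs via max(a,b) = (a+b+|a-b|)/2;
    there is no branch, so mirror-image inputs give mirror-image outputs."""
    return 0.5 * (a + b + smooth_abs(a - b, eps))

def smooth_min(a, b, eps):
    return 0.5 * (a + b - smooth_abs(a - b, eps))
```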

For gas dynamics, the mathematics of the model gives the problems we solve some very general character. Shock waves are the preeminent feature of compressible gas dynamics, and the predominant focal point for methods development and developer attention. Shock waves are nonlinear and naturally steepen, thus countering dissipative effects. Shocks also benefit from their character as garbage collectors: they are dissipative features and as a result destroy information, and some of this destruction limits the damage done by poor choices of numerical treatment. Being nonlinear, shocks demand care. The very worst thing you can do is add too little dissipation, because this allows the solution to generate unphysical noise or oscillations that are emitted by the shock and then become features of the solution. A lot of the robustness we seek comes from not producing oscillations, which is best achieved with generous dissipation at shocks. Shocks receive so much attention because their improper treatment is utterly catastrophic, but they are not the only issue; the others are just more subtle and less apparently deadly.

Rarefactions are the benign compatriot to shocks. Rarefactions do not steepen and usually offer modest challenges to computations. Rarefactions produce no dissipation, and their spreading nature reduces the magnitude of anything anomalous produced by the simulation. Despite their ease relative to shock waves, rarefactions do produce some distinct challenges. The simplest case involves centered rarefactions where the characteristic velocity of the rarefaction goes to zero. Since dissipation in methods is proportional to the characteristic velocity, the dissipation can vanish there, which can trigger disaster and generate completely unphysical rarefaction shocks (rarefaction shocks can be physical for exotic BZT fluids). More generally, for very strong rarefactions one can see small and very worrisome deviations from adherence to the second law; these should be viewed with significant suspicion. The other worrisome feature of most computed rarefactions is the structure of the head of the rarefaction. Usually there is a systematic bump there that is not physical and may produce unphysical solutions for problems featuring very strong expansion waves. This bump actually looks like a shock when viewed through the lens of Lax's version of the entropy conditions (based on characteristic velocities). This is an unsolved problem at present and represents a challenge to our gas dynamics simulations. The basic issue is that a strong enough rarefaction cannot be solved in an accurate, convergent manner by existing methods.

A third outstanding feature of gas dynamics is the contact discontinuity, classified as a linearly degenerate wave. Contacts are quite similar to linear waves, meaning that the characteristics do not change across the wave; in the ideal analysis the jump neither steepens nor spreads. One key consequence is that any numerical error is permanently encoded into the solution; one needs to be careful with dissipation because it never goes away. For this reason people consider steepening the wave to keep it from artificially spreading. This is an attempt to endow the contact with a little of the character of a shock, courting potential catastrophe in the process. It can be dangerous if applied during an interaction with a nonlinear wave, or with an instability in multidimensional flows. Another feature of contacts is their connection to multi-material interfaces, since a material interface can ideally be viewed as a contact. Multi-material flows are a deep well of significant problems and a topic of great depth unto themselves (Abgrall and Karni is an introduction that barely scratches the surface!).

The fourth standard feature is the shear wave, a different form of linearly degenerate wave. Shear waves are heavily related to turbulence, and thus a huge source of terror. In one dimension shear is rather innocuous, being just another contact, but in two or three dimensions our current knowledge and technical capabilities are quickly overwhelmed. Once you have a turbulent flow, the conflation of numerical error and modeling error becomes a pernicious aspect of a calculation. In multiple dimensions the shear is almost invariably unstable, and solutions become chaotic and boundless in complexity. This boundless complexity means that solutions are significantly mesh dependent and demonstrably non-convergent in a pointwise sense. There may be convergence in a measure-valued sense, but these concepts are far from well defined, fully explored, or technically agreed upon.

A couple of general tips for developing a code involve the choice of solution variables. Almost without fail, the worst thing you can do is define the approximations using the conserved quantities. This is almost always a more fragile and error-prone way to compute solutions. In general the best approach is to use the so-called primitive variables (https://wjrider.wordpress.com/2016/08/08/the-benefits-of-using-primitive-variables/ ). These variables are clear in their physical implications and can be well bounded using rational means. Using primitive variables is better for almost anything you want to do. The second piece of advice is to use characteristic variables to as great an extent as possible. This always implies some sort of one-dimensional thinking. Despite this limitation, the benefits of characteristic variables are so extreme as to justify their use even under these limited circumstances.
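
For the ideal-gas Euler equations the conversion between the two variable sets is standard; a small sketch (gamma = 1.4 assumed purely for illustration) makes the point that the primitives are the quantities with obvious physical bounds:

```python
GAMMA = 1.4

def conserved_to_primitive(rho, mom, E):
    """Ideal-gas conversion from conserved variables (density, momentum,
    total energy) to primitive variables (density, velocity, pressure).
    The primitives carry the obvious physical bounds (rho > 0, p > 0),
    which is a large part of why they are easier to reason about and limit."""
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return rho, u, p

def primitive_to_conserved(rho, u, p):
    """Inverse map, used when fluxes or updates need the conserved set."""
    E = p / (GAMMA - 1.0) + 0.5 * rho * u * u
    return rho, rho * u, E
```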

A really good general rule is to produce thermodynamically consistent solutions. In other words, don't mess with thermodynamic consistency, and particularly not with the second law of thermodynamics. Part of this consistency is the dissipative nature of physical solutions and active adherence to entropy conditions. There are several nuances to this adherence worth discussing in more depth. It is commonly known that shocks increase entropy. What isn't so widely appreciated is that this increase is finite and follows a scaling set by the size of the jump (for weak shocks the entropy jump scales as the cube of the pressure jump). The dissipation does not converge toward zero under refinement, but rather toward a finite value related to the structure of the solution.

The second issue is the dissipation-free nature of the rest of the flow, especially rarefactions. The usual aim of solvers is to completely remove dissipation there, but that runs the risk of violating the second law. It may be more advisable to keep a small positive dissipation working (perhaps using a hyperviscosity, partly because control volumes add a nonlinear anti-dissipative error). This way the code stays away from circumstances that violate this essential physical law. We can work with other forms of entropy satisfaction too. Most notable is Lax's condition, which identifies the structures in a flow by the local behavior of the relevant characteristics. Across a shock the characteristics flow into the shock, and this condition should be met with dissipation. Such converging-characteristic structures are also commonly present at the head of computed rarefactions.
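
A small diagnostic in that spirit (a sketch, using the ideal-gas acoustic characteristic speeds with an assumed gamma = 1.4) flags interfaces where the characteristics of either acoustic family converge, which is where a Lax-style argument says shock-like dissipation is owed; applied near a rarefaction head it will pick up the unphysical bump discussed above.

```python
import numpy as np

GAMMA = 1.4

def acoustic_speeds(rho, u, p):
    """Characteristic speeds u - c and u + c of the two acoustic fields."""
    c = np.sqrt(GAMMA * p / rho)
    return u - c, u + c

def converging_characteristics(rho, u, p):
    """Flag cell interfaces where either acoustic family has a larger
    characteristic speed on the left than on the right, i.e., where the
    characteristics converge and a shock-like structure is indicated."""
    lam_m, lam_p = acoustic_speeds(rho, u, p)
    conv_minus = lam_m[:-1] > lam_m[1:]
    conv_plus = lam_p[:-1] > lam_p[1:]
    return conv_minus | conv_plus
```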

One of the big things that can be done to improve solutions is the systematic use of high-order approximations within methods. These high-order elements often involve formal accuracy that is much higher than the overall accuracy of the method. For example, a fourth-order approximation to the first derivative can be used to great effect within a method that only provides second-order accuracy overall. With methods like PPM and FCT this can be taken to greater extremes: one might use a fifth- or sixth-order approximation for edge values even though the overall method is third order in one dimension or second order in two or three dimensions. Another aspect of high-order accuracy is better accuracy at local extrema. The usual approach to limiting for nonlinear stability clips extrema and computes them at first-order accuracy. In moving to limiters that do not clip extrema so harshly, great care must be taken so that the resulting method is not fragile and prone to oscillations. Alternatively, extrema-preserving methods can be developed, though many are relatively dissipative even compared to the better extrema-clipping methods. Weighted ENO methods of almost any stripe are examples where the lack of extrema clipping is purchased at the cost of significant dissipation and relatively low overall computational fidelity. A better overall approach would be to use methods I have devised or the MP methods of Suresh and Huynh; both are significantly more accurate than WENO methods.
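
Two standard examples of such interior high-order pieces (the coefficients are the classical ones; the code is a sketch, not pulled from any particular production method): the fourth-order interface value used in PPM-style reconstruction from cell averages, and a fourth-order central first derivative.

```python
def edge_value_4th(a):
    """Fourth-order interface value from cell averages (the classic PPM edge
    value): a_{j+1/2} = (-a_{j-1} + 7 a_j + 7 a_{j+1} - a_{j+2}) / 12.
    Returned only for interfaces where the full four-cell stencil exists."""
    return (-a[:-3] + 7.0 * a[1:-2] + 7.0 * a[2:-1] - a[3:]) / 12.0

def ddx_4th(f, dx):
    """Fourth-order central first derivative at interior points:
    f'_i = (-f_{i+2} + 8 f_{i+1} - 8 f_{i-1} + f_{i-2}) / (12 dx)."""
    return (-f[4:] + 8.0 * f[3:-1] - 8.0 * f[1:-3] + f[:-4]) / (12.0 * dx)
```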

One of the key points of this work is to make codes robust. Usually these techniques are originally devised as “kludges” that are crude and poorly justified. They have the virtue of working. The overall development effort is to guide these kludges into genuinely defensible methods, and then ultimately to algorithms. One threads the needle between robust solutions and technical rigor that lends confidence and faith in the simulation. The first rule is get an answer, then get a reasonable answer, then get an admissible answer, and then get an accurate answer. The challenges come through the systematic endeavor to solve problems of ever increasing difficulty and expand the capacity of simulations to address an ever-broader scope. We then balance this effort with the availability of knowledge to support the desired rigor. Our standards are arrived at philosophically through what constitutes an acceptable solution to our modeling problems.

Consistency and accuracy instills believability

― Bernard Kelvin Clive

Sweby, Peter K. “High resolution schemes using flux limiters for hyperbolic conservation laws.” SIAM journal on numerical analysis 21, no. 5 (1984): 995-1011.

Jiang, Guang-Shan, and Chi-Wang Shu. “Efficient implementation of weighted ENO schemes.” Journal of computational physics 126, no. 1 (1996): 202-228.

Colella, Phillip, and Paul R. Woodward. “The piecewise parabolic method (PPM) for gas-dynamical simulations.” Journal of computational physics 54, no. 1 (1984): 174-201.

Colella, Phillip. “A direct Eulerian MUSCL scheme for gas dynamics.” SIAM Journal on Scientific and Statistical Computing 6, no. 1 (1985): 104-117.

Bell, John B., Phillip Colella, and John A. Trangenstein. “Higher order Godunov methods for general systems of hyperbolic conservation laws.” Journal of Computational Physics 82, no. 2 (1989): 362-397.

Zalesak, Steven T. “Fully multidimensional flux-corrected transport algorithms for fluids.” Journal of computational physics 31, no. 3 (1979): 335-362.

Zalesak, Steven T. “The design of Flux-Corrected Transport (FCT) algorithms for structured grids.” In Flux-Corrected Transport, pp. 23-65. Springer Netherlands, 2012.

Quirk, James J. “A contribution to the great Riemann solver debate.” International Journal for Numerical Methods in Fluids 18, no. 6 (1994): 555-574.

Woodward, Paul, and Phillip Colella. “The numerical simulation of two-dimensional fluid flow with strong shocks.” Journal of computational physics 54, no. 1 (1984): 115-173.

Abgrall, Rémi, and Smadar Karni. “Computations of compressible multifluids.” Journal of computational physics 169, no. 2 (2001): 594-623.

Osher, Stanley. “Riemann solvers, the entropy condition, and difference approximations.” SIAM Journal on Numerical Analysis 21, no. 2 (1984): 217-235.

Suresh, A., and H. T. Huynh. “Accurate monotonicity-preserving schemes with Runge–Kutta time stepping.” Journal of Computational Physics 136, no. 1 (1997): 83-99.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity-and extrema-preserving methods through adaptive nonlinear hybridizations.” Journal of Computational Physics 225, no. 2 (2007): 1827-1848.


We all live in incredibly exciting times; It totally sucks

The mystery of human existence lies not in just staying alive, but in finding something to live for.

― Fyodor Dostoyevsky

There is little doubt that we are living through monumental times in the history of humanity. I was born a child of the Cold War, saw that conflict end as I transitioned to adulthood and my professional life, and then saw the birth of globalism powered by technology and communication unthinkable a generation before. My chosen profession looks increasingly like a relic of that Cold War, and increasingly irrelevant to the World that unfolds before me. My workplace is failing to keep pace with change, and I'm seeing it fall behind modernity in almost every respect. It's a safe, secure job, but it lacks most of the edge the real World offers. All of the forces unleashed in today's World make for incredible excitement and possibility, and for incredible terror and discomfort. The potential for humanity is phenomenal, and the risks are profound. In short, we live simultaneously in incredibly exciting and massively sucky times. Yes, both of these seemingly conflicting things can be true.

We are seeing a massive collision of past and future across the globe, pitting the forces of stability and progress against each other. The political power of conservative, anti-progress forces has gained the upper hand recently. The resurgent right wing seeks to push back against the changes driven by demographics and empowered by technology. The tendency to use bigotry, hate and violence, increasingly in a state-sponsored way, is growing. The violence against change is growing whether it comes from governments or terrorists. What isn't commonly appreciated is the alliance in violence between the right wing and terrorists. Both are fighting against the sorts of changes modernity is bringing. The fight against terror is giving the violent right wing more power. The right wing and Islamic terrorists have the same aim: undoing the push toward progress and modern views of race, religion and morality. The only difference is the name of the prophet the violence is done in the name of.

This unholy alliance draws its power from the bookends of society. On one hand you have the poorer, uneducated masses of the population, and on the other you have the entrenched interests of the rich and powerful. The poor and uneducated are not able to manage the changes in society and want the brakes put on progress. The rich and powerful like the status quo and fear progress because it might not favor them as well. Together they ally against the middle class, who stands to benefit the most from progress. The alliance empowers violence among the poor and grows their ranks through policies that favor the wealthy. We see this happening across the globe, where right-wing, racist, Nationalist, overtly religious movements are attacking any societal progress. The rich and powerful pull the strings for their own purposes, enhancing their power and wealth at the expense of society as a whole.

The most insidious force in this unhealthy alliance is terrorism. The right wing wants to fight terrorism by enhancing the police-National-security state, which also provides the rich and powerful the tools to control their societies. They also wage war against the sources of terrorism, creating the conditions and recruiting grounds for more terrorists. The police states bring reduced freedom and plenty of censorship, imprisonment, and opposition to personal empowerment. The same police states are effective at repressing minority groups within nations, using the weapons gained to fight terror. Together these all help the cause of the right wing in blunting progress. The two allies like to kill each other too, but the forces of hate, fear and doubt work to their greater ends. In fighting terrorism we are giving them exactly what they want: the reduction of our freedom and progress. This is also the aim of the right wing: stop the frightening future from arriving through the imposition of traditional values. This is exactly what the religious extremists want, be they Islamic or Christian.

Driving the conservatives to such heights of violence and fear are changes to society of massive scale. Demographics are driving the changes, with people of color becoming impossible to ignore, along with an aging population. Sexual freedom has emboldened people to break free of traditional boundaries of behavior, gender and relationships. Technology is accelerating the change and empowering people in amazing ways. All of this terrifies many people and provides extremists with the sort of alarmist rhetoric needed to grab power. We see these forces rushing headlong toward each other, with society-wide conflict the result. Progressive and conservative blocs are headed toward a massive fight, one that also lines up along urban and rural lines, young and old, educated and uneducated. The future hangs in the balance and it is not clear who has the advantage.

The purpose of life is to contribute in some way to making things better.

― Robert F. Kennedy

The cell phone's becoming a platform for communication and commerce at a scale unimaginable a decade ago has transformed the economics of the World. People are now connected globally through the cell phone, and these mini-computers allow new models of economic activity to sprout up constantly. The changes to how we live are incalculable, with the cell phone shaping both our social and economic structures in deep and unforeseen ways. The social order is being reshaped by the ability to connect to people in deeply personal ways without the necessity of proximity. We meet people online now and form relationships without ever meeting physically. Often people have ongoing relationships with people they've never met or met rarely, maintaining communication via text, audio or video with amazing ease. This starts to form different communities and relationships that are shaking culture. It's mostly a middle-class phenomenon, and thus the poor are left out, and afraid.

The way we live, work, sleep, eat, and relate to each other has transformed completely in a decade. The balance of personal empowerment and threat to privacy sits at a precipice. The cell phone has made the Internet personal and ubiquitous, redefining our social order. This excites people like me, and scares the shit out of others. We are seeing change in our society at a pace unseen in the course of history. Violence is a likely result of the pace and intensity of this change. Economic displacements are accelerating, and our standard systems are poor at coping with any of it. Everything from schools to many employers is incapable of evolving fast enough to benefit from the change. A perfect example is the kind of institution I work for. Despite being high tech and full of educated people, it is conservative, rule-following, compliant, security conscious (to a paranoid level) and obsessively bureaucratic, largely incapable of adopting technology or taking risks with anything. My level of frustration is growing with each passing day. I'm seeing modernity pull ahead of me in the workplace due to fear and incompetence.

For example, in my public life I am not willing to give up the progress, and I feel that the right wing is agitating to push the clock back. The right wing feels the changes are immoral and frightening and wants to take freedom away. They will do it in the name of fighting terrorism while clutching the mantle of white nationalism in one hand and the Bible in the other. Similar mixes are present in Europe, Russia, and remarkably across the Arab world. My work World is increasingly allied with the forces against progress. I see a deep break in the not too distant future where the progress scientific research depends upon will be utterly incongruent with the values of my security-obsessed workplace. The two things cannot live together effectively, and ultimately the work will be undone by its inability to commit itself to being part of the modern World. The forces of progress are powerful and seemingly unstoppable too. We are seeing the unstoppable force of progress meet the immovable object of fear and oppression. It is going to be violent and sudden. My belief in progress is steadfast and unwavering. Nonetheless, we have had episodes in human history where progress was stopped. Anyone heard of the dark ages? It can happen.

We’ll be remembered more for what we destroy than what we create.

― Chuck Palahniuk

I've been trying to come to grips with the changes in where I work. My workplace is adapting too slowly to modern life, and the meaning in work has become difficult to square with realities. The sense that my work doesn't matter anymore has become increasingly palpable. I've gone from working with a clear sense of purpose and importance to a job that increasingly seethes with irrelevance. Every action from my masters communicates the lack of value of my work. Creating a productive and efficient workplace is never, ever important. We increasingly have no sense of importance in our work. This is delivered with disempowering edicts and policies that conspire to shackle me and keep me from doing anything. The real message is that nothing you do is important enough to risk fucking up. I'm convinced that the things making work suck are strongly connected to everything else.

With each passing day the forces at work and those in the rest of my life grow more separate. It becomes hard to feel like I'm making the best, most productive use of my time and effort at work when I know so much effective, productive effort is being left behind. The motives and rules of my employer are focused on safety and security above all else. Everything that makes modern life tick makes them uncomfortable. In addition, technology is evolving so fast that organizations like mine cannot keep up. I can adapt quickly and learn, but if I do, my desires and knowledge will become a source of tension rather than an advantage. If I keep up with technology I am more likely to be frustrated than productive. This is already happening and it is driving me up the wall.

The problem with many organizations (companies, universities, Laboratories) is that they are large and move slowly. With a pace of change that is large by any historical standard, they don't keep up or adapt. You get into a position where the organization is falling behind, and the forces of fear and hesitation are deadly. Add in a government that supplies the funding with rules galore and strings attached, and you have the recipe for falling behind. Unless the organization, its leadership, and its people commit to adapting and pushing themselves forward, the organization will stagnate or be left behind by the exciting innovations shaping the World today. When the motivation of the organization fails to emphasize productivity and efficiency, the recipe is disastrous. Modern technology offers the potential for incredible advances, but only for those seeking advantage. If minds are not open to making things better, it is a recipe for frustration.

This ends up being a microcosm of today's World. Modern advances in technology or society offer tremendous advantages for those willing to be bold, but they come with risk and discomfort. Many are deeply tied to the past and the old way of doing things, along with an old-fashioned sense of purpose. There is an inability or unwillingness to accept the potential of the modern. Without bold and willing endeavor, the new World passes organizations by. This is where I feel the greatest pain: the passive manner in which the modern World passes my organization by. It is not unlike the broader social divide between those who accept the modern World and the conservative resistance to everything. Organizations that fail to embrace the potential of the modern world unwittingly enter into the opposition to change, and assist the conservative attempt to hold onto the past.

The question to ask is whether these organizations, whose history and greatness are grounded in a time gone by, can evolve into something modern. Do these organizations even want to change? Can they make the change, even if they want to, while tethered to the past? Those of us who are part of publicly focused organizations are caught between the forces raging politically. The institutions of government will almost certainly resist and avoid change. The result will be a slow decline and almost certain death for these organizations as the World changes. The alternative is the World not changing and progress stopping, an outcome that is certainly worse for humanity.

History is always written by the winners. When two cultures clash, the loser is obliterated, and the winner writes the history books-books which glorify their own cause and disparage the conquered foe. As Napoleon once said, ‘What is history, but a fable agreed upon?’

― Dan Brown