We need better theory and understanding of numerical errors

Only those who dare to fail greatly can ever achieve greatly.

― Robert F. Kennedy

In modeling and simulation, numerical error is extremely important yet generally unsatisfactorily understood. For the general nonlinear problems that dominate the use and utility of high performance computing, the state of affairs is quite incomplete. Numerical error has a central role in modeling and simulation, making our gaps in theory, knowledge and practice rather unsettling. Theory is strong for linear problems where solutions are well behaved and smooth (i.e., continuously differentiable, or at least many derivatives exist). Almost every problem of substance driving National investments in computing is nonlinear and rough. Thus, we have theory that largely guides practice by faith rather than rigor. We would be well served by a concerted effort to develop theoretical tools better suited to our reality.

Sometimes a clearly defined error is the only way to discover the truth

― Benjamin Wiker

We have a fundamental existence theory for convergent solutions defined by Lax’s early work (the fundamental theorem of numerical analysis). It is quite limited, rigorously applying to linear differential equations, yet it defines the basic approaches to numerical approximation for models that are almost invariably nonlinear. The theorem states that when a stable approximation is consistent (approximates the differential equation properly), it will converge to the correct solution. By convergent we mean that the numerical solution approaches the exact solution as the approximation grows closer to the continuum, which is associated with smaller discrete steps/meshes and more computational resources. This theorem provides the basis and ultimate drive for faster, more capable computing. We apply it most of the time where it is invalid. We would be greatly served by having a theory that is freed of these limits. Today we just cobble together a set of theories, heuristics and lessons into best practices and stumble forward.
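
For reference, here is the theorem in its usual compact form (a standard paraphrase of the Lax–Richtmyer result, not a quotation): for a well-posed linear initial value problem u_t = \mathcal{A} u approximated by a consistent one-step scheme u^{n+1}_{\Delta} = C(\Delta t)\, u^{n}_{\Delta}, stability is necessary and sufficient for convergence. In the one-way form that matches the statement above,

\text{consistency} \;+\; \text{stability} \;\Longrightarrow\; \| u^{n}_{\Delta} - u(t_n) \| \to 0 \quad \text{as } \Delta t,\ \Delta x \to 0 .

Every word of the hypothesis matters: well posed, linear, consistent. Those are exactly the conditions our most important applications fail to satisfy.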

Part of making use of this fundamental theorem is producing a consistent approximation to the model of choice. The tools for accomplishing this are things like Taylor series, polynomials and finite elements. All of these methods depend to some degree on solutions being well behaved and nice. Most of our simulations are neither well behaved nor nice. We assume an idealized nice solution and then approximate it using some neighborhood of discrete values. Sometimes this is done using finite differences, or cutting the world into little control volumes (equivalent in simple cases), or creating finite elements and using variational calculus to make approximations. In all cases the underlying presumption is smooth, nice solutions, while most of the utility of the approximations violates these assumptions. Reality is rarely well behaved or nice, so we have a problem. Our practice has done reasonably well and taken us far, but a better, more targeted and useful theory might truly unleash innovation and far greater utility.
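
A concrete reminder of how much smoothness these constructions quietly assume: the standard centered difference and its error estimate come from a Taylor expansion (a textbook identity, shown only to make the point),

\frac{u(x+h) - u(x-h)}{2h} \;=\; u'(x) \;+\; \frac{h^2}{6}\, u'''(x) \;+\; O(h^4),

which only means something if u''' exists and is bounded. At a kink or a jump the expansion fails, and the advertised second-order accuracy evaporates.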

The aim of science is not to open the door to infinite wisdom, but to set a limit to infinite error.

― Bertolt Brecht

We don’t really know what happens when the theory falls apart, and simply rely upon bootstrapping ourselves forward. We have gotten very far with very limited theory, moving forward largely on faith. We do have some limited theoretical tools, like conservation principles (the Lax-Wendroff theorem) and entropy solutions (converging toward solutions associated with viscous regularization consistent with the second law of thermodynamics). What we miss is a general understanding of what governs accuracy and defines error in these cases. We cannot design methods specifically to produce accurate solutions in these circumstances, and we are guided by heuristics and experience rather than rigorous theory. A more rigorous theoretical construct would provide a springboard for productive innovation. Let’s look at a few of the tools available today to put things in focus.

One of the first things one encounters in putting together discrete approximations in realistic circumstances is a choice. For nonlinear features leading to general, rough solutions, one can decide to track features in the solution explicitly. The archetype of this is shock tracking, where the discrete evolution of a shock wave is defined explicitly in the approximation. In essence the shock wave (or whatever wave is tracked) becomes an internal boundary condition, allowing regular methods to be used everywhere else. This typically involves the direct solution of the Rankine-Hugoniot relations (i.e., the shock jump conditions, algebraic relations holding across a discontinuous wave). The problems with this approach are extreme, including unbounded complexity if all waves are tracked and the difficulty of the solution geometry in multiple dimensions. This choice has been with us since the dawn of computation; the very first calculations at Los Alamos used this technique, but it rapidly becomes untenable.
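
For reference, the jump conditions a tracking scheme enforces are purely algebraic: across a discontinuity moving at speed s, the jump in the flux balances s times the jump in the conserved variables. For a general conservation law, with the mass equation of the Euler system as the familiar special case,

s\,[\![U]\!] \;=\; [\![F(U)]\!], \qquad \text{e.g.} \quad s\,(\rho_R - \rho_L) \;=\; \rho_R u_R - \rho_L u_L .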

To address the practical aspects of computation, shock capturing methods were developed. Shock capturing implicitly computes the shock wave on a background grid by detecting its presence and adding a physically motivated dissipation to stabilize its evolution. This concept has made virtually all of computational science possible. Even when tracking methods are utilized, the explosion of complexity is tamed by resorting to shock capturing away from the dominant features being tracked. The origin of the concept came from Von Neumann in 1944, but it lacked a critical element for success: dissipation, or stabilization. Richtmyer added this critical element with artificial viscosity in 1948 while working at Los Alamos on problems whose complexity was advancing beyond the capacity of shock tracking. Together Von Neumann’s finite differencing scheme and Richtmyer’s viscosity enabled shock capturing. It was a proof of principle, and its functionality was an essential springboard for others to have faith in computational science.
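
The original form of the viscosity is worth writing down because it shows how directly shock physics enters the numerics: an extra pressure-like term, active only in compression, scaled by the square of the mesh spacing (the classic quadratic form; the coefficient c_q and the exact implementation vary from code to code),

q \;=\; c_q\, \rho\, (\Delta x)^2 \left( \frac{\partial u}{\partial x} \right)^2 \ \ \text{where } \frac{\partial u}{\partial x} < 0, \qquad q = 0 \ \text{otherwise},

with q simply added to the pressure in the momentum and energy equations.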

What one recognizes is that when dealing with shock waves, physics must be added to the discrete representation. This happens explicitly in tracking, where the shock itself becomes a discrete element, or implicitly in shock capturing, where the approximation is adapted using the physics of shocks. Of course, shock capturing is useful for more than just shocks. It can be used to stabilize the computation of any feature. The overall methodology has some additional benefits not immediately recognized by its originators. For computing turbulence without fully resolving features, shock capturing methods are essential (i.e., anything short of DNS, though DNS can be criticized in its practice). Large eddy simulation was born out of adding the original Richtmyer-Von Neumann viscosity to weather modeling, and resulted in the creation of the Smagorinsky eddy viscosity. Other shock capturing methods developed for general purposes have provided the means for implicit large eddy simulation. These methods all have the same origin and rely upon the basic principles of shock capturing. The fact that all of this has a common origin almost certainly has a deep meaning that is lost in most of today’s dialog. We would be well served by aggressively exploring these connections in an open-minded and innovative fashion.
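
The family resemblance is easiest to see in the Smagorinsky eddy viscosity itself, which has exactly the same structure as an artificial viscosity: a dissipation coefficient proportional to the square of the grid scale times a velocity-gradient magnitude (standard form; C_s is a tunable constant, typically quoted near 0.1 to 0.2),

\nu_t \;=\; (C_s \Delta)^2\, |\bar{S}|, \qquad |\bar{S}| \;=\; \sqrt{2\, \bar{S}_{ij} \bar{S}_{ij}} .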

One of the key things about all of this capability is the realization of how heuristic it is at its core. Far too much of what we currently do in computational science is based upon heuristics and experience gained largely through trial and error. Far too little is based upon rigorous theory. The advancement of our current approaches through theory would be a great service to the field. Almost none of the current efforts are remotely associated with advancing theory. If one gets down to brass tacks about the whole drive for exascale, we see that it is predicated on the concept of convergence, whose theoretical support is extrapolated from circumstances that don’t apply. We are really on thin ice, and stunningly unaware of the issues. This lack of awareness then translates to lack of action, lack of priority, lack of emphasis and ultimately lack of money. In today’s world if no one pays for it, it doesn’t happen. Today’s science programs are designed to be funded, rather than designed to advance science. No one speaks out about how poorly thought through our science programs are; they simply are grateful for the funding.

When I was a kid, they had a saying, ‘to err is human but to really fuck it up takes a computer.’

― Benjamin R. Smith

There are a host of technologies and efforts flowing out from our current efforts that could all benefit from advances in the theory of numerical approximation. In addition to the development of larger computers, we see the application of adaptive mesh refinement (AMR) to provide enhanced resolution. AMR is even more highly bootstrapped and leveraged in terms of theory. By the same token, AMR’s success is predicated on best practices and experience from a wealth of applications. AMR is an exciting technology that produces stunning results. Better and more appropriate theory can turn those results from the flashy graphics AMR produces into justifiable, credible results. A big part of moving forward is putting verification and validation into practice. Both activities are highly dependent on theory that is generally weak or non-existent. Our ability to rigorously apply modeling and simulation to important societal problems is being held back by our theoretical failings.

Another area of critical importance and utter lack of support is subgrid closure modeling, especially where it depends on the mesh scale itself. The general thinking about closure modeling is completely haphazard and heuristic. The combination of numerical modeling and closure at the mesh scale is poorly thought out and generally lacks any theoretical support. Usually the closure models are tied directly to the mesh scale, yet numerical methods rarely produce good solutions at the smallest mesh scale, but rather over a number of mesh cells (or elements). We rarely think about how well we define or resolve solution structures and how that connects to modeling. Instead models are thought of solely geometrically in terms of scale and tied to the mesh scale. As a result we don’t have consistency between our mesh, the numerical solution and the resolution-fidelity of the numerical method. Often this leaves the modeling in the code completely mesh-dependent, with no chance of mesh independence.

A big issue is the swath of computational science where theory is utterly inadequate, much of it involving chaotic solutions with extreme dependence on initial conditions. Turbulence is the classical problem most closely related to this issue. Our current theory and rigorous understanding are vastly inadequate to spur progress. In most cases we are let down by the physics modeling as well as the mathematical and numerical theory. In every case we have weak to non-existent rigor, leading to heuristic-filled models and numerical solvers. Extensions of any of this work are severely hampered by the lack of theory (think higher-order accuracy, uncertainty quantification, optimization, …). We don’t know how any of this converges; we just act like it does and use it to justify most of our high performance computing investments. All of our efforts would be massively assisted by almost any theoretical progress. Most of the science we care about is chaotic at a very basic level, and lots of interesting things are utterly dependent on understanding this better. The amount of focus on this matter is frightfully low.

My overall view is that the lack of investment and attention to our theoretical shortcomings is a significant burden. The flipside is the loss of a massive opportunity to make some incredible advances. Instead of solving a whole new class of problems powered by deeper understanding of physics and mathematics, we are laboring under vast gaps. This lowers the effectiveness of everything we do, and every dollar we spend. While a focus on advancing theory and understanding is quite risky, the benefits are extreme. If we are not prepared to fail, we will not succeed.

Success is not built on success. Not great success. Great success is built on failure, frustration, even catastrophe.

— Sumner Redstone

Lax, Peter D., and Robert D. Richtmyer. “Survey of the stability of linear finite difference equations.” Communications on pure and applied mathematics 9, no. 2 (1956): 267-293.

Von Neumann, John. “Proposal and analysis of a new numerical method for the treatment of hydrodynamical shock problems.” The collected works of John von Neumann 6 (1944).

Richtmyer, R. D. “Proposed numerical method for calculation of shocks.” LANL Report, LA 671 (1948): 1-18.

Von Neumann, John, and Robert D. Richtmyer. “A method for the numerical calculation of hydrodynamic shocks.” Journal of Applied Physics 21, no. 3 (1950): 232-237.

Mattsson, Ann E., and William J. Rider. “Artificial viscosity: back to the basics.” International Journal for Numerical Methods in Fluids 77, no. 7 (2015): 400-417.

Richtmyer, Robert D., and Keith W. Morton. “Difference methods for initial-value problems.” 2nd ed. Malabar, FL: Krieger Publishing Co., 1994.

Smagorinsky, Joseph. “General circulation experiments with the primitive equations: I. The basic experiment.” Monthly weather review 91, no. 3 (1963): 99-164.

Smagorinsky, Joseph. “The beginnings of numerical weather prediction and general circulation modeling: early recollections.” Advances in Geophysics 25 (1983): 3-37.

Boris, J. P., F. F. Grinstein, E. S. Oran, and R. L. Kolbe. “New insights into large eddy simulation.” Fluid dynamics research 10, no. 4-6 (1992): 199-228.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit large eddy simulation: computing turbulent fluid dynamics. Cambridge university press, 2007.

Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39, no. 9 (2002): 821-841.

 

 

 

Rethinking the meaning of Trump

 

Nationalism is power hunger tempered by self-deception.
— George Orwell

The day after the Presidential election in November left me reeling. The decision to elect Donald Trump was incomprehensible because of his deep flaws and utter lack of preparation and qualification for the office of President. Since he has taken office, none of Trump’s actions have provided any relief from these concerns. Whether I’ve looked at his executive orders, appointments, policy directions, public statements, conduct or behavior, the conclusion is the same: Trump is unfit to be President. He is corrupt, crude, uneducated, prone to fits of anger, engages in widespread nepotism, and acts utterly un-Presidential. He has done nothing to mitigate any of the concerns I felt that fateful Wednesday when it was clear that he had been elected President. At the same time virtually all of his supporters have been unwavering in their support for him. The Republican Party seems impervious to the evidence before them about the vast array of problems Trump represents, supporting him, if not enabling his manifest dysfunctions.

Over the past month, and especially the last week, my views of what Trump means have shifted. If anything my conclusions about the meaning of his reign in the White House are worse than before. Mr. Trump was elected President due to the actions of the Russian Federation and their unprecedented hacking activities and seeding of false narratives into the public consciousness. The Russians deeply favored Trump in the election for two clear reasons: their dislike and fear of Clinton, and the congruence of Trump’s tendencies with Putin’s in terms of basic philosophy. In addition, Trump’s manifest incompetence would weaken the United States’ role internationally. We have effectively lost the role of leader of the Free World, and ironically put Germany in that role. Trump’s erratic actions and lack of Presidential skills, knowledge and behavior make the United States weak, and unable to stand up against a resurgent Russia. The whole thing is actually worse than all of this because Trump represents a new direction for the United States. He represents a new commitment to authoritarian rule, diminishment of freedom, plutocracy, kleptocracy and erratic jingoism.

This gets to the core of what I’ve realized about the meaning of Trump. The reason the Republicans are not disturbed by the Russian influence on the election or the President is their simpatico with the Russians. The ruling philosophy of Trump and the Republicans is the same as the Russians’. They use traditional religious and Nationalist values to build support among the populace while slanting the entire government toward two roles: putting money in the hands of the wealthy and authoritarian policies to control the populace. Both scapegoat minorities and fringe groups with bigoted and even violent responses. Neither the Republicans nor the Russians are interested in Democratic principles, and both act steadfastly to undermine voting rights at every turn. The Party and its leader, in turn, drive strong support among the common man by defending the core traditional National identity. This gives both Putin and Trump a political base from which they can deliver benefits to the wealthy ruling class while giving the common man red meat in the oppression of minorities and non-traditional people. All of this is packaged up with strongly authoritarian leadership with lots of extra law enforcement and military focus. Both Putin and Trump promote defending the Homeland from enemies external and internal. Terrorism provides a handy and evil external threat to further drive the Nationalist tendencies.

Here is the difference between Trump and Putin. Putin is a mastermind and a truly competent leader whose main interests are power for himself and Russia by proxy. Trump is imbecilic and utterly incompetent; his interests are personal greed and power. He cares nothing for the Country or its people. Whether he is a witting or unwitting pawn of Putin doesn’t matter at some level. He is Putin’s pawn, and his rule is a direct threat to our Nation’s future and place in the World. The situation we find ourselves in is far graver than simply having an idiotic narcissist as President; we have a President who is undermining our Nation through both direct and indirect actions. We have a ruling political party that acts to enable this, making a foreign power more effective in the process.

The combination of the Republican Party and its leader in the President is fundamentally reshaping the United States into a corrupt and incompetent mirror of Putin’s Russia. Only time will tell how far this will go or what the long-term consequences will be. The end result will be a United States that loses its position as the sole superpower in the World. The only ones benefiting from this change are Russia and the cadre of wealthy people served by both regimes. The rest of us will suffer.

Sometimes the first duty of intelligent men is the restatement of the obvious.
— George Orwell

 

 

Numerical Approximation is Subtle, and we don’t do subtle!

We are losing the ability to understand anything that’s even vaguely complex.

― Chuck Klosterman

I get asked, “what do you do?” quite often in conversation, and I realize the truth needs to be packaged carefully for most people. One of my issues is that I advertise what I do on my body with some incredibly nerdy tattoos, including an equation that describes one form of the second law of thermodynamics. What I do is complex and highly technical, full of incredible subtlety. Even when talking with someone from a nearby technical background, the subtlety of approximating physical laws numerically in a manner suitable for computing can be daunting. For someone without a technical background it is positively alien. This character comes into play rather acutely in the design and construction of research programs, where complex, technical and subtle does not sell. This is especially true in today’s world where expertise and knowledge are regarded as suspicious, dangerous and threatening by so many. In today’s world one of the biggest insults to hurl at someone is to accuse them of being one of the “elite”. Increasingly it is clear that this isn’t just an American issue, but Worldwide in its scope. It is a clear and present threat to a better future.

I’ve written often about the sorry state of high performance computing. Our computing programs are blunt and naïve, constructed to squeeze money out of funding agencies and legislatures rather than get the job done. The brutal simplicity of the arguments used to support funding is breathtaking. Rather than construct programs to be effective and efficient, getting the best from every dollar spent, we construct programs to be marketed at the lowest common denominator. For this reason something subtle, complex and technical like numerical approximation gets no play. In today’s world subtlety is utterly objectionable and a complete buzz kill. We don’t care that it’s the right thing to do, or that its return is massively greater than simply building giant monstrosities of computing. It would take an expert from the numerical elite to explain it, and those people are untrustworthy nerds, so we will simply get the money to waste on the monstrosities instead. So here I am, an expert and one of the elite, using my knowledge and experience to make recommendations on how to be more effective and efficient. You’ve been warned.

Truth is much too complicated to allow anything but approximations.

— John Von Neumann

If we want to succeed at remaining a high performance computing superpower, we need to change our approach, and fast. Part of what is needed is a greater focus on numerical approximation. This is part of a deeper need to refocus on the more valuable aspects of the scientific computing ecosystem. The first thing to recognize is that our current hardware-first focus is oriented on the least valuable part of the ecosystem, the computer itself. A computer is necessary, but horribly insufficient, for high performance computing supremacy. The real value for scientific computing is at the opposite end of the spectrum, where work is grounded in physics, engineering and applied mathematics.

Although this may seem a paradox, all exact science is dominated by the idea of approximation.

— Bertrand Russell

I’ve made this argument before and it is instructive to unpack it. The model solved via simulation is the single most important aspect of the simulation. If the model is flawed, no amount of raw computer speed, numerical accuracy, or efficient computer code can rescue the solution and make it better. The model must be changed, improved, or corrected to produce better answers. If a model is correct, the accuracy, robustness, fidelity and efficiency of its numerical solution are essential. Everything upstream of the numerical solution, toward the computer hardware, is less important. We can move down the chain of activities, all of which are necessary, and see the same effect: the further you get from the model of reality, the less efficient the measures are. This whole thing is referred to as an ecosystem these days, and every bit of it needs to be in place. What also needs to be in place is a sense of the value of each activity, with priority placed on those that have the greatest impact, or the greatest opportunity. Instead of doing this today, we are focused on the thing with the least impact, farthest from reality, and starving the most valuable parts of the ecosystem. One might argue that the hardware is an area of opportunity, but the truth is the opposite. The environment for improving the performance of hardware is at a historical nadir; Moore’s law is dead, dead, dead. Our focus on hardware is throwing money at an opportunity that has passed into history.

I’m a physicist, and we have something called Moore’s Law, which says computer power doubles every 18 months. So every Christmas, we more or less assume that our toys and appliances are more or less twice as powerful as the previous Christmas.

Michio Kaku

At some point, Moore’s law will break down.

— Seth Lloyd

There is one word to describe this strategy, stupid!

At the core of the argument is a strategy that favors brute force over subtleties understood mainly by experts (or the elite!). Today the brute force argument always takes the lead over anything that might require some level of explanation. In modeling and simulation the esoteric activities, such as the actual modeling and its numerical solution, are quite subtle and technical in detail compared to the raw computing power that can be understood with ease by the layperson. This is the reason the computing power gets the lead in the program, not its efficacy in improving the bottom line. As a result our high performance computing world is dominated by meaningless discussions of computing power defined by a meaningless benchmark. The political dynamic is basically a modern day “missile gap” like we had during the Cold War. It has exactly as much virtue as the original “missile gap”; it is a pure marketing and political tool with absolutely no technical or strategic validity aside from its ability to free up funding.

Each piece, or part, of the whole of nature is always merely an approximation to the complete truth, or the complete truth so far as we know it. In fact, everything we know is only some kind of approximation because we know that we do not know all the laws as yet.

— Richard P. Feynman

Once you have an entire program founded on bullshit arguments, it is hard to work your way back to technical brilliance. It is easier to double down on the bullshit and simply define everything in terms of the original fallacies. A big part of the problem is the application of modern verification and validation to the process. Both verification and validation are modern practices to accumulate evidence on the accuracy, correctness and fidelity of computational simulations. Validation is the comparison of simulation with experiments, and in this comparison the relative correctness of models is determined. Verification determines the correctness and accuracy of the numerical solution of the model. Together the two activities should help energize high quality work. In reality most programs consider them to be nuisances and box-checking exercises to be finished and ignored as soon as possible. Programs like to say they are doing V&V, but don’t want to emphasize or pay for doing it well. V&V is a mark of quality, but the programs want its approval rather than attend to its results. Even worse, if the results are poor or indicate problems, they are likely to be ignored or dismissed as inconvenient. Programs get away with this because the practice of V&V is technical and subtle, and in the modern world highly susceptible to bullshit.

Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise.

— John W. Tukey

Numerical methods for solving models are even more technical and subtle. As such they are the focus of suspicion and ignorance. For high performance computing today they are considered to be yesterday’s work and largely a finished, completed product, now simply needing a bigger computer to do better. In a sense this notion is correct: the bigger computer will produce a better result. The issue is that using computer power as the route to improvement is inefficient under the best of circumstances. We are not living under the best of circumstances! Things are far from efficient, as we have been losing the share of computer power advances useful for modeling and simulation for decades now. Let us be clear: we receive an ever-smaller proportion of the maximum computing power as each year passes. Thirty years ago we would commonly get 10, 20 or even 50 percent of the peak performance of the cutting edge supercomputers. Today even one percent of the peak performance is exceptional, and most codes doing real application work achieve significantly less than that. Worse yet, this dismal performance is getting worse with every passing year. This is one element of the autopsy of Moore’s law that we have been avoiding while its corpse rots before us.

So we are prioritizing improvement in an area where the payoffs are fleeting and suboptimal. Even these improvements are harder and harder to achieve as computers become ever more parallel and memory access costs become ever more extreme. Simultaneously we are starving more efficient means of improvement of resources and emphasis. Numerical methods and algorithms are two key areas not getting any significant attention or priority. Moreover support for these areas is actually diminishing so that support for the inefficient hardware path can be increased. Let’s not mince words; we are emphasizing a crude, naïve and inefficient route to improvement at the cost of a complex and subtle route that is far more efficient and effective.

Numerical approximations and algorithms are complex and highly technical things, poorly understood by non-experts even if they are scientists. The relative merits of one method or algorithm compared to another are difficult to articulate. The merits and comparisons are highly technical and subtle. Since progress comes from creating new methods and algorithms, improvements are hard to explain and articulate to non-experts. In some cases both methods and algorithms can produce breakthrough results and huge speed-ups. These cases are easy to explain. More generally a new method or algorithm produces subtle improvements, like more robustness, flexibility or accuracy than the older options. Most of these changes are not obvious, but making this progress over time leads to enormous improvements that swamp the progress made by faster computers.

An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.

― Werner Heisenberg

The huge breakthroughs are few and far between, but they provide much greater value than any hardware over similar periods of time. To get these huge breakthroughs requires continual investment in research for extended periods of time. For much of that time the research is mostly a failure, producing small or non-existent improvements, until suddenly it isn’t. Without the continual investment, the failure, and the expertise failure produces, the breakthroughs will not happen. They are mostly serendipitous and the end product of many unsuccessful ideas. Today the failures and lack of progress are not supported; we exist in a system where insufficient trust exists to support the sort of failure needed for progress. The result is the addiction to Moore’s law and its seemingly guaranteed payoff, because it frees us from subtlety.

Often a sign of expertise is noticing what doesn’t happen.

― Malcolm Gladwell

A huge aspect of expertise is the taste for subtlety. Expertise is built upon mistakes and failure just as basic learning is. Without the trust to allow people to gloriously make professional mistakes and fail in the pursuit of knowledge, we cannot develop expertise or progress. All of this lands heavily on the most effective and difficult aspects of scientific computing: the modeling and the numerical solution of the models. Progress on these aspects is both highly rewarding in terms of improvement, and very risky, being prone to failure. To compound matters, progress is often highly subjective itself, needing great expertise to explain and be understood. In an environment where the elite are suspect and expertise is not trusted, such work goes unsupported. This is exactly what we see: the most important and effective aspects of high performance computing are being starved in favor of brutish and naïve aspects, which sell well. The price we pay for our lack of trust is an enormous waste of time, money and effort.

Wise people understand the need to consult experts; only fools are confident they know everything.

― Ken Poirot

Again, I’ll note that we still have so much to do. Numerical approximations for existing models are inadequate and desperately in need of improvement. We are burdened by theory that is insufficient and heavily challenged by our models. Our models are all flawed, and the proper conduct of science should energize efforts to improve them.

…all models are approximations. Essentially, all models are wrong, but some are useful. However, the approximate nature of the model must always be borne in mind… [Co-author with Norman R. Draper]

— George E.P. Box

What we still don’t get about numerical error

The fundamental law of computer science: As machines become more powerful, the efficiency of algorithms grows more important, not less.

― Nick Trefethen

Modern modeling and simulation is viewed as a transformative technology for science and engineering. Invariably the utility of modeling and simulation is grounded in the solution of models via numerical approximations. The fact that numerical approximation is the key to unlocking its potential seems largely lost in the modern perspective, and is engaged in an increasingly naïve manner. For example much of the dialog around high performance computing is predicated on the notion of convergence. In principle, the more computing power one applies to solving a problem, the better the solution. This is applied axiomatically, and it relies upon a deep mathematical result in numerical approximation. This heritage and emphasis is not considered in the conversation, to the detriment of its intellectual depth.

Where all think alike there is little danger of innovation.

― Edward Abbey

At this point, the mathematics and specifics of numerical approximation are systematically ignored by the dialog. The impact of this willful ignorance is felt across the modeling and simulation world; a general lack of progress and emphasis on numerical approximation is evident. We have produced a situation where the most valuable aspect of numerical modeling is not getting focused attention. People are behaving as if the major problems are all solved and not worthy of attention or resources. The nature of the numerical approximation is the second most important and impactful aspect of modeling and simulation work. Virtually all the emphasis today is on the computers themselves, based on the assumption of their utility in producing better answers. The most important aspect is the modeling itself; the nature and fidelity of the models define the power of the whole process. Once a model has been defined, the numerical solution of the model is the second most important aspect. The nature of this numerical solution depends far more on the approximation methodology than on the power of the computer.

The uncreative mind can spot wrong answers, but it takes a very creative mind to spot wrong questions.

― Anthony Jay

People act as if the numerical error is so small as not to be important on one hand, while encouraging great focus on computing power, where the implicit reasoning for the computing power is founded on reducing numerical error. To make matters worse with this corrupt logic, the most effective way to reduce numerical error is being starved for attention and resources, having little or no priority. The truth is that numerical errors are still too large, and increasing computing power is a lousy and inefficient way to make them smaller. We are committed to a low-risk path that is also highly inefficient, because the argument is accessible to the most naïve people in the room.

What is important is seldom urgent and what is urgent is seldom important.

― Dwight D. Eisenhower

Another way of getting to the heart of the issue is the efficacy of using gains in computer power to get better solutions. Increases in computing power are a terrible way to produce better results; they are woefully inefficient. One simply needs to examine the rate of solution improvement based on scaling arguments. First, we need to recognize that practical problems converge quite slowly in terms of the application of enhanced computational resources. For almost any problem of true real world applicability, high-order convergence (higher than first order) is never seen. Generally we might expect solutions to improve at first order with the inverse of the mesh size. If we look at three-dimensional, time-dependent problems and want to halve the numerical error, we need to apply at least 16 times the computing power. Usually convergence rates are less than first order, so the situation is actually even worse. As a result we are investing an immense amount in progressing in an incredibly inefficient manner, and starving more efficient means of progress. To put more teeth on the impact of current programs, the exascale initiative wants to compute things fifty times better, which will only result in reducing errors by slightly more than one half. So we will spend huge effort and billions of dollars in making numerical errors smaller by half. What an utterly shitty return on investment! This is doubly shitty when you realize that so much more could be done to improve matters by other means.
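
The arithmetic behind these claims is simple enough to script. Here is a minimal sketch, assuming the error scales like h^q and the cost of an explicit three-dimensional, time-dependent calculation scales like h^{-4} (three space dimensions plus a time step tied to the mesh); the function name is mine, purely for illustration.

```python
# Back-of-the-envelope scaling: error ~ h**q, cost ~ h**-(d+1)
# for a d-dimensional, time-dependent explicit calculation (dt ~ h).
def error_reduction(compute_factor, q=1.0, d=3):
    """Factor by which the error shrinks when compute grows by compute_factor."""
    return compute_factor ** (q / (d + 1))

print(error_reduction(16.0))         # 2.0   -> 16x the compute halves the error
print(error_reduction(50.0))         # ~2.66 -> a "50x" machine cuts error to ~38%
print(error_reduction(50.0, q=0.5))  # ~1.63 -> sub-first-order convergence is worse
```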

The first thing we need to recognize for progress is the relative efficacy of different modes of investment. The most effective way to progress in modeling and simulation is better models. Better models require work on theory and experiment, with deeply innovative thinking based on inspiration and evidence of the limitations of current theory and modeling. For existing and any new models the next step is solving the models numerically. This involves detailed and innovative numerical approximations of the models. The power of modeling and simulation with computers is predicated on the ability to solve complex models that cannot be understood analytically (or only analytically with severe restrictions or assumptions). The fidelity of the numerical approximations is the single most effective way to improve results once modeling errors have been addressed. Numerical approximations can make a huge difference in the accuracy of simulations, far more effectively than computer power.

Don’t tell me about your effort. Show me your results.

― Tim Fargo

So why are we so hell bent on investing in a more inefficient manner of progressing? Because of our mindless addiction to Moore’s law, which has provided improvements in computing power over the last fifty years, in effect for free, to the modeling and simulation community.

Our modeling and simulation programs are addicted to Moore’s law as surely as a crackhead is addicted to crack. Moore’s law has provided a means to progress without planning or intervention for decades; time passes and capability grows almost as if by magic. The problem we have is that Moore’s law is dead, and rather than moving on, the modeling and simulation community is attempting to raise the dead. By this analogy, the exascale program is basically designed to create zombie computers that completely suck to use. They are not built to get results or do science; they are built to get exascale performance on some sort of bullshit benchmark.

This gets to the core of the issue, our appetite for risk and failure. Improving numerical approximations is risky and depends on breakthroughs and innovative thinking. Moore’s law has sheltered the modeling and simulation community from risk and failure in computing hardware for a very long time. If you want innovation you need to accept risk and failure; innovation without risk and failure simply does not happen. We are intolerant of risk and failure as a society, and this intolerance dooms innovation, strangling it in its crib. Moore’s law allowed progress without risk, as if it came for free. The exascale program will be the funeral pyre for Moore’s law, and we are threatening the future of modeling and simulation with our unhealthy addiction to it.

If failure is not an option, then neither is success.

― Seth Godin

There is only one thing that makes a dream impossible to achieve: the fear of failure.

― Paulo Coelho

The key thing to realize about this discussion is that improving numerical approximations is risky and highly prone to failure. You can invest in improving numerical approximations for a very long time without any seeming progress until one gets a quantum leap in performance. The issue in the modern world is the lack of predictability of such improvements. Breakthroughs cannot be predicted and cannot be relied upon to happen on a regular schedule. A breakthrough requires innovative thinking and a lot of trial and error. The ultimate quantum leap in performance is founded on many failures and false starts. If these failures are engaged in a mode where we continually learn and adapt our approach, we eventually solve problems. The problem is that it must be approached as an article of faith, and cannot be planned. Today’s management environment is completely intolerant of such things, and demands continual results. The result is squalid incrementalism and an utter lack of innovative leaps forward.

Civilizations… cannot flourish if they are beset with troublesome infections of mistaken beliefs.

― Harry G. Frankfurt

What is the payoff for methods improvement?

If we improve a method we can achieve significantly better results without a finer computational mesh. This results in a large saving in computational cost as long as the improved method isn’t too expensive. As I mentioned before, one needs 16 times the computational resources to knock the error down by half for a 3-D time-dependent calculation. If I produce a method with half the error, it is more efficient as long as it is less than 16 times as expensive. In other words, the method can use up to 16 times the computational resources and still come out ahead. This is a lot of headroom to work with!
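
Stated as an inequality, under the same scaling assumptions as above (first-order convergence, three space dimensions plus time): a method that cuts the error by a factor r pays for itself whenever its cost multiplier c satisfies

c \;<\; r^{\,(d+1)/q} \;=\; 2^{4} \;=\; 16 \qquad (r = 2,\ d = 3,\ q = 1),

and the headroom only grows when the practical convergence rate q falls below one.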

The most dangerous ideas are not those that challenge the status quo. The most dangerous ideas are those so embedded in the status quo, so wrapped in a cloud of inevitability, that we forget they are ideas at all.

― Jacob M. Appel

For some cases the payoff is far more extreme than these simple arguments. The archetype of this extreme payoff is the difference between first- and second-order monotone schemes. For general fluid flows, second-order monotone schemes produce results that are almost infinitely more accurate than first-order schemes. The reason for this stunning claim is the acute difference in results that comes from the form of the truncation error, expressed via the modified equations (the equations the numerical method actually solves more accurately than the original model). For first-order methods there is a large viscous effect that makes all flows laminar. Second-order methods are necessary for simulating high Reynolds number turbulent flows because their dissipation doesn’t interfere directly with the fundamental physics.
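
The modified-equation argument can be made concrete with the simplest example available (first-order upwind for linear advection; a textbook result quoted here for illustration). To leading order the scheme actually solves an advection-diffusion equation,

u_t + a\, u_x \;=\; \frac{a\, \Delta x}{2} (1 - \nu)\, u_{xx} \;+\; O(\Delta x^2), \qquad \nu = \frac{a\, \Delta t}{\Delta x},

so the mesh supplies an O(\Delta x) viscosity that refinement shrinks but never removes. Second-order methods push the leading error into a dispersive third-derivative term instead, which is why their dissipation does not smother high Reynolds number physics in the same way.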

As technology advances, the ingenious ideas that make progress possible vanish into the inner workings of our machines, where only experts may be aware of their existence. Numerical algorithms, being exceptionally uninteresting and incomprehensible to the public, vanish exceptionally fast.

― Nick Trefethen

We don’t generally have good tools for estimating numerical error in non-standard (or unresolved) cases. One distillation of a key problem is found in Banks, Aslam, and Rider, where sub-first-order convergence is described and analyzed for solutions of a discontinuous problem for the one-way wave equation. The key result in that paper is the nature of mesh convergence for discontinuous or non-differentiable solutions: we see sub-linear, fractional-order convergence. The central finding is a general relationship between the convergence rate and the formal order of accuracy of the method, p, namely \frac{p}{p+1}. This comes from the analysis of the solution to the modified equation including the leading-order truncation error. For nonlinear discontinuous solutions, the observed result is first order, where a balance is established between the regularization and the self-steepening in shock waves. At present there is no theory for what this looks like. Seemingly this system of equations could be analyzed as we did for the linear equations. Perhaps this might provide guidance for numerical method development. It would be worthy progress if we could analyze such systems more theoretically, providing a way to understand actual accuracy.
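
This sub-linear behavior is easy to reproduce. Below is a minimal sketch of such an experiment (my own toy setup, not the configuration from the paper): first-order upwind for the one-way wave equation with square-wave data, measuring the L1 error under mesh refinement. The observed rate should sit near p/(p+1) = 1/2 rather than the formal first order.

```python
import numpy as np

# First-order upwind for u_t + a u_x = 0 on a periodic unit interval with
# square-wave data; CFL = 0.5 (CFL = 1 would be exact and hide the effect).
def upwind_l1_error(n_cells, t_final=0.25, a=1.0, cfl=0.5):
    dx = 1.0 / n_cells
    x = (np.arange(n_cells) + 0.5) * dx
    u = np.where((x > 0.25) & (x < 0.75), 1.0, 0.0)
    for _ in range(int(round(t_final / (cfl * dx / a)))):
        u = u - cfl * (u - np.roll(u, 1))      # upwind update, a > 0
    shifted = (x - a * t_final) % 1.0          # exact solution: translated data
    exact = np.where((shifted > 0.25) & (shifted < 0.75), 1.0, 0.0)
    return dx * np.abs(u - exact).sum()        # discrete L1 error

errors = [upwind_l1_error(n) for n in (200, 400, 800, 1600)]
print([np.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])])  # rates near 0.5
```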

Another key limitation of existing theory is chaotic solutions, classically associated with turbulent or turbulent-like flows. These solutions are extremely (perhaps even infinitely) sensitive to initial conditions. It is impossible to get convergence results for point values, and the only convergence is for integral measures. These measures generally converge very slowly and are highly mesh-dependent. This issue is huge in high performance computing. One area of study is measure-valued solutions, where convergence is examined statistically. This is a completely reasonable approach to convergence for general solutions of hyperbolic PDEs.

The much less well-appreciated aspect comes with the practice of direct numerical simulation of turbulence (DNS, really of anything). One might think that having a DNS would mean that the solution is completely resolved and highly accurate. It is not! Indeed DNS calculations are not highly convergent even for integral measures. Generally speaking, one gets first-order accuracy or less under mesh refinement. The problem is the highly sensitive nature of the solutions and the scaling of the mesh with the Kolmogorov scale, which is a mean-square measure of the turbulence scale. Clearly there are effects that come from scales much smaller than the Kolmogorov scale, associated with highly intermittent behavior. To fully resolve such flows would require the scale of turbulence to be described by the maximum norm of the velocity gradient instead of the RMS.
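
For context, the resolution argument leans on the Kolmogorov length built from the mean dissipation rate, which is exactly why it is an RMS-type measure rather than one sensitive to local extremes (standard definitions, quoted for reference):

\eta \;=\; \left( \frac{\nu^3}{\varepsilon} \right)^{1/4}, \qquad \frac{L}{\eta} \sim Re^{3/4} \;\Rightarrow\; N \sim Re^{9/4} \ \text{grid points in three dimensions},

so a calculation sized by \eta says nothing about gradients that locally exceed the mean-square estimate.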

If you want something new, you have to stop doing something old

― Peter F. Drucker

When we get to the real foundational aspects of numerical error and its limitations, we come to the fundamental theorem of numerical analysis. For PDEs it only applies to linear equations and basically states that consistency plus stability is equivalent to convergence. Everything is tied to this. Consistency means you are solving the equations with a valid and correct approximation; stability means getting a result that doesn’t blow up. What is missing is the theoretical application to more general nonlinear equations, along with deeper relationships among accuracy, consistency and stability. This theorem was derived back in the early 1950s, and we probably need something more, but there is no effort or emphasis on this today. We need great effort and immensely talented people to progress. While I’m convinced that we have no limit on talent today, we lack effort and perhaps don’t develop or encourage the talent appropriately.

Beyond the issues with hardware emphasis, today’s focus on software is almost equally harmful to progress. Our programs are working steadfastly on maintaining large volumes of source code full of the ideas of the past. Instead of building on the theory, methods, algorithms and ideas of the past, we are simply worshiping them. This is the construction of a false ideology. We would do far greater homage to the work of the past if we were building on that work. The theory is not done by a long shot. Our current attitudes toward high performance computing are a travesty, embodied in a national program that makes the situation worse only to serve the interests of the willfully naive. We are undermining the very foundation upon which the utility of computing is built. We are going to end up wasting a lot of money and getting very little value for it.

We now live in a world where counter-intuitive bullshitting is valorized, where the pose of argument is more important than the actual pursuit of truth, where clever answers take precedence over profound questions.

― Ta-Nehisi Coates

Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39, no. 9 (2002): 821-841.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit large eddy simulation: computing turbulent fluid dynamics. Cambridge university press, 2007.

Banks, Jeffrey W., T. Aslam, and William J. Rider. “On sub-linear convergence for linearly degenerate waves in capturing schemes.” Journal of Computational Physics 227, no. 14 (2008): 6985-7002.

Fjordholm, Ulrik S., Roger Käppeli, Siddhartha Mishra, and Eitan Tadmor. “Construction of approximate entropy measure-valued solutions for hyperbolic systems of conservation laws.” Foundations of Computational Mathematics (2015): 1-65.

Lax, Peter D., and Robert D. Richtmyer. “Survey of the stability of linear finite difference equations.” Communications on pure and applied mathematics 9, no. 2 (1956): 267-293.

 

Science is political and it always has been

There is a cult of ignorance in the United States, and there has always been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’

― Isaac Asimov

On Saturday I participated in the March for Science in downtown Albuquerque, along with many other marches across the World. This was advertised as a non-partisan event, but to anyone there it was clearly and completely partisan and biased. Two things united the people at the march: a philosophy of progressivism and liberalism, and opposition to conservatism and Donald Trump. The election of a wealthy paragon of vulgarity and ignorance has done wonders for uniting the left wing of politics. Of course, the left wing in the United States is really a moderate wing, made to seem liberal by the extreme views of the right. Among the things the left wing champions is science as an engine of knowledge and progress. The reason for this dichotomy is the right wing’s embrace of ignorance, fear and bigotry as its electoral tools. The right is really the party of money and the rich, with fear, bigotry and ignorance wielded as tools to “inspire” enough of the people to vote against their best (long term) interests. Part of this embrace is a logical opposition to virtually every principle science holds dear.

The premise that a march for science should be non-partisan is utterly wrong on the face of it; science is and has always been a completely political thing. The reasoning for this is simple and persuasive. Politics is the way human beings settle their affairs, assign priorities and make decisions. Politics is an essential human endeavor. Science is equally human in its composition, being a structured vehicle for societal curiosity leading to the creation of understanding and knowledge. When the political dynamic is arrayed in the manner we see today, science is absolutely and utterly political. We have two opposing views of the future, one consistent with science, favoring knowledge and progress, the other inconsistent with science, favoring fear and ignorance. In such an environment science is completely partisan and political. To expect things to be different is foolish and naïve.

The essence of Conservatism is fear: fear of what is not understood, fear of change, fear of the new, fear of what’s different, fear of the other.

― Gene Pozniak

One of the key things to understand is that science has always been a political thing, although the contrast has been turned up in recent years. The thing driving the political context is the rightward movement of the Republican Party, which has led to their embrace of extreme views including religiosity, ignorance and bigotry. Of course, these extreme views are not really the core of the GOP’s soul, money is, but the cult of ignorance and anti-science is useful in propelling their political interests. The Republican Party has embraced extremism in a virulent form because it pushes its supporters to unthinking devotion and obedience. They will support their party without regard for their own best interests. The Republican voter base hurts its own economic standing in favor of policies that empower their hatreds and bigotry while calming their fear. All forms of fact and truth have become utterly unimportant unless they support their world-view. The upshot is the rule of a political class hell bent on establishing a ruling class in the United States composed of the wealthy. Most of the people voting for Republican candidates are simply duped by their support of extreme fear, hate and bigotry. The Democratic Party is only marginally better, since they have been seduced by the same money, leaving voters with no one to work for them. The rejection of science by the right will ultimately be the undoing of the Nation, as other nations will eventually usurp the United States militarily and economically.

Increasingly science and progress are rejected because they invariably upset the status quo. When the rich and powerful make the rules and have all the advantages, any perturbation in the balance of society must be resisted. The social structure and the system of laws are already set up in service to the ruling class. Any change is viewed as a danger. Science in all its forms is a threat to power. The only science that is favored is science that serves the needs of the powerful. Increasingly the science supported by the ruling class is carefully controlled and obsessively applied in nature. Pure research and the acquisition of knowledge are dangerous and feared because their results cannot be controlled. Pure research has the potential to upset the order the powerful have carefully established. This order is an engine to enrich them and further tap into the wealth of society for their own benefit. Science is political because it is an engine of progress, and progress is the enemy of the powerful.

The South, which is peopled with ardent and irascible beings, is becoming more irritated and alarmed.

 Alexis de Tocqueville


In a deep way the latest election is simply the culmination of forty years of political evolution (I can’t help but note the irony of using evolution to describe Republicans!). Corruption, hatred and abuse of power are a common thread through this entire time. The foundation of this change happened under Richard Nixon, whose “Southern Strategy” allowed the GOP to embrace the toxic legacy of bigotry that defines the old Confederacy as its own brand. Along with the change of the GOP into the embodiment of Confederate values came a tidal wave of ignorance ushered in by Ronald Reagan, whose generally unintellectual approach is revered by the Republicans almost as if he were a demi-God. In this way Donald Trump is almost the perfect Republican: corrupt and hateful to the core like Nixon, ignorant and uneducated like Reagan, with the mantle of wealth their voters admire. We now see a political movement completely devoted to fear, hatred and ignorance as their brand. They wear these values with pride and attack the elite, whose values of progress, love and knowledge are viewed as weakness. In this lens it is no wonder that science is rejected.

Ignorance is a progressive thinker’s enemy; acceptance, awareness, accountability and action are the tools of the informed; of people who want change and do all they can to achieve it.

― Carlos Wallace

The assault on science is based on its incompatibility with the values of modern extreme conservatism. Again and again knowledge and progress will challenge the conservative mind with truths they do not want to hear. The best way to avoid this problem is to kill the knowledge before it is produced. We can find example after example of science being silenced because it is likely to produce results that do not match their view of the world. Among the key engines of the ignorance of conservatism is its alliance with extreme religious views. Historically religion and science are frequently at odds because faith and truth are often incompatible. This isn’t necessarily true of all religious faith, but rather of the faith stemming from a fundamentalist approach, which is usually grounded in old and antiquated notions (i.e., classically conservative and opposing anything looking like progress). Fervent religious belief cannot deal with truths that do not align with its dictums. The best way to avoid this problem is to get rid of the truth. When the government is controlled by extremists this translates to reducing and controlling science to avoid such truths.

Every gun that is made, every warship launched, every rocket fired signifies in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed. This world in arms is not spending money alone. It is spending the sweat of its laborers, the genius of its scientists, the hopes of its children. This is not a way of life at all in any true sense. Under the clouds of war, it is humanity hanging on a cross of iron.

― Dwight D. Eisenhower

With this background we can see how this imprints onto science rather clearly. Fear and money both like science that leads to National security and defense, so the right supports activities that either provide people with protection or allow us to kill more effectively and efficiently. The problem is that the right only supports this kind of work in its most naked and applied sense. The right does not support the deeper research that forms the foundation allowing us to develop technology. As a result our ability to be the best at killing people is at risk in the long run. Eventually the foundation of science used to create all our weapons will run out, and we will no longer be the top dogs. The basic research used for weapons work today is largely a relic of the 1960’s and 1970’s. The wholesale diminishment in societal support for research from the 1980’s onward will start to hurt us more obviously. In addition we have poisoned the research environment in a fairly bipartisan way, leading to a huge drop in the effectiveness and efficiency of the fewer research dollars spent.

The right wing has become the enemy of intellectualism. Increasingly they have formed open opposition to educating citizens in anything beyond a rote traditional form of learning. They have systematically undermined a system that teaches people to think for themselves. Science is a form of high thinking that the right opposes vehemently, and as such it is to be diminished as a threat to their agenda. At the apex of the educational system are universities. Universities are the factories for the elite they hate so much. The right wing has been engaged in an all-out assault on universities in part because they view them as the center of left wing views. Attacking and contracting science is part of this assault. In addition to the systematic attack on universities is an increasing categorization of certain research as unlawful because its results will almost certainly oppose right wing views. Examples of this include drug research (e.g., marijuana in particular), anything sexual, climate research, the health effects of firearms, evolution, and the list grows. The deepest wounds to science are more subtle. They have created an environment that poisons intellectual approaches and undermines the education of the population because educated, intellectual people naturally oppose their ideas.

It is a well-known fact that reality has a liberal bias.

― Stephen Colbert

Let’s get to the core of the science the right opposes. One of the areas where science rubs the right wing the wrong way could broadly be characterized as ecological and environmental research. The archetype of this is climate research and the concept of climate change. In general, research in Earth sciences leads to conclusions that upset the status quo, hurting the ability of traditional industry to make money (with the exception of geological work associated with energy and mining). Much of the ecological research has informed us how human activity is damaging the environment. Industry does not want to adopt practices that preserve the environment, primarily due to greed. Moreover, religious extremism opposes ecological research because it contradicts the dictums of their faith as chosen people who may exploit the Earth to the limits of their desire. Climate change is the single greatest research threat to the conservative worldview. The fact that mankind is a threat to the Earth is blasphemous to right wing extremists, impacting either their greed or their religious conviction. The denial of climate change is based primarily on religious faith and greed, the pillars of modern right wing extremism.

An area where science and public policy come into great conflict is public health. Americans love their firearms and possess them in great numbers. Americans also die in great numbers at the end of their firearms. While gun violence captures the attention and imagination, most gun deaths are self-inflicted or accidental. It is a public health issue of great consequence. Science should be studying it, and yet it is not allowed by federal funding. The reason is the gun lobby (i.e., the NRA), which owns the right wing. They don’t want the facts known and have made government-funded research on the topic effectively illegal. They are worried that knowledge of the level of public health threat will result in a desire for regulation, and a public outcry. Instead of working with facts and knowledge, we simply suppress them.

Another huge area of intrusion of politics into science is drugs, especially illegal ones. We have made research into certain drugs and their medical benefit illegal (i.e., marijuana). Several things are obvious to even the most casual observer: marijuana is not a particularly dangerous drug (definitely not “Schedule 1”), and it has medical benefits. The right wing (and a lot of the left) is opposed to adding science to the discussion. This is yet another example where facts and truth are excluded from the dialog. We have a number of purely political reasons for this. A huge one is the war on drugs, which is supported by law enforcement and prison lobbies (and the people enriched by these “industries”). These lobbies work to damage society as a whole, and enable the implicit bigotry in how laws are enforced and people are imprisoned, ignoring the damage to society at large.

The impact of the drug laws has enabled the Jim Crow laws of the past to be enforced by a new mechanism. Again, the impact of the illegal nature of the drugs is ignored because their illegality serves conservative political interests. Their illegal nature is a boon to criminal cartels that use them to generate huge incomes. Instead of cutting off the cartels’ source of money and defusing criminal violence, we keep it fully charged because law enforcement wants power, not a solution to the problem. A perverse fact of modern life is the greater threat to public health posed by legal drugs (i.e., the opioid crisis), and illegal drugs like marijuana could mitigate that legal drug crisis. We maintain a ridiculous perspective on these drugs by suppressing research. Other drugs like alcohol are legal while their public health impacts are obviously severe. Without scientific research we have no ability to understand the benefits and harms of these drugs.

Fundamentalism isn’t about religion, it’s about power.

― Salman Rushdie

Medicine is not immune from politics, especially as the money associated with it becomes enormous. Increasingly the whole of medical research runs into issues associated with morality. This is true with both the beginning and the end of life. Genetics is fraught with moral implications that naturally drive a political response. We already see huge issues on the left with the response to GMOs. The left wing response to GMOs is reactionary and Luddite in character. At the same time the right’s view of GMOs is cavalier and geared toward money motives above all else. Properly managed, GMOs could be a boon for mankind if we can balance the risks and benefits. The increasingly destructive political dialog without compromise assures no good outcomes.

We are living in modern times throughout the world and yet are dominated by medieval minds.

― Eqbal Ahmad

In no place does politics become more poisonous than with sex (at least in the United States). Whether we are talking about reproductive rights, public health, or sexual education, science is ignored in favor of religious moralization delivered through politics. We have the power to make a huge difference in people’s lives by giving them control over their reproduction, yet this progress is undermined by the right wing. Ultimately the right’s position is to criminalize sex by using reproduction as a punishment. This works to destroy people’s potential for economic advancement and burden the world with poor, unwanted children. Sex education for children is another example where ignorance is promoted as the societal response. Science could make the world far better and more prosperous, and the right wing stands in the way. It has everything to do with sex and nothing to do with reproduction.

The ludicrous idea of abstinence-based education is the right’s approach even though it is utterly and completely ineffective and actually damages people. Those children are then spared the knowledge of their sexuality and their reproductive rights through an educational system that suppresses information. Rather than teach our children about sex in a way that honors their intelligence and arms them to deal with life, we send them out ignorant. This ignorance is yet another denial of reality and science by the right. The right wing does not want to seem like they are endorsing basic human instincts around reproduction and sex for pleasure. The result is to create more problems, more STDs, more unwanted children, and more abortions. It gets worse because we also assure that a new generation of people will reach sexual maturity without the knowledge that could make their lives better. We know how to teach people to take charge of their reproductive decisions, sexual health and pleasure. Our government denies them this knowledge, largely driven by antiquated moral and religious ideals, which only serve to give the right voters.

Take a stand for science, knowledge and progress by embracing politics. In today’s world nothing is more dangerous than truth and facts. People whose fundamental principles and philosophy are utterly opposed to the foundational principles of science govern us. Conflict between their approach to governance and science is inevitable. People who are completely terrified of reality increasingly rule us. Science is one of the key ways humans understand their reality. Science is a completely human thing and political to its core. We are rapidly moving to a world where it is radical to simply speak facts and the truth. Before long doing science may be an act of dissidence against the ruling class. As I noted above in many cases it already is.

If by a “Liberal” they mean someone who looks ahead and not behind, someone who welcomes new ideas without rigid reactions, someone who cares about the welfare of the people-their health, their housing, their schools, their jobs, their civil rights and their civil liberties-someone who believes we can break through the stalemate and suspicions that grip us in our policies abroad, if that is what they mean by a “Liberal”, then I’m proud to say I’m a “Liberal.”

― John F. Kennedy

Mission Focused Research Is Better Research

Great research depends on taking big risks with a large chance of failure, and mission focus is a clear way to get there. Failure is the key to learning, and research is fundamentally learning. We must avoid failing too early by shying away from solving the harder problems, and devotion to a mission provides the drive to keep the focus on results that provide value to that mission.

There is only one thing that makes a dream impossible to achieve: the fear of failure.

― Paulo Coelho

For a lot of people working at a National Lab there are two divergent paths for work: the research path that leads to lots of publishing, deep technical work and strong external connection, or the mission path that leads to internal focus and technical shallowness. The research path is for the more talented and intellectual people who can compete in this difficult world. For the less talented, creative or intelligent people, the mission world offers greater security at the price of intellectual impoverishment. Those who fail at the research focus can fall back onto the mission work and be employed comfortably after such failure. This perspective is a cynical truth for those who work at the Labs and represents a false dichotomy. If properly harnessed, the mission focus can empower and energize better research, but it must be mindfully approached.

The measure of greatness in a scientific idea is the extent to which it stimulates thought and opens up new lines of research.

― Paul A.M. Dirac

As I stated, I believe the dichotomy of mission versus research is false. The mission imposes limitations and constraints on research. In a nutshell, the mission imposes a fixed problem to solve and one must adapt the solution to impact this mission. Conversely, pure research is unconstrained by a mission, which encourages people to change the problem to fit a solution. The fixed-problem, adaptive-solution mindset is much better for engaging innovation and producing breakthrough results. It also means a great amount of risk and lots of failure. Pure research can chase unique results, but the utility of those results is often highly suspect. This sort of research entails less risk and less failure as well. If the results necessarily impact the mission, the utility is obvious. The difficulty is noting the broader aspects of research applicability that mission application might hide.

Examples of great mission-focused research abound, and our modern world is testimony to the breakthrough nature of Cold War defense-focused research. The shape of the modern world is a testament to the power of mission-focused work to succeed. Ubiquitous aspects of modernity such as the Internet, cell phones and GPS all owe their existence to Cold War research focused on some completely different mission. All of these technologies were created through steadfast focus on utility that drove innovation as a mode of problem solving. This model for creating value has fallen into disrepair due to its uncertainty and risk. Risk is something we have lost the capacity to withstand; as a result, the failure necessary to learn and succeed with research never happens.

Failure is a greater teacher than success

― Clarissa Pinkola Estés

Mission-focused research falls prey to concerns over risk. In many mission organizations there is a fear of taking on too much risk in adopting research results into the mission delivery. The thought is that the research might not pan out and the mission will suffer as a result. This is both shortsighted and foolhardy. The truth is vastly different from this fear-based reaction, and the only thing that suffers from shying away from research in mission-based work is the quality of the mission-based work. Doing research causes people to work with deep knowledge and understanding of their area of endeavor. Research is basically the process of learning taken to the extreme of discovery. In the process of getting to discovery one becomes an expert in what is known and capable of doing exceptional work. Today too much mission-focused work is technically shallow and risk averse. It is over-managed and under-led in pursuit of the false belief that risk and failure are bad things.

There is a key tension to maintain in harnessing this engine of knowledge. The successful delivery of value and success to the mission work must take priority. Those conducting the research should have a deep commitment to the mission and its success. Ultimately success at the mission work must supersede the research objectives. Even better, the research objectives need to be guided by the mission needs. In this sense the mission acts to constrain the research and shape its direction and focus. This sort of dynamic must be carefully and adroitly managed if it is to be achieved. Unconstrained research without mission focus is quite tempting and much simpler to manage. It is also less successful at producing real value for society. Almost every breakthrough of great significance was the result of results-focused work, although many of the breakthroughs had far greater reach beyond their intended use.

An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.

― Werner Heisenberg

In my own experience the drive to connect mission and research can provide powerful incentives for personal growth. For much of my early career the topic of turbulence was utterly terrifying, and I avoided it like the plague. It seemed like a deep, complex and ultimately unsolvable problem that I was afraid of. As I became deeply engaged with a mission organization at Los Alamos it became clear to me that I had to understand it. Turbulence is ubiquitous in highly energetic systems governed by the equations of fluid dynamics. The modeling of turbulence is almost always done using dissipative techniques, which end up destroying most of the fidelity in the numerical methods used to compute the underlying, ostensibly non-turbulent flow. These high-fidelity numerical methods were my focus at the time. Of course these energy-rich flows are naturally turbulent. I came to the conclusion that I had to tackle understanding turbulence.

One winter break my laptop broke, leaving me without the ability to work on my computer codes over the break (those were the days!). So I went back to my office (those were the days!) and grabbed seven books on turbulence that had been languishing on my bookshelves unread due to my overwhelming fear of the topic. I started to read these books cover to cover, one by one, and learn about turbulence. I’ve included some of these references below for your edification. The best and most eye opening was Uriel Frisch’s “Turbulence: The Legacy of A. N. Kolmogorov”. In the end, the mist began to clear and turbulence began to lose its fearful nature. Like most things one fears, the lack of knowledge of a thing gives it power, and turbulence was no different. Turbulence is actually kind of a sad topic: it’s not understood and very little progress is being made.

The main point is that the mission focus energized me to attack the topic despite my fear of it. The result was a deeply rewarding and successful research path resulting in many highly cited papers and a book. All of a sudden the topic that had terrified me was understood and I could actually conduct research in it. All of this happened because I took contributing work to the mission as an imperative. I did not have the option of turning my back on the topic because of my discomfort over it. I also learned a valuable lesson about fearsome technical topics; most of them are fearsome because we don’t know what we are doing and overelaborate the theory. Today the best things we know about turbulence are simple and old, discovered by Kolmogorov as he evaded the Nazis in 1941.

People who don’t take risks generally make about two big mistakes a year. People who do take risks generally make about two big mistakes a year.

― Peter F. Drucker

In today’s world we have allowed a system to come into power that funds useless research. We have created vast swaths of safe research topics that ultimately produce rafts of papers, but little or no real utility for society. A big driver behind this mentality is the need to fund “sure things” that can’t fail. This pushes research into the realm of squalid incrementalism. Incremental research is safe and almost never prone to the risk of failure. It is also a waste of money that can produce the appearance and guise of success without actual achievement. Our unremittingly fearful society, including its worry over the appearance of scandal, has driven us to this horrible point. Research has become cowardly and uninspired so that it doesn’t ever fail. Being mission-focused is too hard and too risky because the mission is too important to ever fail at. The true attitude should be that the mission is too important not to fail at!

The main reason of fear of failure lies in people’s being unaware of the impact of misfortune on achieving success

― Sunday Adelaja

The current sorry state of high performance computing is a direct result of the current milieu where mission focus is neglected in favor of carefully managed projects with sure things as targets. Project management is not leadership, and without leadership we will continue to steadfastly underachieve. For example, we have utterly eviscerated applied mathematics by pushing a product-oriented approach that demands the delivery of results in software. Producing software in the conduct of applied mathematics used to be a necessary side activity instead of the core of value and work. Today software is the main thing produced and actual mathematics is often virtually absent. Actual mathematical research is difficult, failure prone and hard to measure. Software on the other hand is tangible and managed. It is still hard to do, but ultimately software is only as valuable as what it contains, and increasingly our software is full of someone else’s old ideas. We are collectively stewarding other people’s old intellectual content, not producing our own, nor progressing in our knowledge.

This trend would be bad enough on its own, but it is the tip of a proverbial iceberg of underachievement. The second pillar of underachievement in high performance computing is, ironically, a devotion to computer hardware. Again, computer hardware is tangible and easy to measure. To a naïve person (or congressman), our ability to do things with computers should be a one-to-one match with the raw power of our computers. Nothing could be farther from the truth, as computing is a completely multi-disciplinary field depending on a huge swath of science for success. The computer hardware is actually one of the least important components in our modeling and simulation competence. Instead of producing a program that strives for true success in modeling and simulation based on real mission value, we have constructed programs that are intellectually vacuous because they are easier to fund and explain to unsophisticated people. The hardware program more naturally lends itself to management and simple metrics of success. It can be sold to uninformed people. Its current form is an abdication of leadership and antithetical to the concept of mission focus. Our approach to high performance computing is only likely to achieve supremacy for the Chinese in the field.

Success is stumbling from failure to failure with no loss of enthusiasm.

― Winston S. Churchill

What is gained by this mission focus? Mission-focused research means the problem being solved is fixed and unwavering, and the results and knowledge must contribute to the solution of this problem. This forces the research to adapt itself to the needs of the problem rather than the problem to the research. The result of this model is the tendency to confront difficult, thorny issues rather than shirk them. At the same time this form of research can also lead to failure and risk manifesting itself. This tendency is the rub, and leads to people shying away from it. We are societally incapable of supporting failure as a viable outcome. The result is the utter and complete inability to do anything hard. This all stems from a false sense of the connection between risk, failure and achievement.

If a single characteristic is contributing to a societal feeling that we have lost greatness, it is that we cannot accept failure. Without failure, great things cannot be achieved. Failure is the vehicle of achievement and learning whether we are talking about individuals, organizations or nations. The inability to accept failure as a possible outcome is the tacit acceptance of not wanting to do anything that matters, or anything great. The road to greatness is paved with many failures and the unerring drive to learn and grow from these failures. For the complex missions we are charged with, the commitment to mission focus in research means accepting failure as a necessary outcome of endeavor. This is the hard message that our spineless politicians and managers cannot give us. The inability to grasp this core truth is utter societal cowardice. True leadership would provide us the necessary support and encouragement to be courageous and steadfast. Instead we succumb to fear and the false belief that achievement can be managed and had without risk.

Research is about learning at a fundamental, deep level, and learning is powered by failure. Without failure you cannot effectively learn, and without learning you cannot do research. Failure is one of the core attributes of risk. Without the risk of failing there is a certainty of achieving less. This lower achievement has become the socially acceptable norm for work. Acting in a risky way is a sure path to being punished, and we are being conditioned not to risk and not to fail. For this reason mission-focused research is shunned. The conditions that mission-focused research produces are no longer acceptable, and our effective social contract with the rest of society has destroyed it.

If we are to successfully do great things again as people, as organizations, as laboratories and as a nation, the irony is that we need to fail a lot more. One way to assure the sort of failure we need is mission-focused research where providing value to a difficult mission is the primal goal of research. Better research is founded on devotion to meaningful outcomes, taking big risks and tolerating lots of failure.

Only those who dare to fail greatly can ever achieve greatly.

― Robert F. Kennedy

Launder, Brian Edward, and Dudley Brian Spalding. “Lectures in mathematical models of turbulence.” (1972).

Frisch, Uriel, and Russell J. Donnelly. “Turbulence: The Legacy of A. N. Kolmogorov.” (1996): 82-84.

Pope, Stephen B. “Turbulent flows.” (2001): 2020.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit large eddy simulation: computing turbulent fluid dynamics. Cambridge university press, 2007.

Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39, no. 9 (2002): 821-841.

Margolin, L. G., W. J. Rider, and F. F. Grinstein. “Modeling turbulent flow with implicit LES.” Journal of Turbulence 7 (2006): N15.

Communicating is the hardest thing we do

Many times in life I’ve regretted the things I’ve said without thinking. But I’ve never regretted the things I said nearly as much as the words I left unspoken.

― Lisa Kleypas

I’m a scientist and I tackle lots of difficult intellectual topics, higher-level mathematics and deep physical principles daily. All of these things are very complex and require immense amounts of training, experience and effort. For most people the things I do, think about, or work on are difficult to understand or put into context. None of this is the hardest thing I do every day. The thing that we trip up on and fail at more than anything is simple: communication. Scientists fail to communicate effectively with each other in a myriad of ways, leading to huge problems in marshaling our collective efforts. Given that we can barely communicate with each other, the prospect of communicating with the public becomes almost impossible.

Listen with curiosity. Speak with honesty. Act with integrity. The greatest problem with communication is we don’t listen to understand. We listen to reply. When we listen with curiosity, we don’t listen with the intent to reply. We listen for what’s behind the words.

― Roy T. Bennett

It is as much a problem of listening as talking, and we do neither very well. It is arguable that the art of listening is in deep crisis society-wide. We seem to be quite capable of expressing a wealth of opinions to each other, but incapable of listening and attempting to understand each other. This makes every problem we have worse and stymies our efforts to solve them. In the sciences, these issues are generally compounded by the nature of the people capable of carrying out the deep scientific work. With all that effort and energy put toward the intellectual labor, and given their basic nature as people, little is left over to do the heavy lifting of communication. This leaves this essential effort languishing from a combination of lack of effort and outright incompetence.

If you had to identify, in one word, the reason why the human race has not achieved, and never will achieve, its full potential, that word would be ‘meetings.’

― Dave Barry

Meetings are at the heart of an effective organization, and each meeting is an opportunity to clarify issues, set new directions, sharpen focus, create alignment, and move objectives forward.

― Paul Axtell

A big conduit for communication is the “meeting,” a proper target of derision and ridicule. We all spend way too much time in horrible meetings that masquerade as communication. In addition to wasting time, the worst thing about them is that they give people the impression that communication has taken place when it hasn’t. The meeting doesn’t provide effective broadcast of information and it’s even worse as a medium for listening. Our current management culture seems to have gotten the idea that a meeting is sufficient to do the communication job. Meetings seem efficient in the sense that everyone is there, words are spoken, and even time for questions is granted. With the meeting, the managers go through the motions. The problems with this approach are vast and boundless. The first issue is the general sense that the messaging is targeted for a large audience and lacks the texture that individuals require. The message isn’t targeted to people’s acute and individual interests. Conversations don’t happen naturally, and people’s questions are usually equally limited in scope. To make matters worse, the managers think they have done their communication job.

People who enjoy meetings should not be in charge of anything.

― Thomas Sowell

The tendency to do everything through meetings results in the illusion that communication has happened. The same happens with mass e-mails where management assumes the exchange of information was successful. A lot of the necessary vehicles for communication are overlooked or discounted in the process. Managers avoid the one-on-one conversations needed to establish deep personal connections and understanding. We have filled managers’ schedules with lots of activities involving other managers and paperwork, but not prioritized and valued the task of communication. We have strongly tended to try to make it efficient, and not held it in the esteem it deserves. Many hold office hours where people can talk to them rather than adopting the more effective habit of seeking people out. All of these mechanisms give the advantage to the extroverts among us, and fail to engage the quiet introverted souls or the hardened cynics whose views and efforts have equal value and validity. All of this gets to a core message: communication is pervasive and difficult. We have many means of communicating and all of them should be utilized. We also need to assure and verify that communication has taken place and is two-way.

Employees hate meetings because they reveal that self-promotion, sycophancy, dissimulation and constantly talking nonsense in a loud confident voice are more impressive than merely being good at the job – and it is depressing to lack these skills but even more depressing to discover one’s self using them.

― Michael Foley

We haven’t touched on the other context of the word “meeting” for scientists. The technical talk, seminar or conference version of the word has great importance. The act of formally giving a presentation of technical work is one of the key ways of delivering information in a professional setting. It forms a core opportunity for peer review in a setting that allows for free exchange. Conferences are an orgy of this and should form a backbone of information exchange. Instead conferences have become a bone of contention. People are assumed to only have a role there as speakers, and not as part of the audience. Again the role of listening as an important aspect of communication is completely disregarded in the dynamic. The digestion of information, learning, and providing peer feedback provide none of the justification for going to conferences, yet these all provide invaluable conduits for communication in the technical world.

Americans no longer talk to each other, they entertain each other. They do not exchange ideas, they exchange images. They do not argue with propositions; they argue with good looks, celebrities and commercials.

― Neil Postman

Part of the irony of this problem is the role of science in providing technology that makes communication easier than ever. The issue is the content of that communication, which varies depending on people’s innate taste for clarity and focus. We have issues with transparency of communication even with automatic and pervasive use of all the latest technological innovations. These days we have e-mail, instant messaging, blogging, Internet content, various applications (Twitter, Snapchat,…), social media and other vehicles for information transfer between people. The key to making the technology work to enable better performance still comes down to people’s willingness to pass along ideas within the vehicles available. This problem is persistent whether communications are on Twitter or in person. Again the asymmetry between broadcasting and receiving is amplified by the technology. I am personally guilty of the sin I’m pointing out: we never prize listening as a key aspect of communicating. If no one listens, it doesn’t matter who is talking.

We are still woeful in our ability to confront difficult issues. Concepts like “ghosting” or “breadcrumbing” arose for online relationships, but actually have great meaning in non-romantic settings. For difficult issues there is a strong tendency to avoid discussion and contention; this is just like ghosting, where people disappear to avoid a real breakup. Breadcrumbing is pretending to have a deep relationship that really has no basis in fact and gets nursed along by a stream of limited communications. Both practices are emblematic of deeply toxic engagements that exhibit a lack of humanity, empathy and understanding. Each of these qualities is deeply helpful in aiding the depth of communication, and these are things scientists and engineers are generally quite ill equipped to deal with.

Scientists and engineers tend to be introverts and are not necessarily good at communicating. Nothing about technology changes these facts; it only gives more mechanisms for communication to be poorly executed. All the various communication vehicles require skill to execute and basic ideas to be presented. The ideas are often complicated by their esoteric nature, meaning that standard communication is already stressed. Adding controversy and tension to the communication simply leads to greater stress and encourages bad behavior. This greater stress will serve to bait people into various ghosting or breadcrumbing exercises. All of these elements undermine the depth of communication that would be possible if all the means of achieving it were exercised.

I work on vastly complex projects involving big ideas and massively deep technical issues. The concepts and ideas in the projects are deep and need vast wells of technical expertise to even understand, much less solve. In spite of this, the biggest issue holding us back is communication. We are challenged more by access to information and knowledge of the full nature of the problems we face. The complex issues typically need the expertise of many people to understand, and solutions are equally dependent on wide-ranging expertise. More than simply understanding being fueled by communication, the solutions need collaborations across many technical disciplines to work together. Instead of harnessing the possibility and potential that communication has to unleash productivity and innovation, we embrace secrecy and information hiding that only empowers control and destroys potential. Information hiding has become the standard reaction to the possibilities of greater transparency that the modern world offers.

When the trust account is high, communication is easy, instant, and effective.

― Stephen R. Covey

One of the key issues contributing to our systematic communication problems is conflict aversion. If we talk about our collective problems, we run the risk of conflict. Problems and issues that hold us back are often the source of conflict. Our problems are usually associated with vested interests, or solving the problems might involve some sort of trade space where some people win or lose. Most of the time we can avoid the conflict for a bit longer by ignoring it. Facing the problems means entering into conflict, and conflict terrifies people (this is where ghosting and breadcrumbing come in quite often). The avoidance is usually illusory; eventually the situation will devolve to the point where the problems can no longer be ignored. Usually the situation is then much worse, and the solution is much more painful. We need to embrace means of facing up to problems sooner rather than later, and seek solutions while problems are small and well confined.

Given that scientists can barely communicate with each other effectively, the general public is almost impossible to manage. In many ways the modern world acts to amplify the issues that the technical world has with communication to an almost unbearable level. Perhaps excellence in communication is too much to ask, but the inability to talk and listen effectively with the public is hurting science. If science is hurt then society also suffers from the lack of progress and knowledge advancement science provides. When science fails, everyone suffers. Ultimately we need to have understanding and empathy across our societal divides, whether between scientists and lay people or red and blue. Our failure to focus on effective, deep two-way communication is limiting our ability to succeed at almost everything.

The single biggest problem in communication is the illusion that it has taken place.

― George Bernard Shaw

Validation is much more than uncertainty quantification

Experiment is the sole source of truth. It alone can teach us something new; it alone can give us certainty.

― Henri Poincaré

In looking at the dynamic surrounding verification and validation recently I’ve noticed a very grim evolution of the narrative. Two things have happened to undermine the maturity of V&V. One I’ve spoken about in the past: the tendency to drop verification and focus solely on validation, which is bad enough. In the absence of verification, validation starts to become rather strained and drifts toward calibration. Assurances that one is properly solving the model one claims to be solving are unsupported by evidence. This is bad enough all by itself. The use of V&V as a vehicle for improving modeling and simulation credibility is threatened by this alone, but something worse looms even larger.

A more common and pervasive trend is the conflation of validation with uncertainty quantification. It has become very common for uncertainty quantification (UQ) to be defined as the whole of validation. To some extent this is fueled by a focus on high performance computing, where UQ provides a huge appetite for computational cycles and thus elicits lots of love and support in HPC circles. Validation must be about experiments and a broad cross section of uncertainties that may only be examined through a devotion to multi-disciplinary work and collaboration. One must always remember that validation can never be separated from measurements in the real world, whether experimental or observational. The experiment-simulation connection in validation is primal and non-negotiable.

There are three types of lies — lies, damn lies, and statistics.

― Benjamin Disraeli

A second part of the issue is the hot-topic nature of UQ. UQ has become a buzzword and seems to be a hot issue in publishing and research. Saying you’re doing UQ seems to be a means of squeezing money out of funding agents. In addition UQ can be done relatively automatically and mechanically. Tools and techniques exist to enable UQ to be done without much deep thought even though it touches upon many deep technical topics. Actual validation is far harder and more holistic. The core of any work in validation is serious experimental expertise and hard-nosed comparison with simulations. The detailed nature of the experiment and its intrinsic errors and uncertainties is the key to any comparison. Without knowing the experimental uncertainty, any computational uncertainty is context free. My inner grumpy intellectual would quip that validation requires thinking, and that leads people to avoid it because thinking is so hard. The deeper issue is that validation is complex and multi-disciplinary in nature, making it collaborative and difficult. Experts in a single discipline can do UQ, so it is an easy out.

Five percent of the people think;

ten percent of the people think they think;

and the other eighty-five percent would rather die than think.

― Thomas A. Edison

Thinking is hard work; that’s why so few do it.

― Albert Einstein

One of the biggest issues is the stunning incompleteness of UQ in general. Most commonly UQ is done via an exploration of the variation of parameters in models. Complex models of reality have a lot of constants that are not known with great precision. Various techniques may be utilized to efficiently examine the variation in computational solutions due to changes in these parameters. Among the methods used are things like Markov Chain Monte Carlo (MCMC), polynomial chaos, and other sampling methods. The results from this work are useful and sound, but form a rather incomplete view of uncertainty. Even in these cases the sampling itself is subject to a lack of certainty, driven by the difficulty of determining uncertainty in high-dimensional spaces. Modeling and simulation suffers from a host of other uncertainties not covered by these methodologies. For example, most simulations have some degree of numerical error that may be quite large, and numerous techniques exist for exploring its magnitude and nature. Many systems being modeled have some stochastic character or variability associated with them. Modeling assumptions are often made in simulating a system or experiment, and the solution may change greatly on the basis of these assumptions or modeling approximations. A different computational modeler may make much different assumptions and produce a different solution.
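
To make the parametric piece concrete, a minimal sampling sketch (in Python) might look like the following. The forward model, the two parameters and their assumed distributions are entirely hypothetical stand-ins for an expensive simulation; the point is only the mechanics of propagating parameter uncertainty.

import numpy as np

rng = np.random.default_rng(1)

def simulation(c1, c2):
    # placeholder for an expensive forward model returning one quantity of interest
    return c1 * np.exp(-c2)

# assumed (hypothetical) distributions for two uncertain model constants
c1 = rng.normal(1.0, 0.05, size=2000)
c2 = rng.uniform(0.8, 1.2, size=2000)
qoi = simulation(c1, c2)   # propagate the samples through the model

print("mean =", qoi.mean())
print("std  =", qoi.std())
print("95% interval =", np.percentile(qoi, [2.5, 97.5]))

Everything this sketch leaves out (numerical error, model-form assumptions, and the intrinsic variability of the system itself) is exactly the incompleteness described above.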

Judge a man by his questions rather than by his answers.

― Voltaire

If validation is to be done properly, a fairly complete accounting of modeling and simulation uncertainty is needed. One also needs to understand the experimental error and uncertainty with equal completeness. One must be acutely aware of the intrinsic lack of certainty in the estimation of uncertainty itself. The combination of the solutions and the sizes of each uncertainty puts a modeling and simulation solution into proper context. Without knowledge of the uncertainties in each data source, the distance between solutions cannot be judged. For example, if the experimental precision is very good and the uncertainty is quite small, the simulation needs to be equally precise to be judged well. Conversely, a large experimental uncertainty would allow the model to be much looser and still be judged well. More critically, the experiment wouldn’t provide actionable evidence on research needs, and expert judgment would reign.
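
A toy comparison with entirely hypothetical numbers illustrates how the combined uncertainty sets the standard a simulation is judged against:

import numpy as np

y_exp, u_exp = 3.42, 0.05   # hypothetical measured value and its uncertainty
y_sim, u_sim = 3.55, 0.10   # hypothetical simulated value and its estimated uncertainty

combined = np.sqrt(u_exp**2 + u_sim**2)   # simple root-sum-square combination
ratio = abs(y_sim - y_exp) / combined
print("discrepancy =", round(abs(y_sim - y_exp), 3))
print("combined uncertainty =", round(combined, 3), " ratio =", round(ratio, 2))
# Shrinking u_exp tightens the test on the model; inflating it lets a loose
# model look acceptable while telling us little about where to improve.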

The whole of the uncertainty provides an important source of scientific tension. If experimental uncertainty is small, it requires modeling and simulation to be equally precise to imply good results. It pushes the modeling to improve to meet the high standard of the experiment. If the modeling and simulation is very good, but the experiments have large uncertainty, it should push the experiments to improve because they fail to constrain and distinguish between models. By having a deep and complete understanding of uncertainty, we can define where we need to put resources to improve. We know which aspects of our current knowledge are most in need of attention and are limiting progress.

One must always be aware of the significant attraction of short-changing uncertainty estimation. Doing a complete job of estimating uncertainty almost always results in an increase in the magnitude of uncertainty. This is where science as a fundamentally human enterprise comes into play. People would rather think uncertainties are small than large. Uncertainty is uncomfortable and people shy away from discomfort. By under-estimating uncertainty people unconsciously put themselves at ease by doing incomplete work. A more rigorous and complete approach almost always produces a discomforting result. When one combines discomfort with difficulty of accomplishment, the necessary conditions for lack of effort and completeness become clear. With this temptation in mind, the tendency to take the easy route must be acknowledged.

The bottom line is that understanding uncertainty in a holistic manner can produce useful and defensible context for science. It can allow us to understand where we need to improve our knowledge or practice. Without this accounting the whole issue falls back on expert judgment or politics to make the decisions. We fail to understand where our knowledge is weak and potentially overlook experiments necessary for understanding. We may have the right experiments, but cannot make measurements of sufficient accuracy. We might have models of insufficient complexity, or numerical solutions with too much numerical error. All of these spell out different demands for resource allocation.

Much of the tension is captured in these two quotes, although I hope Eddington was trying to be ironic!

Never trust an experimental result until it has been confirmed by theory

― Arthur Stanley Eddington

It doesn’t matter how beautiful your theory is … If it doesn’t agree with experiment, it’s wrong.

― Richard Feynman

What makes a production code, a production code?

It is not enough for code to work.

― Robert C. Martin

One of the big selling points for high performance computing is real-world impact. A focus for this impact revolves around computer codes with the capability to produce answers for people working on real-world problems. In organizations such as mine, where modeling and simulation produces analysis used to assist decision-making, these codes are known as production codes. As these codes provide real value to our applied programs, this value brings generous financial support. This support is necessary for the codes to do their job and creates financial stability. With this support come acute responsibilities that need to be acknowledged, and serious effort needs to be applied to meeting them.

Being a production code means producing results that are meaningful. The results are important by virtue of their utility in decision-making that impacts the real world. In the past, expert users of production codes provided the credibility needed to make these codes important. In other words, credibility was granted by the weight of the expertise of the users. This element of expertise-based credibility is still alive today, but it is increasingly being supplemented by systematic approaches that bolster the purely human element. The appropriate and proper use of verification, validation, and uncertainty quantification along with software quality assurance provides a system for assessing credibility and constructing evidence. More and more this evidence is demanded to document credibility and reassure decision makers.

One of the main things production codes give is the ability to model important problems in the real world. This is the primary characteristic of production codes: modeling capability for real problems. While this character is primary in defining production codes, everything else important in high performance computing is eclipsed by the modeling imperative. These codes are essential for the utility of high performance computing resources and often become the first codes to make their way onto and use high-end resources. They quite often are the explicit justification for the purchase of such computing hardware. This character usually dominates and demands a certain maturity of software professionalism.

On the flip side there are significant detrimental aspects of such codes. For example, the methods and algorithms in production codes are often crude and antiquated in comparison to the state of the art. The same can be said for the models, the algorithms and often the computer code itself. The whole of the production code’s credibility is deeply impacted by these pedigrees and their impact on real-world programs and things. This issue comes from several directions; the codes are often old and used for long periods of time. The experts who traditionally define the credibility drive this to some extent. It often takes a long time to develop the code to the level needed to solve the hard real-world problems, as well as the expertise to navigate the code’s capability into results that have real-world meaning. Older methods are robust, proven and trusted (low order and dissipative is usually how robust happens). Newer methods are more fragile, or simply can’t deal with all the special cases and issues that threaten the solution of real problems. Again, the same issues are present with the models, the algorithms and the nature or quality of the computer code itself.

Knowledge is something which you can use.
Belief is something which uses you.

― Idries Shah

In the final analysis, production code status must be earned and cannot be granted by fiat. Generally one might intend for a code to become a production code, but it only becomes a production code when it begins to produce. That production must be delivered with credibility and trust. It is an utterly organic process that cannot be forced. While significant support comes with production code status, it comes with responsibilities as well. Increasingly in a modern context these responsibilities include software quality, verification and validation executed in a serious, professional manner. Hopefully the “customers” for production code analysis will become more demanding and insistent on a more systematic pedigree. Even today this pull for systematic pedigree is poorly supported by the customers of production code results, creating an environment where counter-productive practices and results continue to be supported.

In far too many instances, the systematic, pedigree-defining steps are being skipped in favor of the old system of person-centered credibility. The old person-centered system is simple and straightforward. You trust somebody and develop a relationship that supports credibility. This person’s skills include detailed technical analysis, but also inter-personal relationship building. If such a system is in place there is not a problem, as long as the deeper modern credibility is also present. Too often the modern credibility is absent or shorted and effectively replaced by a cult of personality. If we put our trust in people who do not value the best technical work available in favor of their force of personality or personal relationships, we probably deserve the substandard work that results.

Tell me what you pay attention to and I will tell you who you are.

― José Ortega y Gasset

 

Results using smoothed operators in actual code

Reality is that which, when you stop believing in it, doesn’t go away.

― Philip K. Dick

I applied the smoothed operators to the actual solution of a scalar advection law, and want to show how the methods impact the solution. This should put the discussion of the past couple of weeks into a bit sharper and more relevant focus. We can also explore the impact of the degree of regularization chosen in the smoothing. The good news below is that my analysis seems to coincide completely with the results, and the case favored by the analysis gives the lowest error and highest quality solution. More surprisingly, the best result is obtained with a smoothed function, not the original discontinuous ones!

We will solve the scalar advection equation u_t + u_x = 0 for a square wave on a mesh of 200 cells in one dimension using a Sweby version of Fromm’s scheme. The solution will do one rotation on the grid using 400 time steps. I’ll plot the solution and the error plus display the L1, L2 and L-infinity norms of the error.
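
To make the setup concrete, here is a minimal Python sketch of this kind of calculation. It is not the code that produced the results below; the limiter is a standard monotonized-central (Sweby-type) limiting of Fromm’s central slope, written so that the sign and absolute value operations can be swapped for smoothed versions, and details like the placement of the square wave are assumptions.

import numpy as np

def advect_square_wave(n=200, steps=400, ops=(np.sign, np.abs)):
    # Advect a square wave once around a periodic unit domain at unit speed
    # with a limited, Fromm-like second-order upwind scheme (a sketch only).
    sgn, absv = ops                  # sign/abs pair; pass smoothed versions here
    dx, dt = 1.0 / n, 1.0 / steps
    nu = dt / dx                     # CFL number (0.5 for n=200, steps=400)
    x = (np.arange(n) + 0.5) * dx
    u0 = np.where((x > 0.3) & (x < 0.7), 1.0, 0.0)
    u = u0.copy()
    for _ in range(steps):
        dm = u - np.roll(u, 1)       # backward difference
        dp = np.roll(u, -1) - u      # forward difference
        # limited central (Fromm) slope written in terms of sign() and abs()
        s = 0.5 * (sgn(dm) + sgn(dp)) * np.minimum(
                0.5 * absv(dm + dp), 2.0 * np.minimum(absv(dm), absv(dp)))
        f = u + 0.5 * (1.0 - nu) * s # upwind flux at each cell's right face
        u = u - nu * (f - np.roll(f, 1))
    e = u - u0
    return np.sum(np.abs(e)) * dx, np.sqrt(np.sum(e**2) * dx), np.max(np.abs(e))

print(advect_square_wave())          # L1, L2 and L-infinity errors

Because the wave placement, limiter form and norm scaling may differ in detail from my actual code, the numbers below should not be expected to match this sketch exactly.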

First, with the standard unsmoothed functions.

[Figure: solution and error for the standard unsmoothed functions]

Error (L1, L2, L-infinity) = 0.0314099, 0.00649721, 0.438233

Now with smooth functions all based on using the \mbox{softsign}(x) = \tanh(10 x) and \mbox{softabs}(x) = x\tanh(10 x) .

[Figure: solution and error for the tanh-based smoothed functions]

Error (L1, L2, L-infinity) = 0.0313124, 0.00648103, 0.4384

Now with smooth functions all based on using the \mbox{softsign}(x) = x/(|x| + 0.1) and \mbox{softabs}(x) = x^2 /(|x| + 0.1) .

[Figure: solution and error for the smoothed functions with softabs(x) = x^2/(|x| + 0.1)]

Error (L1, L2, L-infinity) = 0.0350856, 0.00694473, 0.454124

Now with smooth functions all based on using the \mbox{softsign}(x) = x/(|x| + 0.1) and \mbox{softabs}(x) = |x| + 0.1 .

[Figure: solution and error for the smoothed functions with softabs(x) = |x| + 0.1]

Error (L1, L2, L-infinity) = 0.0257311, 0.00578931, 0.420996
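
For reference, the smoothed operator pairs used above could be written, in the same hypothetical Python form as the earlier sketch, roughly as:

import numpy as np

def softsign_tanh(x, k=10.0):        # softsign(x) = tanh(10 x)
    return np.tanh(k * x)

def softabs_tanh(x, k=10.0):         # softabs(x) = x tanh(10 x)
    return x * np.tanh(k * x)

def softsign_linear(x, eps=0.1):     # softsign(x) = x / (|x| + 0.1)
    return x / (np.abs(x) + eps)

def softabs_quadratic(x, eps=0.1):   # softabs(x) = x^2 / (|x| + 0.1)
    return x**2 / (np.abs(x) + eps)

def softabs_shifted(x, eps=0.1):     # softabs(x) = |x| + 0.1, as written above
    return np.abs(x) + eps

# e.g., the tanh-based case would be run as
# print(advect_square_wave(ops=(softsign_tanh, softabs_tanh)))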

What we see is that the smoothed operators produce high quality, lower error solutions in most cases. In the one case with the linear version of the smoothed sign function, the larger error is consistent with the analysis I gave last week. Generally, the regularization allows the functions to come as close as we might like to the original ones.