What Makes A Calculation Useful?

 

It is quality rather than quantity that matters.

― Seneca
The utility of calculations and scientific computing is taken to be axiomatic, yet we cannot easily articulate why a calculation is useful. As a result, we also can’t tell you why a calculation isn’t useful. This axiomatic belief underlies the nation’s investment in high performance computing (HPC), yet the lack of clarity about utility clouds any discussion. Worse yet, that cloudiness produces counter-productive investment decisions and suboptimal programs. Perhaps it is high time to wrestle with this issue and try to see our way to some greater clarity.

Useful calculations shed light and understanding on issues existing in the real world. The utility of calculation for scientific investigations lies in its ability to study hypotheses or (help to) explain observations. A successful model of reality implies a certain level of understanding that can be comforting, in contrast with an unsuccessful or highly calibrated model that drives new work. With sufficient confidence, the calculation allows one to play “what if” scenarios and study the impact of changes to a physical system. This guides physical studies, measurements and observations, which can yield unequivocal evidence. Computations usually do not provide that evidence themselves, but they show the way to finding it. The same thing happens in engineering, where calculations are often used in a comparative sense to understand how to optimize designs or fix problems with existing designs. In other cases, the calculation can help to explain why things don’t work, broke, or behaved in an unexpected manner. For calculations to take a valued role in science and engineering, demonstrating the ability to provide these varied aspects of real-world functionality is essential. Once calculations step into the role of advisor, sage, and detective, confidence, trust and credibility follow. This dynamic is never present in any discourse on HPC, and current HPC programs almost callously disregard this legacy. This is dangerous and threatens progress.

When one looks at HPC, the press is drawn to the biggest, most expensive, most time-consuming calculations, and the science community allows itself to bullshit people about their utility. Plainly put, the giant calculations inhabiting press releases are demos at best, stunts at worst, and very rarely science of any note. These hero calculations are not useful for science or engineering. As one of my most senior colleagues has quipped, single calculations will never be the right answer for hard problems. These stunts and demos are single, one-off calculations with no established pedigree and dubious credibility. The problem is that the broader dynamic in HPC is poisoned by devotion to the myth of the utility of hero calculations. At best, these calculations are harbingers of the calculations that might be useful five to ten years from now, and little else. They push and pace progress in HPC, but their use for engineering and domain science is minimal.

What we have is another serious case of bullshit hurting our ability to deal with reality. In HPC, the effort and funding chase the biggest calculations while the important work done with smaller calculations simply fails to capture the attention and focus of the community. Increasingly the funding follows the bullshit narrative instead of the actual utility narrative, which is undermining the real impact of modeling & simulation. The danger in the distance between focus and utility is the loss of opportunity to make HPC really matter and produce unambiguous impact. The community has allowed this fiction to persist and color funding decisions for decades. The efficacy of HPC for science and engineering is suffering as a result. The issue runs deep, yet clarity is actually easy to grasp.

One of the clearest issues with HPC utility is the prevalence of faith in individuals as the definitive source of credibility. Even today, the analysts conducting a calculation matter more to real-world use of modeling & simulation than any technical work underpinning utility. The importance of the analyst also overwhelms the importance of the code itself, yet we persist with a false narrative about the primacy of codes. One of the clearest results of this gap is the continuing lack of impact for verification and validation. In fact I see regression instead of progress in the impact of technical work on credibility, and a greater focus on the personal aspect of credibility. In other words, it is more important who does a calculation than how the work is done, although these two items are linked. This was as true 25 years ago with ASCI as it is today. The progress has not happened in large part because we let it languish, failing to address the core issues while focusing on press releases and funding profiles. We see the truth squashed because it doesn’t match the rhetoric. Now we see a lack of funding and emphasis on calculation credibility in the nation’s premier program for HPC. We continue to trumpet the fiction that the bigger the calculation and the computer, the more valuable a calculation is a priori.

Even today, with vast amounts of computer power, the job of modeling reality is subtle and nuanced. The modeler who conspires to represent reality on the computer still makes the lion’s share of the decisions necessary for high-fidelity representations of reality. Everything associated with HPC accounts for a relatively small share of the overall burden of analysis credibility. The analyst decides how to model problems in detail, including the selection of sub-models, meshes, boundary conditions, and which details are included or neglected. The computer power and the mesh resolution usually end up being an afterthought and a minor detail. The true overall modeling uncertainty is dominated by everything in the analyst’s power. In other words, the pacing uncertainty in modeling & simulation is not HPC; it is all the decisions made by the analysts. Even with the focus on “mesh resolution,” the uncertainty associated with the finite integration of the governing equations is rarely measured or estimated. We are focusing on a small part of the overall modeling & simulation capability to the exclusion of the big stuff that drives utility.

As usual, the issue is related to the relative sex appeal of the details in modeling & simulation. All the analyst-controlled details are dull and unexciting while HPC is sexy and exciting. The HPC work is easily marketed and receives funding while the analyst details are boring, but essential. The result is a focus on the sexy HPC stuff while the important work done by analysts gets by with meager, haphazard and disparate support. More deeply, the analyst support is defined purely through application work and generally divorced from the HPC work, so the divide just grows and grows. Moreover, the HPC side of the work can dutifully ignore the analyst work that matters because the political weight says those details matter little. In the HPC portfolio, all the glue between the computer-focused work and the applications is poorly funded or not funded at all.

One of the core issues in this entire dynamic associated with the utility of computational modeling and simulation is predictivity. Predictive simulations are a sort of “Holy Grail” for computational science. Yet predictive calculations are not necessarily useful, and useful computations can come from sources that are far from predictive; utility is driven far more by the flexibility of computational capability combined with the ability of analysts to wield the computational power. That utility, flexibility and understanding cannot come from the lumbering computational behemoths driving funding. If a calculation is predictive, so much the better for its utility. The key point about predictivity is that it demands a great deal of evidence and systematic investigation, which is the whole practice of verification and validation (V&V).

Where utility ends and decoration begins is perfection.

― Jack Gardner

One of the single greatest issues is a general failure to measure prediction and modeling & simulation uncertainties in a holistic manner. Generally, uncertainty estimation is limited to parametric epistemic uncertainty, which is an important but small part of the overall uncertainty budget. Numerical uncertainty is usually not estimated at all; instead a declaration is made that the results show no mesh dependence, or that the sheer size of the calculation renders numerical errors small by fiat. In many cases systems have intrinsic variability that provides an important source of uncertainty (turbulence canonically comes to mind); this too is rarely estimated. Finally we come to the uncertainty directly associated with the analyst’s decisions. When this issue has been studied, the uncertainty associated with analyst modeling decisions or assumptions tends to be huge compared to other sources. The easier and more common course is to declare the calculation predictive by definition, thus avoiding any real quantification of the uncertainty.
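
To make the budget concrete, here is a minimal sketch (in Python) of what a more holistic estimate could look like: parametric uncertainty from sampling an assumed parameter distribution, numerical uncertainty from Richardson extrapolation over three grids, an assumed number for intrinsic variability, and a simple quadrature combination. The `simulate` function and every number in it are hypothetical stand-ins, not any particular code or workflow.

```python
import numpy as np

# A minimal sketch (not any program's actual workflow) of a holistic
# uncertainty budget: parametric, numerical, and intrinsic-variability
# contributions estimated separately and combined in quadrature.

rng = np.random.default_rng(0)

def simulate(k, resolution):
    """Hypothetical stand-in for a simulation: returns a scalar output
    for model parameter k on a grid with spacing h = 1/resolution."""
    h = 1.0 / resolution
    return np.exp(-k) + 2.5 * h**2          # exact part + O(h^2) numerical error

# 1. Parametric uncertainty: propagate an assumed parameter distribution.
k_samples = rng.normal(1.0, 0.05, size=200)
u_param = np.std([simulate(k, 128) for k in k_samples])

# 2. Numerical uncertainty: Richardson extrapolation from three grids
#    (spacing halved each time) gives an observed order and an error estimate.
f1, f2, f3 = (simulate(1.0, r) for r in (32, 64, 128))   # coarse -> fine
p = np.log(abs(f2 - f1) / abs(f3 - f2)) / np.log(2.0)    # observed order
u_num = abs(f3 - f2) / (2.0**p - 1.0)                    # error in finest result

# 3. Intrinsic variability (e.g., turbulence): here just an assumed value
#    that would come from replica runs or measured scatter.
u_var = 0.01

# Combine in quadrature -- the simplest defensible budget.
u_total = np.sqrt(u_param**2 + u_num**2 + u_var**2)
print(f"parametric={u_param:.4f} numerical={u_num:.4f} "
      f"variability={u_var:.4f} total={u_total:.4f}")
```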

The current HPC belief system holds that massive computations are predictive and credible solely by virtue of overwhelming computational power. In essence it rests on proof by massive computation as the foundation of belief. The problem is that science and engineering do not work this way at all. Belief comes from evidence, and the evidence that matters is measurements and observations of the real world (i.e., validation). Models of reality can be steered and coaxed into agreement via calibration in ways that are anathema to prediction. Part of assuring that this isn’t happening is verification. We ultimately want to make sure that the calculations are getting the right answers for the right reasons, and deviations from correctness should be understood at a deep level. Part of putting everything in proper context is uncertainty quantification (UQ). UQ is part of V&V. Unfortunately UQ has replaced V&V in much of the computational science community, and the UQ that is estimated is genuinely incomplete. Now, in HPC, most of UQ has been replaced by misguided overconfidence.

This issue is another view of the dynamic where we have allowed alternative facts to displace reality. We are paving the road to a reality where bullshit and facts cannot be separated. It is everyone’s fault for allowing this to happen. Too many of us simply comply with the need for declared success when admission of failure would suit progress and truth far better. Too often the emphasis is placed on marketing and spin rather than the truth. In the process we have systematically undermined core principles of quality in every corner of life. Perception has been allowed to become more important than truth and reality. Into this vacuum propaganda quickly becomes the medium of discourse. We may be too far gone to fix this, and reality will bite back in a vicious manner to restore balance. That restoration will probably be very painful to experience.

At the core of the problem with bullshit as a technical medium is a general lack of trust, and an inability to accept outright failure as an outcome. This combination forms the basis for bullshit and alternative facts becoming accepted within society writ large. When people are sure they will be punished for the truth, you get lies, and finely packaged lies are bullshit. If you want the truth you need to accept it, and today the truth can get you skewered. The same principle holds for the acceptance of failure. Failures are viewed as scandals and not accepted. The flipside of this coin is the truth that failures are the fuel for progress. We need to fail to learn; if we are not failing, we are not learning. By hiding or bullshitting our way through in order to avoid being labeled failures, we avoid learning and corrode our foundational principles. We are locked in a tight downward spiral and all our institutions are under siege. Our political, scientific and intellectual elite are not respected because truth is not valued. False success and feeling good are accepted as an alternative to reality. In this environment bullshit reigns supreme and being useful isn’t enough to be important.

Raise your quality standards as high as you can live with, avoid wasting your time on routine problems, and always try to work as closely as possible at the boundary of your abilities. Do this, because it is the only way of discovering how that boundary should be moved forward.

― Edsger W. Dijkstra

 

It is High Time to Envision a Better HPC Future

Honest differences are often a healthy sign of progress.

― Mahatma Gandhi

Last week I attended a rather large scientific meeting in Knoxville, Tennessee. It was the kickoff meeting for the Exascale Computing Project (ECP). This is a relatively huge program ($250 million/year) and the talent present at the meeting was truly astounding, a veritable who’s who of computational science in the United States. This project is the crown jewel of the national strategy to retain (or recapture) pre-eminence in high performance computing. Such a meeting has all the makings of a banquet of inspiration and intellectually thought-provoking discussion, along with incredible energy. Simply meeting all of these great scientists, many of whom also happen to be wonderful friends, only added to the potential. While friends abounded and acquaintances were made or rekindled, that was the high point of the week. The wealth of inspiration and intellectual discourse possible was quenched by bureaucratic imperatives, leaving the meeting a barren and lifeless launch of a soulless project.

The telltale signs of worry were all present in the lead-up to the meeting: management of work took priority over the work itself, many traditional areas of accomplishment were simply ignored, political concerns swamped technical ones, and, most damningly, there was no aspirational vision. The meeting did nothing to dampen or dispel these signs, and we see a program spiraling toward outright crisis. Among the issues hampering the project is the degree of project management formality being applied, which is appropriate for a benign construction project and completely inappropriate for HPC success. The demands of this management formality were delivered to the audience much like the wasteful prep work for standardized testing in our public schools. It will almost certainly have the same mediocrity-inducing impact as that testing regime: the illusion of progress and success where none actually exists. The misapplication of this management formality is likely to provide a merciful deathblow to this wounded mutant of a program. At some point in the next couple of years we will likely see the project euthanized in a mercy killing.

There can be no progress without head-on confrontation.

― Christopher Hitchens

The depth of the vision problem in high performance computing (HPC) is massive. For a quarter of a billion dollars a year, one might expect an expressive and expansive vision for the future to be at the helm of the project. Instead the vision is a stale and spent version of the same approach taken in HPC for the past quarter of a century. ECP simply has nothing new to offer. The vision of computing for the future is the vision of the past. A quarter of a century ago the stockpile stewardship program came into being in the United States, and the linchpin of the program was HPC. New massively parallel computers would unleash their power and tame our understanding of reality. All that was needed then was some faster computers and reality would submit to the power of computation. Today’s vision is exactly the same, except the power of the computers is 1000 times greater than the computers that were going to unlock the secrets of the universe a quarter of a century ago. Aside from exascale replacing petascale in computing power, the vision of 25 years ago is identical to today’s vision. The problem then, as now, is the incompleteness of the vision and fatal flaws in how it is executed. If one adds a management approach that seems devised by Chinese spies to undermine the program’s productivity and morale, the outcome of ECP seems assured: failure. This wouldn’t be the glorious failure of putting your best foot forward while seeking great things, but failure born of incompetence and almost malicious disregard for the talent at the program’s disposal.

The biggest issue with the entire approach to HPC was evident in the room of scientists I sat with last week: the minds and talents of these people are not being engaged. Let’s be completely clear, the room was full of immense talent, with many members of the National Academies present, yet there was no intellectual engagement to speak of. How can we succeed at something so massive and difficult while the voices of those paid to work on the project are silenced? At the same time we are failing to develop an entire generation of scientists with the holistic set of activities needed for successful HPC. The balance of technical activities needed for a healthy, useful HPC capability is simply unsupported and almost actively discouraged. We are effectively hollowing out an entire generation of applied mathematicians, computational engineers and physicists, pushing them to focus more on software engineering than on their primary disciplines. Today someone working in applied mathematics is more likely to focus on object-oriented constructs in C++ than on functional analysis. Moreover, the software is acting as a straitjacket for the mathematics, slowly suffocating actual mathematical investigation. We see important applied mathematical work avoided because software interfaces and assumptions are incompatible. One of the key aspects of ECP is the drive for everything to be expressed in software, with products as our raison d’être. We’ve lost the balance of software as a necessary element in checking the utility of mathematics. We now have software in ascendency and mathematics as a mere afterthought. Seeing this unfold with the arrayed talents on display in Knoxville last week felt absolutely and utterly tragic. Key scientific questions that the vitality of scientific computing absolutely hinges upon are left hanging without attention, and progress on them is almost actively discouraged.

When people don’t express themselves, they die one piece at a time.

― Laurie Halse Anderson

At the core of this tragedy is a fatally flawed vision of where we are going as a community. It was flawed 25 years ago, and we have failed to learn from the plainly obvious lessons. The original vision of computer power über alles is technically and scientifically flawed, but financially viable. This is the core of the dysfunction: we can get a flawed program funded, and that is all we need to go forward. No leadership asserts itself to steer the program toward technical vitality. The flawed vision brings in money, and money is all we need to do things. This gets to the core of so many problems, as money becomes the sole source of legitimacy, correctness and value. We have lost the ability to lead by principle and make hard choices. Instead the baser instincts hold sway, looking only to provide the support for little empires that rule nothing.

First, we should outline the deep flaws in the current HPC push. The ECP program is about one thing: computer hardware. The issue a quarter of a century ago is the same as it is today; the hardware alone does not solve problems or endow us with capability. It is a single element in our overall ability to solve problems. I’ve argued many times that it is far from being the most important element, and may be one of the lesser capabilities to support. The items of greatest importance are the models of reality we solve, followed by the methods used to solve those models. Much of the enabling efficiency of solution is found in innovative algorithms. The key to this discussion is the subtext that these three most important elements in the HPC ecosystem are unsupported and minimized in priority by ECP. The focal point on hardware arises from two things: the easier path to funding, and the fandom of hardware among the HPC cognoscenti.

We would be so much better off if the current programs took a decisive break with the past and looked to move HPC in a different direction. In a deep and abiding way the computer industry has been transformed in the last decade by the power of mobile computing. We have seen cellphones become the dominant factor in the industry. Innovative applications and pervasive connectivity have become the source of value and power. A vision of HPC that resonates with the direction of the broader industry would benefit from the flywheel effect instead of running counter to it. Instead of building on this base, the HPC world remains tethered to a mainframe era long gone everywhere else. Moreover, HPC remains in this mode even as the laws of physics conspire against it, and efforts suffer the terrible side effects of trying to make progress within an outdated approach. The hardware acts to further tax every effort in HPC, making the already threadbare support untenably shallow.

Instead of focusing on producing another class of outdated, lumbering dinosaur mainframes, the HPC effort could leap onto clear industry trends and seek a bold, resonant path. A combination of cloud-based resources coupled with connectivity could unleash ubiquitous computing and seamless integration with mobile computing forces. The ability to communicate works wonders for combining ideas, and pushing innovation ahead would do more to advance science than almost any conceivable amount of computing power. Mobile computing is focused on general-purpose use and hardly optimized for scientific use, which brings different dynamics. A specific effort to energize science through different computing dynamics could provide boundless progress. Instead of trying something distinct and new, we head back to a mine that has long since borne its greatest ore.

Progress in science is one of the most fertile engines for advancing the state of humanity. The United States, with its wealth and diversity, has been a leading light of progress globally. A combination of our political climate and innate limits in the American mindset seems to be conspiring to undo this engine of progress. Looking at the ECP program as a microcosm of the American experience is instructive. The overt control of all activities is suggestive of the pervasive lack of trust in our society. This lack of trust is paired with a deep fear of scandal and more demands for control. Working almost in unison with these twin engines of destruction is a lack of respect for human capital in general, which is only made more tragic when one realizes the magnitude of the talent being wasted. Instead of trust and faith in the arrayed talent of the individuals being funded by the program, we are going to undermine all their efforts with doubt, fear and marginalization. The active role of bullshit in defining success allows the disregard for talent to go unnoticed (think of bullshit and alternative facts as brothers).

Progress in science should always be an imperative of the highest order for our research. When progress is obviously constrained and defined with strict boundaries, as we are seeing with HPC, the term malpractice should come to mind. One of the clearest elements of HPC is a focus upon management and strict project controls. Instead I see the hallmarks of mismanagement in the failure to engage and harness the talents, capabilities and potential of the human resources available. Proper and able management of the people working on the project would harness and channel their efforts productively. Better yet, it would inspire and enable these talented individuals to innovate and discover new things that might power a brighter future for all of us. Instead we see the rule of fear and limitation governing people’s actions. We see an ever-tightening leash placed around people’s necks, suffocating their ability to perform at their best. This is the core of the unfolding research tragedy that is doubtlessly playing out across a myriad of programs far beyond the small-scale tragedy unfolding in HPC.

We can only see a short distance ahead, but we can see plenty there that needs to be done.

― Alan Turing

 

 

Fear Makes Us Weak

Fear is the mind-killer.

― Frank Herbert

If one wants to understand fear and how it can destroy competence and achievement, take a look at (American) football. How many times have you seen a team undone during the two-minute drill? A team that has been dominating the other team defensively suddenly becomes porous when it switches to the prevent defense, a strategy born of fear. They stop doing what works, because it feels risky, and take a safety-first approach. It happens over and over, prompting the Madden quip that the only thing the prevent defense prevents is victory. It is a perfect metaphor for how fear plays out in society.

Fear is a rather enormous player in societal decision-making. In playing an oversized role, fear provides a massive drain on everything we do, ultimately costing us more than we can possibly estimate. Fear produces actions that work steadfastly to undermine every single productive bit of work we might do. Fear drives decisions that make everything we do more expensive. Fear costs us time. Fear destroys trust. Fear undermines openness. Fear enslaves us to a pessimistic life always looking for disaster. In the end fear will keep us from succeeding at making the world better. Fear is making the world worse.

Over 80 years ago we had a leader, FDR, who chastened us against fear, saying, “we have nothing to fear but fear itself”. Today we have leaders who embrace fear as a prime motivator in almost every single public policy decision. We have the cynical use of fear to gain power across the globe. Fear is also a really powerful way to free money from governments. Terrorism is a powerful political tool both for those committing the terrorist acts and for the military-police-industrial complexes seeking to retain their control over society. We see the rise of vast police states across the Western world fueled by irrational fears of terrorism.

If you want to control someone, all you have to do is to make them feel afraid.

― Paulo Coelho

Fear also keeps people from taking risks. Many people decide not to travel because of fears associated with terrorism, among other things. Fear plays a more subtle role in work. If failure becomes unacceptable, fear will keep people from taking on difficult work and focus them on easier, low-risk work. This ultimately undermines our ability to achieve great things. If one does not attempt to achieve great things, the great things simply will not happen. We are all poorer for it. Fear is ultimately the victory of small-minded, limited thinking over hope and the abundance of a better future. Instead of attacking the future with gusto and optimism, fear pushes us to retreat to the past and turn our backs on progress.

One of the huge downsides to fear-based decision-making is shutting down communication. Good communication is based on trust. Fear is the absence of trust. People are afraid of ideas, and afraid to share their ideas or information with others. As Google amply demonstrates, knowledge is power. Fear keeps people from sharing information and leads to an overall diminishment in power. Information, if held closely, will produce control, but control of a smaller pie. Free information makes the pie bigger and creates abundance, but people are afraid of this. For example, a lot of information is viewed as dangerous and held closely, leading to things like classification. This is necessary, but also prone to horrible abuse.

Power does not corrupt. Fear corrupts… perhaps the fear of a loss of power.

― John Steinbeck

A big part of the abuse is the retention of power, with information used to enhance the power of those already holding it. The issue with this information control is how it inhibits people from working on the things of greatest value, or lets people keep working on things that others already know don’t work. It keeps people from building productively on the knowledge that others possess. In this and a myriad of other ways, the control and failure to share information leads to a diminished future devoid of the potential that freedom offers.

He who has overcome his fears will truly be free.

― Aristotle

There are very few truly unique, new ideas. Instead new things and new ideas arise from combining old ideas in new ways or for new purposes. With more ideas on the table and available, the possibilities and discoveries are greater and more varied. The entirety of human experience and technology is based on the sharing of information, the combination of old, existing ideas over and over. Just as the printing press enabled the sharing of knowledge and an explosion of creativity, the Internet is doing the same thing today. It can be a force for good and freedom. It can also be a force of evil and chaos, as we have seen unfolding in world events. Our job should be to actively work to make sure information is harnessed as an agent for good. Fear, when added to the mix, becomes a direct and powerful force pushing us toward evil and chaos.

Another aspect of modern life and the availability of information is the ever-present worry of scandal and the implications of being part of it. Spurring this fear-based environment is the use of scandal as a political tool and the chaos scandal produces. Fears of audits and unwanted attention drive decision-making and impose all sorts of costs. All of this is driven by a general lack of trust across society and the rise of fear as a motivating factor. Instead of being focused on progress and achievement, we see fear of loss and terror at the prospect of scandal forming the basis of decisions. This is captured in the oft-heard comment, “I don’t want to see this featured on the front page of the New York Times.” To avoid this possibility we incur massive costs and horrible performance penalties. The bottom line is that fear is inhibiting our ability to create a better, richer and more abundant future.

Most people do not really want freedom, because freedom involves responsibility, and most people are frightened of responsibility.

― Sigmund Freud

Fear is used because fear works. Fear has become a powerful tool that political forces use to push their agenda or attack their enemies. The most evident fear-based vehicle is terrorism, which our governments make much more powerful by channeling the fear to support the creation of a large, pervasive police-surveillance state. Instead of defeating terror, the state amplifies the impact of terror, terrorizes the populace, and becomes a source of terror itself. The greatest weapon against terror is to not be terrorized. Courage and bravery in the face of terror are the cure. Our reaction to terrorism gives it all of its power; by reacting fearfully we ensure that more terror will be bred. This principle is broadly applicable. Our reactions to fear empower the fears and allow them to shape our lives. To overcome fear, we must cease to be afraid. We must be led to not fall prey to fear. Instead we are led to be afraid, and to amplify our fears as a means of subservience.

Without leadership rejecting fear, too many people simply give in to it. Today leaders do not reject fear; they embrace it, use it for their purposes, and amplify their power. It is easy to do because fear engages people’s animal core, which is prone to cynical manipulation. This fear paralyzes us and makes us weak. Fear is expensive, and slow. Fear is starving the efforts society could be making to build a better future. Progress and the hope of a better future rest squarely on our courage and bravery in the face of fear, and on the rejection of it as the organizing principle for our civilization.

Our enemy is not terror, it is losing our soul while fighting terror.

― Jeff Lawson

And one has to understand that braveness is not the absence of fear but rather the strength to keep on going forward despite the fear.

― Paulo Coelho

Let’s Discover Some Magic

 

 

Magic’s just science that we don’t understand yet.

― Arthur C. Clarke

Scientific discovery and wonder can often be viewed as magic. Some things we can do with our knowledge of the universe can seem magical until you understand them. We commonly use technology that would have seemed magical to people only a few decades ago. Our ability to discover, innovate and build upon our knowledge creates the opportunity for better, happier and longer, healthier lives for humanity. In many ways technology is the most human of endeavors, setting us apart from the animal kingdom through its ability to harness, control and shape our world to our benefit. Scientific knowledge and discovery are the foundation of all technology, and from this foundation we can produce magical results. I’m increasingly aware of our tendency to shy away from doing the very work that yields magic.

The world is full of magic things, patiently waiting for our senses to grow sharper.

― W.B. Yeats

Today I’ll talk about a couple of things: the magical power of models, methods, and algorithms, and what it takes to create the magic.

What do I mean by magic with abstract things like models, methods and algorithms in the first place? As I mentioned, these things are all basically ideas; the ideas take shape through mathematics and gain power through computational simulation. Ultimately, through the combination of mathematical structure and computer code, the ideas can produce almost magical capabilities for understanding and explaining the world around us, allowing us to tame reality in new, innovative ways. One little correction is immediately in order; models themselves can be useful without computers. Simple models can be solved via analytical means, and these solutions provided classical physics with many breakthroughs in the era before computers. Computers offered the ability to expand the scope of these solutions to far more difficult and general models of reality.

This then takes us to the magic from methods and algorithms, which are similar but differ in character. The method is the means of taking a model and solving it. The method enables a model to be solved, determines the nature of that solution, and sets the basic efficiency of the solution. Ultimately the methods power what is possible to achieve with computers. All our modeling and simulation codes depend upon these methods for their core abilities. Without innovative methods to solve models, the computers would be far less powerful for science. Many great methods have been devised over the past few decades, and advances in methods open the door to new models, or simply to greater accuracy or efficiency in their solution. Some methods are magical in their ability to open new models to solution and, with those, new perspectives on our reality.

Any sufficiently advanced technology is indistinguishable from magic.

― Arthur C. Clarke

Despite their centrality and essential nature in scientific computing, the emphasis and focus on method creation are waning badly. Research into new or better methods has little priority today, and the simple transfer (or porting) of existing methods onto new computers is the preferred choice. The blunt truth is that porting a method onto a new computer will produce progress, but no magic. The magic of methods can be more than simply enabling; the best methods bridge a divide between modeling and methods by containing elements of physical modeling. The key example of this character is shock capturing. Shock capturing magically created the ability to solve discontinuous problems in a general way, and paved the way for many if not most of our general application codes.

The magic isn’t limited to making solutions possible; the means of making the solution possible also added important physical modeling to the equations. The core methodology used for shock capturing is the addition of subgrid dissipative physics (i.e., artificial viscosity). The foundation of shock capturing led directly to large eddy simulation (LES) and the ability to simulate turbulence. Improved shock capturing developed in the 1970s and 1980s created implicit large eddy simulation. To many this seemed completely magical; the modeling simply came for free. In reality this magic was predictable. The basic mechanism of shock capturing is the same as the basic subgrid modeling in LES. Finding out that improved shock capturing gives automatic LES modeling is actually quite logical. In essence the connection is due to the model leaving key physics out of the equations. Nature doesn’t allow this to go unpunished.
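
To make the structural kinship concrete, consider the classical von Neumann–Richtmyer artificial viscosity (in its simplest one-dimensional form, applied only in compression) next to the Smagorinsky eddy viscosity of explicit LES. Both are nonlinear dissipations built from resolved velocity gradients and scaled by the grid spacing, which is why a good shock-capturing scheme can double as a subgrid model; the coefficients C_q and C_s are the usual tunable constants.

```latex
% Von Neumann--Richtmyer artificial viscosity versus the Smagorinsky
% subgrid eddy viscosity: both are grid-scaled, nonlinear dissipations
% built from velocity gradients.
\[
  q_{\mathrm{vNR}} \;=\; C_q\,\rho\,(\Delta x)^2
      \left(\frac{\partial u}{\partial x}\right)^{2},
  \qquad
  \nu_{\mathrm{Smag}} \;=\; (C_s\,\Delta)^2\,\lvert \bar{S} \rvert ,
  \quad
  \lvert \bar{S}\rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}} .
\]
```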

One of the aspects of modern science is that it provides a proverbial two-edged sword through understanding the magic. In the understanding we lose the magic, but open the door to new, more miraculous capabilities. For implicit LES we have begun to unveil the secrets of its seemingly magical success. The core of the success is the same as in original shock capturing: producing viable solutions on finite grids requires getting physically relevant solutions, which by definition means a dissipative (vanishing viscosity) solution. The new, improved shock-capturing methods extended the basic ability to solve problems. If one were cognizant of the connection between LES and shock capturing, the magic of implicit LES could have been foreseen.

The real key is the movement to physically admissible, second-order accurate methods. Before the advent of modern shock-capturing methods, guarantees of physical admissibility were limited to first-order accuracy. First-order accuracy brings with it large numerical errors that look just like physical viscosity, which renders all solutions effectively laminar in character. This intrinsic laminar character disappears with second-order accuracy. The trick is that classical second-order results are oscillatory and prone to being unphysical. Modern shock-capturing methods solve this issue and make solutions realizable. It turns out that the fundamental and leading truncation error in a second-order finite volume method produces the same form of dissipation as many models produce in the limit of vanishing viscosity. In other words, the second-order solutions match the asymptotic structure of the solutions to the inviscid equations in a deep manner. This structural matching is the basis of the seemingly magical ability of second-order methods to produce convincingly turbulent calculations.
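
For readers who want something concrete, here is a minimal sketch of the kind of method being described: a second-order finite-volume scheme for the inviscid Burgers equation with minmod-limited reconstruction and a Rusanov flux. It is a toy in Python, not any production code, but it shows the essential ingredients: second-order reconstruction kept physically admissible by the limiter, and the built-in nonlinear dissipation that captures the shock without oscillation.

```python
import numpy as np

# A minimal sketch of a second-order, limited finite-volume scheme for the
# inviscid Burgers equation u_t + (u^2/2)_x = 0. Minmod limiting keeps the
# second-order reconstruction non-oscillatory (physically admissible).

def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def flux(u):
    return 0.5 * u * u

def rhs(u, dx):
    # Limited slopes and face reconstructions (periodic domain).
    du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    uL = u + 0.5 * du               # left state at face i+1/2
    uR = np.roll(u - 0.5 * du, -1)  # right state at face i+1/2
    # Rusanov (local Lax-Friedrichs) numerical flux at each face.
    a = np.maximum(np.abs(uL), np.abs(uR))
    F = 0.5 * (flux(uL) + flux(uR)) - 0.5 * a * (uR - uL)
    return -(F - np.roll(F, 1)) / dx

# A sine wave that steepens into a shock.
N = 400
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
u = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)

t, T, cfl = 0.0, 0.5, 0.4
while t < T:
    dt = min(cfl * dx / np.max(np.abs(u)), T - t)
    # Two-stage SSP Runge-Kutta time integration preserves second order.
    u1 = u + dt * rhs(u, dx)
    u = 0.5 * (u + u1 + dt * rhs(u1, dx))
    t += dt

print("min/max after shock formation:", u.min(), u.max())
```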

This magic is the tip of the iceberg, and science is about understanding the magic as a route to even greater wizardry. One of the great tragedies of the modern age is the disconnect between these magical results and what we are allowed to do.

We can also get magical results from algorithms. Algorithms are important mathematical tools that enable methods to work. In some cases algorithmic limitations severely constrain the efficiency of numerical methods. One of the clearest areas of algorithmic magic is numerical linear algebra. Breakthroughs in numerical linear algebra have produced immense, enabling capabilities for methods. If the linear algebra is inefficient it can limit the capacity for solving problems. Conversely, a breakthrough in linear algebra scaling (like multigrid) can allow solutions with a speed, magnitude and efficiency that seems positively magical in nature.
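
As an illustration of that scaling magic, here is a small sketch of a geometric multigrid V-cycle for the 1D Poisson equation. It is deliberately simple (weighted Jacobi smoothing, full-weighting restriction, linear interpolation), but the residual drops by roughly an order of magnitude per cycle regardless of grid size, which is the mesh-independent, near-O(N) behavior that made multigrid such a breakthrough.

```python
import numpy as np

# A small sketch of a geometric multigrid V-cycle for the 1D Poisson problem
# -u'' = f with zero Dirichlet boundaries, illustrating mesh-independent,
# near-O(N) convergence.

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def jacobi(u, f, h, nsweeps, omega=2.0 / 3.0):
    # Weighted Jacobi smoothing sweeps.
    for _ in range(nsweeps):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1] - 2.0 * u[1:-1])
    return u

def v_cycle(u, f, h):
    if u.size <= 3:                       # coarsest grid: solve exactly
        u[1] = 0.5 * h**2 * f[1] + 0.5 * (u[0] + u[2])
        return u
    u = jacobi(u, f, h, 3)                # pre-smooth
    r = residual(u, f, h)
    rc = np.zeros((u.size - 1) // 2 + 1)  # full-weighting restriction
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2.0 * r[2:-1:2] + r[3::2])
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
    e = np.zeros_like(u)                  # linear interpolation back up
    e[2:-1:2] = ec[1:-1]
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return jacobi(u + e, f, h, 3)         # correct and post-smooth

n = 2**10 + 1                             # grid points including boundaries
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)          # exact solution is sin(pi x)
u = np.zeros(n)
for k in range(10):
    u = v_cycle(u, f, h)
    print(f"cycle {k+1}: residual norm = {np.linalg.norm(residual(u, f, h)):.2e}")
```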

Numerous algorithms have been developed that endow codes with seemingly magical abilities. A recent breakthrough to which magical power can be ascribed is compressed sensing. This methodology has seeded a number of related algorithmic capabilities that defy the normal rules. The biggest element of compressed sensing is its appetite for sparsity, and sparsity drives good scaling properties. We see a magical ability to recover clear images from noisy signals. The key to all of this capability is the marriage of deep mathematical theory to applied mathematical practice and algorithmic implementation. We should want as much of this sort of magical capability as possible. These methods do seemingly impossible things, providing new, unforeseen abilities.
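
A toy example conveys the flavor. The sketch below recovers a signal of length 400 with only 8 nonzero entries from just 100 random measurements by solving an l1-regularized least-squares problem with iterative soft-thresholding (ISTA). The sizes, penalty weight, and iteration count are illustrative choices, not tuned values.

```python
import numpy as np

# A toy sketch of compressed sensing: recover a sparse signal from far fewer
# random measurements than unknowns via l1-regularized least squares solved
# with ISTA (iterative soft-thresholding).

rng = np.random.default_rng(1)
n, m, k = 400, 100, 8                  # unknowns, measurements, nonzeros

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(0.0, 1.0, size=k)

A = rng.normal(0.0, 1.0, size=(m, n)) / np.sqrt(m)    # random sensing matrix
b = A @ x_true                                        # m measurements

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2                # 1 / Lipschitz constant

x = np.zeros(n)
for _ in range(3000):
    grad = A.T @ (A @ x - b)
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```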

In the republic of mediocrity, genius is dangerous.

― Robert G. Ingersoll

We don’t do much of this these days. Model, method and algorithm advancement is difficult and risky. Unfortunately, our modern programs no longer do difficult things well, and we do risky things even less. A risky, failure-prone research program is likely not to be funded. Our management is incapable of taking risks, and progress in all of these areas is very risky. We must be able to absorb many failures in attempting to achieve breakthroughs. Without accepting and managing through these failures, the breakthroughs will not occur. If the breakthroughs occur, massive benefits will arise, but those benefits, while doubtless, are hard to estimate. We are living in the lunacy of the scheduled breakthrough. Our insistence on success without the possibility of failure is nothing but unbridled bullshit and a recipe for systematic failure.

There is always danger for those who are afraid.

― George Bernard Shaw

The truly unfortunate aspect of today’s world is the systematic lack of trust in people, expertise, institutions and facts in general. These trustworthiness crises are getting worse, not better, and may be approaching a critical fracture. The end result of the lack of trust is a lack of effective execution of work because people’s hands are tied. The level of control placed on how work is executed is incompatible with serendipitous breakthroughs and the adaptation of complex efforts. Instead we tend to have highly controlled and scripted work lacking any innovation and discovery. In other words, the control and lack of trust conspire to remove magic as a potential result. Over the years this leads to a lessening of the wonderful things we can accomplish.

If we expect to continue discovering wonderful things we need to change how we manage our programs. We need to start trusting people, expertise, and institutions again. Trust is a wonderful thing. Trust is an empowering thing. Trust drives greater efficiency and allows people to learn and adapt. If we trust people they will discover serendipitous results. Most discoveries are not completely new ideas. A much more common occurrence is for old mature ideas to combine into entirely new ideas. This is a common source of magical and new capabilities. Currently the controls placed on work driven by lack of trust remove most of the potential for a marriage of new ideas. The new ideas simply never meet and never have a chance to become something new and amazing. We need to give trust and relinquish some control if we want great things to happen.

The problem with releasing control and giving trust is the acceptance of risk. Anything new, wonderful, even magical will also entail great risk of failure. If one desires the magic, one must also accept the possibility of failure. The two things are intrinsically linked and utterly dependent. Without risks the reward will not materialize. The ability to take large risks, highly prone to failure is necessary to expose discoveries. The magic is out there waiting to be uncovered by those with the courage to take the risks.

Breaking Bad: Priorities, Intentions and Responsibility in High Performance Computing

 

Action expresses priorities.

― Mahatma Gandhi

Success and competence at high performance computing (HPC) is an essential enabling capability for many scientific, military and industrial activities. It plays an important role in national defense, economics, cyber-everything, and serves as a measure of national competence. So it is important. Being the top nation in high performance computers is an important benchmark in defining national power. It does not measure overall success or competence, but rather a component of those things. Success and competence in high performance computing depend on a number of things including physics modeling and experimentation, applied mathematics, many types of engineering including software engineering, and computer hardware. In this list, computing hardware is among the least important aspects of competence. It is generally enabling for everything else, but hardly defines competence. In other words, hardware is necessary but far from sufficient.

Claiming that you are what you are not will obscure the strengths you do have while destroying your credibility.

― Tom Hayes


Being a necessity for competence, hardware must receive some support for national success. Being insufficient, it cannot be the only thing supported, and it is not the determining factor for HPC supremacy. In other words, we could have the very best hardware and still be inferior to the competition. Indeed, success in HPC has always been a multidisciplinary endeavor predicated on a high degree of balance across the spectrum of activities needed for competence. If one examines the current state of affairs in HPC, we can easily see that all this experience and previous success has been ignored and forgotten. Instead of following a path blazed by previous funding success (i.e., ASCI), we have chosen a road to success focused solely on computing hardware and its direct implications. Worse, the lessons of the past are plain and ignored by the current management. Excellence in other areas has been eschewed, left to trail in the hardware’s wake. The danger in the current approach is dampening progress in a host of essential disciplines in favor of a success completely dependent on hardware.

The fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt.

― Bertrand Russell

Unfortunately, the situation is far worse than this. If computer hardware were in an era where huge advances in performance were primed to take place, the focus might be forgivable. Instead we are in an era where advances in hardware are incredibly strained. It is easy to see that huge advances in hardware are grinding to a halt, at least relative to the past half century. The focus of the current programs, the “exascale” initiatives, runs exactly the opposite way. We are attempting to continue growth in computing power at tremendous cost, where the very physics of computers is working against us. The focus on hardware is actually completely illogical; if opportunity were the guide, hardware would be a sideshow instead of the main event. The core of the problem is the field’s complete addiction to Moore’s law for approximately 50 years, and like all addicts, kicking the habit is hard. In a sense, under Moore’s law computer performance skyrocketed for free, and people are not ready to see it go.

Most of us spend too much time on what is urgent and not enough time on what is important.

― Stephen R. Covey

Moore’s law is dead and HPC is suffering from the effects of withdrawal. Instead of accepting the death of Moore’s law and shifting the focus to other areas for advancement, we are holding onto it like a junkie’s last fix. In other words, the current programs in HPC are putting an immense amount of focus and resources into keeping Moore’s law alive. It is not unlike the sort of heroic measures taken to extend the life of a terminal patient. Much like the terminal patient whose death is only delayed by heroic measures, the quality of life is usually terrible. In the same way, the performance of HPC is more zombie-like than robust. Achieving the performance comes at the cost of utility and general ease of use of the computers. Moreover, the nature of the hardware inhibits advances in other areas due to its difficulty of use. This goes above and beyond the vast resource sink the hardware is.

The core truth of HPC is that we’ve been losing this war for twenty years, and the current effort is simply the final apocalyptic battle in a war that is about to end. The bottom line is that we are in a terrible place where all progress is threatened by supporting a dying trend that has benefitted HPC for decades.

I work on this program and quietly make all these points. They fall on deaf ears because the people committed to hardware dominate the national and international conversations. Hardware is an easier sell to the political class, who are not sophisticated enough to smell the bullshit they are being fed. Hardware has worked to get funding before, so we go back to the well. Hardware advances are easy to understand and sell politically. The more naïve and superficial the argument, the better it fits our increasingly elite-unfriendly body politic. All the other things needed for HPC competence and advances are supported largely by pro bono work. They are simply added effort that comes down to doing the right thing. Therein lies a rub that puts all this good-faith effort at risk. The balance and all the other work are not a priority or emphasis of the program. Generally they are not important, not measured in the success of the program, and not defined in the tasking from the funding agencies.

We live in an era where we are driven to be unwaveringly compliant with rules and regulations. In other words, you work on what you’re paid to work on, and you’re paid to complete the tasks spelled out in the work orders. As a result, all of the things you do out of good faith and responsibility can be viewed as violating these rules. Success might depend on doing all of these unfunded and unstated things, but the defined success in the work contracts is missing these elements. As a result, the things that need to be done do not get done. More often than not, you receive little credit or personal success from pursuing the right thing. You do not get management or institutional support either. Expecting these unprioritized, unintended things to happen is simply magical thinking.

We have a situation where the priorities of the program are arrayed toward success in a single area, putting the other areas needed for success at risk. Management then asks people to do good-faith pro bono work to make up the difference. This good-faith work violates the letter of the law of compliance with contracted work. There appears to be no intention of supporting all of the other disciplines needed for success. We rely upon people’s sense of responsibility to close this gap, even as the compliance culture instills a sense of duty that pushes against doing any extra work. In addition, the hardware focus levies an immense tax on all other work because the hardware is so incredibly user-unfriendly. The bottom line is a systematic abdication of responsibility by those charged with leading our efforts. Moreover, we exist within a time and system where grassroots dissent and negative feedback are squashed. Our tepid and incompetent leadership can rest assured that their decisions will not be questioned.

Before getting to my conclusion, one might reasonably ask, “what should we be doing instead?” First, we need an HPC program with balance between impact on reality and the stream of enabling technology. The single most contemptible aspect of current programs is the nature of the hardware focus. The computers we are building are monstrosities, largely unfit for scientific use and vomitously inefficient. They are chasing a meaningless summit of performance measured through an antiquated and empty benchmark. We would be better served by building computers tailored to scientific computation that solve real, important problems with efficiency. We should be building computers and software that spur our productivity and are easy to use. Instead we levy an enormous penalty on any useful application of these machines because of their monstrous nature. A refocus away from the meaningless summit defined by an outdated benchmark could have vast benefits for science.

We could then free up resources to provide the holistic value stream from computing that we know from experience. Real applied work focusing on modeling and solution methods produces the greatest possible benefit. These immensely valuable activities are completely and utterly unsupported by the current HPC program and paid little more than lip service. Hand-in-hand with the lack of focus on applications and answers goes the absence of any focus on verification or validation. Verification deals with the overall quality of the calculations, which is simply assumed from the magnitude of the calculations (it used so much computer power, it has to be awesome, right?). The lack of validation underpins a generic lack of interest in the quality of the work in terms of real-world congruence and impact.

Next down the line of unsupported activities is algorithmic research. The sort of algorithmic research that yields game-changing breakthroughs is unsupported. Algorithmic breakthroughs make the impossible possible and create capabilities undreamed of. They create a better future we couldn’t even imagine. We are putting no effort into this. Instead we have the new buzzword of “co-design,” where we focus on figuring out how to put existing algorithms on the monstrous hardware we are pursuing. The benefits are hardly game-changing; they amount to fighting the tidal wave of entropy from the horrific hardware. Finally we get to the place where funding exists: code development that ports existing models, methods and algorithms onto the hardware. Because little or no effort is put into making this hardware scientifically productive (in fact it’s the opposite), the code can barely be developed and its quality suffers mightily.

A huge tell in the actions of those constructing current HPC programs is their inability to learn from the past (or to care about the underlying issues). If one looks at the program for pursuing exascale, it is structured almost identically to the original ASCI program, except that it is even more relentlessly hardware-obsessed. The original ASCI program needed to add significant efforts in support of physical modeling, algorithm research and V&V on top of the hardware focus. This reflected a desire and a necessity to produce high-quality results with high confidence. All of these elements are conspicuously absent from the current HPC efforts. This sends two clear and unambiguous messages to anyone paying attention. The first message is a steadfast belief that the only quality needed is the knowledge that a really big, expensive computer did the calculation at great cost; somehow the mere utilization of such exotic and expensive hardware will endow the calculations with legitimacy. The second message is that no advances other than computer power are needed.

The true message is that connection to credibility and physical reality has no importance whatsoever to those running these programs. The actions and focus of the work, spelled out plainly in the activities funded, make their plans clear. The current HPC efforts make no serious attempt to ensure calculations are high quality or impactful in the real world. If the calculations are high quality, there will be scant evidence to prove it, and any demonstration will be made via authority. We are at the point where proof is granted by immensely expensive calculations rather than by convincing evidence. There will be no focused or funded activity to demonstrate quality. There will be no focused activity to improve the physical, mathematical or algorithmic basis of the codes either. In other words, all the application-code work in the program is little more than a giant porting exercise. The priorities and intentions regarding quality are clear to those of us working on the project: quality is not important and not valued.

I’ve been told to assume that the leadership supports the important things ignored by our current programs. Seeing how those programs operate, this is hardly plausible. Every single act by the leadership constructs an ever-tightening noose of planning, reporting and constraint about our collective necks. Quality, knowledge and expertise are all seriously devalued in the current era, and we can expect the results to reflect those priorities. We see a system put in place that will punish any attempt to do the right thing; the “right thing” is to do exactly what you’re told. Of course, one might argue that the chickens will eventually come home to roost, and the failures of the leadership will be laid bare. I’d like to think this is inevitable, but recent events seem to indicate that all facts are negotiable, and any problem can be spun through innovative marketing and propaganda into success. I have a great deal of faith that the Chinese will mop the floor with us in HPC, and our current leadership should shoulder the blame. I also believe the blame will not fall on the guilty. It never does these days; the innocent will be scapegoated for the mistakes of the guilty.

Nothing in this World is Static…Everything is Kinetic…

If there is no ‘progression’…there is bound to be ‘regression’…

― Abha Maryada Banerjee

I am left with the feeling that an important opportunity for reshaping the future is being missed. Rather than admit the technological limitations we are laboring under and transform HPC toward a new focus, we continue along a path that is almost purely nostalgic. The acceptance of the limits on the growth of computer power in the commercial computing industry led to a wonderful result: computer hardware shifted to mobile computing and unleashed a level of impact and power far beyond what existed at the turn of the century. Mobile computing is vastly more important and pervasive than the computing that preceded it. The same sort of innovation could unleash HPC to produce real value far beyond anything conceivable today. Instead we have built a program devoted to nostalgia and largely divorced from objective reality.

Doing better would be simple, at least at a conceptual level. One would need to commit to a balanced program where driving modeling and simulation to impact the real world is a priority. The funded and prioritized activities would need to reflect this focus. Those leading and managing the program would need to ask the right questions and demand progress in the right areas. Success would need to be predicated on the same holistic, balanced philosophy. The people working on these programs are smart enough to infer the intent of the programs; that intent is patently obvious from the funding profiles.

Programs are funded around their priorities. The results that matter are connected to money. If something is not being paid for, it is not important. If one couples steadfast compliance with only working on what you’re funded to do, any call to do the right thing despite funding is simply comical. The right thing becomes complying, and the important thing in this environment is funding the right things. As we work to account for every dime of spending in ever-finer increments, the importance of sensible and visionary leadership becomes greater. The very nature of this accounting tsunami is to blunt and deny visionary leadership’s ability to exist. The end result is spending every dime as intended and wasting the vast majority of it on shitty, useless results. Any other outcome in the modern world is implausible.

You never change things by fighting the existing reality.

To change something, build a new model that makes the existing model obsolete.

― R. Buckminster Fuller

Are We Doing Numerical Error Bars Right?

No. I don’t think so, but I’ll give my argument.

If you reject feedback, you also reject the choice of acting in a way that may bring you abundant success.

― John Mattone

Despite a relatively obvious path to fulfillment, the estimation of numerical error in modeling and simulation appears to be worryingly difficult to achieve. A big part of the problem is outright laziness, inattention, and poor standards. A secondary issue is the mismatch between theory and practice. If we maintain reasonable pressure on the modeling and simulation community we can overcome the first problem, but it does require not accepting substandard work. The second problem requires some focused research, along with a more pragmatic approach to practical problems. Alongside these systemic issues sits a simpler question: where should we put the error bars on simulations, and should they show a bias or a symmetric error? I strongly favor a bias.

Implicit in this discussion is an assumption of convergence for a local sequence of calculations. I suspect the assumption is generally a good one, but it is also prone to failure. One of the key realities is the relative rarity of calculations in the asymptotic range of convergence for methods and problems of interest. The biggest issue is how problems are modeled. The usual way of modeling problems, or creating models for the physics in them, produces technical features that inhibit asymptotic convergence (various discontinuities, other singularities, degenerate cases, etc.). Our convergence theory is predicated on smoothness that rarely exists in realistic problems. This gets to the core of the shortcomings of the theory: we don’t know what to expect in these cases. In the end we need to either make some assumptions, collect data and do our best, or do some focused research to find a way forward.

The basic recipe for verification is simple: make an assumption about the form of the error, collect calculations, and use the assumed error model to estimate errors. The assumed error form is A = S_k + C h_k^\alpha, where A is the mesh-converged solution, S_k is the solution on grid k, h_k is the characteristic mesh spacing, C is a constant of proportionality and \alpha is the convergence rate. We see three unknowns in this assumed form: A, C and \alpha. Thus we need at least three solutions to solve for these values, or more if we are willing to solve an over-determined problem. At this point the hard part is done, and verification is just algebra and a few very key decisions. It is these key decisions that I’m going to ask some questions about.
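
To make this recipe concrete, here is a minimal Python sketch of the three-unknown solve, assuming a constant mesh refinement ratio and monotone convergence; the numbers are invented purely for illustration.

```python
import math

def richardson_extrapolate(S_coarse, S_medium, S_fine, r):
    """Solve the assumed error model S_k = A + C * h_k**alpha for the three
    unknowns, assuming a constant refinement ratio
    r = h_coarse/h_medium = h_medium/h_fine > 1 and monotone convergence.

    Returns (A, alpha, delta_A) where delta_A = A - S_fine is the signed
    (biased) error estimate on the finest mesh.
    """
    # Observed convergence rate from the ratio of successive differences.
    alpha = math.log((S_coarse - S_medium) / (S_medium - S_fine)) / math.log(r)
    # Extrapolated ("mesh converged") solution.
    A = S_fine + (S_fine - S_medium) / (r**alpha - 1.0)
    return A, alpha, A - S_fine

if __name__ == "__main__":
    # Purely illustrative numbers: a quantity of interest on meshes with
    # h = 0.04, 0.02, 0.01 (refinement ratio r = 2).
    S1, S2, S3 = 0.9200, 0.9650, 0.9785
    A, alpha, dA = richardson_extrapolate(S1, S2, S3, r=2.0)
    print(f"extrapolated A = {A:.4f}, observed rate = {alpha:.2f}, "
          f"signed error on finest mesh = {dA:+.4f}")
```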

The first thing to note is that the basic yardstick for the error estimate is the difference between A and the grid solution S_k, which we will call \Delta A. Notice that this whole error model assumes that the sequence of solutions S_k approaches A monotonically as h_k becomes smaller. In other words, all the evidence supports the solution going to A. Therefore the error is actually signed, or biased, by this fact. In a sense we should consider A to be the most likely, or best, estimate of the true solution as h \rightarrow 0. There is no evidence at all that the solution is moving in the opposite direction. The problem I’m highlighting today is that the standard in solution verification does not apply these rather obvious conclusions in setting the numerical error bar.

The standard way of setting error bars takes the basic measure of error, multiplies it by an engineering safety factor C_s \ge 1, and then centers it about the mesh solution, S_k. The numerical uncertainty estimate is simple,  U_s = C_s \left| \Delta A  \right| . So half the error bar is consistent with all the evidence, but the other half is not. This is easy to fix by ridding ourselves of the inconsistent piece.

The core issue I’m talking about is the position of the numerical error bar. Current approaches center the error bar on the finite-grid solution of interest, usually the finest mesh used. This gives the impression that this solution is the most likely answer, and that the true answer could lie in either direction from it. Neither of these suggestions is supported by the data used to construct the error bar. For this reason the standard practice today is problematic and should be changed to something supportable by the evidence. The current error bars suggest, incorrectly, that the most likely error is zero. This is completely and utterly unsupported by the evidence.

Instead of this impression, the evidence points to the extrapolated solution as the most likely answer, and to the difference between that solution, A, and the mesh of interest, S_k, as the most likely error. For this reason the error bar should be centered on the extrapolated solution. The most likely error is non-zero. This would make the error biased, and consistent with the evidence. If we padded our error estimate with a safety factor, C_s, the error bar would include the mesh solution S_k and the potential for zero numerical error, but only as a low-probability event. It would present the best estimate of the error as exactly that.
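
A minimal sketch of the two conventions, continuing the illustrative numbers from the verification sketch above; the safety factor here is an arbitrary placeholder, not a recommended value.

```python
def numerical_error_bars(S_fine, A, safety=1.25):
    """Given the finest-mesh solution S_fine and the extrapolated solution A
    from the verification error model, return the two error-bar conventions
    discussed above. The safety factor 1.25 is purely illustrative.
    """
    delta = A - S_fine            # signed (biased) error estimate
    U = safety * abs(delta)       # padded uncertainty half-width

    # Standard practice: bar centered on the finest-mesh solution, implying
    # the most likely numerical error is zero.
    centered_on_grid = (S_fine - U, S_fine + U)

    # Evidence-consistent alternative: bar centered on the extrapolated
    # solution, so zero error sits near the edge of the band, not its center.
    centered_on_extrapolant = (A - U, A + U)
    return centered_on_grid, centered_on_extrapolant

if __name__ == "__main__":
    grid_bar, biased_bar = numerical_error_bars(S_fine=0.9785, A=0.9843)
    print("centered on S_k:", grid_bar)
    print("centered on A:  ", biased_bar)
```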

There is a secondary impact of this bias that is no less important. The current standard approach significantly discounts the potential for the numerical error to be much larger than the best estimate (indeed, the current centering makes the best estimate itself appear to be a low-probability outcome!). By centering the error bar on the best estimate we present larger error as being just as likely as smaller error, which is an utterly and completely reasonable position.

The man of science has learned to believe in justification, not by faith, but by verification.

― Thomas Henry Huxley

Why has this happened?

Part of the problem is the origin of error bars in common practice, and a serious technical difference in their derivation. The most common setting for error bars is measurement error. Here a number of measurements are taken and then analyzed to provide a single value (or values). Most commonly the mean value is presented as the measurement (i.e., the central tendency). Scientists then assume that the error bar is centered about the mean by assuming normal (i.e., Gaussian) statistics. This could be done differently, with various biases in the data being presented, but truth be told this is rare, as is using any other statistical basis for computing the central tendency and deviations. This point of view is the standard way of reading an error bar, and it implicitly plays in the mind of anyone viewing numerical error. This implicit view is dangerous because it imposes a technical perspective that does not fit numerical error.

The problem is that the basic structure of uncertainty is completely different for numerical error. A resolved numerical solution is definitely biased in its error. An under-resolved numerical solution is almost certainly biased as well. The term under-resolved is simply a matter of how exacting a solution one desires, so for the purposes of this conversation all numerical solutions are under-resolved. The numerical error is always finite, and if the calculation is well behaved, the error is always a bias. As such the process is utterly different from measurement error. With measurements there is an objective reality that we are trying to sense. Observations can be biased, but are generally assumed to be unbiased unless otherwise noted. We have fluctuations in the observation or errors in the measurement itself, both of which can have a distinct statistical nature. Numerical error is deterministic and structured, and carries a basic bias through the leading-order truncation error. As a result, error bars from the two sources should be structurally different. They are simply not the same thing and should not be treated as such.

The importance of this distinction in perspective is the proper accounting for the sources and impact of uncertainty in modeling and simulation. Today we suffer most greatly from some degree of willful ignorance of uncertainty, and when it is acknowledged, too narrow a perspective. Numerical error is rarely estimated, often assumed away, and misrepresented even when it is computed. In the best work available, uncertainty is tackled as being dominantly the epistemic uncertainty associated with modeling parameters (nominally subgrid or closure models). Yet large sources of uncertainty come from numerical error, problem modeling assumptions, model form error, and experimental uncertainty, to name the big ones. All of these sources are commonly ignored by the community without much negative feedback; that feedback needs to come from somewhere if we are to make progress.

Science is a system of statements based on direct experience, and controlled by experimental verification. Verification in science is not, however, of single statements but of the entire system or a sub-system of such statements.

― Rudolf Carnap

 

Dealing with Bias and Calibration in Uncertainty Quantification

It is useless to attempt to reason a man out of a thing he was never reasoned into.

― Jonathan Swift

Most of the computer modeling and simulation examples in existence are subject to bias in their solutions. This bias comes from the numerical solution, modeling inadequacy, and bad assumptions, to name a few of the sources. In contrast, uncertainty quantification is usually applied in a statistical and clearly unbiased manner. This is a serious difference in perspective, and the differences are clear. With bias, the difference between simulation and reality is one-sided, and the deviation can be cured by calibrating parts of the model to compensate. Unbiased uncertainty is common in measurement error and ends up dominating the approach to UQ in simulations. The result is a mismatch between the dominant mode of uncertainty and how it is modeled. Coming up with a more nuanced and appropriate model that acknowledges and deals with bias appropriately would be great progress.

One of the archetypes of modern modeling and simulation is climate simulation (and its brethren, weather). These simulations carry with them significant bias associated with a lack of computational resolution. The computational mesh is always far too coarse for comfort, and the numerical errors are significant. There are also issues associated with initial conditions, the energy balance, and representing physics at and below the level of the grid. In both climate and weather the models are invariably calibrated heavily. This calibration compensates for the lack of mesh resolution, the lack of knowledge of initial data and physics, and the problems with representing the energy balance essential to the simulation (especially for climate). A serious modeling deficiency is the merging of all of these uncertainties into the calibration, with an associated loss of information.

We all see only that which we are trained to see.

― Robert Anton Wilson

The issues with calibration are profound. Without calibration the models are effectively useless. For these models to contribute to our societal knowledge and decision-making, or to raw scientific investigation, calibration is an absolute necessity. Calibration depends entirely on existing data, and this carries a burden of applicability: how valid is the calibration when the simulation probes outside the range of the data used to calibrate? We commonly include the intrinsic numerical bias in the calibration, most commonly by adjusting a turbulence or mixing model to account for it. A colleague familiar with ocean models quipped that if the ocean were as viscous as we model it, one could drive from New York to London. It is well known that numerical viscosity stabilizes calculations, and we can use numerical methods to model turbulence (implicit large eddy simulation), but this practice should at the very least make people uncomfortable. We are also left with the difficult matter of how to validate models that have been calibrated.

I just touched on large eddy simulation, which is a particularly difficult topic because numerical effects are always in play. With classical LES the mesh itself is part of the model. With implicit LES the numerical method itself provides the physical modeling, or some part of it. This issue plays out in weather and climate modeling, where the mesh is part of the model rather than an independent aspect of it. It should surprise no one that LES was born from weather and climate modeling (at a time when the distinction didn’t exist). In other words, the chosen mesh and the model are intimately linked. If the mesh is modified, the model must also be modified (recalibrated) to get the balance of the solution correct. This tends to happen in simulations where an intimate balance is essential to the phenomena: a system that in one respect or another sits in a nearly equilibrium state, where the deviations from that equilibrium are what matter. Aspects of the modeling related to the scales of interest, including the grid itself, impact the equilibrium to such a degree that an uncalibrated model is nearly useless.

If numerical methods are being used correctly and at a resolution where the solution can be considered even remotely mesh-converged, the numerical error is a pure bias error. A significant problem is that the standard approach to solution verification treats numerical error as unbiased, and it is applied in cases where no evidence exists for the error being unbiased! Well-behaved numerical error is intrinsically biased. This matters because treating a biased error as unbiased represents a significant loss of information. Those who either must or do calibrate their models to account for numerical error rarely estimate that error explicitly, but they account for the bias as a matter of course. Ultimately the failure of the V&V community to treat well-behaved numerical error as a one-sided bias is counter-productive. It is particularly problematic for any attempt to deal proactively with the issues associated with calibration.

Science is about recognizing patterns. […] Everything depends on the ground rules of the observer: if someone refuses to look at obvious patterns because they consider a pattern should not be there, then they will see nothing but the reflection of their own prejudices.

― Christopher Knight

Let me outline how we should be dealing with well-behaved numerical error. If one has a quantity of interest where a sequence of meshes produces a monotonic approach to a value (assuming the rest of the model is held fixed), then the error is well behaved. The sequence of solutions on the meshes can then be used to estimate the solution to the mathematical problem, that is, the solution where the mesh resolution is infinite (absurd as that might be). Along with this estimate of the “perfect” solution, the error can be estimated for any of the meshes. For this well-behaved case the error is one-sided: a bias between the ideal solution and the one computed on a finite mesh. Any fuzz in the estimate should be applied to that bias. In other words, any uncertainty in the error estimate should be centered about the extrapolated “perfect” solution, not the finite-grid solutions. The problem with the currently accepted methodology is that the error is given as a standard two-sided error bar appropriate for statistical errors; we use a two-sided accounting for this error even though there is no evidence for it. This is a problem that should be corrected. I should note that many models (e.g., climate or weather) invariably recalibrate after every mesh change, which invalidates the entire verification exercise, since everything in the model aside from the grid should be held fixed across the mesh sequence.

I plan to talk more about this issue next week along with a concrete suggestion about how to do better.

When we get to the heart of the matter at hand, dealing with uncertainty in calibrated models, we rapidly come to the conclusion that we need to keep two sets of books. If the first thing that comes to mind is, “that’s what criminals do,” you’re on the right track. You should feel uneasy about this conclusion, and we should all feel a sense of unease regarding this outcome. What do we put in these two books? In the first we have the calibrated model, which we can rely upon to reliably interpolate the data it was calibrated with. For the quantities of interest used to calibrate the model, the model provides no independent test; at best it unveils uncertainty and inconsistency within the data used for calibration.

A model is valuable for inferring other things from simulation. It is good for looking at quantities that cannot be measured. In this case the uncertainty must be approached carefully. The uncertainty in these values must almost invariably be larger than that of the quantities used for calibration. One needs to look at the modeling connections for these values and craft a reasonable approach that treats them with an appropriate grain of salt. This includes the numerical error I talked about above. In the best case there is data available that was not used to calibrate the model; maybe these are values that are not as highly prized or as important as those used for calibration. The difference between these measured values and the simulation gives a very strong indication of the uncertainty in the simulation. In other cases some of the data potentially available for calibration has been held out and can be used for validating the calibrated model. This assumes that the hold-out data is sufficiently independent of the data used.

A truly massive issue with simulations is extrapolation of results beyond the data used for calibration. This is a common and important use of simulations. One should expect the uncertainty to grow substantially with the degree of extrapolation from the data. A common and pedestrian place to see what this looks like is least-squares fitting of data. The variation and uncertainty within the calibrated range form the basis of the estimates, but depending on the nature of the calibrated range and the degree of extrapolation, the uncertainty can grow to be very large. This makes perfectly reasonable sense: as we depart from our knowledge and experience, we should expect the uncertainty in our knowledge to grow.
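
Here is a small, self-contained Python illustration of that pedestrian example, using the textbook prediction standard error for an ordinary least-squares line fit on synthetic data; it is meant only to show how the uncertainty inflates as the query point leaves the calibration range.

```python
import numpy as np

def linear_fit_prediction_se(x, y, x_new):
    """Ordinary least-squares line fit with the textbook prediction standard
    error evaluated at new points x_new. The growth of the standard error
    with distance from the data illustrates extrapolation uncertainty.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    s2 = resid @ resid / (n - 2)                  # residual variance
    Sxx = np.sum((x - x.mean())**2)
    x_new = np.asarray(x_new, float)
    se = np.sqrt(s2 * (1.0 + 1.0/n + (x_new - x.mean())**2 / Sxx))
    return slope * x_new + intercept, se

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 12)                  # "calibration" range
    y = 2.0 + 3.0 * x + rng.normal(0.0, 0.1, x.size)
    x_query = np.array([0.5, 1.0, 2.0, 4.0])       # interpolation -> extrapolation
    y_hat, se = linear_fit_prediction_se(x, y, x_query)
    for xq, yq, s in zip(x_query, y_hat, se):
        print(f"x = {xq:3.1f}: prediction {yq:6.2f} +/- {2*s:5.2f} (2*se)")
```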

A second issue to consider is our second set of books, where the calibration is not taken quite so generously. In this case the most honest approach to uncertainty is to apply significant variation to the parameters used to calibrate the model. In addition we should include the numerical error in the uncertainty. For deeply calibrated models these sources of uncertainty can be quite large and generally paint an overly pessimistic picture, whereas calibration gives an extremely optimistic one. The hope, and the best possible outcome, is that these two views bound reality, with the true uncertainty lying between the extremes. For decision-making using simulation this bounding approach to uncertainty quantification should serve us well.
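
A toy sketch of the bookkeeping, with made-up uncertainty magnitudes and a quadrature combination that is itself an assumption; the point is only the bracketing, not the particular numbers or combination rule.

```python
def bounding_uncertainty(u_calibrated, u_parameter_sweep, u_numerical):
    """The 'two sets of books' idea for one quantity of interest: an
    optimistic bound from the calibrated model alone, and a pessimistic
    bound that also folds in broad parameter variation and the estimated
    numerical error. Quadrature combination is an illustrative assumption.
    """
    optimistic = u_calibrated
    pessimistic = (u_calibrated**2 + u_parameter_sweep**2 + u_numerical**2) ** 0.5
    return optimistic, pessimistic

if __name__ == "__main__":
    low, high = bounding_uncertainty(u_calibrated=0.5,
                                     u_parameter_sweep=2.0,
                                     u_numerical=1.0)
    print(f"optimistic (calibrated) uncertainty:    {low:.2f}")
    print(f"pessimistic (uncalibrated) uncertainty: {high:.2f}")
    print("the hope is that the true uncertainty lies between these bounds")
```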

There are three types of lies — lies, damn lies, and statistics.

― Benjamin Disraeli

 

Get Back In The Box

Change almost never fails because it’s too early. It almost always fails because it’s too late.

– Seth Godin

I read a lot, including books, papers, articles, online content, and whatever else I can get my hands on. My interests are wide and varied, everything from deep technical science articles to more intellectual takes on popular culture. Among my interests are business and management articles. These speak about various ways of getting the best results from employees using largely positive and empowering techniques. Somehow I never see the techniques espoused in these articles put into practice. Increasingly, the articles I read about management and business read like science fiction, with an ever-widening gap between the real and the ideal. The same gap is present in the realm of politics and public policy. Bipartisan forces threaten to push us into an authoritarian future that crushes the human spirit and challenges the ideals and progressive changes needed to make society function better. Inside and outside of work we see the potential of people constricted to produce predictable results that comply with a sense of order and safety.

When I read articles on excellence in management and business, a big part of the message is employee empowerment and motivation. Empowered and motivated employees can be a huge benefit for a company (or, by extension, a Lab, University, organization,…). Another way of expressing this common message is the encouragement of innovation and problem solving as a route to added value and high performance. Usually this is articulated as out of the box thinking, work and performance. Yet when I return to my reality, the writing seems dramatically out of touch and impossible to imagine being implemented where I work. Almost everything my management does, and everything our “corporate” governance strives for, is compliance, subservience, and in the box thinking. We are pushed to be predictable and downright pedestrian in everything we do. A large part of what makes this environment tolerable is the articulation of standards of performance. Today those standards are defined not by excellence and achievement, but by compliance and predictability. The result is the illusion of excellence and achievement when the reality is exactly the opposite. Remarkably, like cattle moving to slaughter, we go along with it.

The greatest irony of the current era is the need to keep out of the box thinking under control, effectively putting it in the box. You can only be out of the box within strictly defined boundaries lest you create a situation that might not be completely under control. Of course this is a complete oxymoron and leads to the sort of ridiculous outcomes at work we all recognize. We are encouraged to be bold at work as long as we comply with all the rules and regulations. We can be bold in our thinking as long as no risks are taken. It is the theatre of the absurd. We can magically manage our way to getting all the reward without any of the risk. Bold outcomes automatically come with risk, and usually unpredictable results and unintended consequences. All of these things are completely outside the realm of the acceptable today. Our governance is all about predictably intended consequences and the entire system is devoted to control and safety. The bottom line is you can’t have the fruits of boldness, innovation and discovery without risking something and potentially courting disaster. If you don’t take risks, you don’t get the rewards, a maxim that our leaders don’t seem to understand.

One of the great sources for business articles is the well-written and respected Harvard Business Review (HBR). I know my managers read many of the same things I do. They also read business books, sometimes in a faddish manner. Among these is Daniel Pink’s excellent “Drive.” When I read HBR I feel inspired and hopeful (Seth Godin’s books are another source of frustration and inspiration). When I read Drive I was left yearning for a workplace that operated on the principles expressed there. Yet when I return to the reality of work these pieces of literature seem fictional, even more like science fiction. The reality of work today is almost completely orthogonal to these aspirational writings. How can my managers read these things, then turn around and operate the way they do? No one seems to actually think through what implementation of these ideas would look like in the workplace. With each passing year we fall further from the ideal, more toward a workplace that crushes dreams and simply drives people into some cardboard-cutout variety of behavior without any real soul.

While work is the focus of my adult world, similar trends are at work on our children. School has become a similarly structured training ground for compliance and squalid mediocrity. Standardized testing is one route to this outcome, where children are trained to take tests rather than solve problems. Standardized testing becomes the perfect rubric for the soulless workplace that awaits them in the adult world. The rejection of fact and science by society as a whole is another route. We have a large segment of society that is suspicious of intellect. Too many people now view educated intellectuals as dangerous, and their knowledge and facts are rejected whenever they disagree with the politically chosen philosophy. This attitude is a direct threat to the value of an educated populace. Under a system where intellect is devalued, education transforms into a means of training the population to obey authority and fall into line. The workplace is subject to the same trends; compliance and authority are prized along with predictability of results. The lack of value for intellect is also present within the sort of research institutions I work at, because intellect threatens predictability of results. As a result out of the box thinking is discouraged, and the entire system is geared to keep everyone in the box. We create systems oriented toward control and safety without realizing the price paid for rejecting exploration and risk. We all live a life less rich and less rewarding as a result, and accumulated over society, this produces a broad-based diminishment of results.

Be genuine. Be remarkable. Be worth connecting with.

– Seth Godin

When I see my managers reading things like HBR or Drive, I’m left wondering how they can square their actions with what they read. My wife likes to promote “Reality-based Management,” the practical application of principles within a pragmatic approach to achievement. This is good advice that I strive to apply. But there is a limit to pragmatism when the forces within society continually push us away from every ideal. Pragmatism is a force for survival and making the best of a bad situation, but there is a breaking point. When does reality become so problematic that something must change? When does the disempowering force become so great that change must occur? Perhaps we are at this point. I find myself hoping for a wholesale rejection of the forces of compliance that enslave us. Unfortunately we have rejected progressive forces nationally and embraced the slaveholders who seek to exploit and disempower us. We have accepted being disempowered in trade for safety. Make no mistake, we have handed those who abuse the populace a yoke, a whip, and a “mandate” to turn the screws on all of us. In return we all get to be safe, and live a less rich life through the controls such safety requires.

I have to admit to myself that many people prize control and safety above all else. They are willing to reject freedom and rewards if safety can be assured. This is exactly the trade that many Americans have made. Bold, exciting and rewarding lives are traded for safety and predictable outcomes. The same thing is happening in many companies and organizations, infesting work with compliance through rules and regulations. We see this play out in the reactions to terrorism. Terrorism has paved the way to massive structures of control and societal safety. It also creates an apparatus for Big Brother to come to fruition in a way that makes Orwell more prescient than ever. The cost of such widespread safety and control is the diminished richness of the life sacrificed to achieve it. Lives well lived and bold outcomes are reduced in achieving safety. I’ve gotten to the point where this trade no longer seems worth it. What am I staying safe for? I am risking living a pathetic and empty life in trade for safety and security, so that I can die quietly. This is life in the box, and I want to live out of the box. I want to work out of the box too.

The core message of my work is get in the box and don’t make waves, just do what you’re told. The message from society as a whole may be exactly the same, with order, structure and compliance prized by a large portion of the population. Be happy with what you’ve got, everything is fine. I suspect that my management is just as disempowered as I am. More deeply, the issues surrounding this problem are societal. Americans are epically disempowered, and many people express this dysfunction politically. The horror show is playing out nationally with the election of a historically unpopular and unqualified President simply because he isn’t part of the system. The population as a whole thinks things are a mess, and for roughly half the people, electing an unqualified, politically incorrect outsider seems like the appropriate response. The deeper problem is that these in the box forces are not partisan at all; the right does its thing and the left does another, but both seek to disempower the population as a whole.


Some part of Trump’s support comes from people who just want to burn the system to the ground. Another group of people on the left wants the same outcome: destroy the current system. Maybe Trump will destroy the system and create a better future, but I seriously doubt it. I’m guessing we get more of a transition to kleptocratic rule, where the government actively works to loot the country for the purpose of enriching a select few. I’d prefer a much more constructive and progressive path to the future, where human potential is unleashed and unlocked. Ultimately a lack of progress in fixing the system will eventually lead to something extreme and potentially violent. The bottom line is that the forces enslaving us are driven by the sort of people represented by the leadership of both political parties. The ruling class has power and money, intends to hold and expand both, and sees the personal empowerment of common citizens as a threat to its authority. The ruling business class and wealthy elite enjoy power through subtle subjugation of the vast populace. The populace accepts this subjugation in trade for promises of safety and security through the control of risk and danger.

For now, the message at work is get in the box by complying, not making waves, and simply doing what you are told to do. No amount of reading about employee empowerment can fix this reality until there is a commitment to a different path. Management can talk till they are blue in the face about their principles, diversity, excellence, teamwork and the power of innovative out of the box thinking, but the reality is the opposite. The national reality is the same: bullshit about everyone mattering, and a truth where very few matter at all. We have handed the reins of power to those who put us in bondage, and we would have done the same if the Democrats had won. There will be real differences in what the bondage looks like, but the result is largely the same. Rather than breaking our chains, we have decided to make the bonds stronger. We can hope that people recognize the error and change course sooner rather than later. As long as we continue to prize safety and security over possibility and potential, we can expect to be disempowered.

We have so much potential waiting to be unleashed by rejecting in the box thinking. To get there we need to reject over-whelming safety, control and compliance. We need to embrace risk and possibility with the faith that our talents can lead us to a greater future powered by innovative, inspired thinking and lives well lived by empowering everyone to get out of the box.

The best way to be missed when you’re gone is to stand for something when you’re here.

– Seth Godin

 

Verification and Validation with Uncertainty Quantification is the Scientific Method

tl;dr : VVUQ injects the fundamentals of the scientific method into modeling and simulation. The general lack of VVUQ in HPC should cause one to question how much actual science is being done.

Modeling and simulation has been hailed by many as a third way to do science, taking its place next to theory and observation as one of the pillars of practice. I strongly believe that this proposition does not bear up to scrutiny. For it to be true, the advent of modeling and simulation would need to change the scientific method in some fashion; it does not. This does not minimize the importance of scientific computing, but rather puts it into proper context. Instead of being a new way to do science, it provides tools for doing parts of science differently. First and foremost, modeling and simulation enhances our ability to make predictions and test theories. As with any tool, it needs to be used with care and skill. My proposition is that the modeling and simulation practice of verification and validation combined with uncertainty quantification (VVUQ) defines this care and skill. Moreover, VVUQ provides an instantiation of the scientific method for modeling and simulation. An absence of emphasis on VVUQ in modeling and simulation programs should bring doubt and scrutiny on the level of scientific discourse involved. To see this, one needs to examine the scientific method in a bit more detail.

The Scientific Method is a wonderful tool as long as you don’t care which way the outcome turns; however, this process fails the second one’s perception interferes with the interpretation of data. This is why I don’t take anything in life as an absolute…even if someone can “prove” it “scientifically.”

― Cristina Marrero

To continue our conversation we need a serious discussion of the scientific method itself. What is it? What are its parts? Who does it, and what do they do? We can then map all the activities from VVUQ onto the scientific method, proving my supposition.

In science and society, the scientific method commands a large degree of reverence. Few basic processes in human discourse carry the same degree of confidence and power. The two basic activities in science are theory and observation (experiment), along with the basic actions that power each and drive the connection between these ways of doing science. We devise theories to help explain what we experience in reality. These theories are the result of asking deep questions and proposing hypothesized mechanisms for our experience. Ultimately these theories usually take on the form of principles and mathematical structure. A theory that explains a certain view of reality can then be tested by making a prediction about something in reality that has not yet been observed. The strength of the prediction is determined by the degree of difference between the observations that formed the basis of the theory and the test of the prediction: the greater the difference in circumstance, the stronger the test of the theory. Ultimately there are a great number of details and quality assessments needed to put everything in context.

One thing that modeling and simulation does for science is expand the ability to make predictions from complex and elaborate mathematical models. Many theories produce elaborate and complex mathematical models, which are difficult to solve and which limit the effective scope of predictions. Scientific computing relaxes this limitation significantly, but only if sufficient care is taken with assuring the credibility of the simulations. The entire process of VVUQ serves to provide this assessment, so that simulations may confidently be used in the scientific process. Nothing about modeling and simulation changes the process of posing questions and accumulating evidence in favor of a hypothesis. It does change how that evidence is arrived at, by relaxing the limitations on testing theory. Theories that were not fully testable are now open to far more complete examination, as they may now make broader predictions than classical approaches allowed.

Science has an unfortunate habit of discovering information politicians don’t want to hear, largely because it has some bearing on reality.

― Stephen L. Burns

The first part of VVUQ, verification, is necessary to be confident that the simulation is a proper solution of the theoretical model and is suitable for further testing. The other element of verification is error estimation for the approximate solution. This is a vastly overlooked aspect of modeling and simulation, where the degree of approximation accuracy is rarely included in the overall assessment. In many cases the level of error is never addressed or studied as part of the uncertainty assessment. Thus verification plays two key roles in scientific study using modeling and simulation: it establishes the credibility of the approximate solution to the theory being tested, and it provides an estimate of the approximation quality. Without an estimate of the numerical error, we risk conflating that error with modeling imperfections and obscuring the assessment of the validity of the model. One should be aware of the pernicious practice of simply avoiding error estimation through declarative statements of being mesh-converged. Any such declaration should be coupled with direct evidence of mesh convergence and the explicit capacity to provide estimates of the actual numerical error; without such evidence the declaration should be rejected.
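
As a crude illustration of what "direct evidence" could look like, here is a small Python check that compares the observed convergence order from three meshes against the method's theoretical order; the tolerance is an assumption for illustration, not a community standard.

```python
import math

def observed_order(S_coarse, S_medium, S_fine, r):
    """Observed convergence order from three solutions on meshes with a
    constant refinement ratio r, the same quantity used in the verification
    recipe discussed earlier in this post."""
    return math.log(abs(S_coarse - S_medium) / abs(S_medium - S_fine)) / math.log(r)

def credible_mesh_convergence(S_coarse, S_medium, S_fine, r, p_theory, tol=0.5):
    """A crude screen: a 'mesh converged' declaration is only worth
    entertaining when the observed order is reasonably close to the
    theoretical order of the method (tol is an illustrative choice)."""
    p_obs = observed_order(S_coarse, S_medium, S_fine, r)
    return abs(p_obs - p_theory) <= tol, p_obs

if __name__ == "__main__":
    # Reusing the invented grid-sequence numbers from the earlier sketch.
    ok, p = credible_mesh_convergence(0.9200, 0.9650, 0.9785, r=2.0, p_theory=2.0)
    print(f"observed order = {p:.2f}; convergence claim supported: {ok}")
```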

Verification should be a prerequisite for examining the validity of the model, or validation. As mentioned, validation without first going through verification is prone to false positives or false negatives, with a risk that numerical error will be confused with the true assessment of the theoretical model and its predictions. The issue of counting numerical error as modeling error is deep and broad in modeling and simulation. A proper VVUQ process with a full breadth of uncertainty quantification must include it. Like any scientific endeavor, the uncertainty quantification is needed to place the examination of models in a proper perspective. When the VVUQ process is slipshod and fails to account for the sources of error and uncertainty, the scientific process is damaged and the value of the simulation is shortchanged.

Science, my boy, is made up of mistakes, but they are mistakes which it is useful to make, because they lead little by little to the truth.

― Jules Verne

Of course, validation requires data from reality. This data can come from experiments or observation of the natural world. In keeping with the theme, an important element of the data in the context of validation is its quality and a proper uncertainty assessment. Again, this assessment is vital for its ability to put the whole comparison with simulations in context, and to help define what a good or bad comparison might be. Data with small uncertainty demands a completely different comparison than data with large uncertainty. Similarly for the simulations, where the level of uncertainty has a large impact on how to view results. When the uncertainty is unspecified, both data and simulation are untethered, and scientific conclusions or engineering judgments are threatened.

It is no exaggeration to note that this perspective is utterly missing from the high performance computing world today and from the foolish drive to exascale we find ourselves on. Current exascale programs are almost completely lacking any emphasis on VVUQ. This highlights the lack of science in these programs. They are naked, hardware-centric programs that show little or no interest in actual science or applications. The holistic nature of modeling and simulation is ignored, and the activities connecting modeling and simulation with reality are systematically starved of resources, focus and attention. It is not too hyperbolic to declare that our exascale programs are not about science.

The quest for absolute certainty is an immature, if not infantile, trait of thinking.

― Herbert Feigl

The biggest issue in the modern view of project management for VVUQ is its injection of risk into the work. We live in a world where spin and BS can easily be substituted for actual technical achievement. Doing VVUQ often results in failures by highlighting problems with the modeling and simulation. One of the greatest skills in being good at VVUQ is honesty. Today it is frequently impossible to be honest about shortcomings because honesty is perceived as vulnerability. Stating weaknesses or limitations of anything cannot be tolerated in today’s political environment, and it risks a project’s existence because it is perceived as failure. Instead of an honest assessment of the state of knowledge and the level of theoretical predictivity, today’s science prefers to make over-inflated claims and publish via press release. VVUQ, done correctly, runs counter to this practice. Done properly, VVUQ provides people using modeling and simulation for scientific or engineering work with a detailed assessment of credibility and fitness for purpose.

Scientific objectivity is not the absence of initial bias. It is attained by frank confession of it.

― Mortimer J. Adler

Just as science has a self-correcting nature in how the scientific method works, VVUQ is a means of self-correction for modeling and simulation. A proper and complete VVUQ assessment produces good knowledge of strengths and weaknesses in the modeling and of where opportunities for improvement lie. A lack of VVUQ highlights both a project’s lack of commitment to science and its unsuitability for serious work. This assessment is quite damning to current HPC efforts, which have failed to include VVUQ at all, much less emphasize it. It is basically a declaration of intent by the program to seek results built on spin and BS instead of mounting a serious scientific or engineering effort. This end state is signaled by far more than merely the lack of VVUQ; there is also a lack of serious application and modeling support, which compounds the lack of method and algorithm support that also plagues the program. The most cynical part of all of this is the centrality of application impact to the case made for these HPC programs. The pitch to the nation, and the world, is the utility of modeling and simulation for economic and physical security, yet the programs are structured to ensure this cannot happen and will not be a viable outcome.

We may not yet know the right way to go, but we should at least stop going in the wrong direction.

― Stefan Molyneux

The current efforts seem to be under the impression that giant (unusable, inefficient, monstrous,…) computers will magically produce predictive, useful and scientifically meaningful solutions. I could easily declare those running these programs to be naïve and foolish, but that isn’t the case; the lack of breadth and balance in these programs is willful. People surely know better, so the reasons for the gaps are more complex. We have a complete and utter lack of brave, wise and courageous leadership in HPC. We know better, we just don’t do it.