It is quality rather than quantity that matters.

― Seneca
The utility of calculations and scientific computing is taken to be axiomatic, yet we cannot easily articulate why a calculation is useful. By the same token, we cannot say why a calculation isn't useful either. This axiomatic belief underlies the nation's investment in high performance computing (HPC), yet the lack of clarity about utility clouds any discussion. Worse yet, this clouding of the reasons for utility produces counter-productive investment decisions and suboptimal programs. Perhaps it is high time to wrestle with this issue and see our way to some greater clarity.

Useful calculations shed light and understanding on issues in the real world. The utility of calculation for scientific investigation lies in its ability to test hypotheses or (help to) explain observations. A successful model of reality implies a level of understanding that can be comforting, while an unsuccessful or heavily calibrated model drives new work. With sufficient confidence, a calculation allows one to play "what if" scenarios and study the impact of changes to a physical system. This guides physical studies, measurements and observations, which can yield unequivocal evidence. Computations usually do not provide such evidence themselves, but they show the way to finding it. The same thing happens in engineering, where calculations are often used in a comparative sense to understand how to optimize designs or fix problems with existing designs. In other cases, the calculation can help explain why things don't work, or broke, or behaved in an unexpected manner. For calculations to take a valued role in science and engineering, demonstrating the ability to provide these varied aspects of real-world function is essential. Once calculations step into the role of advisor, sage, and detective, confidence, trust and credibility follow. This dynamic is never present in any discourse on HPC, and current HPC programs almost callously disregard this legacy. This is dangerous and threatens progress.

When one looks at HPC, the press is drawn to the biggest, most expensive, most time-consuming calculations, and the science community allows itself to bullshit people about their utility. Plainly put, the giant calculations inhabiting press releases are demos at best, stunts at worst, and very rarely science of any note. These hero calculations are not useful for science or engineering. As one of my most senior colleagues has quipped, a single calculation will never be the right answer for a hard problem. These stunts and demos are single, one-off calculations with no established pedigree and dubious credibility. The problem is that the broader dynamic in HPC is poisoned by devotion to the myth of the utility of hero calculations. At best, these calculations are harbingers of the calculations that might be useful five to ten years from now, and little else. They push and pace progress in HPC, but their use for engineering and domain science is minimal.

What we have is another serious case of bullshit hurting our ability to deal with reality. In HPC, the effort and funding chase the biggest calculations, while the important work done with smaller calculations simply fails to capture the attention and focus of the community. Increasingly the funding follows the bullshit narrative instead of the actual utility narrative, which undermines the real impact of modeling & simulation. The danger in the distance between focus and utility is the lost opportunity to make HPC really matter and produce unambiguous impact. The community has allowed this fiction to persist and color funding decisions for decades. The efficacy of HPC for science and engineering suffers as a result. The issue runs deep, yet clarity is actually easy to grasp.

One of the clearest issues with HPC utility is the prevalence of faith in individuals as the definitive source of credibility. Even today, the analysts conducting a calculation matter more to real-world uses of modeling & simulation than any technical work underpinning utility. The importance of the analyst also overwhelms the importance of the code itself, yet we persist with a false narrative about the importance of codes. One of the clearest results of this gap is the continuing lack of impact for verification and validation. In fact, I see regression instead of progress in the impact of technical work on credibility, and a greater focus on the personal aspect of credibility. In other words, it is more important who does a calculation than how the work is done, although the two are linked. This was as true 25 years ago with ASCI as it is today. Progress has not happened in large part because we let it stall, failing to address the core issues while focusing on press releases and funding profiles. We see the truth squashed because it doesn't match the rhetoric. Now we see a lack of funding and emphasis on calculation credibility in the Nation's premier program for HPC. We continue to trumpet the fiction that the bigger the calculation and the computer, the more valuable a calculation is a priori.

Even today, with vast amounts of computer power, the job of modeling reality is subtle and nuanced. The modeler who conspires to represent reality on the computer still makes the lion's share of the decisions necessary for high-fidelity representations of reality. Everything associated with HPC accounts for a relatively small share of the overall burden of analysis credibility. The analyst decides how to model problems in detail, including the selection of sub-models, meshes, boundary conditions, and the details included and neglected. Computer power and mesh resolution usually end up being an afterthought and a minor detail. The true overall modeling uncertainty is dominated by everything in the analyst's power. In other words, the pacing uncertainty in modeling & simulation is not HPC; it is all the decisions made by the analysts. Even with the focus on "mesh resolution," the uncertainty associated with the discrete integration of the governing equations is rarely measured or estimated, even though a small refinement study, sketched below, can provide an estimate. We are focusing on a small part of the overall modeling & simulation capability to the exclusion of the big stuff that drives utility.
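To make that point concrete, the numerical uncertainty that is "rarely measured or estimated" can be approximated from a modest mesh refinement study. The sketch below is a minimal illustration using Richardson extrapolation with a GCI-style factor of safety; the solution values, refinement ratio, and safety factor are hypothetical placeholders, not results from any particular code.

```python
# Minimal sketch: estimating discretization (mesh) uncertainty from a
# three-mesh refinement study via Richardson extrapolation and a GCI-style
# factor of safety. All input numbers below are hypothetical.
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of convergence for a constant refinement ratio r."""
    return math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)

def discretization_uncertainty(f_coarse, f_medium, f_fine, r, safety=1.25):
    """Extrapolated solution and an uncertainty band on the fine-mesh result."""
    p = observed_order(f_coarse, f_medium, f_fine, r)
    f_extrap = f_fine + (f_fine - f_medium) / (r**p - 1.0)  # Richardson extrapolation
    band = safety * abs(f_fine - f_medium) / (r**p - 1.0)   # GCI-style uncertainty
    return p, f_extrap, band

# Hypothetical values of one output quantity on coarse, medium, and fine meshes,
# each refined by a factor of 2.
p, f_extrap, band = discretization_uncertainty(0.971, 0.984, 0.990, r=2.0)
print(f"observed order ~ {p:.2f}")
print(f"extrapolated value ~ {f_extrap:.4f} +/- {band:.4f} (numerical uncertainty)")
```

Even a crude estimate of this kind turns "the mesh is fine enough" from a declaration into a number that can be checked.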

As usual, the issue is related to the relative sex appeal of the details in modeling & simulation. The analyst-controlled details are dull and unexciting while HPC is sexy and exciting. The HPC things are easily marketed and receive funding, while the analyst details are boring but essential. The result is a focus on the sexy HPC stuff while the important work done by analysts gets by on meager, haphazard and disparate support. More deeply, analyst support is defined purely through application work and is generally divorced from the HPC work, so the divide just grows and grows. Moreover, the HPC side can dutifully ignore the analyst work that matters because the political weight says these important details matter little. All the glue between the computer-focused HPC work and the applications is poorly funded or not funded at all.

One of the core issues in this entire dynamic associated with the utility of computational modeling and simulation is predictivity. Predictive simulations are a sort of "Holy Grail" for computational science, but predictive calculations are not necessarily useful. Useful computations can come from sources that are far from predictive; utility is driven far more by the flexibility of the computational capability combined with the ability of analysts to wield the computational power. The utility, flexibility and understanding cannot come from the lumbering computational behemoths driving funding. If a calculation is predictive, so much the better for its utility. The key point about predictivity is that it demands a great deal of evidence and systematic investigation, which is the whole practice of verification and validation (V&V).

Where utility ends and decoration begins is perfection.

― Jack Gardner

One of the single greatest issues is a general failure to measure prediction and modeling & simulation uncertainties in a holistic manner. Generally, uncertainty estimation is limited to parametric epistemic uncertainty, which is an important but small part of the overall uncertainty budget. Numerical uncertainty is usually not estimated at all; instead a declaration is made about the lack of mesh dependence, or the sheer size of the calculation is taken to render numerical errors small by fiat. In many cases systems have intrinsic variability that provides an important source of uncertainty (turbulence canonically comes to mind). This too is rarely estimated. Finally we come to the uncertainty directly associated with the analyst's decisions. When this issue has been studied, the uncertainty associated with analyst modeling decisions or assumptions tends to be huge compared to the other sources, as the toy budget sketched below illustrates. The easier and more common thing to do is declare the calculation predictive by definition, which avoids any real quantification of the uncertainty.
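To see why reporting only the parametric piece misleads, consider a toy uncertainty budget that lists the sources above side by side. This is a minimal sketch; the magnitudes and the simple root-sum-square combination are illustrative assumptions rather than a recommended methodology, but even a crude tabulation like this makes it obvious when analyst decisions dominate the total.

```python
# Minimal, illustrative uncertainty budget keeping all sources side by side
# instead of reporting only the parametric piece. The magnitudes and the
# root-sum-square combination are assumptions made purely for illustration.
import math

budget = {
    "parametric (epistemic)":     0.8,  # e.g., from sampling uncertain inputs
    "numerical (discretization)": 1.5,  # e.g., from a mesh refinement study
    "intrinsic variability":      0.6,  # e.g., turbulence or other aleatory scatter
    "analyst decisions":          2.5,  # e.g., spread across alternative modeling choices
}

total = math.sqrt(sum(u * u for u in budget.values()))
for source, u in budget.items():
    share = 100.0 * (u * u) / (total * total)
    print(f"{source:30s} {u:5.2f}   ({share:4.1f}% of combined variance)")
print(f"{'combined (root-sum-square)':30s} {total:5.2f}")
```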

The current HPC belief system holds that massive computations are predictive and credible solely by virtue of overwhelming computational power. In essence it uses proof by massive computation as the foundation of belief. The problem is that science and engineering do not work this way at all. Belief comes from evidence, and the evidence that matters is measurements and observations of the real world (i.e., validation). Models of reality can be steered and coaxed into agreement via calibration in ways that are anathema to prediction. Part of assuring that this isn't happening is verification. We ultimately want to make sure that the calculations are getting the right answers for the right reasons, and deviations from correctness should be understood at a deep level. Part of putting everything in proper context is uncertainty quantification (UQ). UQ is part of V&V. Unfortunately, UQ has replaced V&V in much of the computational science community, and the UQ that is estimated is genuinely incomplete. Now, in HPC, most of UQ has been replaced by misguided overconfidence.

This issue is another view of the dynamic where we have allowed alternative facts to displace reality. We are paving the road to a reality where bullshit and facts cannot be separated. It is everyone's fault for allowing this to happen. Too many of us simply comply with the need for declarative success when admission of failure would suit progress and truth far better. Too often the emphasis is placed on marketing and spin rather than the truth. In the process we have systematically undermined core principles of quality in every corner of life. Perception has been allowed to become more important than truth and reality. Into this vacuum, propaganda quickly becomes the medium of discourse. We may be too far gone to fix this, and reality will bite back in a vicious manner to restore balance. This restoration will probably be very painful to experience.

At the core of the problem with bullshit as a technical medium is a general lack of trust and an inability to accept outright failure as an outcome. This combination forms the basis for bullshit and alternative facts becoming accepted within society writ large. When people are sure they will be punished for the truth, you get lies, and finely packaged lies are bullshit. If you want the truth you need to accept it, and today the truth can get you skewered. The same principle holds for the acceptance of failure. Failures are viewed as scandals and not accepted. The flipside of this coin is that failures are the fuel for progress. We need to fail to learn; if we are not failing, we are not learning. By hiding or bullshitting our way through in order to avoid being labeled failures, we avoid learning and corrode our foundational principles. We are locked in a tight downward spiral and all our institutions are under siege. Our political, scientific and intellectual elites are not respected because truth is not valued. False success and feeling good are accepted as an alternative to reality. In this environment bullshit reigns supreme and being useful isn't enough to be important.

Raise your quality standards as high as you can live with, avoid wasting your time on routine problems, and always try to work as closely as possible at the boundary of your abilities. Do this, because it is the only way of discovering how that boundary should be moved forward.

― Edsger W. Dijkstra

 
