Science is not about making predictions or performing experiments. Science is about explaining.

― Bill Gaede

In the modern dogmatic view of high performance computing, the dominant narrative of utility revolves around being predictive. This narrative is both appropriate and important, but it often fails to recognize the essential prerequisite for predictive science: the need to understand and explain. In scientific computing, the ability to predict with confidence is always preceded by the use of simulations to aid understanding and assist in explanation. A powerful use of models is the explanation of the mechanisms leading to what is observed. In some cases simulations allow exquisite testing of models of reality, and when a model matches reality we infer that we understand the mechanisms at work in the world. In other cases we have observations of reality that cannot be explained; with simulations we can test our models or experiment with mechanisms that might explain what we see. In both cases the confidence of the traditional science and engineering community is gained through the process of simulation-based understanding.

Leadership and learning are indispensable to each other.

― John F. Kennedy

Too often in today’s world we see a desire to leap over this step and move directly to prediction. This is a foolish thing to attempt, yet it is exactly where we are leaping. The role of understanding is vital in building the foundation upon which prediction is laid, with technical and cultural implications that should never be overlooked. Building understanding is deep and effective in providing a healthy culture of modeling and simulation excellence; most essentially, it satisfies curiosity and builds deep bonds within the domain science and engineering community. The experimental and test community is absolutely vital to a healthy approach, and needs a collaborative spirit to thrive. When prediction becomes the mantra without first building understanding, simulations are often put into an adversarial position. For example, we see simulation touted as a replacement for experiment and observation; instead of a collaborator, simulation becomes an outright threat. This can lead to utterly counter-productive competition where collaboration would serve everyone far better in almost every case.

Understanding as the object of modeling and simulation also works keenly to provide the culture of technical depth necessary for prediction. Simulation leaping into the predictive fray without the understanding stage is arrogant and naïve, and ultimately highly counter-productive. Rather than building on the deep trust that the explanatory process provides, any failure on the part of simulation becomes proof of the negative. In the artificially competitive environment we too often produce, the result is destructive rather than constructive. Prediction without first establishing understanding is an act of hubris, and it plants the seeds of distrust. In essence, sidestepping the understanding phase of simulation makes failures absolutely fatal to success instead of stepping-stones to excellence, because the understanding phase is far more forgiving. Understanding is learning, and it can be engaged in with a playful abandon that yields real progress and breakthroughs. It works through a joint investigation of things no one knows, and any missteps are easily and quickly forgiven. This allows competence and knowledge to be built through the acceptance of failure. Without allowing these failures, success in the long run cannot happen.

Raise your quality standards as high as you can live with, avoid wasting your time on routine problems, and always try to work as closely as possible at the boundary of your abilities. Do this, because it is the only way of discovering how that boundary should be moved forward.

― Edsger W. Dijkstra

The essence of the discussion revolves around the sort of incubator that can be created by a collaborative, learning environment focused on understanding. When the focus is understanding, the dynamic is forgiving and open. No one knows the answer, and people are eager to accept failure as long as it is an honest attempt. More importantly, when success comes it has the flavor of discovery and serendipity; the discovery takes on the role of an epic win by heroic forces. After the collaboration has worked to provide new understanding and guided a true advance of knowledge, simulation sits in a place where it can be a trusted partner in the scientific or engineering enterprise. Too often in today’s world we disallow this organic mode of capability development in favor of an artificial, project-based approach.

Our current stockpile stewardship program is a perfect example of how we have systematically screwed all this up. Over time we have created a project management structure with lots of planning, lots of milestones, and lots of fear of failure, and thereby managed to completely undermine the natural flow of collaborative science. The accounting structure and funding have grown into a noose that is destroying the ability to build sustainable success. We divide the simulation work from the experimental or application work in ways that completely undermine any collaborative opportunity. Collaborations become forced teaming with a negative context instead of natural and spontaneous; in fact, anything spontaneous or serendipitous is completely antithetical to the entire management approach. Worse yet, the newer programs have all the issues hurting the success of stockpile stewardship and have added a lot of additional program management formality. The biggest inhibitions to success are the artificial barriers to multi-disciplinary simulation-experimental collaborations and the pervasive fear of failure permeating the entire management construct. By leaping over the understanding and learning phase of modeling and simulation, we are short-circuiting the very mechanisms behind the most glorious successes. We are addicted to managing programs never to fail, which ironically sows the seeds of abject failure.

The problem with the current project milieu is the predetermination of what success looks like. This is then encoded into project plans and enforced via our prevalent compliance culture. In the process we almost completely destroy the potential for serendipitous discovery. Good discovery science is driven by rough, loosely defined goals and an acceptance of outcomes that are unknown beforehand but generally provide immense value at the end of the project. Today we have instituted project management that attempts to guide our science toward scheduled breakthroughs and avoid any chance of failure. The bottom line is that breakthroughs are grounded in numerous failures and course corrections that power enhanced understanding and a truly learning environment. Our current risk aversion and fear of failure are paving the road to a less prosperous and knowledgeable future.

A specific area where this dynamic is playing out with maximal dysfunction is climate science. Climate modeling codes are not predictive and tend to be highly calibrated to the mesh used. The overall modeling paradigm involves a vast number of submodels to include the plethora of physical processes important within the Earth’s climate. In a very real sense, the numerical solution of the equations describing the climate will forever be under-resolved, with significant numerical error. The Earth’s climate also involves very intricate and detailed balances between physical processes. The numerical error is generally quite a bit larger than the balance effects determining the climate, so the overall model must be calibrated to be useful.
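A back-of-the-envelope sketch can make the mesh-dependence concrete. This toy example (mine, not from any climate code, with hypothetical numbers) uses the textbook fact that first-order upwind advection carries an effective numerical diffusion of roughly u·dx/2, which on a coarse mesh can dwarf the physical diffusivity the model is supposed to represent:

```python
# Toy sketch: leading-order truncation error of first-order upwind advection
# acts like an artificial diffusion kappa_num ~ u*dx/2. When kappa_num exceeds
# the physical diffusivity, calibration on that mesh absorbs discretization
# error into the "physical" parameters -- hence mesh-dependent calibration.
u = 1.0            # advection speed (hypothetical units)
kappa_phys = 1e-3  # assumed physical diffusivity

for n_cells in (50, 500, 5000):
    dx = 1.0 / n_cells
    kappa_num = u * dx / 2.0  # leading-order numerical diffusion of upwind
    ratio = kappa_num / kappa_phys
    print(f"n={n_cells:5d}  dx={dx:.4f}  numerical/physical diffusion = {ratio:.1f}")
```

On the coarsest mesh the artificial diffusion is an order of magnitude larger than the physical one, so any parameter tuned against observations on that mesh is compensating for the discretization as much as representing physics.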

You couldn’t predict what was going to happen for one simple reason: people.

― Sara Sheridan

In the modern modeling and simulation world, this calibration then becomes the basis of very large uncertainties. The combination of numerical error and modeling error means that the true simulation uncertainty is relatively massive, even though calibration assures that the simulation stays quite close to the behavior of the true climate. The models can then be used to study the impact of various factors on climate and to deepen the understanding of climate science. This entire enterprise is highly model-driven, and the level of uncertainty is quite large. When we transition to predictive climate science, the issues become profound. We live in a world where people believe that computing should provide quantitative assistance for vexing problems. The magnitude of uncertainty from all sources should give people significant pause and push back against putting simulations in the wrong role. It should not, however, prevent simulation from serving as a key tool for understanding this incredibly complex problem.
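The gap between calibrated agreement and predictive confidence can be illustrated with a deliberately simple sketch (hypothetical data and models, chosen only for illustration): two structurally different models tuned to the same observed record agree closely in-sample yet diverge badly when extrapolated, showing that calibration hides rather than removes model-form uncertainty.

```python
import numpy as np

# Synthetic "observations": a smooth trend plus a little noise.
rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 1.0, 20)
obs = np.exp(0.5 * t_obs) + 0.005 * rng.standard_normal(t_obs.size)

# Two competing model forms, each calibrated to the same record.
lin = np.polyfit(t_obs, obs, 1)   # "model A": linear trend
quad = np.polyfit(t_obs, obs, 2)  # "model B": quadratic trend

# Both reproduce the observation window almost identically...
in_sample_gap = np.max(np.abs(np.polyval(lin, t_obs) - np.polyval(quad, t_obs)))
# ...but their extrapolations far outside it disagree by much more.
out_gap = abs(np.polyval(lin, 5.0) - np.polyval(quad, 5.0))
print(f"max in-sample disagreement:  {in_sample_gap:.3f}")
print(f"disagreement at t = 5:       {out_gap:.3f}")
```

The in-sample fit says nothing about which model form extrapolates correctly; that is precisely the predicament of using calibrated models for prediction rather than understanding.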

The premier program for high performance computing simply takes all of these issues and amplifies them to an almost ridiculous degree. The entire narrative around the need for exascale computing is predicated on the computers providing predictive calculations. This is counter to the true role of computation as a modeling, learning, explanation, and understanding partner with scientific and engineering domain expertise. While that narrative is wrong at an intrinsic level, the secondary element in the program’s spiel is the supposed simplicity of moving existing codes to new, faster computers for better science. Nothing could be further from the truth on either account. Most codes are woefully inadequate for predictive science first and foremost because of their models. All the things that the exascale program ignores are the very things that are necessary for predictivity. At the end of the day this program is likely to produce only more accurately solved wrong models and do little for predictive science. To exacerbate these issues, the exascale program generally does not support the understanding role of simulation in science.

The long-term impact of this lack of support for understanding is profound. It will significantly impair simulation’s ability to elevate itself to a predictive role in science and engineering. The use of computation to help with understanding difficult problems paves the way for a mature predictive future. Removing the understanding is akin to putting someone into an adult role in life without a complete childhood; it is a recipe for disaster. The understanding portion of computational collaboration with engineering and science is the incubator for prediction. It allows modeling and simulation to be quite unsuccessful at prediction and still succeed, because success can arise through learning things scientifically by trial and error. These trials, errors, and responses over time provide a foundation for predictive computation. In a very real way this spirit should always be present in computation; when it is absent, computational efficacy becomes stagnant.

In summary, we have yet another case of the marketing of science overwhelming the true narrative. In the search for funding to support computing, the sales pitch has been arranged around prediction as a product. Increasingly, we are told that a faster computer is all we really need. The implied message in this sales pitch is that there is no necessity to support and pursue the other aspects of modeling and simulation required for predictive success. These issues are plaguing our scientific computing programs, and the long-term success of high performance computing is going to be sacrificed to this funding-motivated approach. To it we can add the failure to recognize understanding, explaining, and learning as key products of computation for science and engineering.

Any fool can know. The point is to understand.

― Albert Einstein
