People almost invariably arrive at their beliefs not on the basis of proof but on the basis of what they find attractive.

― Blaise Pascal

When we hear about supercomputing, the media focus and the press releases are always about massive calculations. Bigger is always better, with as many zeros as possible and some exotic name for the rate of computation: mega, tera, peta, exa, zetta,… Up and to the right! The implicit proposition is that the bigger the calculation, the better the science. This is quite simply complete and utter bullshit. The big calculations providing the media footprint for supercomputing and winning prizes are simply stunts, or more generously technology demonstrations, not actual science. Scientific computation is a much more involved and thoughtful activity involving lots of different calculations, many at a vastly smaller scale. Rarely, if ever, do the massive calculations come as a package including the sorts of evidence science is based upon. Real science has error analysis and uncertainty estimates, and in this sense the massive calculations do a disservice to computational science by skewing the picture of what science using computers should look like.

This post aims to correct this rather improper vision, and replace it with a discussion of what computational science should be.

With a substantial amount of focus on the drive toward the first exascale supercomputer, it is high time to remind everyone that a single massive calculation is a stunt meant to sell the purchase of said computers, and not science. This week the supercomputing community is meeting in Salt Lake City for a trade show masquerading as a scientific conference. It is simply another in a phalanx of echo chambers we seem to form with increasing regularity across every sector of society. I’m sure the cheerleaders for supercomputing will be crowing about the transformative power of these computers and the boon for science they represent. There will be celebrations of enormous calculations and pronouncements about their scientific value. There is a certain lack of political correctness to the truth about all this; it is mostly pure bullshit.

The entire enterprise pushing toward exascale is primarily a technology push program. It is a furious and futile attempt to stave off the death of Moore’s law. Moore’s law has provided an enormous gain in the power of computers for 50 years and enabled much of the transformative power of computing technology. The key point is that computers and software are just tools; they are incredibly useful tools, but tools nonetheless. Tools allow a human being to extend their own biological capabilities in a myriad of ways. Computers are marvelous at replicating and automating calculations and thought operations at speeds utterly impossible for humans. Everything useful done with these tools is utterly dependent on human beings to devise it. My key critique of this approach to computing is that it hollows out the investigation into better ways of using computers while focusing myopically on enhancing the speed of computation.

Truth is only relative to those that ignore hard evidence.

― A.E. Samaan

The core of my assertion that it’s mostly bullshit comes from looking at the scientific method and its application to these enormous calculations. The scientific method is fundamentally about understanding the World (and using this understanding via engineering). The World is observed either in its natural form, or through experiments devised to unveil difficult-to-see phenomena. We then produce explanations or theories that describe what we see and allow us to predict what we haven’t seen yet. The degree of agreement between theory and observation confirms our degree of understanding. There is always a gap between our theory and our observations, and each is imperfect in its own way. Observations are intrinsically prone to a variety of errors, and theory is always imperfect. The solutions to theoretical models are also imperfect, especially when solved via computation. Understanding these imperfections and the nature of the comparisons between theory and observation is essential to a comprehension of the state of our science.

As I’ve stated before, the scientific method applied to scientific computing is embedded in the practice of verification and validation. Simply stated, a single massive calculation cannot be verified or validated (in principle it could be, but not with current computational techniques; developing such a capability is a worthy research endeavor). The uncertainties in the solution and the model cannot be unveiled in a single calculation, and the comparison with observations cannot be put into a quantitative context. The proponents of our current approach to computing want you to believe that massive calculations have intrinsic scientific value. Why? Because they are so big, they have to be the truth. The problem with this thinking is that any single calculation does not contain the steps necessary for determining the quality of the calculation, or for putting any model comparison in context.

The context of any given calculation is determined by the structure of the errors associated with the computational modeling. For example, it is important to understand the nature of any numerical errors and to produce an estimate of them. In some (many, most) cases a very good comparison between reality and a model is the result of calibrating uncertain model parameters. In many cases the choices for the modeling parameters are mesh dependent, which produces the uncomfortable outcome where a finer mesh yields a systematically worse comparison. This state of affairs is incredibly common, and generally an unadvertised feature.
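
To make this concrete, here is a minimal sketch of the standard way numerical error gets estimated: run the same problem on a sequence of meshes and apply Richardson extrapolation with an observed order of convergence. The three-mesh values below are hypothetical placeholders; the point is that the estimate requires at least three calculations, something no single heroic run can supply.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of convergence from three solutions on meshes
    refined by a constant factor r (coarse -> medium -> fine)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def richardson_error_estimate(f_medium, f_fine, p, r):
    """Estimated discretization error in the fine-mesh solution,
    assuming the leading error term dominates."""
    return (f_fine - f_medium) / (r**p - 1.0)

# Hypothetical results for a scalar quantity of interest on three meshes,
# each refined by a factor of 2.
f_coarse, f_medium, f_fine = 1.352, 1.311, 1.298
r = 2.0

p = observed_order(f_coarse, f_medium, f_fine, r)
err = richardson_error_estimate(f_medium, f_fine, p, r)
print(f"observed order ~ {p:.2f}, estimated error in fine solution ~ {err:.4f}")
```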

An important meta-feature of the computing dialog is the skewing of computer size, design and abilities. For example, the term capability computer is applied to the machines that produce the largest calculations we see, the ones in the press releases. These computers are generally the focus of all the attention and cost the most money. The dirty secret is that they are almost completely useless for science and engineering. They are technology demonstrations and little else. They do almost nothing of value for the myriad of programs that report using computation to produce results. All of the utility to actual science and engineering comes from the homely cousins of these supercomputers, the capacity computers. These computers are the workhorses of science and engineering because they are set up to do something useful. The capability computers are just show ponies, and perfect exemplars of the modern bullshit-based science economy. I’m not OK with this; I’m here to do science and engineering. Are our so-called leaders OK with the focus of attention (and the bulk of the funding) going to non-scientific, media-based, press-release generators?

How would we do a better job with science and high performance computing?

The starting point is the full embrace of the scientific method. Taken at face value, the observational or experimental community is expected to provide observational uncertainties with their data. These uncertainties should be de-convolved into errors/uncertainties in the raw measurement and any variability in the phenomena. Those of us using such measurements for validating codes should demand that observations always come with these uncertainties. By the same token, computational simulations have uncertainties from a variety of numerical errors and from modeling choices and assumptions, and these should be demanded as well. Each of these error sources needs to be characterized to put any comparison with observations/experimental data into context. Without knowledge of these uncertainties on both sides of the scientific process, any comparison is completely untethered.
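
As a sketch of what a comparison "in context" might look like, suppose the observation comes with a reported one-sigma uncertainty and the simulation comes with estimated numerical and parametric uncertainties (all numbers below are invented for illustration). A crude check is whether the discrepancy is large relative to the combined uncertainty.

```python
import math

# Hypothetical observation: value and one-standard-deviation uncertainty,
# already decomposed (measurement error plus phenomenon variability).
obs_value, obs_sigma = 0.472, 0.015

# Hypothetical simulation: value plus estimated numerical and parametric
# (modeling) uncertainties, treated here as independent one-sigma terms.
sim_value = 0.431
sim_sigma_numerical = 0.008
sim_sigma_modeling = 0.020

# Combine independent uncertainty sources in quadrature.
combined_sigma = math.sqrt(obs_sigma**2
                           + sim_sigma_numerical**2
                           + sim_sigma_modeling**2)

discrepancy = abs(sim_value - obs_value)
print(f"discrepancy = {discrepancy:.3f}, combined sigma = {combined_sigma:.3f}")
print(f"discrepancy / sigma = {discrepancy / combined_sigma:.1f}"
      "  (much larger than ~2 suggests disagreement beyond stated uncertainty)")
```

More sophisticated validation metrics exist, but even this crude version is impossible without uncertainties on both sides.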

If nothing else, the uncertainty in any aspect of this process provides a degree of confidence and a sense of the impact of comparative differences. If a comparison between a model and data is poor, but the data has large uncertainties, the comparison suddenly becomes more palatable. On the other hand, small uncertainties in the data would imply that the model itself is likely at fault. This conclusion can only be drawn once the modeling uncertainty has been explored. One reasonable outcome would be the identification of large numerical errors in the model’s solution. This is the case where a refined calculation might be genuinely justified: if the bias from a coarse grid is large enough, a finer grid calculation could be a reasonable way of getting better agreement. There are certainly cases where exascale computing enables model solutions with small enough error to make the models useful. This case is rarely made or justified for any massive calculation; it is simply asserted by authority.

On the other hand, numerical error may be a small contributor to the disagreement. In this case, which is incredibly common, a finer mesh does little to rectify model error or uncertainty. The poor comparison is dominated by modeling error, or by uncertainty about the parameterization of the models. Worse yet, the models may be poor representations of the physics of interest. If the model is a poor representation, solving it very accurately is a genuinely wasteful exercise, at least if your goal is scientific in nature. If you’re interested in colorful graphics and a marketing exercise, computer power is your friend, but don’t confuse this with science (or at least good science). The worst case of this issue is a dominant model form error. This is the case where the model is simply wrong, and incapable of reproducing the data. Today many examples exist where models we know are wrong are beaten to death with a supercomputer. This does little to advance science, which needs to work at producing a new model that ameliorates the deficiencies of the old one. Unfortunately our supercomputing programs are sapping the vitality from our modeling programs. Even worse, many people seem to mistake computing power for a remedy to model form error.

Equidistributed error, a balance of numerical and modeling error/uncertainty, is probably the best goal for modeling and simulation. This is the case where the combination of modeling error and uncertainty with the numerical solution error has the smallest value. The standard exascale-driven model instead drives the numerical error toward zero without regard for the modeling error. This amounts to making the numerical error small by fiat, proof by authority, proof by overwhelming power. Practically, this is foolhardy and technically indefensible. The issue is the inability to effectively hunt down modeling uncertainties under these conditions; the hunt is hamstrung by the massive calculations. The most common practice is to assess the modeling uncertainty via some sort of sampling approach. This requires many calculations because of the high-dimensional nature of the problem. Sampling converges very slowly: the uncertainty in any estimated mean value is proportional to a measure of the solution’s variability divided by the square root of the number of samples.
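
The inverse-square-root behavior is easy to demonstrate. The sketch below propagates a hypothetical uncertain parameter through a stand-in model (a cheap closed-form function, standing in for an expensive simulation) and watches the standard error of the estimated mean shrink like sigma/sqrt(N).

```python
import math
import random
import statistics

random.seed(1)

def model(parameter):
    """Stand-in for an expensive simulation: maps an uncertain
    input parameter to a scalar quantity of interest."""
    return math.sin(parameter) + 0.1 * parameter**2

for n in (10, 100, 1000, 10000):
    # Sample the uncertain parameter (hypothetical normal distribution).
    outputs = [model(random.gauss(1.0, 0.3)) for _ in range(n)]
    mean = statistics.fmean(outputs)
    stderr = statistics.stdev(outputs) / math.sqrt(n)  # ~ sigma / sqrt(N)
    print(f"N = {n:6d}  mean = {mean:.4f}  standard error ~ {stderr:.4f}")
```

Cutting the sampling uncertainty by a factor of ten costs a factor of one hundred in samples, which is exactly why a single calculation, however large, tells you nothing about this term.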

Thus a single calculation has an undefined variance. With a single massive calculation you have no knowledge of the uncertainty, either modeling or numerical (at least without some sort of embedded uncertainty methodology). Without assessing the uncertainty of the calculation you don’t have a scientific or engineering activity. For driving down the inherent uncertainties, especially where the modeling uncertainty dominates, you are aided by smaller calculations that can be executed over and over so as to drive down the uncertainty. These calculations are always done on capacity computers and never on capability computers. In fact, if you try to use a capability computer for one of these studies, you will be punished and kicked off. In other words, the rules of use enforced via the queuing policies are anti-scientific.
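
To see why the smaller calculations win, consider a toy accounting exercise under a fixed compute budget (every rate and cost below is invented for illustration): discretization error shrinks like h^p, sampling error shrinks like 1/sqrt(N), and refining a 3D time-dependent mesh multiplies the cost of a run roughly like (1/h)^4.

```python
import math

# Toy error model (all numbers hypothetical): total error combines a
# discretization term and a modeling/sampling term.
def total_error(h, n_samples, c_disc=0.5, p=2.0, sigma_model=0.2):
    discretization = c_disc * h**p                 # ~ C * h^p
    sampling = sigma_model / math.sqrt(n_samples)  # ~ sigma / sqrt(N)
    return math.hypot(discretization, sampling)

def cost(h, n_samples):
    """Rough cost model: a 3D mesh with time stepping scales like h^-4."""
    return n_samples * (1.0 / h)**4

budget = 1.0e8  # arbitrary cost units

# Option A: one heroic calculation on the finest affordable mesh.
h_hero = (1.0 / budget)**0.25
print("hero run:            error ~", round(total_error(h_hero, 1), 4))

# Option B: an ensemble of coarser runs that also samples the model.
for h in (0.02, 0.05, 0.1):
    n = int(budget / cost(h, 1))
    print(f"h = {h:5.2f}, N = {n:6d}: error ~", round(total_error(h, n), 4))
```

Under this toy accounting the ensemble of modest runs beats the single heroic run handily, because the heroic run leaves the modeling/sampling term completely unaddressed. The numbers are invented, but the structure of the argument is the point.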

The uncertainty structure can be approached at a high level, but truly getting to the bottom of the issue requires some technical depth. For example, numerical error has many potential sources: discretization error (space, time, energy, … whatever we approximate in), linear algebra error, nonlinear solver error, round-off error, and issues of solution regularity and smoothness. Many classes of problems are not well posed and admit multiple physically valid solutions. In this case the whole concept of convergence under mesh refinement needs overhauling. Recently the concept of measure-valued (statistical) solutions has entered the fray. These are taxing on computer resources in the same manner as sampling approaches to uncertainty. Each of these sources requires a specific and focused approach to its estimation, along with the requisite fidelity.
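
One way to keep these sources honest is an explicit error budget per quantity of interest. The entries below are hypothetical; each would come from its own focused study (mesh refinement, solver-tolerance sweeps, precision experiments, and so on).

```python
import math

# Hypothetical per-source numerical error estimates for one quantity of interest.
error_budget = {
    "discretization (space/time)": 4.0e-3,
    "linear solver tolerance":     5.0e-4,
    "nonlinear solver tolerance":  8.0e-4,
    "round-off":                   1.0e-6,
}

# A conservative bound just sums magnitudes; a quadrature combination
# assumes the sources are roughly independent.
conservative = sum(error_budget.values())
quadrature = math.sqrt(sum(e**2 for e in error_budget.values()))

for source, estimate in error_budget.items():
    print(f"{source:30s} {estimate:10.2e}")
print(f"{'sum (conservative bound)':30s} {conservative:10.2e}")
print(f"{'quadrature (independence)':30s} {quadrature:10.2e}")
```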

Modeling uncertainty is similarly complex and elaborate. The hardest aspect to evaluate is the form of the physical model. In cases where multiple reasonable models exist, the issue is evaluating the model’s (or sub-model’s) influence on solutions. Models often have adjustable parameters that are unknown or subject to calibration. Most commonly the impact of these parameters and their values is investigated by sampling solutions, an expensive prospect. Similarly, there are modeling issues that are purely random, or statistical in nature; the solution to the problem is simply not determinate. Again, sampling solutions over the range of parameters that define such randomness is a common approach. All this sampling is very expensive and very difficult to compute accurately. All of our focus on exascale does little to enable good outcomes here.
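
A minimal sketch of what such a study can look like: two hypothetical, equally plausible closure models are run over the same sampled range of an uncertain calibration coefficient, so the model-form difference can be weighed against the parametric spread. Both closures and the parameter range are invented for illustration.

```python
import random
import statistics

random.seed(2)

# Two hypothetical, equally plausible closure models for the same quantity.
def closure_a(x, c):
    return c * x**2

def closure_b(x, c):
    return c * x * abs(x) ** 0.8   # alternative functional form

x = 1.5            # hypothetical operating condition
n_samples = 2000

results = {}
for name, closure in (("model A", closure_a), ("model B", closure_b)):
    # Uncertain calibration coefficient sampled from an assumed range.
    samples = [closure(x, random.uniform(0.8, 1.2)) for _ in range(n_samples)]
    results[name] = (statistics.fmean(samples), statistics.stdev(samples))
    print(f"{name}: mean = {results[name][0]:.3f}, "
          f"parametric sigma = {results[name][1]:.3f}")

form_difference = abs(results["model A"][0] - results["model B"][0])
print(f"model-form difference = {form_difference:.3f} "
      "(compare against the parametric sigmas above)")
```

If the model-form difference dwarfs the parametric sigma, no amount of calibration, or mesh, will fix the comparison.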

The last area of error is experimental or observational error and uncertainty. This is important for defining the relative quality of the modeling, and the sense and sensibility of using massive computing resources to solve models. There are two standard components in the structure of experimental error: the error in measuring a quantity, and the variation in the quantity actually being measured. In the first case there is some intrinsic uncertainty in the ability to measure something with complete precision. The second is the variation of the actual value in the experiment; turbulence is the archetype of this sort of phenomenon. This uncertainty is intrinsically statistical, and the decomposition is essential to truly understand the nature of the world and to put modeling in a proper and useful context.
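
A minimal sketch of that decomposition, assuming the instrument noise level is known from independent characterization (all numbers are hypothetical): the observed scatter is split into a measurement-noise part and the remaining intrinsic variability of the phenomenon.

```python
import math
import random
import statistics

random.seed(3)

# Hypothetical experiment: the true quantity fluctuates shot to shot
# (intrinsic variability), and each measurement adds instrument noise.
true_sigma_phenomenon = 0.30
sigma_instrument = 0.10       # known from instrument characterization

measurements = [random.gauss(5.0, true_sigma_phenomenon)
                + random.gauss(0.0, sigma_instrument)
                for _ in range(500)]

total_variance = statistics.variance(measurements)
# Subtract the known instrument variance to estimate intrinsic variability.
intrinsic_variance = max(total_variance - sigma_instrument**2, 0.0)

print(f"total scatter (sigma)      ~ {math.sqrt(total_variance):.3f}")
print(f"instrument noise (sigma)   =  {sigma_instrument:.3f}")
print(f"intrinsic variability est. ~ {math.sqrt(intrinsic_variance):.3f}")
```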

The bottom line is that science and engineering run on evidence. To do things correctly you need to operate on an evidentiary basis. More often than not, high performance computing avoids this key scientific approach. Instead the basic decision-making operates via assumption. The assumption is that a bigger, more expensive calculation is always better and always serves the scientific interest. This view is as common as it is naïve. There are many, perhaps most, cases where the greatest service to science is many smaller calculations. This hinges upon the overall structure of uncertainty in the simulations: whether it is dominated by approximation error, by modeling form or lack of knowledge, or even by the quality of the available observations. These matters are subtle and complex, and we all know that today neither subtle nor complex sells.

What can be asserted without evidence can also be dismissed without evidence.

― Christopher Hitchens

 
