The best dividends on the labor invested have invariably come from seeking more knowledge rather than more power.

— Wilbur Wright

Here is a hint; it’s not how we are approaching it today. The approach today is ultimately doomed to fail and potentially take a generation of progress with it. We need to emphasize the true differentiating factors and embrace the actual sources of progress. Computer hardware is certainly a part of the success, but by no means the dominant factor in true progress. As a result we are starving key aspects of scientific computing of the intellectual lifeblood needed to advance the state of the art. Even if we “win” following our current trajectory, the end result will be a loss because of the opportunity cost incurred along the way. Supercomputing is a holistic activity embedded in a broader scientific enterprise. As such it needs to fully embrace the scientific method and structure its approach more effectively.

The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.

― George Bernard Shaw

The news of the Chinese success in solidifying their lead in supercomputer performance “shocked” the high-performance computing world a couple of weeks ago. To make things even more troubling to the United States, the Chinese achievement was accomplished with homegrown hardware (a real testament to the USA’s export control laws!). It comes as a blow to American efforts to retake the lead in computing power. The loss wouldn’t matter if the USA, or anyone else for that matter, were doing things differently. Of course the subtext of the entire discussion around supercomputer speed is the supposition that raw computer power measures the broader capability in computing, which defines an important body of expertise for national economic and military security. A large part of winning in supercomputing hinges on the degree to which this supposition is patently false. As falsehoods go, it is not ironclad; it is a matter of debate over lots of subtle details that I elaborated upon last week. The truth depends on how idiotic the discussion needs to be and one’s tolerance for subtle technical arguments. In today’s world arguments can only be simple, verging on moronic, and technical discussions are suspect as a matter of course.

Instead of concentrating just on finding good answers to questions, it’s more important to learn how to find good questions!

― Donald E. Knuth

If you read that post you might guess the answer to how we might win the quest for supercomputing supremacy. In a sense we need to do a number of things better than today. First, we need to stop measuring computer power with meaningless and misleading benchmarks. These do nothing but damage the entire field by markedly skewing the articulation of both the successes and the challenges of building useful computers. Second, we need to invest our resources in the most effective areas for success: modeling, methods and algorithms, all of which are far greater sources of innovation and true performance for the accomplishment of modeling & simulation. The last thing is to change the focus of supercomputing to modeling & simulation, because that is where the societal value of computing is delivered. If these three things were executed upon effectively, victory would be assured to whoever made the choices. The option of taking more effective action is there for the taking.

Discovery consists of looking at the same thing as everyone else and thinking something different.

― Albert Szent-Györgyi

The first place to look for effort that might dramatically tilt the fortunes of supercomputing is modeling. Our models of the world are all wrong to some degree; they are all based on various limiting assumptions, and all may be improved. None of these limitations can be ameliorated by supercomputing power, accuracy of discretization, or algorithmic efficiency. Modeling limitations are utterly impervious to anything but modeling improvement. The subtext to the entire discussion of supercomputing power is the supposition that our models today are completely adequate and only in need of faster computers to fully explain reality. This is an utterly specious point of view that basically offends the foundational principles of science itself. Modeling is the key to understanding and is irreplaceable in its power and scope to transform our capability.

And a step backward, after making a wrong turn, is a step in the right direction.

― Kurt Vonnegut

We might take a single example to illustrate the issues associated with modeling: gradient diffusion closures for turbulence. The diffusive closure of the fluid equations for the effects of turbulence is ubiquitous, useful and a dead end without evolution. It is truly a marvel of science going back to the work of Prandtl’s mixing length theory. Virtually all the modeling of fluids done with supercomputing relies on its fundamental assumptions and intrinsic limitations. The only place its reach does not extend is direct numerical simulation (DNS), where the flows are computed without the aid of modeling, i.e., a priori (which for the purposes here I will take as a given, although it actually needs a lot of conversation itself). All of this said, the ability of direct numerical simulation to answer our scientific and technical questions is limited because turbulence is such a vigorous and difficult multiscale problem that even an exascale computer cannot slay it.

So let’s return to what we need to do to advance the serious business of turbulence modeling. In a broad sense, one of the biggest limitations of diffusion as a subgrid closure is its inability to describe behavior that is not diffusive. While turbulence is a decidedly dissipative phenomenon, it is not always and only dissipative locally. The diffusive subgrid closure makes this assumption and hence carries deep limitations. In key areas of a flow field the proper subgrid model is actually non-dissipative or even anti-dissipative. The problem is that diffusion is a very stable and simple way to model phenomena, which in many ways exaggerates its success. We need to develop non-diffusive models that extend the capacity to model flows not fully or well described by diffusive closure approaches.
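To make the limitation concrete, here is a minimal sketch of a gradient-diffusion closure of the mixing-length/Smagorinsky type in one dimension. The constant, function names and setup are illustrative assumptions, not any particular production model; the point is simply that the eddy viscosity is non-negative by construction, so the closure can only drain energy locally and can never represent backscatter.

```python
# A minimal sketch of a gradient-diffusion (mixing-length / Smagorinsky-style)
# subgrid closure in one dimension. The closure adds a flux proportional to
# the resolved gradient with a non-negative eddy viscosity, so it is purely
# dissipative everywhere. Names and constants are illustrative only.

import numpy as np

def eddy_viscosity(u, dx, c_s=0.17):
    """Smagorinsky-style eddy viscosity: nu_t = (c_s * dx)^2 * |du/dx|.

    Note nu_t >= 0 by construction; a backscatter-capable closure would need
    a locally negative (anti-dissipative) contribution, which this form
    cannot supply.
    """
    dudx = np.gradient(u, dx)
    return (c_s * dx) ** 2 * np.abs(dudx)

def subgrid_flux_divergence(u, dx):
    """d/dx ( nu_t du/dx ): the extra 'diffusion' the closure adds."""
    nu_t = eddy_viscosity(u, dx)
    dudx = np.gradient(u, dx)
    return np.gradient(nu_t * dudx, dx)

# Example: the closure always removes energy from the resolved field.
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x) + 0.3 * np.sin(8.0 * x)
dudx = np.gradient(u, dx)
dissipation = np.sum(eddy_viscosity(u, dx) * dudx ** 2) * dx
print(f"subgrid dissipation rate: {dissipation:.3e}  (always >= 0)")
```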

Once a model is conceived in theory, we need to solve it. If the improved model cannot yield solutions, its utility is limited. Methods for computing solutions to models beyond the capability of analytical tools were the transformative aspect of modeling & simulation. Before this, many models were only solvable in very limited cases through applying a number of even more limiting assumptions and simplifications. Beyond just solving the model, we need to solve it correctly, accurately and efficiently. This is where methods come in. Some models are nigh on impossible to solve, or entail connections and terms that evade tractability. Thus coming up with a method to solve the model is a necessary element in the success of computing. In the early years of scientific computing many methods came into use that tamed models into ease of use. Today’s work on methods has slowed to a crawl, and in a sense our methods development research is a victim of its own success.

Arthur C. Clarke’s third law: Any sufficiently advanced technology is indistinguishable from magic.

An example of this success is the nonlinear stabilization methods I’ve written about recently. These methods are the lifeblood of the success computational fluid dynamics (CFD) codes have had. Without their invention the current turnkey utility of CFD codes would be unthinkable. Before their development CFD codes were far more art and far less science than today. Unfortunately, we have lost much of the appreciation for the power and scope of these methods. We have little understanding of what came before them and the full breadth of their magical powers. Before these methods came to the fore, one faced the daunting task of choosing between an overly diffusive but stable method (i.e., donor-cell upwind differencing) and a more accurate, but unphysically oscillatory method. These methods allowed one to have both, adaptively using whatever was necessary under the locally determined circumstances, but they can do much more. While their power to allow efficient solutions was absolutely immense, these methods actually opened doors to physically reasonable solutions to a host of problems. One could have both accuracy and physical admissibility in the same calculation.
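For the flavor of how this class of nonlinear stabilization works, here is a minimal sketch of a minmod-limited scheme for linear advection. It is an illustrative example of the general idea, not the specific methods referenced above, and the names and parameters are assumptions: the limiter adaptively blends between diffusive donor-cell upwinding and a sharper reconstruction, so the solution stays non-oscillatory without being smeared away.

```python
# A minimal sketch of nonlinear stabilization: a minmod-limited flux for
# linear advection u_t + a u_x = 0 (a > 0) on a periodic grid. Setting the
# limited slope to zero recovers plain donor-cell (first-order upwind).

import numpy as np

def minmod(a, b):
    """Limiter: pick the smaller-magnitude slope, or zero at an extremum."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_step(u, a, dx, dt):
    """One forward-Euler step of limited, upwind-biased advection (periodic)."""
    du_minus = u - np.roll(u, 1)        # u_i - u_{i-1}
    du_plus = np.roll(u, -1) - u        # u_{i+1} - u_i
    slope = minmod(du_minus, du_plus)   # limited slope in each cell
    u_face = u + 0.5 * slope            # reconstruction at the right face
    flux = a * u_face                   # upwind flux (a > 0)
    return u - dt / dx * (flux - np.roll(flux, 1))

# Advect a square pulse: the result stays monotone (no new oscillations)
# yet is far less smeared than first-order upwind (set slope = 0 to compare).
n, a = 200, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.4 * dx / a
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)
for _ in range(int(0.25 / dt)):
    u = limited_step(u, a, dx, dt)
print(f"min={u.min():.3f}, max={u.max():.3f}  (remains within [0, 1])")
```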

This is where the tale turns back toward modeling. These methods actually provide some modeling capability for “free.” Under the simplest circumstances the modeling is completely equivalent to Prandtl’s mixing length approach, but with the added benefit of computability. More modern stabilized differencing actually provides modeling that goes beyond the simple diffusive closure. Because of the robust stability properties of the method, one can compute solutions with backscatter stably. This stability is granted by the numerical approach, but it provides the ability to solve the non-dissipative model with the asymptotic stability needed for physically admissible modeling. If one had devised a model with the right physical effect of local backscatter, these methods provide the stable implementation. In this way these methods are magical and make the seemingly impossible, possible.

This naturally takes us to the next activity in the chain of activities that add value to computing: algorithm development. This means developing new algorithms with greater efficiency, as distinct from the focus of algorithm work today, which is simply implementing old algorithms on new computers and comes down to dealing with the increasingly enormous amount of parallelism demanded. The sad thing is that no implementation can overcome the power of algorithmic scaling, and this power is something we are systematically denying ourselves. Indeed we have lost massive true gains in computational performance because of our failure to invest in this area, and our inability to recognize the opportunity cost of a focus on implementing the old.

A useful place to look in examining the sort of gains coming from algorithms is numerical linear algebra. The state of the art here is multigrid, and it came to the fore over 30 years ago. Since then we have had no breakthroughs, whereas before, a genuine breakthrough occurred about every decade. It is no coincidence that 30 years ago is when parallel computing began its eventual takeover of high performance computing. Making multigrid or virtually any other “real” algorithm work at massive parallel scale is incredibly difficult, challenging work. This difficulty has swallowed up all the effort and energy in the system, effectively starving out the invention of new algorithms. What is the cost? We might understand the potential cost of these choices by looking back at what previous breakthroughs have gained.

We can look at the classical example of solving Poisson’s equation (∇²u = f) on the unit square or cube to instruct us on how incredibly massive the algorithmic gains might be. The crossover point between a relaxation method (Gauss-Seidel, GS, or Jacobi) and incomplete Cholesky conjugate gradient (ICCG) is at approximately 100 unknowns. For a multigrid algorithm the crossover point in cost occurs at around 1000 unknowns. Problems of 100 or 1000 unknowns can now be handled by something far less capable than a cell phone. For problems associated with supercomputers, the differences in the cost of these algorithms are utterly breathtaking to behold.
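To give a concrete sense of why multigrid earns its place as the state of the art, here is a minimal sketch of a V-cycle for the one-dimensional analogue, -u'' = f on the unit interval. The smoother, grid-transfer operators and names are simple illustrative choices, not any production solver; the point is that every level does a constant amount of work per unknown and the grids shrink geometrically, so a cycle costs O(N).

```python
# A minimal multigrid V-cycle sketch for -u'' = f on (0, 1), u(0) = u(1) = 0.
import numpy as np

def smooth(u, f, h, sweeps=3):
    """A few damped-Jacobi sweeps on -u'' = f (interior points only)."""
    omega = 2.0 / 3.0
    for _ in range(sweeps):
        v = u.copy()
        v[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
        u = v
    return u

def residual(u, f, h):
    """r = f - A u with A the standard 3-point Laplacian."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    """Full-weighting restriction onto a grid with half the intervals."""
    rc = np.zeros((len(r) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def prolong(ec, n_fine):
    """Linear interpolation of a coarse correction back to the fine grid."""
    e = np.zeros(n_fine + 1)
    e[::2] = ec
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    """One recursive V-cycle; cost is proportional to the number of unknowns."""
    n = len(u) - 1
    if n <= 2:                                    # coarsest grid: solve exactly
        u[1:-1] = 0.5 * h * h * f[1:-1]
        return u
    u = smooth(u, f, h)                           # pre-smoothing
    rc = restrict(residual(u, f, h))              # residual to coarse grid
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)  # coarse-grid correction
    u = u + prolong(ec, n)
    return smooth(u, f, h)                        # post-smoothing

if __name__ == "__main__":
    n = 256                                       # n must be a power of two here
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.pi ** 2 * np.sin(np.pi * x)            # exact solution: sin(pi x)
    u = np.zeros(n + 1)
    for k in range(8):
        u = v_cycle(u, f, h)
        print(f"cycle {k}: max residual = {np.max(np.abs(residual(u, f, h))):.2e}")
```

The residual drops by a roughly constant factor per cycle independent of the grid size, which is the property that makes the whole solve scale linearly with the number of unknowns.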

Consider a relatively small problem today of solving Poisson’s equation on a unit cube with 1000 unknowns in each direction (10^9 unknowns). If we take the cost of multigrid as “one,” GS now takes ten million times more effort, and ICCG almost 1000 times the effort. Scale the problem up to something we might dream of doing on an exascale computer, a cube 10,000 on a side with a trillion unknowns, and we easily see the tyranny of scaling and the opportunity of algorithmic breakthroughs we are denying ourselves. For this larger problem, GS now costs ten billion times the effort of multigrid, and ICCG 30,000 times the expense. Imagine the power of being able to solve something more efficiently than multigrid! Moreover, multigrid can withstand incredible levels of inefficiency in its implementation and still win compared to the older algorithms. The truth is that parallel computing implementation drives the constant in front of the scaling up to a much larger value than on a serial computer, so these gains are offset by the lousy hardware we have to work with.
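The arithmetic above can be checked in a few lines. The exponents and normalizations below are an assumption made for illustration (serial costs growing roughly as N^2 for GS, N^(3/2) for ICCG and N for multigrid, tied to crossover points near 100 and 1000 unknowns); this is one simple model consistent with the ratios quoted in the text, not a statement of the exact complexity of any particular implementation.

```python
# Rough cost-ratio check under assumed (illustrative) scalings.

def cost_ratios(n_unknowns, crossover_mg=1.0e3, crossover_gs=1.0e2):
    """Return (GS/MG, ICCG/MG) cost ratios under the assumed scalings."""
    mg = n_unknowns                                 # O(N)
    iccg = n_unknowns ** 1.5 / crossover_mg ** 0.5  # O(N^1.5), ties MG near 1e3
    gs = n_unknowns ** 2 / crossover_gs             # O(N^2),   ties MG near 1e2
    return gs / mg, iccg / mg

for n in (1.0e9, 1.0e12):
    gs_ratio, iccg_ratio = cost_ratios(n)
    print(f"N = {n:.0e}:  GS/MG ~ {gs_ratio:.0e},  ICCG/MG ~ {iccg_ratio:.0e}")
```

Under these assumptions the script reproduces the factors cited above: roughly 10^7 and 10^3 at a billion unknowns, and 10^10 and 3×10^4 at a trillion.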

Here is the punch line to this discussion. Algorithmic power is massive, almost to a degree that defies belief. Yet algorithmic power is vanishingly small compared to methods, which are themselves dwarfed by modeling. Modeling connects the whole simulation endeavor to the scientific method and is irreplaceable. Methods make these models solvable and open the doors of capability. All of these activities are receiving little tangible priority or support in the current high performance computing push, resulting in the loss of incredible opportunities for societal benefit. Moreover we have placed our faith in the false hope that mere computing power is transformative.

Never underestimate the power of thought; it is the greatest path to discovery.

― Idowu Koyenikan

Both models and methods transcend the sort of gains computing hardware produces, and hardware can never replace them. Algorithmic advances can be translated into the language of efficiency via scaling arguments, but they provide gains that go far beyond hardware’s capacity for improvement. The problem is that all of these rely upon faith in humanity’s ability to innovate, think and produce things that had previously been beyond the imagination. This is an inherently risky endeavor that is prone to many failures or false hopes. This is something that today’s world seems to lack tolerance for, and as such the serendipity and marvel of discovery is sacrificed at the altar of fear.

We have to continually be jumping off cliffs and developing our wings on the way down.

― Kurt Vonnegut

The case for changing the focus of our current approach is airtight and completely defensible. Despite the facts, the science and the benefits of following rational thinking, there is precious little chance of seeing change. The global effort in supercomputing is utterly and completely devoted to the foolish hardware path. It wins by a combination of brutal simplicity and an eagerness to push money toward industry. So what we have is basically a cash-driven funeral pyre for Moore’s law. The risk-taking, innovation-driven approach necessary for success is seemingly beyond the capability of our society to execute today. The reasons why are hard to completely grasp; we have seemingly lost our nerve and our taste for subtlety. Much of the case for doing the right things, the things that lead to success, is bound to a change of mindset. Today the power, if not the value, of computing is measured in the superficial form of hardware. The reality is that the power is bound to our ability to model, simulate and ultimately understand or harness reality. Instead we blindly put our faith in computing hardware rather than the intellectual strength of humanity.

The discussion gets at a number of misconceptions and inconsistencies that plague the field of supercomputing. The biggest issue is the disconnect between the needs of science and engineering and the success of supercomputing (i.e., what constitutes a win). Winning in supercomputing programs is tied to being able to put an (American) machine at the top of the list. Increasingly, success at having the top computer on the ever more useless Top500 list is completely at odds with acquiring machines useful for conducting science. A great deal of the uselessness of the list comes from the benchmark used to define its rankings, LINPACK, which is less relevant to applications with every passing day. It has come to the point where it is hurting progress in a very real way.

The science and engineering needs are varied, ranging all the way from QCD, MD and DNS to climate modeling and integrated weapons calculations. The pure science needs of QCD, MD and DNS are better met by the machines being built today, but even in this idealized circumstance the machines we buy to top the computing list are fairly suboptimal for these pure science applications. The degree of suboptimality for running our big integrated calculations has become absolutely massive over time, and the gap is only growing larger with each passing year. Like most things, inattention to this condition is only allowing it to become worse. The machines being designed for winning the supercomputing contest are actual monstrosities that are genuinely unusable for scientific computing. Worse yet, the execution of the exascale program is acting to make this worse in every way, not better.

Compounding the damaging execution of the supercomputing program is the systematic hollowing out of the science and engineering content from our programs. We are systematically diminishing our efforts in experimentation, theory, modeling, and mathematics despite their greater importance and impact on the entire enterprise. The end result will be a lost generation of computational scientists who are left using computers completely ill-suited to the conduct of science. If national security is a concern, the damage we are doing is real and vast in scope.

We need supercomputing to be a fully complementary part of the scientific enterprise, used and relied upon only as appropriate, with limits rationally chosen based on evidence. Instead we have turned supercomputing into a prop and a marketing stunt. There is a certain political correctness about how it contributes to our national security, and our increasingly compliant Labs offer no resistance to the misuse of taxpayer money. The mantra is “don’t rock the boat”; we are getting money to do this, and whether it’s sensible or not is immaterial. The current programs are ineffective and poorly executed, and do a poor job of providing the sorts of capability claimed. It is yet another example of, and evidence for, the culture of bullshit and pseudo-science that pervades our modern condition.

The biggest issue is the death of Moore’s law and our impending failure to produce the results promised. Rather than reform our programs to achieve real benefits for science and national security, we will see a catastrophic failure. This will be viewed through the usual lens of scandal. It is totally foreseeable and predictable. It would be advisable to fix this before disaster, but my guess is we don’t have the intellect, foresight, bravery or leadership to pull this off. The end is in sight and it won’t be pretty. Yet there is a different path, one that would be every bit as glorious and successful. Does anyone have the ability to turn away from the disastrous path and consciously choose success?

An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.

― Werner Heisenberg

Some background reading on the Top500 list and the benchmarks that define it:

https://en.wikipedia.org/wiki/TOP500

https://en.wikipedia.org/wiki/LINPACK_benchmarks

https://en.wikipedia.org/wiki/HPCG_benchmark

A sample of prior posts on topics related to this one:

https://wjrider.wordpress.com/2016/06/27/we-have-already-lost-to-the-chinese-in-supercomputing-good-thing-it-doesnt-matter/

https://wjrider.wordpress.com/2016/05/04/hpc-is-just-a-tool-modeling-simulation-is-what-is-important/

https://wjrider.wordpress.com/2016/01/15/could-the-demise-of-moores-law-be-a-blessing-in-disguise/

https://wjrider.wordpress.com/2016/01/01/are-we-really-modernizing-our-codes/

https://wjrider.wordpress.com/2015/11/19/supercomputing-is-defined-by-big-money-chasing-small-ideas-draft/

https://wjrider.wordpress.com/2015/10/30/preserve-the-code-base-is-an-awful-reason-for-anything/

https://wjrider.wordpress.com/2015/10/16/whats-the-point-of-all-this-stuff/

https://wjrider.wordpress.com/2015/07/24/its-really-important-to-have-the-fastest-computer/

https://wjrider.wordpress.com/2015/07/03/modeling-issues-for-exascale-computation/

https://wjrider.wordpress.com/2015/06/05/the-best-computer/

https://wjrider.wordpress.com/2015/05/29/focusing-on-the-right-scaling-is-essential/

https://wjrider.wordpress.com/2015/04/10/the-profound-costs-of-end-of-life-care-for-moores-law/

https://wjrider.wordpress.com/2015/03/06/science-requires-that-modeling-be-challenged/

https://wjrider.wordpress.com/2015/02/14/not-all-algorithm-research-is-created-equal/

https://wjrider.wordpress.com/2015/02/12/why-is-scientific-computing-still-in-the-mainframe-era/

https://wjrider.wordpress.com/2015/02/06/no-amount-of-genius-can-overcome-a-preoccupation-with-detail/

https://wjrider.wordpress.com/2015/02/02/why-havent-models-of-reality-changed-more/

https://wjrider.wordpress.com/2015/01/05/what-is-the-essence-of-computational-science/

https://wjrider.wordpress.com/2015/01/01/2015-time-for-a-new-era-in-scientific-computing/
