This week I’m in Las Vegas, an appropriately ironic place to see people making stupid gambles on the future. We are having the annual “Principal Investigators’” meeting for the NNSA’s ASC program, itself a rather ironic name; echo chamber is a more apt description. There is frightfully little investigation or intellectual engagement in sight; it is more of a programmatic project meeting with little or no discussion of intellectual depth, at least publicly. Private discussions among the technically oriented attendees are a bit more far-reaching, but everyone seems to feel a deep sense of following a fate rather than making choices. The lack of holistic thought and vitality in high performance computing is becoming evident, but the machine focus is a juggernaut too powerful to slow down at this point.

If failure is not an option, then neither is success.

—Seth Godin

Societally, the concept of too big to fail applies to the banking and financial institutions that almost destroyed the World economy eight years ago. We demonstrated that they were both too big to fail and too big and too powerful to change, thus remaining a ticking time bomb. It is only a matter of time before the same issues present in 2007 erupt again and wreak havoc on the World economy. All the evidence needed to energize real change is available, but there is simply too much money to be made, and greed is more powerful than common sense. I realized that our application codes and computers probably deserve to be thought of in exactly the same light: they are too big to fail too. This character is slowly and steadily poisoning the environment we live in, and any discussion of different intellectual paths is simply forbidden.

Only those who dare to fail greatly can ever achieve greatly.
― Robert F. Kennedy

In high performance computing we live in immensely challenging times where deep intellectual engagement is necessary for success. The very nature of the programs seems to be anathema to the very free thought needed for success. The codes and computers are treated as absolutes for success and as immutable. We see decades of investment in codes and capabilities that must be sustained. The systems we have created are immense in terms of expense and size. The idea has taken hold that they must be preserved. This preservation is superficial rather than holistic and pervasive; the concept is analogous to placing the codes in amber. As such, the intellectual content of the codes is remaining far too static, and our intellectual ownership of the contents of the codes is slipping away. It is a dangerous and unsustainable future. Like the banks that should have been split into smaller, more manageable chunks, the codes need to be removed from this concept of permanence.

As I said, the depth of intellectual ownership of these very codes is diminishing with each passing day. The essential aspects of these codes’ utility and success in our application areas are based on the deep knowledge and intense focus of talented individuals. The talent and skills leading to successful codes are difficult to develop and maintain; the skills must be developed by simultaneously pushing several envelopes: the applications, the models, the methods to solve the models, and the computer science and programming. Today we really only focus on the computer science and programming and simply sort out all the other details. Rather than continually reinvest in people and science, we are creating an environment where codes are curated. This state is actually a recipe for catastrophic failure rather than glorious success. The path forward should be adaptive, flexible and agile; instead the path is a lumbering goliath and viewed as a fait accompli.

Any fool can know. The point is to understand.
― Albert Einstein

A code is not an investment and shouldn’t ever be viewed as such. A code is simply a computer executable version of independent thought and intellectual content. It is absolutely vital for all of the capability we have in code to be fully understood and known by humans. We need to have humans who understand the basis of the models and how these models are solved. When we curate code this key connection is lost. We lose the fundamental nature of the model as our impression of nature, rather than its direct image. We use models as a way of explaining nature rather than as a substitute for the natural World. This tie is being systematically undermined by the way we compute today, and it results in a potentially catastrophic loss of humility. Such losses of humility ultimately produce reactions that are unpleasant and damaging.

We are creating a program that will collide with reality, leaving a broken and limping community in its wake. It has a demonstrated track record of not learning from past mistakes, producing a plan for moving ahead that is devoid of innovation and deep thought. Today’s path forward is solely predicated on the idea that we must have the fastest computer rather than the best computing. It is the epitome of bigger and more expensive is better, rather than faster, smarter and more agile. Perhaps more damaging is a perspective that the problems we face are already solved, save the availability of more computer power. We will end up eviscerating the very communities of scientists that are the lifeblood of modeling and simulation. The program may be a massive mistake, and no one is questioning any of it.

Judge a man by his questions rather than by his answers.

― Voltaire

I believe that all of these efforts could vastly benefit from a mindset orthogonal to the prevailing approach. How would we solve the problems facing us today if we had less computing power? If we thought about how to productively solve our problems with less computational horsepower, we could do a far better job with whatever computers we actually have. I find the call for more computer power to be an effectively kneejerk response, a way of dodging deeper and more challenging problems. More computing power is undeniably better, but it is almost always a highly suboptimal path to better solutions to problems.

The most effective way to get a better answer is to provide a better model of reality to address the questions. If you believe that your model is correct and appropriate for the questions at hand, the method of solution has the most leverage for improving your performance. Many of these solution methods are based on fundamental algorithms, which can provide massive upgrades in performance. In each of these endeavors the use of deep applied mathematics expertise can provide tremendous benefits to the rigor and effectiveness of each aspect. Only when these options have been exhausted should the implementation and hardware be brought to bear as the primary path to improvement. In today’s high performance computing research the highest leverage paths to improved modeling and simulation are virtually ignored by our efforts. Of course part of the issue is the identification of the activity as high performance computing first, and modeling & simulation second. This ordering and priority should be reversed, in keeping with their proper role in impacting the real World applications that should be motivating all of our efforts.
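To make the leverage argument concrete, here is a minimal sketch, my own illustration rather than anything from the program in question, comparing two ways of solving the same small 1D Poisson problem with NumPy/SciPy: plain Jacobi iteration versus a sparse direct solve. The problem size, forcing, and tolerance are arbitrary assumptions; the point is only that the choice of algorithm changes the work by orders of magnitude before any hardware enters the picture.

```python
# A minimal sketch (illustration only): the same 1D Poisson problem solved with
# plain Jacobi iteration and with a sparse direct solver. The algorithmic choice,
# not the hardware, determines the work by orders of magnitude.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson_system(n):
    """Standard second-order discretization of -u'' = f on n interior points."""
    h = 1.0 / (n + 1)
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    A = sp.diags([off, main, off], [-1, 0, 1], format="csr") / h**2
    f = np.ones(n)  # simple constant forcing term (an arbitrary choice)
    return A, f

def jacobi(A, f, tol=1e-8, max_iter=200_000):
    """Plain Jacobi iteration; its sweep count grows roughly like O(n^2) here."""
    D = A.diagonal()
    x = np.zeros_like(f)
    f_norm = np.linalg.norm(f)
    for k in range(1, max_iter + 1):
        r = f - A @ x
        if np.linalg.norm(r) < tol * f_norm:
            return x, k
        x = x + r / D
    return x, max_iter

n = 100
A, f = poisson_system(n)
x_jac, sweeps = jacobi(A, f)     # tens of thousands of sweeps even for n = 100
x_dir = spla.spsolve(A, f)       # sparse direct solve: O(n) work for this tridiagonal system
print(f"Jacobi sweeps needed: {sweeps}")
print(f"max difference between the two solutions: {np.abs(x_jac - x_dir).max():.2e}")
```

Both paths reach the same answer, but one needs a handful of operations per unknown while the other needs tens of thousands of sweeps, and a faster machine narrows neither gap. That is the sense in which models, methods, and algorithms carry more leverage than implementation and hardware.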

There is a cult of ignorance in the United States, and there has always been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’
― Isaac Asimov