It’s never too late

…to start heading in the right direction.

― Seth Godin

What would be the best gift I could hope for? Honestly, the best gifts would be health and happiness in my private life, so I mean the best gift for my professional life. It would lead to a much more vibrant and rewarding research life than the zombie-like march toward mediocrity we’ve been on for several decades. The mediocrity is fueled by the tendency to celebrate false achievement in place of demanding unvarnished success. So what would I like to find under the tree this morning?

Confidence is ignorance. If you’re feeling cocky, it’s because there’s something you don’t know.

― Eoin Colfer

How about an end to the orgy of self-congratulatory false achievement defining scientific computing these days? That would be refreshing for a change. There is a disturbing trend today to report only success; any sense of failure or challenge is simply not accepted by our funding agencies. So instead of computation honestly being the third way to practice science, as many proclaim, it still falls a bit short. With sufficient honesty over the past couple of decades, that claim would be an honest assessment instead of a hollow boast.

To share your weakness is to make yourself vulnerable; to make yourself vulnerable is to show your strength.

― Criss Jami

There are plenty of challenges to be had if we simply apply some integrity to our assessment of what we’re capable of. This doesn’t belittle what we are capable of, but rather highlights the size of the difficulties we face. The worst aspect of the current situation is that our present capability could be so much beyond what it is today if we treated the science with greater honesty. Computational science opens doors to understanding unavailable to experimental and theoretical science; it is a complement to each, but not a replacement. A large part of the problem is the extent to which it has been sold as a replacement for traditional science. This is at the core of the problem, but there is much more wrong with the direction.

The best way to predict the future is to invent it.

—Alan Kay

As the end of the era of cheap progress in computing power looms before us, we should look toward crafting a future where progress isn’t gifted by hardware following a law whose demise is long overdue (i.e., the end of Moore’s law). I see a field where Moore’s law is treated as the only path to success, and its demise is greeted as apocalyptic. Rather than seeing the end of Moore’s law as a problem, I believe it will force us to work smarter again. We will stop relying upon faster computers to gift us progress and start thinking about how to solve problems better. This lack of thought has created a dismal state of affairs in scientific computing research.

Before getting to what we are missing, it might be good to focus a little attention on hardware and how we got into this mess. Twenty-some-odd years ago we had the “attack of the killer micros.” This became something real as it swallowed the high performance computing world whole. It’s being taken over today by an attack of legions of portables. The entire computing industry has become enormous, dominated by cell phones and the impending “Internet of things.” Before the killer micros, we had custom machines and custom chips tailored to the scientific computing world, typified by Crays.

It might be worth thinking about what computers customized for the needs of scientific computing would have looked like if we hadn’t accepted the fool’s errand of chasing Moore’s law like a dog chases a dirty old tennis ball. I don’t like being overly nostalgic, but in many ways the state of affairs with computing hardware has gotten far worse over the last twenty years. Sure, the computers are much faster, but they are terrible to use. From a user’s perspective, the systems are worse. We have accepted the commodity bargain and a “worse is better” approach instead of demanding something focused on solving our problems. I would hypothesize that we would have been better off with somewhat slower computers that were more useful and better suited to our tasks. In relative terms we have accepted crap in pursuit of a politically correct, but technically corrupt, vision of computing. In pursuit of the fastest, biggest computer we have accepted worse actual, real performance.

Innovative solutions to new challenges seldom come from familiar places.

—Gyan Nagpal

What if we had stayed with that model? Of course it wasn’t a viable path from a business point of view, but stay with me. What would computer designs look like? What would the programming model look like? How would the emphasis be different? I’d like to think we would have been better off not trying to squeeze two decades more out of Moore’s law. The truth is that we never use all of these machines anyway except for marginally useful (mostly useless) stunt computations. The whole Moore’s law thing is largely a lie anyway; the problem being solved in measuring speed is completely uncharacteristic of scientific computing (LINPACK). If we apply benchmarking to something more realistic (the HPCG benchmark is a step in the right direction), we see that supercomputers get 1–5% of the speed that the LINPACK benchmark gives. Most of our codes are even slower than that. Putting this all together, we can see that high performance computing is a bit of a facade. The emphasis on hardware is central to the illusion, but this is only the tip of the illusory iceberg.
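To make that gap concrete, here is a minimal sketch in Python of the fraction-of-peak arithmetic involved. The peak and benchmark numbers are hypothetical, chosen only to illustrate the kind of disparity described above; they are not measurements of any real machine.

```python
# Illustrative only: the figures below are hypothetical, picked to show the
# sort of gap described in the text, not measurements of an actual system.

def fraction_of_peak(measured_tflops, peak_tflops):
    """Return the fraction of theoretical peak a benchmark actually achieves."""
    return measured_tflops / peak_tflops

peak = 100.0            # hypothetical theoretical peak, in TFLOP/s
linpack_result = 75.0   # dense linear algebra (LINPACK-style) tends to run near peak
hpcg_result = 2.0       # sparse, memory-bound work (HPCG-style) tends to run far below it

print(f"LINPACK-style efficiency: {fraction_of_peak(linpack_result, peak):.1%}")
print(f"HPCG-style efficiency:    {fraction_of_peak(hpcg_result, peak):.1%}")
print(f"Ratio (HPCG / LINPACK):   {hpcg_result / linpack_result:.1%}")
```

The point of the exercise is simply that the headline number and the number relevant to real scientific workloads can differ by an order of magnitude or more.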

Failure isn’t something to be embarrassed about; it’s just proof that you’re pushing your limits, trying new things, daring to innovate.

—Gavin Newsom

Research in scientific computing has not had enough impact over this time. In too many cases new numerical methods are still not being used in codes. The methods embedded in code are largely and significantly the same ones we used twenty years ago. The models too are twenty years old. The older models should be invalid in many cases simply due to the refinement of the mesh and the requisite change in time and length scales. New capabilities in uncertainty estimation and coupled physics are still research rather than deployed and producing results. In many cases the codes are two, three or four generations of methods and models overdue for a complete change. Back in the days of Crays, and a bit more courage in research, new codes would be spawned every few years. Now we hold on to our old codes for decades. New codes are the vehicle for new ideas, new methods and new models to be used for useful, important work.

Perhaps a useful analogy would be to think of high performance computing as a car. In this view the computer is the engine, and the code is the steering, interior, stereo, and other features. What kind of car have we been driving? Basically we are driving a car from the 1980s with a series of new engines. The steering is the same, the interior has at most been reupholstered, and all the original equipment is still in place. Instead of being able to hook up your iPhone to the stereo, we are playing our old 8-tracks. No built-in navigation either, so make sure you buy a map at the gas station. This works fine, but you won’t get any warning about the major construction zone. This is the bargain we’ve entered into; it is an absurd and unbalanced approach. If it were a car we wouldn’t stand for it, so why do we put up with this in computing?

If failure is not an option, then neither is success.

― Seth Godin

Over the years the entire program has suffered under the low-risk, false-success management of today’s research. We are expected to deliver progress without taking risks. No one who understands how to make progress would buy off on it. We labor under the assumption that we can manage our way to complete success while encountering no problems, and succeed without ever needing to fail. It strains credibility to its limit to believe this. Failure and risk are the lifeblood of progress. Managing research in this manner is largely an illusion that manifests in a real lack of risk-taking and pervasive incrementalism. The end result is achievement without progress. Achievement is declared by fiat, and only because failure is never an option today. The utter lack of honesty is truly disturbing.

Change almost never fails because it’s too early. It almost always fails because it’s too late.

― Seth Godin

Part of the impact of this reign of mediocrity is the over-development of code bases. This lets achievement take place within the incrementalism so intrinsic to the low-risk model of management. To achieve progress we continue to build upon bases of code long after they should have been discarded. As a consequence the technical debt and inflation associated with our codes are perilously large. We are hemmed in by outdated ideas of how a code should be written, and by the methods and models implicit in its design. The ability to start fresh and put new ideas into action simply isn’t allowed under the current model.

 The best way to get a good idea is to have a lot of ideas.

—Linus Pauling

A couple of other write-ups have appeared this week touching on the topic of progress and what’s holding us back (http://www.americanscientist.org/issues/pub/wheres-the-real-bottleneck-in-scientific-computing and http://www.hpcwire.com/2006/07/21/seven_challenges_of_high_performance_computing-1/). In the case of Greg’s commentary, he is right on the mark, but the use of modern software engineering is close to a necessary, but wholly insufficient, condition for success. I see this where I work. We are about as good as anyone at doing the software end of things professionally, yet without scientific vision and aggressive goal-setting it is a hollow victory.

What the software engineering reflects is the maturity of scientific software and the need to contend with its impact on the field. Codes have become large and complex. To solve big problems they need to be developed using real engineering. Like most infrastructure they crumble and show their age. If they are not invested in and kept up, they will fail to perform. The lifetime of software is much shorter than that of other infrastructure, but similarly we don’t have the political will to fix the problem.

Doug makes a number of good points in his commentary, but I think he misses the single biggest issue. The main problem with his list is its degree of political correctness. He still hails from the point of view that Moore’s law must be pursued. It is the tail that is wagging the dog, and it has been in control of our mindset too long. Doug also hits upon some of the same issues that Greg touches on, and again software engineering done professionally is a necessity. Finally, the increasing emphasis on V&V is touched upon, but as I will discuss, our current ideas about it miss something essential. V&V should be the discipline that points to where progress has been made, and where it has been lacking.

The biggest issue with scientific computing is the conclusion that we have already solved a bunch of problems with current capability. All that we need to do is build bigger computers, refine the mesh, and watch the physics erupt from the computer. This is the premise that most of high performance computing is constructed upon. We already know how to do everything we need to do; it’s just a matter of getting the computers big enough to crush the problems into submission.

This is where verification and validation come in. Again, our current practice is permeated with a seeming belief that good results are merely a formality. Done well, V&V provides balance: some sense of trust in results combined with an assessment of how limited our capability really is. It should provide a targeted view of where improvement is needed. Instead of honesty about the nature of our understanding, we have over-selling. V&V is expected to be a rubber stamp for the victories of scientific simulation. Bad news isn’t expected or accepted. We act as if we have complete mastery over science, and it’s just a matter of engineering.
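As one small, concrete example of the kind of evidence honest verification produces, here is a minimal sketch in Python of estimating the observed order of convergence from a grid refinement study. The error values are made up purely for illustration, and this calculation is only one ingredient of a full verification exercise, not a recipe for the whole discipline.

```python
import math

# Hypothetical grid-refinement study: errors from three successively refined
# meshes with refinement ratio r = 2. The numbers are illustrative only.
errors = [4.0e-2, 1.1e-2, 2.9e-3]   # coarse, medium, fine mesh errors
r = 2.0                              # mesh refinement ratio

# Observed order of convergence from successive error ratios:
#   p = log(e_coarse / e_fine) / log(r)
for e_coarse, e_fine in zip(errors, errors[1:]):
    p = math.log(e_coarse / e_fine) / math.log(r)
    print(f"observed order of convergence ~ {p:.2f}")
```

If the observed order falls well short of the method’s formal order, that is exactly the sort of unwelcome but useful news a rubber-stamp culture tends to suppress.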

Remember the two benefits of failure. First, if you do fail, you learn what doesn’t work; and second, the failure gives you the opportunity to try a new approach.

—Roger Von Oech

 

Nothing could be further from the truth. The primary achievement of scientific computing has been to unveil new mysteries and limitations. This is the nature of the quest for knowledge. Answering good questions only yields the capacity to ask better, more refined questions. To answer these new questions, we need better computational science, but also vibrant experimental and theoretical science. Our current approach to the field in general is not providing this. The methods and models of yesterday are not sufficient to answer the questions of today or tomorrow. We need to quit perpetuating the illusion that they are.

 

Healthy curiosity is a great key in innovation.

—Ifeanyi Enoch Onuoha

 

The right way to make progress is to realize that sometimes the answer to a question raised by computation lies in experiment or theory. Conversely, a new theoretical question may find its answers in experiment or computation. We benefit by having each area push the others. Computation simply adds to the capacity to solve problems; it does not replace the need for the traditional approaches. If we neglect theory and experiment, we are diminished. Ultimately our progress with computation will be harmed (if it hasn’t been already).

 

Let’s celebrate the holidays and give each other the gift of deep, open-minded questions that require every tool at our disposal to answer. Let’s stop giving ourselves false, self-congratulatory achievements that only perpetuate the wrong view of science. Let’s make real progress.

 

Only those who dare to fail greatly can ever achieve greatly.

— Robert Kennedy

 
