Real generosity towards the future lies in giving all to the present.

― Albert Camus

It goes without saying that we want to have modern things. A modern car is generally better functionally than its predecessors; classic cars primarily provide the benefit of nostalgia rather than performance, safety, or functionality. Modern things are even more favored in computing. We see computers, cell phones, and tablets replaced on a roughly annual basis with hardware of far greater capability, and software (or apps) gets replaced even more frequently. Research programs are supposed to be the epitome of modernity and pave the road to the future. In high-end computing no program has applied more resources (i.e., lots of money!! $$) to scientific computing than the DOE’s Advanced Simulation & Computing (ASC) program and its predecessor, ASCI. This program is part of a broader set of science campaigns to support the USA’s nuclear weapons stockpile in the absence of full-scale testing. It is referred to as “Science-based” stockpile stewardship, and it is generally a commendable idea. It has been going on for nearly 25 years now, and perhaps the time is ripe (over-ripe?) for assessing our progress.

So, has ASC succeeded?

My judgment is that ASC has succeeded in replacing the old generation of legacy codes with a new generation of legacy codes. This is now marketed to the unwitting masses as “preserving the code base.” That is a terrible reason to spend a lot of money, and it fails to recognize the real role of a code, which is to encode the expertise and knowledge of its scientists into a working recipe. Legacy codes make this an intellectually empty exercise, rendering the intellect of current scientists subservient to the past. The codes of today have the same intellectual core as the codes of a quarter century ago. The lack of progress in turning new ideas into working code is palpable and hangs heavy around the entire modeling and simulation program like a noose.

It’s not technology that limits us. We’re the limitation. Our technology is an expression of our intelligence and creativity, so the limitations of our technology are a reflection of our own limitations. We can’t fundamentally advance technology until we fundamentally advance ourselves.

― Christian Cantrell

A modern version of a legacy code is not modernization; it is surrender. We have surrendered to fear and risk aversion. We have surrendered to the belief that we already know enough. We have surrendered to the belief that the current scientists aren’t good enough to create something better than what already exists. As I will outline, this modernization is largely a way to avoid engaging in risky or innovative work. It places all of the innovation in an inevitable change of computing platforms, and the complexity of these new platforms makes programming so difficult that it swallows every bit of effort that could be going into more useful endeavors.

The prevailing excuse for the modernization program we see today is the new computers we are buying. These computers are the embodiment of the death rattle of Moore’s law. They are still echoes of the mainframe era, which died long ago everywhere but in scientific computing. The whole model of scientific computing is anything but modern; it is a throwback to a bygone era that needs to die. Mobile computing drives the industry today, and the true power of computing is connectivity and mobility, or perhaps ubiquity. These characteristics have not been harnessed by scientific computing.

The future is already here – it’s just not evenly distributed.

― William Gibson

Is a code modern if it executes on the newest computing platforms? Is a code modern if it is implemented using a new computer language? Is a code modern if it utilizes new software libraries in its construction and execution? Is a code modern if it has embedded uncertainty quantification? Is a code modern if it does not solve today’s problems? Is a code modern if it uses methods developed decades ago? Is a code modern if it runs on my iPhone?

What makes a code, or anything else for that matter, modern?

For the most part the core of our simulation codes is not changing in any substantive manner. Our codes will be solving the same models, with the same methods and algorithms, using the same meshing approach and the same analysis procedures. What will change is the coding and implementation of these models, methods, and algorithms. The operating systems, system software, low-level libraries, and degree of parallelization will all change substantially, and the computers we run the codes on will change dramatically too. So, at the end of the process, will our codes be modern?

The conventional wisdom would have us believe that we are presently modernizing our codes in preparation for the next generation of supercomputers. This is certainly a positive take on the current efforts in code development, but it is not a terribly accurate one. The modernization program is limited to the aspects of the code that have the least impact on the results, and it avoids modernizing the aspects of a code most responsible for its utility. To understand this rather bold statement requires a detailed explanation.

Ultimately, if our masters are to be believed, the point of ASC, SBSS, and our codes is the proper stewardship of the nuclear weapons stockpile. The stockpile exists in the real, physical world and consists of a decreasing number of complex engineered systems that we are charged with understanding. Part of that understanding involves the process of modeling and simulation, which needs a chain of activities to succeed. Closest to reality is our model of reality, which is solved by a combination of methods and algorithms, which in turn are implemented in code to run on a computer. All of this requires a set of lower-level libraries and software that effectively interface the coded implementation with the computer. Finally, we have the computer that runs the code.
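To make that chain concrete, here is a minimal, purely illustrative sketch (not drawn from any ASC code): the model is the one-dimensional heat equation, the method is an explicit finite-difference scheme, the implementation is a short Python function, the lower-level library is NumPy, and the computer is whatever runs it. Notice that every layer below the model can be swapped or “modernized” without changing the physics at all.

```python
# Toy illustration of the modeling-and-simulation chain (hypothetical, not an ASC code):
# model (1D heat equation u_t = alpha * u_xx) -> method (explicit finite differences)
# -> implementation (this function) -> library (NumPy) -> computer (whatever runs it).
import numpy as np

def solve_heat_1d(n=101, alpha=1.0, t_final=0.01):
    """March the 1D heat equation on [0, 1] with fixed ends; return grid and final field."""
    dx = 1.0 / (n - 1)
    dt = 0.4 * dx * dx / alpha          # explicit stability limit requires dt <= 0.5*dx^2/alpha
    x = np.linspace(0.0, 1.0, n)
    u = np.sin(np.pi * x)               # initial temperature profile, zero at both ends
    for _ in range(int(t_final / dt)):
        # three-point second-difference stencil applied to the interior points
        u[1:-1] += alpha * dt / (dx * dx) * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return x, u

if __name__ == "__main__":
    x, u = solve_heat_1d()
    print("peak temperature after the march:", u.max())
```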

Each one of these steps is essential and must work properly for the whole to succeed; the needs of each step must be balanced against the others in a holistic fashion. For example, no amount of computer power, computer science, or scaling will ever rescue a code whose models are flawed. If you believe that our models as presently stated are inappropriate for answering the questions facing the stockpile today (and I do), the current program does nothing to alleviate this problem. I believe we have failed to properly balance our efforts and have allowed ourselves to create a new generation of legacy codes to replace the previous one. A legacy code is the opposite of a modern code, but it is exactly what we have made.

The goal of a life purpose is not what you will create, but what it will make you into for creating it.

― Shannon L. Alder

A major problem with the approach we have taken to computing is its impact on the careers of our staff. Instead of producing a cadre of professionals spanning the full spectrum of the necessary knowledge and skills, we have a skewed base. With the bias toward stewardship through massive computer power, and without emphasis on modeling, methods, or algorithms, the development of our scientists and engineers is similarly and unhealthily skewed. By not embracing a holistic path with an emphasis on creation and innovation, we stunt the development of the current generation of scientists and engineers. Our current path perpetuates an unbalanced approach and amplifies its harmful impact by eschewing risky research and avoiding innovation and discovery in the process. This produces the knock-on effect of killing the development of our staff.

It is notable that this is a New Year’s Day post, so the future is here. Given this, and upon some reflection, a research program isn’t really good enough if it is merely modern; it must be futuristic. Research should be creating the future, not simply inhabiting the present. If research is stuck in the past, the future really can’t be reached. My concern is that the notion of a low-risk endeavor is severely shaped by what has succeeded in the past. The best way to be successful, at least superficially, is to do what has worked before, and this seems to be what we are doing in high-performance computing: we build the codes that worked in the past and put them on our big mainframes. The truth is we can’t be modern if we are stuck in the past, and we will never create the future there.

So this is where we are: stuck in the past, trapped by our own cowardice and lack of imagination. Instead of simply creating modern codes, we should be creating the codes of the future, applications for tomorrow. We should be trailblazers, but this requires risk and taking bold chances. Our current system cannot tolerate risk because it entails the distinct chance of failure or unintended consequences. If we had a functioning research program, there would be a distinct chance that we would create something unintended and unplanned. It would be disruptive in a wonderful way, but it would require the sort of courage that is in woefully short supply today. Instead we want certain outcomes and control, which means that our chance of discovering anything unintended disappears from the realm of the possible.

With relative ease this situation could be rescued; balance could be restored and progress could proceed. We simply need to place greater focus and proper importance on the issues associated with modeling, methods, and basic algorithms (along with appropriate doses of experiments, physics, and real applied math). Each of these areas is greatly in need of an injection of real vitality and modernity, and each offers far greater benefits than our present focus on computing hardware. It is arguable that we have evolved to the point where the emphasis on hardware is undermining more valuable efforts. Restoring balance would require a corresponding reduction in some of the computer science and hardware focus, which is of little use without better models anyway.

The core of the issue is the difficulty of using the next generation of computers. These machines are monstrous in character; they raise parallelism to a level that makes the implementation of codes incredibly difficult. We are already running a massive deficit in delivered performance: for the last 25 years we have steadily lost ground in accessing the potential performance of our computers. Our lack of evolution in algorithms and methods plays a clear role here. By choosing to follow our legacy code path we are locked into methods and algorithms that are suboptimal in performance, accuracy, and utility on modern and future computing architectures. The technical debt is mounting, magnified by acute technical inflation.
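As a hedged, toy-scale illustration of that mismatch (nothing here is taken from the production codes in question), consider two classic relaxation sweeps for the same discrete problem: a Gauss-Seidel sweep carries a loop dependence, since each point needs the freshly updated value to its left, and so resists naive parallelization, while a Jacobi sweep reads only old values and parallelizes or vectorizes trivially, at the price of slower convergence. Picking methods with the machine in mind is precisely the algorithmic work being argued for.

```python
# Hypothetical sketch: two relaxation sweeps for the discrete 1D Poisson problem u'' = f.
# Gauss-Seidel has a loop-carried dependence (u[i-1] is already the new value), which
# serializes the loop; Jacobi depends only on the previous iterate and is data-parallel.
import numpy as np

def gauss_seidel_sweep(u, f, h):
    # inherently sequential: each update consumes the update made just before it
    for i in range(1, len(u) - 1):
        u[i] = 0.5 * (u[i - 1] + u[i + 1] - h * h * f[i])
    return u

def jacobi_sweep(u, f, h):
    # data-parallel: every interior point is updated from the old iterate only
    new = u.copy()
    new[1:-1] = 0.5 * (u[:-2] + u[2:] - h * h * f[1:-1])
    return new
```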

I’ll posit an even more controversial idea about massively parallel computers. These machines were a bona fide disruptive innovation, but instead of disrupting positively, as the concept usually implies, parallel computing has been destructive. Implementing standard scientific computing models and methods has been so difficult that more valuable efforts have been decimated in the process. For example, numerical linear algebra has been completely static algorithmically for thirty years; the effort to merely implement multigrid on parallel computers has swallowed all of the innovation. The problem is that a single algorithmic breakthrough would dwarf the impact of all that implementation work. Have we been denied a breakthrough because our effort is entirely focused on implementation?
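For concreteness, below is a hedged sketch of a textbook two-grid V-cycle for the same model 1D Poisson problem. The ingredients, smooth, restrict the residual, solve the coarse error equation, prolong, and correct, have been known for decades, which is the point: the algorithm fits on a page, while the parallel implementation is where the years of effort have gone. All names and parameters here are illustrative, not taken from any production multigrid package.

```python
# Hypothetical two-grid V-cycle for u'' = f on a uniform 1D grid (Dirichlet ends, n odd).
import numpy as np

def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    """A few weighted-Jacobi sweeps on the three-point discretization of u'' = f."""
    for _ in range(sweeps):
        new = u.copy()
        new[1:-1] = 0.5 * (u[:-2] + u[2:] - h * h * f[1:-1])
        u = (1.0 - w) * u + w * new
    return u

def residual(u, f, h):
    """r = f - A u for the standard second-difference operator A."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
    return r

def restrict_full_weighting(r):
    """Full-weighting restriction of a fine-grid residual to the coarse grid."""
    r_c = r[::2].copy()
    r_c[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return r_c

def coarse_solve(r_c, h_c):
    """Direct solve of the coarse error equation A e = r (stands in for recursion)."""
    m = len(r_c) - 2
    A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) / (h_c * h_c)
    e = np.zeros_like(r_c)
    e[1:-1] = np.linalg.solve(A, r_c[1:-1])
    return e

def two_grid_vcycle(u, f, h):
    u = smooth(u, f, h)                                   # pre-smooth high-frequency error
    e_c = coarse_solve(restrict_full_weighting(residual(u, f, h)), 2.0 * h)
    e = np.zeros_like(u)                                  # prolong by linear interpolation
    e[::2] = e_c
    e[1::2] = 0.5 * (e_c[:-1] + e_c[1:])
    return smooth(u + e, f, h)                            # correct and post-smooth

if __name__ == "__main__":
    n = 129                                               # fine grid of 2^k + 1 points
    x = np.linspace(0.0, 1.0, n)
    f = -np.pi ** 2 * np.sin(np.pi * x)                   # exact solution is sin(pi x)
    u = np.zeros(n)
    for _ in range(10):
        u = two_grid_vcycle(u, f, 1.0 / (n - 1))
    print("max error vs. exact solution:", np.abs(u - np.sin(np.pi * x)).max())
```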

The clincher is that the next generation of computing may be even more catastrophically disruptive than the previous one.

To accomplish this we would have to turn our back on the mantra of the last quarter century: that we just need a really fast computer (preferably the fastest one on Earth) and the stockpile will be OK. This mindset is so vacuous as to astound, but the true epitome of modernity is superficiality. The view that a super-fast computer is all we need to make modeling and simulation work effectively is simplistic in the extreme. In modern America, simplistic is what sells. Americans don’t do subtlety, and the current failures in high-performance computing can all be traced to the gap between the simplistic messaging that gets funding and the subtler message of what would actually be effective. Our leaders have consistently chosen to focus on what would get funded over what would be effective. We cannot continue to make these choices and be successful; the deficit in intellectual depth will come due soon. Instead of allowing this to become a crisis, we have the opportunity to get ahead of the problem and change course.

The best way to predict your future is to create it.

― Abraham Lincoln

The real goal should not be modernizing our codes; it should be creating the codes of the future. First, we must throw off the shackles of the past and refuse to perpetuate the creation of a new generation of legacy codes. The codes of the future should solve the problems of the future using futuristic models, methods, and algorithms. If we keep our attention on the past, promoting the continued preservation of an antiquated code base, the future will never arrive. Simply implementing the codes of the past so that they work on new computers is, at best, a useful proof-of-concept exercise. These computers, purchased at great cost, should be used to look forward, not back, with fresh eyes and new ideas for solving the problems ahead of us, not yesterday’s.

Today’s science is tomorrow’s technology.

― Edward Teller

We owe the future nothing less. The future is in our hands; we can make it into what we want it to be.

You realize that our mistrust of the future makes it hard to give up the past.

― Chuck Palahniuk