Where all think alike there is little danger of innovation.

—Edward Abbey

What do you do when you’re in a leadership position for a project that you’re sure is moving in the wrong direction?

If you’re a regular reader you can guess that it’s high performance computing, and that the direction has been wrong for a couple of decades (IMHO). Today and tomorrow we are just kicking the can down the road. The current direction is virtually identical to what we’ve been doing since the early-to-mid 1990s. There isn’t a lot of visionary thinking to be had. On the one hand, the imminent demise of Moore’s law promises some degree of disruption: we can’t continue to make progress the way we have been, and change will be thrust on us. On the other hand, the conditions for visionary thinking, risk taking, and progress are absent.

Without deviation from the norm, progress is not possible.

― Frank Zappa

While the business we are doing is stable today, the continued traditional emphasis has an aggregated impact on a host of issues. Being stable also means that the business is stagnant, which isn’t good for the science. Progress in modeling and simulation has been made largely in two regards: the computational power available has increased, and the way of conducting studies has matured. The stagnation is most evident in the codes; the methods and models in the codes are simply not moving forward meaningfully. In terms of methods and models, the codes are largely the same as those we used twenty years ago. Furthermore, most of the innovative and creative energy has gone into implementing the codes on modern computers. The result is a phalanx of missed opportunities whose implicit costs are massive. I’d like to sketch out some of these opportunities and the costs associated with missing them.

Societies in decline have no use for visionaries.

― Anaïs Nin

Historically, we have been rewarded greatly by improvements in algorithms, methods, and models that exceeded the benefits of faster computers. Despite this track record, support for continued development along these lines has languished and dropped in intensity. One might surmise that we have already picked off the low-hanging fruit and made the easy breakthroughs. I’m far more optimistic: massive improvements and innovations are not just possible, but are awaiting relatively easy synthesis into our current work. To achieve these gains we will have to discard some of the limitations we impose on our current execution of projects.

But knowing that things could be worse should not stop us from trying to make them better.

― Sheryl Sandberg

So what is holding us back?

A big part of the problem is the issue of “sunk cost”. The codes are now huge and quite complex. They represent massive investments of resources over years, if not decades. Program management is not interested in starting over, but rather in evolving capability forward. This is rather limited in scope, and largely takes the form of moving the codes whole cloth onto new computing platforms. For people with short time horizons (and/or attention spans) this is a safe path to success. The long-term costs are lost to the risk calculus currently employed. No one seems to realize that the code is merely a vehicle for intellectual products that can utilize automatic computation. Its value is based solely on the quality of the thinking it embodies and the quality of its implementation. Virtually all the effort today goes into implementation rather than into the thinking itself. Until we overcome this sunk-cost mentality, codes will remain intellectually static with respect to their defining applied character.

Restlessness is discontent — and discontent is the first necessity of progress. Show me a thoroughly satisfied man — and I will show you a failure.

― Thomas A. Edison

What are some of the things we are missing? Clearly one of the greatest sacrifices of the “sunk cost” code is static discretizations and models. The numerical methods that implement the physical models are generally completely intertwined with the code’s basic structure. Over time, these aspects of the code become a virtual skeleton for everything else the code does. Skeletal replacement surgery usually kills the patient, and that cannot be allowed; therefore we get stuck. New discretizations could provide far more accurate solutions, and new models could provide greater fidelity to reality, but this has been taken off the table to maintain continuity of effort. Part of the work that we need to conduct is a better understanding of how practical discretization accuracy is achieved. For most applications we don’t have smooth solutions, and the nominal notions of numerical accuracy do not hold. How do discretization choices impact this? And how can these choices be optimized given limited resources? Furthermore, changes in these areas are risky and never sure to succeed, while risk reduction and fear of failure are the preeminent maxims of project management today.
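To make the point about nominal accuracy concrete, here is a minimal sketch (my own illustration, not anything from a production code) of how the observed convergence rate collapses when the solution is not smooth. It runs first-order upwind for linear advection on a periodic domain; the function names and parameters are invented for the example.

```python
# A minimal sketch of observed convergence rates for smooth vs. non-smooth data.
# First-order upwind for u_t + u_x = 0 on a periodic unit interval; after one
# period the exact solution returns to the initial data, so the error is easy
# to measure. Setup and names are illustrative assumptions.
import numpy as np

def upwind_advect(u0, cfl=0.5, t_final=1.0):
    """Advance u_t + u_x = 0 for one period with first-order upwind."""
    n = u0.size
    dx = 1.0 / n
    dt = cfl * dx
    u = u0.copy()
    t = 0.0
    while t < t_final - 1e-12:
        step = min(dt, t_final - t)
        u = u - (step / dx) * (u - np.roll(u, 1))   # upwind difference
        t += step
    return u

def observed_order(initial_profile):
    """L1 errors on refined grids and the implied convergence rates."""
    errors, ns = [], [64, 128, 256, 512, 1024]
    for n in ns:
        x = (np.arange(n) + 0.5) / n
        u0 = initial_profile(x)
        u = upwind_advect(u0)                 # exact solution equals u0 here
        errors.append(np.mean(np.abs(u - u0)))
    return [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]

smooth = lambda x: np.sin(2 * np.pi * x)
square = lambda x: np.where(np.abs(x - 0.5) < 0.25, 1.0, 0.0)

print("smooth data rates:", observed_order(smooth))   # approach the nominal order, ~1
print("square wave rates:", observed_order(square))   # stall near ~0.5 in the L1 norm
```

For the sine wave the measured rates approach the scheme’s nominal first order, while for the square wave they stall near one-half, which is exactly the gap between textbook accuracy claims and what non-smooth applications actually see.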

Anyone who says failure is not an option has also ruled out innovation.

—Seth Godin

Moving on to other, more technical aspects of computing and their potential benefits, I’ll touch on two more missing elements. One of these is stability theory. As I noted a couple of posts ago, robustness is key to a code’s success. At a very deep level, robustness is a crude form of stability. The crudeness is a symptom of failings in current stability theory, which implies that we could do far better with a more extensive and useful theory. Part of this is defining a form of stability that mathematically captures the requirements for producing robust, physical results. Today we simply don’t have this; stability theory is only a starting point, and we have to kludge our way from there to robustness.
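As a hedged illustration of what current theory does and does not provide, the sketch below carries out classical von Neumann (linear) stability analysis for two simple advection schemes; the scheme choices and names are my own assumptions, not anything from the original post.

```python
# A small sketch of classical von Neumann (linear) stability analysis, the
# kind of theory the paragraph above calls only a starting point. It computes
# the amplification factor |G(theta)| of two schemes for u_t + u_x = 0.
import numpy as np

def max_amplification(scheme, cfl, thetas=np.linspace(0.0, np.pi, 400)):
    """Return the maximum |G| over wavenumbers for a scheme at a given CFL number."""
    if scheme == "ftcs":        # forward-time, centered-space
        G = 1.0 - 1j * cfl * np.sin(thetas)
    elif scheme == "upwind":    # forward-time, backward-space
        G = 1.0 - cfl * (1.0 - np.exp(-1j * thetas))
    else:
        raise ValueError(scheme)
    return np.abs(G).max()

for cfl in (0.5, 1.0, 1.5):
    print(f"CFL={cfl}: FTCS max|G|={max_amplification('ftcs', cfl):.3f}, "
          f"upwind max|G|={max_amplification('upwind', cfl):.3f}")
# FTCS exceeds 1 for any positive CFL (unconditionally unstable), while
# upwind stays at or below 1 for CFL <= 1 (stable under the CFL condition).
```

This kind of analysis cleanly identifies the CFL condition, but it is linear and constant-coefficient; it says nothing about the nonlinear, physically motivated robustness the paragraph above is asking for.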

Innovative solutions to new challenges seldom come from familiar places.

—Gyan Nagpal

A second area where we have suffered from a lack of progress is numerical linear algebra. We are thirty years on from the last big breakthrough, multigrid. Multigrid is viewed as the ultimate algorithm given its ideal scaling with respect to the number of unknowns (its cost is linear, while all other methods are super-linear). Since then we have moved to using multigrid as a preconditioner for Krylov methods, improving both, and we have implemented the method on modern computers (which is really hard). Thirty years is a long time, especially considering that earlier advances in this field came at a faster-than-decadal pace. A good question to ask is whether a sub-linear method can be defined. Is multigrid really the ultimate algorithm? I suspect that a sub-linear method can be discovered, and work on “big data” is pointing in that direction. Beyond this, we typically solve the linear algebra far more accurately (to very small residuals) than is probably necessary. It is done almost reflexively, with a better-safe-than-sorry attitude. This is a huge waste of effort, and someone should come up with a sensible way to set solver tolerances and optimize computational resources.
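The solver-tolerance point can be made with a small experiment. The sketch below runs a textbook conjugate gradient solver on a 1D Poisson system (an assumed, illustrative setup of my own) and counts how many iterations each residual tolerance costs.

```python
# A hedged sketch of the "how tight should solver tolerances be?" question.
# Plain conjugate gradient on a 1D Poisson system; we count iterations spent
# chasing ever smaller residuals. The problem setup is illustrative only.
import numpy as np

def poisson_1d(n):
    """Standard tridiagonal 1D Poisson matrix (Dirichlet boundaries)."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def cg(A, b, rtol):
    """Textbook conjugate gradient; returns the solution and iteration count."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    b_norm = np.linalg.norm(b)
    for k in range(1, 10 * len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= rtol * b_norm:
            return x, k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, k

n = 400
A = poisson_1d(n)
b = np.ones(n)
x_ref, _ = cg(A, b, rtol=1e-12)              # the "better safe than sorry" answer
for rtol in (1e-2, 1e-4, 1e-8, 1e-12):
    x, iters = cg(A, b, rtol=rtol)
    err = np.linalg.norm(x - x_ref) / np.linalg.norm(x_ref)
    print(f"rtol={rtol:7.0e}: {iters:4d} iterations, relative error {err:.1e}")
# Much of the iteration cost buys accuracy far below any plausible
# discretization error, which is the waste being pointed at above.
```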

The willingness to be a champion for stupid ideas is the key to greater creativity, innovation, fulfillment, inspiration, motivation, and success.

—Richie Norton

A big area for progress is uncertainty quantification. The methods today are clearly focused on modeling and parametric uncertainties attacked with sampling methods; these are epistemic uncertainties, reflecting our lack of knowledge. While sampling is general, it is inefficient. Natural variability, or aleatory uncertainty, is largely unexplored computationally. This reflects pointedly on the modeling approach we use. Key to this is the generally homogeneous nature of material models, even though the materials are quite heterogeneous at the scale of the discretization. This is a clear place where the maintenance of codes over long periods of time works against progress. Most of the potentially more efficient uncertainty methods are deeply intrusive and don’t fit existing code bases. Further complicating matters, the development of these methods has not focused on models sufficient for applications; it has focused on “toy” problems. To progress we need to take significant risks and tackle real problems with innovative methods. Our system today is not set up to allow this.
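As a simple illustration of why sampling is general but inefficient, the sketch below estimates the mean output of a toy model with one uncertain parameter; the model, the distribution, and the names are all assumptions made for the example, and the error shrinks only like one over the square root of the sample count.

```python
# A minimal sketch (my own illustration, not from the post) of sampling-based
# parametric UQ: Monte Carlo error in an output mean decays like 1/sqrt(N)
# no matter how smooth the underlying model is.
import numpy as np

rng = np.random.default_rng(0)

def model(k):
    """A stand-in simulation output with one uncertain parameter k."""
    return np.exp(-k)          # e.g., attenuation with an uncertain coefficient

# Uncertain parameter: k ~ Normal(mu, sigma). The exact mean output is
# E[exp(-k)] = exp(-mu + sigma^2 / 2) (mean of a lognormal variable).
mu, sigma = 1.0, 0.1
true_mean = np.exp(-mu + 0.5 * sigma**2)

for n_samples in (10, 100, 1_000, 10_000, 100_000):
    k = rng.normal(mu, sigma, size=n_samples)
    estimate = model(k).mean()
    print(f"N={n_samples:6d}: estimate={estimate:.6f}, "
          f"error={abs(estimate - true_mean):.2e}")
# Each factor of 100 in samples buys roughly one extra digit of accuracy,
# which is why more efficient but code-intrusive UQ methods are attractive.
```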

Dreamers are mocked as impractical. The truth is they are the most practical, as their innovations lead to progress and a better way of life for all of us.

—Robin S. Sharma

Expect to see a lot of money going into computing to support “extreme” or “exascale” initiatives. It is too bad that this effort is largely misplaced and inefficient. The chosen approach is grossly imbalanced and not informed by historical perspective. The work we are not doing is risky, but capable of massive benefit. Current management models seem to be immune to measuring opportunity cost while amplifying the tendency to avoid risk and failure at all costs.

Never confuse movement with action.

― Ernest Hemingway
