No.

Scientific computing and high performance computing are virtually synonymous.  Should they be? Is this even a discussion worth having? 

It should be.  The connection shouldn’t be an article of faith.

I’m going to argue that perhaps they shouldn’t be so completely intertwined.  The energy in the computing industry is almost completely divorced from HPC, and HPC is trying to influence that industry to little avail.  By tying itself so tightly to HPC, scientific computing is probably missing opportunities to ride the wave of technology that is transforming society.  That transformation brings with it economic forces HPC never had, and it is unleashing changes that will have a profound impact on how our society and economy look for decades to come.

Computing is increasingly mobile and increasingly networked.  Access to information and computational power is omnipresent in today’s world.  It is no overstatement to say that computers and the Internet are reshaping our social, political, and scientific worlds.  Why shouldn’t scientific computing be similarly reshaped?

HPC is trying to maintain the connection between scientific computing and supercomputing.  Increasingly, supercomputing seems passé, a relic of the past, just as mainframes are relics.  Once upon a time, scientific computing and mainframes dominated the computer industry.  Government labs had the ear of the computing industry and, to a large extent, drove the technology.  No more.  Computing has become a massive element of the world’s economy, with science only a speck on the windshield.  The extent to which scientific research still attempts to drive computing is becoming ever more ridiculous and shortsighted.

At a superficial level, all the emphasis on HPC is reasonable, but it leads to a groupthink that is quite damaging in other respects.  We expect all of our simulations of the real world to get better if we have a bigger, faster computer.  In fact, for many simulations we have ended up relying upon Moore’s law to do all the heavy lifting.  Our simulations just get better because the computer is faster and has more memory; all we have to do is make sure the simulation rests on a convergent approximation.  This entire approach is reasonable, but it suffers from intense intellectual laziness.

There, I said it.  The reliance on Moore’s law is just plain lazy.

Rather than focus on smarter, better, faster solution methods, we just let the computer do all the work.  It is lazy.  As a result, the most common approach is simply to take the old-fashioned computer code and port it to the new computer.  Occasionally this requires us to change the programming model, but the intellectual guts of the program remain fixed.  Because consumers of simulations are picky, the sales pitch is simple: “You get the same results, only faster.”  “No thinking required!”  It is lazy, and it serves science, particularly computational science, poorly.

Not only is it lazy, it is inefficient.  We are failing to properly invest in advances in algorithms.  Study after study has shown that the gains from better algorithms exceed those from the computers themselves, despite the investment in computing being far larger than the investment in algorithms.  Think what a systematic investment in better algorithms could do.

It is time for this to end.  Moreover, there is a very dirty little secret under the hood of our simulation codes: for the most part, they utilize an ever-decreasing portion of the potential performance offered by modern computing, and the gap keeps widening.  Recently I was treated to a benchmark of the newest chips, and for the first time the actual runtimes for the codes started to get longer.  The new chips won’t even run the code faster, efficiency be damned.  A large part of the reason for such poor performance is that we have been immensely lazy about moving simulation forward for the last quarter of a century.

For example, I ran the Linpack benchmark on the laptop I’m writing this on.  The laptop is about a generation behind the top of the line, but it rates as a 50 GFLOP machine!  That is equivalent to the fastest computer in the world 20 years ago, one that cost millions of dollars.  My iPad 4 is equivalent to a Cray-2 (about 1 GFLOP), and I just use it for email, web browsing, and note taking.  Twenty years ago I would have traded my firstborn simply to have access to this; today it sits idle most of the day.  We are surrounded by computational power, and most of it goes to waste.
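If you want to try this at home, here is a minimal sketch of the same kind of back-of-the-envelope measurement.  It is a plain NumPy matrix multiply rather than the real Linpack benchmark, and the matrix size is just an illustrative choice:

```python
import time
import numpy as np

# Time a dense matrix multiply and convert it to a rough GFLOP/s figure.
# This is a crude stand-in for Linpack, not the official benchmark.
n = 2000
A = np.random.rand(n, n)
B = np.random.rand(n, n)

start = time.perf_counter()
C = A @ B
elapsed = time.perf_counter() - start

flops = 2.0 * n**3  # multiply-add count for an n x n matrix product
print(f"Sustained throughput: ~{flops / elapsed / 1e9:.1f} GFLOP/s")
```

On any recent laptop with a decent BLAS behind NumPy, this lands in the tens of GFLOP/s, which is exactly the point: supercomputer-class arithmetic now sits on every desk.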

The ubiquity of computational power is actually an opportunity to overcome our laziness and start doing something.  Most of our codes use about 1% of the available power, and worse yet, that 1% may soon look fantastic.  Back in the days of the Crays we could expect to squeeze out 25-50% of the machine’s power with sufficiently vectorized code.  Suppose I could run a code that achieved 20% of my laptop’s potential instead of 1%: that is a twentyfold speedup, and my 50 GFLOP laptop would behave like a one TFLOP computer running the old code.  No money spent, just working smarter.
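The arithmetic is trivial, but worth spelling out; a tiny sketch using the purely illustrative figures quoted above:

```python
peak_gflops = 50.0         # nominal laptop peak (illustrative figure from above)
typical_fraction = 0.01    # ~1% of peak: a naively ported legacy code
improved_fraction = 0.20   # ~20% of peak: Cray-era levels of efficiency

speedup = improved_fraction / typical_fraction
effective_gflops = peak_gflops * speedup
print(f"{speedup:.0f}x speedup -> like a {effective_gflops / 1000:.1f} TFLOP "
      f"machine running the unoptimized code")
```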

Beyond the laziness of porting old codes with old methods, we also expect the answers to get better simply by having less discretization error (i.e., a finer mesh).  This should be true, and normally is, but it ignores the role a better method can play; a small sketch below makes the point concrete.  Again, relying on brute force from a better computer is outright intellectual laziness.  To get real performance we need to write new algorithms and new implementations; it is not sufficient to simply port the codes.  We need to think, we need to ask the users of simulation results to think, and we need to have faith in the ability of the human mind to create new, better solutions to old and new problems.  And all of this applies only to the areas of science where computing is already firmly established; there are entirely new areas and opportunities that our intimately connected, computationally rich world has to offer.
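Here is that sketch: a toy comparison on a problem with a known answer, chosen purely for illustration.  Halving the mesh with a first-order method roughly halves the error, while switching to a second-order method on the same mesh buys several orders of magnitude:

```python
import math

# Toy illustration (not any production code): integrate y' = -y, y(0) = 1, to t = 1.
def euler(n):                        # first-order method
    h, y = 1.0 / n, 1.0
    for _ in range(n):
        y += h * (-y)
    return y

def heun(n):                         # second-order method (improved Euler)
    h, y = 1.0 / n, 1.0
    for _ in range(n):
        k1 = -y
        k2 = -(y + h * k1)
        y += 0.5 * h * (k1 + k2)
    return y

exact = math.exp(-1.0)
print("Euler, 100 steps:", abs(euler(100) - exact))  # ~2e-3
print("Euler, 200 steps:", abs(euler(200) - exact))  # ~1e-3: 2x finer mesh, 2x better
print("Heun,  100 steps:", abs(heun(100) - exact))   # ~6e-6: same mesh, far better
```

Refining the mesh buys a constant factor; a higher-order method changes the rate at which the error falls.  That is the kind of gain no amount of porting will deliver.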

These points are just the tip of the proverbial iceberg.  The deluge of data and our increasingly networked world offer other opportunities, most of which haven’t even been thought of yet.  It is time to put our thinking caps back on.  They’ve been gathering dust for too long.
