This is especially true with Moore’s law on its deathbed. With Moore’s law going away, it isn’t even going to be a contest in less than ten years.

This comes on the heels of 20 years of ignoring the following wisdom: “*The fundamental law of computer science: As machines become more powerful, the efficiency of algorithms grows more important, not less.*” – Nick Trefethen, Oxford University

Before digging into the depths of the topic, I’d like to point out that all of you make intimate contact with algorithms every day. We collectively can’t live without them in our computerized world. An algorithm defines Google. Without the algorithm there is no Google. The computers powering the algorithm are an afterthought, a necessary detail, but not the soul of what Google does. The world with Google is different from the world without it. More perniciously, if the Google algorithm had never been invented, we’d never miss it. Today Google defines access to information, and its ideas have built one of the biggest businesses on Earth. This is the power of algorithms laid bare: the power to change whatever they touch. Many other algorithms matter too, such as those on our smartphones, tablets and computers that let commerce take place securely, give us access to music, connect us with friends, and so on. The algorithm is more important than the computer itself. The computer is the body, but the algorithm is the soul, the ghost in the machine.

Last week I tried to make the case that scientific computing needs to diminish its emphasis on high performance computing. Supercomputers are becoming a thing of the past, and Moore’s law is ending. More importantly, the nature of computing is changing in ways far beyond the control of the scientific computing community. Trying to maintain the focus on supercomputing is swimming against an incoming tidal wave as mobile computing and the Internet become the dominant forces in computing.

Instead, we need to start thinking more about how we compute, which implies that algorithms and modeling should be our emphasis. Even with Moore’s law in force, the twin impacts of modeling and algorithm improvement would be more powerful. With Moore’s law going the way of the Dodo, thinking is the only game in town. The problem is that funding and emphasis are still trying to push supercomputing like a bowling ball up a garden hose. In the end we will probably waste a lot of money that could have been invested more meaningfully in algorithms, modeling and software. In other words, we need to invest where we have leverage instead of undertaking the fool’s errand of a wasteful, nostalgic recreation of a bygone era.

Let’s start by acknowledging that the scaling Moore’s law gives us is almost magical. For scientific computing, though, the impact is heavily taxed by the poor scaling of improvements in computed solutions. Take the archetype of scientific computing, the big 3-D calculations that serve as the use case for supercomputers. Add time dependence and you have a 4-D calculation. Typically, answers to these problems improve with first-order accuracy if you are lucky (this includes problems with shock waves and direct numerical simulation of turbulence). This means that if you double the mesh density, the solution’s error goes down by a factor of two. That mesh doubling actually costs 16 times as much (if your parallel efficiency is perfect, which it never is). The factor of 16 comes from having eight times as many computational points/cells/volumes and needing twice as many time steps. So you need almost a decade of Moore’s law improvement in computers to enable this.

What if, instead, you developed a method that cut the error in half? Now you get the same accuracy as a decade of advances in supercomputing overnight (or in the two or three years a research project needs), for a fraction of the cost of the computers. Moreover, you can run the code on the computers you already have. This is faster, more economical, and adds to the base of human knowledge.
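The arithmetic above can be sketched in a few lines. This is only an illustration of the scaling argument, with the function name and defaults my own invention: for a method of a given order of accuracy in a given number of space dimensions plus time, it computes the cost multiplier needed to reduce the error by a given factor through mesh refinement.

```python
# Sketch of the cost argument above: for an order-p method in d space
# dimensions plus time, cutting the error by a factor f requires refining
# the mesh by f**(1/p), and the work grows with the (d+1) power of that
# (d+1 because the time step must shrink along with the mesh spacing).

def refinement_cost(error_reduction, space_dims=3, order=1):
    """Cost multiplier to cut the error by `error_reduction` via mesh
    refinement, for an order-`order` method in `space_dims` dims plus time."""
    mesh_refinement = error_reduction ** (1.0 / order)
    return mesh_refinement ** (space_dims + 1)

print(refinement_cost(2))            # 16.0: halving the error costs 16x the work
print(refinement_cost(2, order=2))   # ~4.0: a second-order method pays far less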

So why don’t we take this path? It doesn’t seem to make sense. Part of the reason is the people funding scientific computing. Supercomputers (or Big Iron as they are affectionately known) are big capital outlays. Politicians and computer executives love this because they are big and tangible: you can touch, hear and see them. Algorithms, models and software are ephemeral, exotic and abstract. Does it have to be this way? I’m optimistic that change is coming. The new generation of leaders is beginning to understand that software and algorithms are powerful. Really, really powerful. Anyone out there heard of Google? The algorithm and software are the soul of one of the biggest and most powerful businesses on the face of the Earth. All this power comes from an algorithm that gives us access to information as never before. It is the algorithm that is reshaping our society. Google still buys a lot of computing power, but it isn’t exotic or researchy; it’s just functional, an afterthought to the algorithmic heart of the monster. Maybe it is time for the politicians to take notice.

We should have already noticed that software is more important than computers. Anyone remember Microsoft? Microsoft’s takedown of IBM should have firmly planted in our minds the idea that software beats hardware. One of the key points is that scientific computing, like the politicians, hasn’t been reading the lessons of world events correctly. We are stuck in an old-fashioned mindset. We still act as if Seymour Cray were delivering his fantastic machines to the Labs, dreaming wistfully of those bygone days. It is time to get with the times. Today software, algorithms and data rule the roost. We need to mount a resonant effort to ride this wave instead of fighting against it.

To strengthen my case, let me spell out a host of areas where algorithms have provided huge advances and should have more to give. A great resource for this is the ScaLeS workshop held in 2003, or the more recent PITAC report. In addition to the cases spelled out there, a few more can be found in the literature. The aggregate case is that advances in individual algorithmic technologies alone keep up with Moore’s law, but aggregations of algorithms for more complex applications provide benefits that outpace Moore’s law by orders of magnitude! That’s right, by factors of 100, or 1000, or more!

Before spelling a few of these cases out, something about Moore’s law needs to be noted. Moore’s law applies to computers, but the growth in computer capability is actually powered by an ensemble of technologies comprising the computer. Instead of tracking the growth in capability of a single area of science, computer capability is the integration of many disciplines. The advances in different areas are uneven, but taken together they provide a smoother progression in computational performance. Modeling and simulation work the same way: algorithms from multiple areas work together to produce a capability.

Taken alone, improvements in algorithms tend to be quantum leaps when a breakthrough is made. This can easily be seen in the first of the cases I spell out below, numerical linear algebra. This area of algorithm development is a core method that very many simulation technologies depend upon. A new algorithm comes along that changes the scaling of the methodology, and the performance of the algorithm jumps. Every other code that needs this capability jumps too, but these codes depend on many algorithms, and advances in each are independent.

Case 1: Numerical linear algebra – this is the simplest case and close to the core of the argument. Several studies have shown that the gains in efficiency (essentially scaling in the number of equations solved) from linear algebra algorithms come very close to equaling the gains achieved by Moore’s law. The march over time from direct solvers, to banded solvers, to relaxation methods, to preconditioned Krylov methods, and now to multigrid methods has provided a steady advance in the efficiency of many simulations.
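As a rough illustration of why this march matters, here is a back-of-the-envelope comparison using the standard textbook complexity exponents for solving a sparse system from a 2-D Poisson-type problem with N unknowns. The exponents are the commonly quoted scalings; the table and function names are my own illustrative choices, and real performance depends heavily on the problem.

```python
# Rough textbook work scalings, W ~ N**p, for solving a 2-D Poisson-type
# system with N unknowns. Constants are ignored; only the trend matters.
SOLVER_EXPONENT = {
    "dense direct (LU)":  3.0,  # O(N^3)
    "banded direct":      2.0,  # O(N^2), bandwidth ~ sqrt(N)
    "Gauss-Seidel":       2.0,  # O(N) sweeps of O(N) work each
    "preconditioned CG":  1.5,  # O(N^1.5)
    "multigrid":          1.0,  # O(N), optimal
}

def work(solver, n_unknowns):
    """Relative operation count for a given solver at a given problem size."""
    return n_unknowns ** SOLVER_EXPONENT[solver]

n = 10**6  # a million unknowns
for name in SOLVER_EXPONENT:
    ratio = work(name, n) / work("multigrid", n)
    print(f"{name:>18}: multigrid is {ratio:.0e}x cheaper")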

Case 2: Finite differences for conservation laws – This is a less well-known instance, but equally compelling. Before the late 1970s, simulations had two options: high-order methods that produced oscillations, or low-order dissipative methods like upwind (or donor cell, as the Labs call it). Oscillatory methods like Lax-Wendroff are stabilized with artificial dissipation, which was often heavy-handed. Then limiters were invented. All of a sudden one could have the security of upwind with the accuracy of Lax-Wendroff, without the negative side effects. Broadly speaking, the methods using limiters are non-oscillatory methods, and they are magic.

“*Any sufficiently advanced technology is indistinguishable from magic.*” – Arthur C. Clarke

More than simply changing the accuracy, limiters changed the solutions to be more physical. They changed the models themselves. First-order methods have a powerful numerical dissipation that de facto laminarizes the flow. In other words, you never get anything that looks remotely like turbulence with a first-order method. With second-order limited methods you do. The flows act turbulent; the limiters give you an effective large eddy simulation! For example, look at accretion discs: they never form (or they flatten) with first-order methods, and with second-order limited methods, BINGO, they form. The second-order non-oscillatory method isn’t just more accurate, it is a different model! It opens doors to simulations that were impossible before it was invented.

Solutions were immediately more accurate. Even more accurate limiting approaches have since been developed, leading to high-order methods. These methods have been slow to be adopted, in part because of the tendency to port legacy methods, and in part because of just how revolutionary the original limiters were. New opportunities exist to further these gains in accuracy. For now, it isn’t clear whether the newer, more accurate methods can promise the revolutionary advances offered by the first non-oscillatory methods, but one can never be certain.
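For the curious, here is a minimal sketch of the kind of method described above: 1-D linear advection of a square pulse, comparing a MUSCL-type reconstruction with the minmod limiter against plain first-order upwind. The grid size, CFL number, and function names are illustrative choices of mine, not any particular production scheme.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: take the smaller-magnitude slope, or zero at an extremum."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def advect(u, cfl, steps, limited=True):
    """Advance u_t + u_x = 0 on a periodic grid with upwind-biased fluxes."""
    u = u.copy()
    for _ in range(steps):
        if limited:
            # limited slopes give a second-order value at each cell's right face
            slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
            face = u + 0.5 * (1.0 - cfl) * slope
        else:
            face = u  # first-order upwind: the face value is the cell value
        u = u - cfl * (face - np.roll(face, 1))  # conservative flux difference
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # square pulse
u_limited = advect(u0, cfl=0.5, steps=200)
u_upwind = advect(u0, cfl=0.5, steps=200, limited=False)
# The limited solution stays within [0, 1] (non-oscillatory, like upwind)
# while keeping the pulse edges far sharper (accurate, like Lax-Wendroff).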

Case 3: Optimization – this was spelled out in the PCAST report from 2010. Algorithms improved the performance of optimization by a factor of 43,000 over a nearly twenty-year period; computer hardware only improved things by a factor of 1000. Yet we seem to systematically invest more in hardware. Mind-bending!

Case 4: Plasma Physics – The ScaLeS report spelled out the case for advances in plasma physics, which more explicitly combined algorithms and modeling for huge advances. Over twenty years algorithmic advances coupled with modeling changes provided more than a factor of 1000 improvement in performance where computational power only provided a factor of 100.

The plot from the report clearly shows the “quantum” nature of the jumps in performance, as opposed to the “smooth” plot for Moore’s law. It is not clear what role the nature of the improvement plays in its acceptance. Moore’s law provides a “sure,” steady improvement, while algorithms and modeling provide intermittent jumps in performance at uneven intervals. Perhaps this embodies the short-term thinking that pervades our society today. Someone paying for computational science would rather get a small but sure improvement than the risky proposition of a huge but very uncertain advance. It is a sad commentary on the state of R&D funding.

Case 5: N-Body problems – A similar plot exists for N-body simulations where algorithms have provided an extra factor of 1000 improvement in capability compared with hardware.

Case 6: Data Science – There have been instances in this fledgling science where a change in an analysis algorithm can speed up the achievement of results by a factor of 1000.

We’ve been here before. The ScaLeS workshop back in 2003 tried to make the case for algorithms, and the government response has been to double down on HPC. All the signs point to a continuation of this foolhardy approach. The deeper problem is that those who fund science don’t seem to have much faith in the degree to which innovative thinking can solve problems. It is much easier to just ride the glide path defined by Moore’s law.

That glide path is now headed for a crash landing. Will we just limp along without advances, or invest in the algorithms that can continue to provide an ever-growing level of computational simulation capability for the scientific community? This investment will ultimately pay off in spurring the economy of the future and allowing job growth, along with the traditional benefits in national security.

A couple of additional aspects of this change are notable. Determining whether a new algorithm and modeling approach is better than the legacy one is complex, and it is a challenge to verification and validation methodology. Running a legacy method on a finer mesh is a simple verification and error estimation problem (which is too infrequently actually carried out!). A new method and/or model often produces a systematically different answer that requires much more subtle examination. This carries over to the delicate matter of instilling confidence in those who might use the new methodology. In many cases users have already accepted the results computed with the legacy method, and the nearby answers obtainable with a refined mesh are close enough to inspire confidence (or simply be familiar). Innovative approaches producing different-looking answers will induce the opposite effect and inspire suspicion. This is a deep socio-cultural problem rather than a purely technical issue, but in its solution lie the roots of success or failure.


Dan Segalman said:

Bill, your points are all good, but the situation is worse than you describe. From my perspective, Moore’s law (actually Moore’s trend) has been off for over a decade because much of the speedup due to parallelism has been illusory. As the number of processors has grown, the number of categories of problems that can take full advantage of such machines has diminished. Fortunately for fluid mechanics people, their problems continue to scale. For many other categories of problem, such as solid mechanics with free contact, MP computing has not been so impressive. For the many problems falling by the wayside of MP computing, your observation about algorithms is especially relevant.

Another observation has to do with relevance. There are categories of problem – such as those governed by Navier-Stokes – where refining meshes can significantly improve the quality of the answer. There are other categories where discretization error is no longer the major concern; missing and approximate physics is the core limitation on the quality of prediction. For these cases, bigger hardware offers at best a more precise solution to the wrong equations. This situation would seem to call for enhanced investment in the science underlying our calculations.

Bill Rider said:

Dan, thanks for the comment. I’d go even farther and say that weak scaling is largely a marketing ploy that has papered over the death of Moore’s law, which has been ailing for a while and is now in critical condition. I highly recommend Sutter’s “Welcome to the Jungle” as a brilliant read on what is happening.

The differences between fluid mechanics and solid mechanics run deeper still. The foundations of the fields are different, and their connection to the basic mathematics is very important. I would contend that the numerical analysis roots of solid mechanics are very weak, while those of fluid mechanics are strong. This as much as anything has conspired to make fluid mechanics work better with computing.

I hope you’d agree with my primary contention: thinking is the key to improvement, not the lazy habit of simply waiting for faster computers to make things better by fiat.
