In algorithms, as in life, persistence usually pays off.

― Steven S. Skiena

Over the past year I’ve opined that algorithm (method) research is under-supported and under-appreciated as a source of progress in computing. I’m not going to backtrack one single inch on this. We are not putting enough effort into using computers better, and we are putting too much effort into building bigger, less useful, and harder-to-use computers. Without the smarts to use these machines wisely, this effort will end up being a massive misapplication of resources.

The problem is that the issues with algorithm research run even deeper. The algorithm research we do support is mostly misdirected in the same way: we have chosen to focus on the kind of algorithm research that does the least to advance our prospects for success. In other words, even within the spectrum of algorithm research there is no equality of impact.

The fundamental law of computer science: As machines become more powerful, the efficiency of algorithms grows more important, not less.

— Nick Trefethen

There are three fundamental flavors of algorithm research, with intrinsically different value to the capability of computing. The first is the development of new algorithms with improved properties compared to existing ones. The most impactful algorithmic research focuses on solving unsolved problems. This research is groundbreaking and almost limitless in impact; whole new fields of work can erupt from these discoveries. Not surprisingly, this sort of research is the most poorly supported. Despite its ability to have enormous and far-reaching impact, it is quite risky and prone to failure.

If failure is not an option, then neither is success.

― Seth Godin

It is the epitome of the risk-reward dichotomy. If you want a big reward, you need to take a big risk, or really lots of big risks. We as a society completely suck at taking risks; algorithm research is just one of innumerable examples. Today we don’t do risk and we don’t do long-term. Today we do low-risk and short-term payoff.

Redesigning your application to run multithreaded on a multicore machine is a little like learning to swim by jumping into the deep end.

—Herb Sutter

A second and kindred flavor of this research is the development of improved solutions. These improvements can lower the cost of solution through better scaling of operation count, or provide better accuracy. Such innovations open new vistas for computing and enable the solution of new problems by virtue of efficiency. This sort of research can be groundbreaking when it makes possible something that was previously out of reach due to inefficiency.

This form of algorithm research has been a greater boon to the efficiency of computing than Moore’s law. A sterling example comes from numerical linear algebra, where costly methods have been replaced by methods that put solving billions of equations simultaneously well within reach of existing computers. Another good example is the breakthroughs in the 1970s by Jay Boris and Bram van Leer, whose discretization methods allowed an important class of problems to be solved effectively. This powered a massive explosion in the capacity of computational fluid dynamics (CFD) to produce meaningful results. Without their algorithmic advances, CFD might still be ineffective for most engineering and science problems.
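To make the scaling point concrete, here is a minimal sketch in plain Python (the names and the toy problem are my own, not from the examples above) contrasting a dense Gaussian elimination solve, roughly n³/3 operations, with the Thomas algorithm, roughly 8n operations, on the same tridiagonal system. Exploiting structure this way is exactly the kind of algorithmic improvement that makes enormous systems tractable:

```python
def gauss_solve(A, b):
    """Dense Gaussian elimination (no pivoting): ~n^3/3 operations."""
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x


def thomas_solve(sub, diag, sup, b):
    """Thomas algorithm for tridiagonal systems: ~8n operations."""
    n = len(diag)
    cp, bp = [0.0] * n, [0.0] * n
    cp[0] = sup[0] / diag[0] if n > 1 else 0.0
    bp[0] = b[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - sub[i - 1] * cp[i - 1]
        cp[i] = sup[i] / denom if i < n - 1 else 0.0
        bp[i] = (b[i] - sub[i - 1] * bp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = bp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = bp[i] - cp[i] * x[i + 1]
    return x


# Discrete 1D Poisson system: tridiagonal (-1, 2, -1), right-hand side of ones.
n = 100
sub = [-1.0] * (n - 1)
diag = [2.0] * n
sup = [-1.0] * (n - 1)
rhs = [1.0] * n
A = [[diag[i] if i == j else
      sub[j] if i == j + 1 else
      sup[i] if j == i + 1 else 0.0
      for j in range(n)] for i in range(n)]
assert max(abs(p - q) for p, q in
           zip(gauss_solve(A, rhs), thomas_solve(sub, diag, sup, rhs))) < 1e-8
```

Both routines produce the same answer; the difference is purely in the operation count, which is the whole point: at n in the billions, the O(n³) path is simply unavailable no matter what hardware you buy.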

The third kind of algorithm research focuses on the computational implementation of existing algorithms. These days this typically means making an algorithm work on parallel computers, and increasingly on GPUs. This research certainly adds value and improves efficiency, but its impact pales in comparison to the other two kinds. Not that it isn’t important or useful; it simply doesn’t carry the same “bang for the buck.”
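The distinction is worth making concrete: implementation research leaves the algorithm alone and changes only how it runs on the hardware. Here is a minimal, hypothetical sketch (Python threads, names my own) of parallelizing an existing reduction. The asymptotic cost stays O(n); parallelism only divides the constant among workers, which is precisely why its impact is bounded in a way that a better algorithm’s is not:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(values, workers=4):
    """Chunked reduction: the same O(n) algorithm, with the work
    split into pieces and the partial sums computed concurrently."""
    n = len(values)
    if n == 0:
        return 0
    chunk = (n + workers - 1) // workers          # ceil(n / workers)
    pieces = [values[i:i + chunk] for i in range(0, n, chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(sum, pieces))    # one partial sum per chunk
    return sum(partials)
```

With p workers the best you can hope for is a factor-of-p speedup; compare that with the dense-versus-tridiagonal example above, where the speedup grows without bound as the problem gets larger.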

In the long run, our large-scale computations must inevitably be carried out in parallel.

—Nick Trefethen

Care to guess where we’ve been focusing for the past 25 years?

This last kind of research gets the lion’s share of the attention. One key reason is its relatively low risk: implementation work needs to be done, and it generally succeeds. Progress is almost guaranteed because of the non-conceptual nature of the work. This doesn’t imply that it isn’t hard, or that it requires less expertise; it just can’t compete with the impact of the more fundamental work. The change in computing brought on by the demise of Moore’s law has brought parallelism, and we need to make our stuff work on these computers.

Both the fundamental and the implementation work are necessary and valuable to conduct, but a proper balance between them is a necessity. The lack of tolerance for risk is one of the key factors contributing to this entire problem. Low-risk attitudes feed the dominant focus on computing hardware and the appetite for a continued reign of Moore’s law. They also compound the dearth of focus on more fundamental and impactful algorithm research. We are buying massively parallel computers, and our codes need to run on them; therefore the algorithms that comprise our codes need to work on these computers. QED.

The problem with this point of view is its absolute disconnect from the true value of computing. Computing’s true value comes from the ability to solve models of reality. We solve those models with algorithms (or methods). These algorithms are then represented in code for the computer to execute, and then we run them on a computer. The computer is the thing most distant from the value of computing (ironic, but true). The models are the most important thing, followed by how we solve them using methods and algorithms.

Our current view and the national “exascale” initiative represent a horribly distorted and simplistic view of how scientific value is derived from computing, and as such make for a poor investment strategy for the future. The computer, the thing at the greatest distance from value, is the focus of the program. In fact, the emphasis of the national program sits at the opposite end of the spectrum from the value.

I only hope we get some better leadership before this simple-minded mentality savages our future.

Extraordinary benefits also accrue to the tiny majority with the guts to quit early and refocus their efforts on something new.

― Seth Godin

