“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.” – Clarke’s first law

It would be easy to point fingers at the crushing bureaucratic load we face at many of our premier research institutes. I think this load only compounds the real forces holding us back, acting as a sort of mindless ally in the quest for mediocrity. I for one can feel my ability to think and create being siphoned away by meaningless paperwork, approvals, training and mindless formality. The personal toll is heartbreaking, and taxpayers should be up in arms. Of course, most of this is driven by our scandal-mongering political system and the increasingly tabloidesque media. These are merely various forms of societal dissipation, driving entropy toward its all-consuming conclusion.

When I came across yesterday’s article in the Daily Beast (“Our Mindless Government Is Heading for a Spending Disaster”) on the book “The Rule of Nobody” by Philip K. Howard, it became clear that I’m not alone in feeling this way. Our Labs are actually not run by anyone, and certainly not by the management of the Labs. The problem with this approach is not partisan, but rather stems from a tendency toward laziness in how we govern. At the core of this trend is our inability to reinvent our governance, and beneath that failure lies the deeper issue: the fear of risk or failure. We have a society-wide inability to see failure for what it is: a necessary vehicle for success. Risk is the thing that allows us to step forward toward both accomplishment and failure; you cannot have one without the other. Somehow, as a culture, we have forgotten how to strive, to accept failure as a necessary element of a healthy country. Somehow this aversion has crept into our collective consciousness, and it is sapping our ability to accomplish anything of substance.

In scientific research, the inability to accept risk and the requisite failure is incredibly destructive. Research, at its essence, is doing something that has never been done before. It should be risky and thus highly susceptible to failure. Our ability to learn the limits of knowledge is intimately tied to failure. Yet failure is the very thing we are not encouraging as a society; in fact, failure is punished without mercy. The aggregate effect is a failure to accept the sort of risk that leads to large-scale success. To get a “Google” or a “moon landing” we have to fund, accept and learn from innumerable failures. Without the failures, the large successes will elude us as well.

Another force is the artificial limitation we place on our thinking under the guise of “it’s impossible.” Impossible also implies risk and a large chance of outright failure. We quit pushing the limits of what might be possible and escape into the comfortable confines of the safely possible. A third piece is the inability to marshal our collective efforts in pursuit of massive societal goals. Such goals capture the imagination and drive an orientation toward success beyond ourselves, toward greater achievements. Again, it is the inability to accept risk. The last force I’ll touch upon is the lack of faith in the creative abilities of mankind. Man’s creative energies have overcome limitations for millennia, and there is no reason to think this won’t continue. The impact of algorithmic improvement on computing is but one instance of the larger theme of man’s ability to create a better world.

It seems that my job is all about NOT taking risks, when the opposite should be true. Instead we spend all our time figuring out how to not screw up, how to avoid any failure. This, of course, is antithetical to success. All success, all expertise, is built upon the firm foundation of glorious failure and risk. Failure is how we learn, and risk helps to stoke the flames of failure. Instead we have grown to accept creeping mediocrity as the goal of our entire society. When the biggest goal at work is “don’t screw up,” it is hard to think of a good reason to do anything. We have projects with scheduled breakthroughs and goals that are easy to meet. Very few funded projects actually attack big goals. Instead instrumentalism abounds, and the best way to get funded is to solve the problem first, then use the result to justify more funding. It’s a vicious cycle, and it is swallowing too much of our effort.

Strangely enough, this whole vicious cycle also keeps us from doing the mundane. Since our efforts are so horrifically over-managed, there is no energy left to execute what should be the trivial aspects of the job. Part of this is related to the slicing and dicing of our work into pieces so small that any coherence is lost. The second part is the lack of any overarching vision of where we are going. The lack of big projects with scope kills the ability to do consequential tasks that should be easy. Instead we do all sorts of things that seem hard but really amount to nothing. We are all motion without any real progress. Some of us noted a few weeks ago that new computer codes used to be started every five to seven years. Then, about 25 years ago, that stopped. Now everything has to be built upon existing codes because it lowers the risk. We have literally missed four or five generations of new codes. This is failure on an epic scale, because no one will risk something new.

“Can we travel faster than the speed of light?” my son once asked me. A reading of the standard, known theories of physics would give a clear, unequivocal “No, it is impossible.” I don’t buy this as the ultimate response. A better and more measured response would be, “Not with what we know today, but there are always new things to be learned about the universe. Maybe we can, using physical principles that haven’t been discovered yet.” Some day we might travel faster than light, or effectively so, though it won’t look like Star Trek’s warp drive (or maybe it will; who knows). The key is to understand that what is possible or impossible is only a function of what we know today, and our state of knowledge is always growing.

In mathematics these limits on possibility often take the form of barrier theorems, which state what cannot be done. These barriers can be overcome if they are read liberally, with an eye toward loopholes. A common loophole is linearity. Linearity infuses many mathematical proofs and theorems, and the means of overcoming the limitation is to appeal to nonlinearity. One important example is Godunov’s theorem, which links formal accuracy and monotonicity: the barrier only exists for linear numerical methods, and a nonlinear numerical method can be both better than first-order accurate and monotone. The impossible was possible! It was simply a matter of thinking about the problem outside the box of the theorem.
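To make the loophole concrete, here is a minimal sketch in Python (not any particular production code) of the classic demonstration: a 1D linear-advection solver with a linear Lax–Wendroff flux, which is second-order accurate but oscillates at a discontinuity, versus a nonlinear minmod-limited flux, which stays monotonicity-preserving while still exceeding first-order accuracy in smooth regions. The grid size, CFL number, and pulse shape are arbitrary choices for illustration.

```python
def advect(u0, c, steps, limited):
    """Advect u with unit speed on a periodic grid at CFL number c.

    limited=False: linear Lax-Wendroff flux (2nd order, oscillatory at jumps).
    limited=True:  minmod-limited flux (nonlinear, monotonicity-preserving).
    """
    def minmod(a, b):
        # zero at extrema, else the smaller-magnitude one-sided difference
        if a * b <= 0.0:
            return 0.0
        return min(abs(a), abs(b)) * (1.0 if a > 0 else -1.0)

    u = list(u0)
    n = len(u)
    for _ in range(steps):
        flux = [0.0] * n  # flux[i] approximates the flux at interface i+1/2
        for i in range(n):
            um, ui, up = u[i - 1], u[i], u[(i + 1) % n]
            slope = (up - ui) if not limited else minmod(ui - um, up - ui)
            flux[i] = ui + 0.5 * (1.0 - c) * slope
        # conservative (flux-form) update; flux[i-1] wraps periodically
        u = [u[i] - c * (flux[i] - flux[i - 1]) for i in range(n)]
    return u

# square pulse advected 50 steps at CFL 0.5 on a 100-cell periodic grid
u0 = [1.0 if 20 <= i < 40 else 0.0 for i in range(100)]
linear = advect(u0, 0.5, 50, limited=False)  # over/undershoots near the jumps
limited = advect(u0, 0.5, 50, limited=True)  # stays within [0, 1]
```

The limiter makes the flux a nonlinear function of the data even though the advection equation itself is linear, which is exactly how the scheme slips through the assumptions of Godunov’s theorem.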

Most of the areas that have traditionally supported scientific computing are languishing today. Almost nothing in the way of big, goal-oriented projects exists to spur progress. The last such program was the ASCI program of the mid-1990s, which unfortunately focused too much on pure computing power as the route to progress. ASCI bridged the gap between the CPU-dominated early era and the growth of massively parallel computation. In fact, parallel computing has masked the degree to which we are collectively failing to use our computers effectively. This era is drawing to a close, and Moore’s law is rapidly dying.

While some might see the death of Moore’s law as a problem, it may be an opportunity to reframe the quest for progress. In the absence of computational improvements driven by hardware technology, the ability to progress could again rest with the scientific community. Without hardware growing in capability, the source of progress resides in the ability of algorithms, methods and models to improve. Even under the spell of Moore’s law, these three factors have accounted for more improvement in computational capability than hardware. What will our response be to losing Moore’s law? Will we invest appropriately in progress? Will we refocus our efforts on improving algorithmic efficiency, better numerical methods and improved modeling? Hope springs eternal!
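As a rough, back-of-envelope illustration of why algorithms can outpace hardware, consider the textbook asymptotic costs of solving a 2D Poisson problem on an n × n grid (N = n² unknowns). The exponents below are standard; the constants are dropped, so the ratios are order-of-magnitude estimates only, not measurements of any particular code.

```python
def op_counts(n):
    """Rough asymptotic operation counts (constants dropped) for solving
    a 2D Poisson problem on an n x n grid with N = n*n unknowns."""
    N = n * n
    return {
        "banded Gaussian elimination": N * n * n,  # O(N^2), bandwidth n
        "optimal SOR": N * n,                      # O(N^1.5)
        "multigrid": N,                            # O(N)
    }

# On a 1000 x 1000 grid the algorithmic gap alone spans six orders of
# magnitude -- comparable to decades of Moore's-law hardware doubling.
counts = op_counts(1000)
speedup = counts["banded Gaussian elimination"] // counts["multigrid"]
print(speedup)  # 1000000
```

This is the arithmetic behind the oft-repeated observation that, over the same decades Moore’s law was in force, better linear solvers alone delivered a comparable multiplicative gain, and the two gains multiply together.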

In the final analysis, such an investment requires a great deal of faith in man’s eternal ability to create, to discover and be inspired. History provides an immense amount of evidence that this faith would be well placed. As noted above, we have created as much if not more computational capability through ingenious algorithms, methods, heuristics, and models than our massive strides in computational hardware.

It is noteworthy that the phone in my pocket today has the raw computational power of a Cray 2. It sits idle most of the time and gets used for email, phone calls, texts and light web browsing. If you had told me 25 years ago that I’d have this power available to me, I would have been dumbstruck. Moreover, I don’t really use it for anything like I’d have used a Cray 2. The difference is that the same will almost certainly not happen in the next 25 years. The “easy” progress that came from simply riding the coattails of Moore’s law is over. We will have to think hard to progress and take a different path. I believe the path is clear; we have all the evidence needed to continue our progress.