The greater danger for most of us lies not in setting our aim too high and falling short; but in setting our aim too low, and achieving our mark.
― Michelangelo Buonarroti
The DOE ASC program has turned into "Let's create a program that will replace the old generation of legacy codes with a new generation of legacy codes." By this measure the program, which just celebrated its 20th anniversary, has been a massive success. Unfortunately this end product is not in service to our National security; it is a threat.
One of the reasons I have been given for some of the work we are doing is the need to "preserve our code base." This code base is described as a multi-billion-dollar investment that DOE has made and must maintain for the future. Nothing could be further from the truth. It is one of the most defeatist and insulting things I can imagine, naïve and simplistic at its core. It makes me want to puke.
Why should I have such a strong and visceral reaction to a statement of "support" for the importance of modeling and simulation work? After all, preserving the code base comes with funding for lots of work and the purchase of super-exotic supercomputers that seem really cool (they're really big and complicated with lots of flashing lights, plus they cost a shit-ton of money). My problem is the lack of faith this approach shows in the ability of our current scientists to produce anything of intellectual value. Instead of valuing the creativity and creation of knowledge by our current generation of scientists, we are implicitly valuing only the contributions of the past. We should value the work of the past, but as a foundation to build upon, not an idol to worship. The impact of this approach is that the value of today's work is diminished, and the careers of current scientists are reduced to the point of simply being caretakers. It makes today's scientists mindless high priests of the past. We end up asking very little of them in terms of challenge and accomplishment, and harm the Nation's future in the process. Hence the "makes me want to puke" comment.
So why the hell does this messaging exist?
It is a rather feeble attempt to justify the work we are already doing. It is feeble because it completely misrepresents that work and creates a harmful narrative. The narrative exists because people simply don't understand what "code" is. They think of code like a bridge that, once built, simply does the job for a very long time. Code is nothing at all like a bridge or a building, and trying to manage it in the manner being promoted is dangerous, destructive and borders on incompetence. It is certainly an attitude born of complete ignorance.
Cooking requires confident guesswork and improvisation – experimentation and substitution, dealing with failure and uncertainty in a creative way
― Paul Theroux
A much better analogy is cooking. Code is simply the ingredients used to cook a dish. Good ingredients are essential, but insufficient to assure a great meal. Moreover, food spoils and needs to be thrown out, replaced, or exchanged for something better. Likewise, parts of the code are in constant need of replacement, repair, or simply being thrown out. The computing hardware is much like the cooking hardware: the stovetop, oven, food processors, etc., which are important to the process but never determine the quality of the meal. They may determine the ease of preparation, but almost never the actual taste and flavor. In the kitchen nothing is more important than the chef. Nothing. A talented chef can turn ordinary ingredients into an extraordinary culinary experience. Give that same talented chef great ingredients, and the resulting dining experience can be absolutely transcendent.
Our scientists are the chefs, and their talents determine the value of the code and its use. Without those talents the same code is rendered utterly ordinary. The code is merely a tool that translates simple instructions into something the computer can understand. In skilled hands it can render the unsolvable solvable and unveil an understanding of reality invisible to experiment. In unskilled hands, it can use a lot of electricity and fool the masses. With our current attitude toward computers we are turning Labs once stocked with outstanding ingredients and masterful chefs into fast-food fry cooks. The "preserve the code base" narrative isn't just wrong, it is downright dangerous and destructive.
…no one is born a great cook, one learns by doing.
― Julia Child
This attitude now defines modeling and simulation in support of our Nation's nuclear weapons, and that should worry everyone a lot. Rather than investing in knowledge, talent and people, we are investing our energy in keeping old, stale code alive as our computers change. Of course we are evolving our computers in utterly idiotic ways that do little or nothing to help us solve the problems we really care about. Instead we are designing and evolving our computers to solve problems that only matter for press releases. More and more, the computers that make for good press releases are the opposite for real work; each new generation just sucks that much harder at solving real problems.
Computing at the high end of modeling and simulation is undergoing great change in a largely futile endeavor to squeeze out what little life Moore's law has left. The truth is that Moore's law, for all intents and purposes, died a while ago, at least for real codes solving real problems. It lives on only in its zombie guise: a benchmark built around dense linear algebra that has no relevance to the codes we actually buy computers for. So I am right in the middle of a giant bait-and-switch scheme, one that depends on those cutting the checks for the computers being even more naive and ignorant than those defining the plan for the future of computing.
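The gap between benchmark glory and real-code misery can be made concrete with a simple roofline estimate: attainable performance is the lesser of peak flops and arithmetic intensity times memory bandwidth. The sketch below is illustrative only; the peak and bandwidth figures are assumed round numbers, not those of any particular machine.

```python
# Back-of-the-envelope roofline model:
#   attainable GF/s = min(peak, arithmetic_intensity * bandwidth)
# The machine numbers below are illustrative assumptions, not a real system.
PEAK_GFLOPS = 10_000.0   # hypothetical peak floating-point rate
BANDWIDTH_GBS = 200.0    # hypothetical memory bandwidth

def roofline(ai_flops_per_byte):
    """Attainable GF/s for a kernel with the given arithmetic intensity."""
    return min(PEAK_GFLOPS, ai_flops_per_byte * BANDWIDTH_GBS)

# Dense matrix multiply (the benchmark kernel): ~2n^3 flops over ~3 * 8 * n^2
# bytes of data, so intensity grows linearly with n -> compute bound.
n = 4096
ai_gemm = (2 * n**3) / (3 * 8 * n**2)

# A 7-point stencil sweep (typical of real PDE codes): ~8 flops per point over
# roughly two 8-byte doubles of traffic per point (read + write), assuming
# perfect caching -> a fixed, small intensity -> memory bound.
ai_stencil = 8 / (2 * 8)

print(f"GEMM    AI = {ai_gemm:6.1f} flops/byte -> {roofline(ai_gemm):8.1f} GF/s")
print(f"stencil AI = {ai_stencil:6.1f} flops/byte -> {roofline(ai_stencil):8.1f} GF/s")
```

The benchmark kernel runs at peak; the stencil, standing in for a real code, is pinned to the bandwidth ceiling at roughly one percent of peak. Buying for the first number while running the second is the bait and switch in miniature.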
At the middle of this great swindle is code. What is code, or more properly, code for solving models used to simulate the real world? The simplest way to think about code is as a recipe that a "master" chef created to produce a model of reality. A more subtle way to think about code is as a record of the intellectual labor that went into defining and solving models proposed to simulate reality. Digging deeper, code is a way of taking a model of reality and solving it generally, without making gross assumptions to render it analytically tractable. The model is only as good as the intellect and knowledge base used to compose it, in conjunction with the intellect and knowledge used to solve the problem. At the deepest level, the code is only as good as the people using it. By not investing in the quality of our scientists we are systematically undermining the value of the code. For the scientists to be good, their talent must be developed through engaging in the solution of difficult problems.
If we stay superficial and dispense with any and all sophistication, then we get rid of the talented people and get by with trained monkeys. If you don't understand what is happening in the code, it just seems like magic. With increasing regularity, the people running these codes treat them like magical recipes for simulating "reality." As long as the reality being simulated isn't actually being examined experimentally, the magic works. If you have magic recipes, you don't change them, because you don't understand them. This is what we are creating at the Labs today: trained monkeys using magical recipes to simulate reality.
In a lot of ways the current situation is quintessentially modern and exceptionally American in tenor. We have massive computers purchased at great cost, running magical codes written by long-dead (or just retired) wizards and maintained by a well-paid, well-educated cadre of peasants. Behind these two god-awful reasons to spend money is a devaluing of the people working at the Labs. The development of talent and the creation of intellectual capital by that talent are completely absent from the plan. It creates a working environment that is completely backward-looking and devoid of intellectual ownership. It is draining the Labs of quality and undermining one of the great engines of innovation and ingenuity for the Nation and the World.
The computers aren’t even built to run the magical code, but rather to run a benchmark that only produces results for press releases. Running the magical code is the biggest challenge for the serfs because the computers are so ill-suited to their "true" purpose. The serfs are never given the license or ability to learn enough to create their own magic; all their efforts go into simply maintaining the magic of a bygone era.
What could we accomplish if we knew we could not fail?
― Eleanor Roosevelt
What would be better?
One option would be to stop buying computers whose sole purpose is to create a splashy press release, and that then struggle forever to run the magical codes. Instead we should build computers that are optimized, within constraints, to solve the problems the purchasing agencies actually need solved. We could work to push back against the ever-steeper decline in realized performance. Maybe we should actually design, build and buy computers we actually want to use. What a novel concept: buy a computer you want to use instead of one you are forced to use!
That, my friends, is the simpler thing to achieve. The much more difficult thing is overcoming the magical-code problem. The first step in overcoming magical code is to show the magic for what it is: the product of superior intellect and clever problem solving, and nothing more. We have to allow ourselves to create new solutions to new problems, grounded in the past but never chained to it. The codes we are working with solve the problems posed in the past, and the problems of today are different.
One of the biggest issues with the magical codes is their masterful solution of the problems they were created to solve. But those are not the problems we need to solve today. The questions driving technological decision-making today are different from yesterday's. Even if there were a good reason to preserve the code base (there isn't), the code base is solving the wrong problems; it is solving yesterday's problems (really yesteryear's, or yester-decade's).
All of this still avoids the impact of solution algorithms on efficiency. As others and I have written, algorithms can do far more than computers to improve the efficiency of solution. Current algorithms are an important part of the magical recipes in current codes, yet we are generally doing nothing to improve the algorithmic performance of our codes; we simply push the existing algorithms along into the future.
This is another form of the intellectual product (or lack thereof) that the current preserve-the-code-base attitude favors. We completely foreclose the possibility of doing anything better algorithmically than we did in the past. Historically, improvements in algorithms provided vastly greater advances in capability than Moore's law did. I say historically because these advances largely occurred before the turn of the century (i.e., 2000). In the fifteen years since, progress due to algorithmic improvements has ground to a virtual halt.
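The historical claim can be quantified with the classic textbook comparison of operation counts for solving a model 2D Poisson problem with N unknowns. The scalings below are standard results from numerical linear algebra; the script simply tabulates them (constants are omitted, so these are scaling estimates, not timings).

```python
# Classic operation-count scalings for solving a 2D Poisson problem with
# N unknowns (the standard textbook comparison). Constants are dropped, so
# these are order-of-magnitude estimates, not measured timings.
N = 1_000_000  # a 1000 x 1000 grid

solvers = {
    "dense Gaussian elimination":  N**3,
    "banded Gaussian elimination": N**2,
    "Jacobi iteration":            N**2,
    "conjugate gradient":          N**1.5,
    "FFT-based fast solver":       N * 20,   # ~ N log2(N)
    "multigrid":                   N,
}

for name, ops in sorted(solvers.items(), key=lambda kv: -kv[1]):
    print(f"{name:28s} ~{ops:.1e} operations")

# The gap between banded elimination and multigrid grows like N itself:
speedup = solvers["banded Gaussian elimination"] / solvers["multigrid"]
print(f"algorithmic speedup: ~{speedup:.0e}x")
```

For a million unknowns, moving from banded elimination to multigrid is worth roughly a factor of a million, comparable to what Moore's law delivered over about three decades, and unlike hardware the gain keeps growing with problem size. That is the intellectual path we have abandoned.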
All the energy in scientific computing has gone into implementing existing algorithms on the new generation of genuinely awful computers. Instead of investing in a proven intellectual path for progress, one that historically paired with computer improvements, we have shifted virtually all effort into computers and their direct consequences. Algorithmic research is risky and produces many failures; it takes a great deal of tolerance for failure to invest enough to get the big payoff.
Our funding agencies have almost no tolerance for failure, and without tolerance for failure the huge successes are impossible. The result is a systematic lack of progress and a complete reliance on computer hardware for improvement. This path ultimately and undeniably leads to a complete dead end. In the process of reaching it we will sacrifice an entire generation of scientists to this obviously sub-optimal and stupid approach.
Ultimately, my irritation over the current path is directed primarily at the horrible waste of opportunity it represents. There is so much important work to be done to improve the quality and impact of modeling and simulation. At this point computing hardware might be the least important thing to work on; instead it is the focal point.
Much greater benefits could be realized through developing better models, extending physical theories, and making fundamental improvements in algorithms. Each of these areas is risky and difficult research, but offers massive payoffs with each successful breakthrough. The rub is that breakthroughs are not guaranteed; they require an element of faith in the ability of human intellect to succeed. Instead we are placing our resources behind an increasingly pathetic status quo. Part of the reason for continuing this approach is merely the desire of current leadership to take a virtual victory lap by falsely claiming its success.
In a variety of fields, the key aspects of modeling and simulation that evade our grasp today are complex chaotic phenomena that are not well understood. In fluid dynamics, turbulence continues to be vexing. In solid mechanics, fracture and failure play a similar role in limiting progress. Both areas are in dire need of fresh ideas that might break down the collective failures and allow progress. In neither area will massive computing provide the hammer blow; only by harnessing human creativity and ingenuity will these areas advance.
In many ways I believe that one of the key aspects limiting progress is our intrinsic devotion to deterministic models. Most of our limiting problems lend themselves more naturally to non-deterministic models. These require new mathematics, new methods and new algorithms to unleash their power. Faster computers are always useful, but without the new ideas these faster computers will simply waste resources. The issue isn't that our emphasis is unambiguously bad, but rather that it is grossly imbalanced and out of step with where our priorities should be.
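A toy example makes the deterministic-versus-statistical point concrete. The logistic map below is only a stand-in for a chaotic system, chosen for illustration rather than taken from any of the codes discussed: pointwise deterministic prediction collapses almost immediately, while statistical quantities, the natural targets of a non-deterministic model, remain robust.

```python
# Toy illustration with the logistic map at r = 4 (fully chaotic):
# two deterministic trajectories from nearly identical initial conditions
# diverge completely, while ensemble statistics stay stable -- the kind of
# quantity a non-deterministic model would aim for.
import random

def logistic(x, steps):
    """Iterate the chaotic logistic map x -> 4x(1-x) for `steps` steps."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

# Deterministic view: a 1e-10 perturbation destroys pointwise prediction.
a = logistic(0.3, 60)
b = logistic(0.3 + 1e-10, 60)
print(f"after 60 steps: {a:.4f} vs {b:.4f}, |difference| = {abs(a - b):.4f}")

# Statistical view: ensemble means from independent random starts agree
# (theory: the mean under the invariant measure is exactly 0.5).
rng = random.Random(0)  # fixed seed for reproducibility

def ensemble_mean(n_samples):
    return sum(logistic(rng.random(), 60) for _ in range(n_samples)) / n_samples

m1, m2 = ensemble_mean(20000), ensemble_mean(20000)
print(f"ensemble means: {m1:.3f} vs {m2:.3f}")
```

The individual trajectories are worthless as predictions, yet the ensemble averages agree with each other and with theory. Turbulence and material failure pose the same dilemma at vastly greater scale, which is why statistical and stochastic formulations, not just bigger computers, are where the leverage lies.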
Mediocrity knows nothing higher than itself; but talent instantly recognizes genius.
― Arthur Conan Doyle
At the core of our current problems is human talent. Once we found our greatest strength in the talent of the scientists and engineers working at the Labs. Today we treat things, code, computers and machines, as the strength while starving the pipeline of human talent. When the topic of talent comes up, management speaks about hiring the best and brightest while paying them at the "market" rate. More damning is how the talent is treated once it hires on. The preserve-the-code mantra speaks to a systematic failure to develop, nurture and utilize talent. We hire people with potential and then systematically squander it through inept management and vacant leadership. Our efforts in theory and experiment are equally devoid of excellence, utility and vision.
Once we developed talent by providing tremendously important problems to solve and turning excellent people loose on them in an environment that encouraged risky, innovative solutions. In this way potentially talented people became truly talented and accomplished, ready to slay the next dragon armed with the experience of previously slain beasts. Today we don't even let them see an actual dragon. Our staff never realize their potential because they are simply curating the accomplishments of the past. The code we are preserving is one of the artifacts we are guarding. This approach is steadily strangling the future.
Human resources are like natural resources; they’re often buried deep. You have to go looking for them, they’re not just lying around on the surface. You have to create the circumstances where they show themselves.
― Ken Robinson