If you have built castles in the air, your work need not be lost; that is where they should be. Now put the foundations under them.
― Henry David Thoreau
In all endeavors we desire success, and the best success endures. The endurance of success is predicated on the foundations upon which it is grounded. If those foundations are systematically starved, they will crumble and induce a crisis. Another way of saying this: success depends on balance. If short-term success is continually rewarded, long-term success will be undermined. These principles apply broadly, including to the conduct of computational science and scientific computing.
To apply this principle it is important to understand the nature of the foundation, and how the interlinking areas of focus come together to provide a broad base for success. I see “computing” as a general stream of activities running from an impact on the reality of people’s lives to the method of achieving this on a computer (models realized through methods and algorithms). These methods and algorithms must then be expressed to the computer in useful form through code, which ultimately needs a computing platform adequate to the purpose. Every step in the chain is important, but the relative value and priority of each is different. Much depends on the pacing requirements for progress, but a focus on the value proposition should be an imperative.
Insanity is doing the same thing, over and over again, but expecting different results
― Narcotics Anonymous
I made the argument that the thing that has set computers apart in recent times is the ability to make things matter to our daily lives, in and out of work. Computers can now have a huge impact on every aspect of living. When this happened, the value of the entire computing enterprise exploded to a level unimaginable before. Every other aspect (the model, algorithm, code, and computer) needed to be competently executed and adequate, but the connection to reality was the enabler for unprecedented growth.
Observing and understanding are two different things.
― Mary E. Pearson
The secondary fuel for this revolution is the model of interaction and the algorithms that efficiently deliver the value. The code and computing that carry out this delivery need to be competently executed, but beyond that they offer nothing distinguishing. This is a massive lesson sitting right in front of the scientific community, which, as measured by its actions, seems not to have understood it. Today’s emphasis in computing for science has completely inverted the value stream that is revolutionizing computing in the rest of the world.
Computing hardware has taken center stage in scientific computing, followed by computer code. Methods and algorithms have greatly diminished importance in charting the path forward. More troublingly, the methods and algorithms work that does happen is typically focused on effective implementation on new, exotic computing hardware, not on establishing fundamentally new capabilities. It is important to get the most out of expensive computers, but we fail to harness the power of algorithms; their greatest power is to transform what is possible to do with a model of reality. They can change what is even conceivable to solve, and open new vistas of fidelity in solutions. A prime example is Google’s search: the value is putting the right information in people’s hands, the model is the connectivity of the Internet, and the PageRank algorithm makes it happen well enough. The code and computers putting it together are necessary, but not innovative.
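To make the PageRank point concrete, here is a minimal power-iteration sketch of the idea on an invented three-page link graph. The toy graph, the damping factor, and the tolerance are illustrative assumptions only, not Google's actual implementation; the point is that a simple algorithm over a model of connectivity does the real work.

```python
# Minimal power-iteration sketch of the PageRank idea on a toy link graph.
# The graph, damping factor, and tolerance are illustrative assumptions.

def pagerank(links, damping=0.85, tol=1e-9, max_iter=200):
    """links: dict mapping each page to the list of pages it links to."""
    pages = sorted(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}               # uniform starting rank
    for _ in range(max_iter):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:                                  # spread rank along out-links
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:                                     # dangling page: spread evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        done = sum(abs(new[p] - rank[p]) for p in pages) < tol
        rank = new
        if done:
            break
    return rank

if __name__ == "__main__":
    toy = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
    ranks = pagerank(toy)
    print({p: round(r, 3) for p, r in ranks.items()})
```

On this toy graph, page C accumulates the most rank because it is linked both by A and by B, which is exactly the kind of structural insight the algorithm extracts from the model.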
But better to get hurt by the truth than comforted with a lie.
― Khaled Hosseini
The models of reality are important as the interface between reality and the algorithms used to solve them. Without the model, all the algorithm work is for naught. Without an algorithm, all the beautiful code and powerful computers are useless. Without the model you have no connection to reality. Thus the lack of focus on modeling in scientific computing is perhaps even worse.
Current work almost assumes that the available modeling is adequate for the purpose. It is most assuredly not adequate now, and it will almost as assuredly never be completely adequate. Modeling must always be improving. If we are doing our computing correctly, the models we use should continually be coming up short. Instead, the models seem to be frozen in time. They aren’t advancing. For example, I believe we should be undoing the chains of determinism in simulation, yet even today deterministic simulations constitute virtually the entire workload.
Instead of seeing a need to improve the underlying models, and the way those models are solved, we have a program that tries to solve the same models, with the same algorithms, on massive computers, changing only the fidelity of the discretization. This assumes that everything in this chain is already at its ultimate state. That implicit assumption should be rejected on principle.
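To illustrate what undoing determinism might look like in the smallest possible terms, here is a toy sketch contrasting a single deterministic run with an ensemble over an uncertain parameter. The exponential-decay model and all the numbers are invented for illustration; the point is that the ensemble reports a distribution of outcomes where the deterministic run reports one number.

```python
# Toy contrast between one deterministic simulation and an ensemble that
# samples an uncertain parameter. The model and all values are invented.
import math
import random

def decay(amount, rate, t):
    """Closed-form exponential decay: amount * exp(-rate * t)."""
    return amount * math.exp(-rate * t)

def ensemble(amount, rate_mean, rate_sd, t, n=1000, seed=0):
    """Sample the uncertain decay rate and return (mean, std dev) of outcomes."""
    rng = random.Random(seed)
    samples = [decay(amount, max(rng.gauss(rate_mean, rate_sd), 0.0), t)
               for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var ** 0.5

if __name__ == "__main__":
    single = decay(1.0, 0.5, 2.0)             # the one deterministic answer
    mean, sd = ensemble(1.0, 0.5, 0.1, 2.0)   # the distribution of answers
    print(round(single, 3), round(mean, 3), round(sd, 3))
```

Even in this trivial setting the ensemble exposes information the deterministic run hides: how much the answer moves when the model's inputs are uncertain.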
To acquire knowledge, one must study;
but to acquire wisdom, one must observe.
― Marilyn Vos Savant
These concepts should be almost self-evident, but in practice we continually trade long-term success for short-term gains. We have adopted practices that lower short-term risk by raising long-term risk. Ultimately the entire enterprise is lurching toward a crisis in sustainability. The key to this crisis is the starving of the foundation of value in scientific computing, which is found foremost in models and their solution via methods and algorithms. The other aspect that has been systematically shortchanged is the value of the people who provide the ideas that form the models, methods, and algorithms. Ultimately, innovation in scientific computing is the intellectual labor of talented individuals.
The scientific man does not aim at an immediate result. He does not expect that his advanced ideas will be readily taken up. His work is like that of the planter—for the future. His duty is to lay the foundation for those who are to come, and point the way.
― Nikola Tesla