The best dividends on the labor invested have invariably come from seeking more knowledge rather than more power.
— The Wright brothers
Some messages are so important that they need to be repeated over and over, and this is one of those times: computing is mostly not about computers. A computer is a tool, powerful, important and unrelentingly useful, but a tool. Computing is a fundamentally human activity that uses a powerful tool to augment the human capacity for calculation and to relieve us of monotony. Today we see attitudes expressing more interest in the computers themselves, with little regard for how they are used. The computers are essential tools that enable a certain level of utility, but the holistic human activity is at the core of their value. This holistic spirit is exactly what has been utterly lost in the current high performance computing push. In a deep way the program lacks the humanity in its composition that is absolutely necessary for progress.
Most clearly, the computers are not an end in themselves; they are useful only insofar as they contribute to solving problems for mankind. The benefits for humanity come from taking human thinking and augmenting it. Computing is our imagination and inspiration automated so as to enable solutions through primarily approximate means. The true benefits of computing come from the fields of physics, engineering, medicine, biology, chemistry and mathematics. Subjects closer to the practice of computing do not necessarily push benefits forward to society at large. It is this break in the social contract that current high performance computing has entirely ignored. The societal end product is a mere afterthought, little more than a marketing ploy for a seemingly unremitting focus on computer hardware.
Mathematics is the door and key to the sciences.
— Roger Bacon
This approach is destined to fail, or at best not reap the potential benefits the investment should yield. It is completely and utterly inconsistent with the venerated history of scientific computing. The key to the success and impact of scientific computing has been its ability to augment its foundational fields, supplementing our innate intellect in an area where human ability is diminished. While computing supplies raw computational power, the impact of the field depends entirely on human talent as expressed in the base science and mathematics. One place of natural connection is the mathematical expression of the knowledge in basic science. Among the greatest sins of modern scientific computing is the diminished role of mathematics in the march toward progress.
Computing should never be an excuse not to think; the truth is that computing has become exactly that: an excuse to stop thinking and simply get “answers” automatically. The importance of this connection cannot be overestimated. It is the complete and total foundation of computing. This is where the current programs become completely untethered from logic, common sense and the basic recipe of success. Mathematics is virtually absent from the drive toward greater scientific computing. For example, I work in an organization devoted to applied mathematics, yet virtually no mathematics actually takes place. Our applied mathematics programs have turned into software programs. Somehow the decision was made 20-30 years ago that software “weaponized” mathematics, and in the process the software became the entire enterprise; the mathematics itself became lost, an afterthought. Without the actual mathematical foundation for computing, important efficiencies, powerful insights and structural understanding are sacrificed.
The software has become the major product and end point of almost all research efforts in mathematics, to the point of displacing actual math. The product of work must be expressed in software, and the construction and maintenance of the software packages has become the major enterprise being conducted. In the process the centrality of mathematical exploration and discovery has been submerged. Software is a difficult, valuable and important endeavor in itself, but it is distinct from mathematics. In many cases the software itself has become the raison d’être for math programs. In this emphasis on software instantiating mathematical ideas, the production of new mathematics has stalled. It has lost its centrality to the enterprise. This is horrible because there is so much yet to do.
Worse yet, the mathematical software is horribly expensive to maintain and loses its modernity at a frightful pace. We hear calls to preserve the code base because it was so expensive to build. A preserved code base loses its value more surely than a car depreciates. The software is only as good as the intellect of the people maintaining it, and in the process we lose intellectual ownership of the code. This is beyond the horrible accumulation of technical debt in the software, which erodes its value like mold or dry rot. None of these problems is the worst of the myriad issues around this emphasis; the worst is the opportunity cost of turning our mathematicians into software engineers and diverting their attention from some of our most pressing issues.
A single discovery of a new concept, principle, algorithm or technique can render one of these software packages completely obsolete. We seem to be in an era where we believe that more computer power is all that is needed to bring reality to heel. Yet discoveries can deliver results and efficiencies that were completely unthinkable before; they make the impossible possible, and we are denying ourselves these results through our inept management of mathematics’ proper role in scientific computing. What might be some of the important topics in need of refined and focused mathematical thinking?
The work of Peter Lax and others has brought great mathematical understanding, discipline and order to the world of shock physics. Amazingly, this has all happened in one dimension plus time. In two or three dimensions, where the real world happens, we know far less. As a result our knowledge and mastery of the equations of (compressible) fluid dynamics is limited and incomplete. Bringing order and understanding to the real world of fluids could have a massive impact on our ability to solve realistic problems. Today we largely exist on the faith that our limited one-dimensional knowledge holds the key to multi-dimensional real-world problems. A program to fill these gaps in knowledge would be a boon to analytical and numerical methods, seeding a new renaissance for scientific computing, physics and engineering.
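To make the scope of that one-dimensional theory concrete, here is a minimal sketch in standard textbook notation (not drawn from any particular source): the scalar conservation law together with the conditions that single out physically meaningful shocks.

```latex
% One-dimensional scalar conservation law, the setting for much of the theory
\partial_t u + \partial_x f(u) = 0, \qquad u(x,0) = u_0(x)

% A discontinuity moving at speed s must satisfy the Rankine-Hugoniot condition
s = \frac{f(u_R) - f(u_L)}{u_R - u_L}

% together with the Lax entropy condition, which selects the physical solution
f'(u_L) > s > f'(u_R)
```

Extending this level of rigor to systems of equations in two and three space dimensions is precisely the open territory described above.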
One key to understanding the power of computing is the comprehension that the ability to compute rests on a deep understanding that enables analytical, physical and domain-specific knowledge. A problem intimately related to the multi-dimensional issues with compressible fluids is the topic of one of the Clay prizes: a million-dollar prize for proving the existence of solutions to the Navier-Stokes equations. There is a deep problem with the way this problem is posed that may make its solution both impossible and practically useless. The equations in the problem statement are fundamentally wrong, physically rather than mathematically, although this wrongness has consequences. In a very deep practical sense fluids are never truly incompressible; incompressibility is an approximation, not a fact. It gives the equations an intrinsically elliptic character, because incompressibility implies infinite sound speeds and strips away their thermodynamic character.
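A minimal sketch, in standard notation, of why the incompressibility constraint is elliptic: taking the divergence of the momentum equation and applying the constraint eliminates time from the pressure entirely.

```latex
% Incompressible Navier-Stokes as posed in the Clay problem statement
\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\nabla p + \nu\,\nabla^2 \mathbf{u},
\qquad \nabla\cdot\mathbf{u} = 0

% Taking the divergence and applying the constraint yields a Poisson equation:
% the pressure responds instantaneously everywhere, i.e., an infinite sound speed
\nabla^2 p = -\nabla\cdot\left[(\mathbf{u}\cdot\nabla)\mathbf{u}\right]
```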
Physically, the infinite sound speeds remove causality from the equations, and the removal of thermodynamics takes them further outside the realm of reality. This also creates immense mathematical difficulties that make the equations almost intractable. So this problem, touted as the route for mathematics to contribute to understanding turbulence, may be a waste of time for that endeavor as well. Again, we need a concerted effort to put this part of the mathematical physics world into better order. The benefits to computation from such order would be virtually boundless.
This gets at one of the greatest remaining unsolved problems in physics: turbulence. The ability to solve problems depends critically upon models and upon the mathematics that makes such models tractable or not. The existence theory for the incompressible Navier-Stokes equations is essential for turbulence. For a century it has largely been assumed that the Navier-Stokes equations describe turbulent flow, with an acute focus on incompressibility. More modern understanding should have highlighted that the very mechanism we depend upon for creating the sort of singularities turbulence observations imply has been removed by the choice of incompressibility. The irony is absolutely tragic. Turbulence brings an almost endless amount of difficulty to its study, whether experimental, theoretical or computational, and in every case the depth of the necessary contributions by mathematics is vast. It seems somewhat likely that we have compounded the difficulty of turbulence by choosing a model with terrible properties. If so, the problem may remain unsolved not due to its difficulty, but rather due to our blindness to these shortcomings and the almost religious faith with which many have attacked turbulence using such a model.
Before I close I’ll touch on a few more areas where progress could either bring great order to a disordered but important area, or potentially unleash new approaches to problem solving. An area in need of fresh ideas, connections and better understanding is mechanics. This is a classical field with a rich and storied past, but it suffers from a dire lack of connection between classical mathematical rigor and the modern numerical world. Perhaps nowhere is this more evident than in the prevalent use of hypo-elastic models where hyper-elasticity would be far better. The hypo-elastic legacy comes from the simplicity of its numerical solution, which became the basis of methods and codes used around the world. Yet hypo-elasticity applies only to very small incremental deformations; for the applications being studied, it is invalid. In spite of this famous shortcoming, hypo-elasticity rules supreme, and hyper-elasticity sits in an almost purely academic role. Progress is needed here, and mathematical rigor is part of the solution.
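One standard way to state the contrast (a sketch, using the usual continuum mechanics symbols): hypo-elasticity relates a stress rate to the rate of deformation, while hyper-elasticity derives stress from a stored-energy potential, which guarantees path-independent, thermodynamically consistent behavior.

```latex
% Hypo-elastic: an objective stress rate tied to the rate of deformation D;
% path-dependent in general, valid only for small incremental deformations
\overset{\circ}{\boldsymbol{\sigma}} = \mathsf{C} : \mathbf{D}

% Hyper-elastic: stress derived from a stored-energy function W of the
% deformation gradient F (P is the first Piola-Kirchhoff stress)
\mathbf{P} = \frac{\partial W(\mathbf{F})}{\partial \mathbf{F}}
```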
A couple of areas of classical numerical methods are in dire need of breakthroughs, with the current technology simply accepted as good enough. A key one is the solution of sparse linear systems of equations. The current methods are relatively fragile, and it has been 30-40 years since we had a big improvement. Furthermore, these successes are somewhat hollow given the lack of a robust solution path. Right now the gold standard of scaling comes from multigrid, invented from the mid-1970s to the mid-1980s. Robust solvers use some sort of banded method with quadratic scaling, or preconditioned Krylov methods (which are less reliable). This area needs new ideas and a fresh perspective in the worst way. The second classical area of investigation that has stalled is high-order methods. I’ve written about this a lot. Needless to say, we need a combination of new ideas and a somewhat more honest and pragmatic assessment of what is needed in practical terms. In both cases we have to thread the needle of accuracy, efficiency and robustness. Again, without mathematics holding us to the level of rigor it demands, progress seems unlikely.
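To make the multigrid remark concrete, here is a minimal sketch of a geometric multigrid V-cycle for the 1D Poisson equation, in pure Python. It illustrates why multigrid converges at a grid-independent rate (the source of its optimal scaling); it is not a production solver, and all names here are my own.

```python
# A minimal geometric multigrid V-cycle for -u''(x) = f(x) on (0, 1)
# with u(0) = u(1) = 0, discretized on n interior points (n = 2^k - 1).
# A sketch of the idea, not a production solver.

def weighted_jacobi(u, f, h, iters=3, w=2.0 / 3.0):
    """Damped Jacobi smoothing for (2u_i - u_{i-1} - u_{i+1})/h^2 = f_i."""
    n = len(u)
    for _ in range(iters):
        new = list(u)
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            new[i] = (1.0 - w) * u[i] + w * 0.5 * (left + right + h * h * f[i])
        u = new
    return u

def residual(u, f, h):
    """r = f - A u for the standard 3-point Laplacian."""
    n = len(u)
    r = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r.append(f[i] - (2.0 * u[i] - left - right) / (h * h))
    return r

def v_cycle(u, f, h):
    """One V-cycle: smooth, restrict the residual, recurse, prolong, correct."""
    n = len(u)
    if n == 1:                                   # coarsest grid: 2u/h^2 = f
        return [0.5 * h * h * f[0]]
    u = weighted_jacobi(u, f, h)                 # pre-smooth
    r = residual(u, f, h)
    nc = (n - 1) // 2                            # coarse grid size
    rc = [0.25 * r[2 * j] + 0.5 * r[2 * j + 1] + 0.25 * r[2 * j + 2]
          for j in range(nc)]                    # full-weighting restriction
    ec = v_cycle([0.0] * nc, rc, 2.0 * h)        # coarse-grid correction
    e = [0.0] * n                                # linear interpolation back
    for j in range(nc):
        e[2 * j + 1] = ec[j]
    for i in range(0, n, 2):
        left = e[i - 1] if i > 0 else 0.0
        right = e[i + 1] if i < n - 1 else 0.0
        e[i] = 0.5 * (left + right)
    u = [u[i] + e[i] for i in range(n)]
    return weighted_jacobi(u, f, h)              # post-smooth
```

The point of the sketch: each V-cycle does O(n) work and cuts the residual by a factor independent of the grid size, which is exactly the property the banded and Krylov alternatives mentioned above lack.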
Lastly, we have broad swaths of application and innovation waiting to be discovered. We need to make optimization something that yields real results on a regular basis. The problem is similar to the one with high-order methods: we need to combine the best technology with an unerring focus on the practical and pragmatic. Optimization today applies only to problems that are far too idealized. Other methodologies are lying in wait of great impact, among them the generalization of statistical methods. There is an immense need for better and more robust statistical methods in a variety of fields (turbulence being a prime example). We need to unleash the forces of innovation to reshape how we apply statistics.
When you change the way you look at things, the things you look at change.
The depth of the problem for mathematics does seem to be partly self-imposed. In a drive for mathematical rigor and professional virtue in applied mathematics, the field has lost a great deal of its connection to physics and engineering. If one looks to the past for guidance, the obvious truth is that the ties between physics, engineering and mathematics have been quite fruitful. There needs to be a healthy dynamic of push and pull between these areas of emphasis. The worlds of physics and engineering need to seek mathematical rigor as part of solidifying advances, and mathematics needs to seek inspiration from physics and engineering. Sometimes we need the pragmatic success of the ad hoc “seat of the pants” approach to provide the impetus for mathematical investigation: finding out that something works tends to be a powerful driver toward understanding why it works. For example, the field of compressed sensing arose from a practical and pragmatic regularization method that worked without theoretical support. Far too much emphasis is placed on software and far too little on mathematical discovery and deep understanding. We need a lot more discovery and understanding today, perhaps nowhere more than in scientific computing!
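A sketch of the compressed sensing example in standard notation: the practical trick was to replace the intractable sparsity count with the l1 norm, and it worked in practice well before the theory explained when and why.

```latex
% Basis-pursuit form of compressed sensing: recover a sparse x from
% underdetermined measurements b = Ax by minimizing the l1 norm,
% a convex surrogate for the (intractable) count of nonzero entries
\min_{x}\ \|x\|_1 \quad \text{subject to} \quad A x = b
```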
Mathematics is as much an aspect of culture as it is a collection of algorithms.
— Carl Boyer
Note: Sometimes a post is simply a way of working on narrative elements for a talk. I have a talk on Exascale computing and (applied) mathematics next Monday at the University of New Mexico. This post is serving to help collect my thoughts in advance.