If you want a new tomorrow, then make new choices today.
― Tim Fargo
Ultimately, the importance of what we compute is determined by how useful the results are. Are the results good at explaining something we see in nature, confirming an idea, providing concrete evidence of how a scenario might unfold, or helping create a better widget? The classical uses of scientific computing are solving initial value problems and large-scale data analysis, each of which can play a role in answering these questions. How much have we moved beyond this classical view in the 70 or so years the field has existed?
I think the answer is “not nearly enough,” and computing is failing to deliver on its full potential as a result.
Never attribute to malice that which can be adequately explained by stupidity.
― Robert Hanlon
Scientific computing is still dominated by the same two big uses that existed at the beginning. Recently, data analysis has reasserted itself as the big “new” thing, mostly as a consequence of the deluge of data coming from the Internet and the impending Internet of Things. For mainstream science, the initial value problem still holds sway over the broader set of activities, although data is big in astronomy, geophysics, and the social sciences.
To change ourselves effectively, we first had to change our perceptions.
― Stephen R. Covey
The problem is that bigger, better things are possible if we simply marshal our efforts properly. Computing has the potential to reshape our ability to design by combining our forward simulations with optimization. The same could be done with data analysis to power the calibration of models. Another powerful application would be a pervasive analysis of the uncertainties in our modeling. Almost all of these cases have direct analogs in the world of data analysis. Together, this array of untapped potential would contribute greatly to our understanding and mastery of nature.
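To make the calibration idea concrete, here is a minimal sketch of wrapping a forward simulation in an optimization loop. The forward model, the observation times, the true parameter, and the golden-section search are all illustrative assumptions, not a specific method from the text: a toy exponential-decay "simulation" is calibrated against noisy synthetic data by minimizing a sum-of-squares misfit.

```python
import math
import random

random.seed(0)
k_true = 0.7                       # hypothetical "unknown" decay rate
ts = [0.5 * i for i in range(10)]  # assumed observation times
data = [math.exp(-k_true * t) + random.gauss(0, 0.01) for t in ts]

def forward(k):
    # The forward simulation: exponential decay evaluated at the observation times.
    return [math.exp(-k * t) for t in ts]

def misfit(k):
    # Sum-of-squares mismatch between simulation and observations.
    return sum((m - d) ** 2 for m, d in zip(forward(k), data))

# Crude derivative-free calibration: golden-section search on [0, 2].
lo, hi = 0.0, 2.0
phi = (math.sqrt(5) - 1) / 2
for _ in range(60):
    a = hi - phi * (hi - lo)
    b = lo + phi * (hi - lo)
    if misfit(a) < misfit(b):
        hi = b  # minimum lies in [lo, b]
    else:
        lo = a  # minimum lies in [a, hi]
k_hat = (lo + hi) / 2
print(f"recovered k = {k_hat:.3f}")
```

In a real design or calibration problem the forward model would be an expensive simulation and the optimizer far more sophisticated, but the structure is the same: the simulation becomes a function the optimizer queries rather than the end product.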
Engineers like to solve problems. If there are no problems handily available, they will create their own problems.
― Scott Adams
What is holding us back?
Probably the greatest issue holding us back is our absolute intolerance of risk. It is always less risky to incrementally improve what you are already doing, and this has become the singular focus of science today. Making small improvements to something already deemed a success is a path to avoiding failure: “building on success.” Most progress looks like this, and today almost all progress looks like this. To get more out of computing, we need to risk doing something really new, and with that risk comes the possibility of failure. Without that risk, the level of success that may be achieved is also much lower. I believe this is the main driver behind our failure to take full advantage of computing.
Evolution is more about adaptivity than adaptability.
― Raheel Farooq
This modern pathology also creates a myriad of side effects. One of the engines of innovation is applied mathematics, where playing it safe is sapping the vitality from the field. Increasingly, applied math work focuses on idealized model problems and eschews the difficult work of attacking real problems, or problems where the math is messy. Without a more applied and more daring approach to developing capabilities, this innovative energy will not be unleashed. Part of innovation means simply trying new things, whether or not they are amenable to analysis. Work should be guided by importance and utility rather than tractability.
Life’s journey is built of crests and troughs, the movement is always going to be fast only towards the trough and the progress is bound to be slow towards the crest.
― Anuj Somany
A good place to look for where analysis should be applied is methods that already work. Compressed sensing is a great example. By the time compressed sensing was “invented,” it had been in use for 30 years as a practical approach in several fields, but it lacked theoretical support. When that support arrived from some of the best mathematicians alive today, the field exploded, and new uses for this old methodology are discovered almost every day. It is an example of what a coherent theory can do for a field. Without the theory, the topic was stranded as a “trick” and its applicability was limited. With the theory, the applications that could be attempted grew immensely (and continue to grow).
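The core claim of compressed sensing, that a sparse signal can be recovered exactly from far fewer random measurements than its length, is easy to demonstrate. This is a toy sketch, not any particular published algorithm from the text: the dimensions, the Gaussian sensing matrix, and the use of orthogonal matching pursuit as the recovery method are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 40, 4   # signal length, number of measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
y = A @ x                                     # only m < n measurements

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit the coefficients on the chosen support.
support, residual = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ residual)))
    support.append(j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("recovery error:", np.linalg.norm(x_hat - x))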
Our culture works hard to prevent change.
― Seth Godin
Another place where we have systematically failed to advance appropriately is the simulation of stochastic or random phenomena. We are still devoted to solving almost everything in terms of a mean field theory. While the mean field view of the world has served us well, today many of our most important applications are driven by statistics. How often will something really good, or really bad, happen? What fraction of a population of devices will fail in a certain way? How likely is a certain event? Today most of our simulation capability is ill suited to answering these questions. In many cases we try to answer them incorrectly by merely examining the uncertainty in the mean field solution (i.e., sampling uncertainty parametrically, which is not the same thing). Almost none of our simulation techniques are suitable for examining the variability of the systems being simulated.
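The gap between a mean field answer and a statistical one can be shown with a toy reliability question. The lifetime distribution, its parameters, and the warranty period below are all hypothetical numbers invented for illustration: the mean field calculation plugs in the average and predicts zero failures, while sampling the population reveals a meaningful failure fraction in the tail.

```python
import math
import random

random.seed(1)
warranty = 2.0         # years; a device "fails" if its lifetime is shorter
mu, sigma = 1.5, 0.5   # assumed lognormal lifetime parameters

# Mean field answer: use the mean lifetime, a single deterministic number.
# It is well above the warranty, so this view predicts no failures at all.
mean_lifetime = math.exp(mu + sigma ** 2 / 2)

# Statistical answer: sample the whole population and look at the tail.
n = 100_000
failures = sum(
    1 for _ in range(n) if random.lognormvariate(mu, sigma) < warranty
)
print(f"mean lifetime: {mean_lifetime:.2f} yr")
print(f"Monte Carlo failure fraction: {failures / n:.3f}")
```

The question “what fraction of devices will fail?” simply has no answer in the mean field picture; it lives entirely in the variability that most of our simulation techniques discard.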
If failure is not an option, then neither is success.
― Seth Godin
The foundation of our limitations is not our intellectual abilities, but rather our taste for risk and change. With change and risk comes the potential for failure or unexpected outcomes, and lately our society cannot tolerate such things. Without tolerance for bad outcomes, our capacity to experience good ones is undermined. Instead we are left to swim in an era of unmitigated mediocrity. It is sad that we’ve come to accept this as our mantra.
I must not fear. Fear is the mind-killer. Fear is the little-death that brings total obliteration.
― Frank Herbert, Dune