A life spent making mistakes is not only more honorable, but more useful than a life spent doing nothing.

― George Bernard Shaw

High performance computing is a hot topic these days. All sorts of promises have been made regarding its transformative potential. Computational modeling is viewed as the cure for our inability to do expensive, dangerous or even illegal experiments. All sorts of benefits are supposed to rain down upon society from this driver of a faster, better and cheaper future. If we were collectively doing everything that should be done, these promises might have a chance of coming true, but we’re not and they won’t, unless we start doing things differently.

The Chinese use two brush strokes to write the word ‘crisis.’ One brush stroke stands for danger; the other for opportunity. In a crisis, be aware of the danger–but recognize the opportunity.

― John F. Kennedy

So, what the hell?

Computing’s ability to deliver on these promises is at risk, ironically because of a lack of risk taking. The scientific computing community seems to have rallied around the safe path of looking to faster computing hardware as the route to enhanced performance. High payoff activities such as new model or algorithm development are risky, and most attempts fail. The relatively small number of successful projects in these areas, however, produce massive payoffs in performance.

Despite a strong historical track record of providing greater benefits for computational simulation than hardware, efforts to improve modeling, methods and algorithms are starved for support. This will kill the proverbial goose that is about to lay a golden egg. We are figuratively strangling the baby in the crib by failing to feed the core of creative value in simulation. We have prematurely declared that computational simulation is mature and ready for prime time. In the process we are stunting its growth and throwing money away on developing monstrous computers to feed computational power to a “petulant teen”. Instead we need to develop the field of simulation further and take some key steps toward providing society with a mature and vibrant scientific enterprise. Policy makers have defined a future where the only thing that determines computational simulation capability is the computing power of the computer it runs on.

This mindset has allowed the focus to shift almost entirely toward computing hardware. Growth in computing power is commonly used as an advertisement for the accessibility and ease of computational modeling. An increasing number of options exist for simply buying simulation capability in the form of computational codes. The user interfaces for these codes allow relatively broad access to modeling and decidedly take the capability out of the hands of the experts. For those selling capability this democratization is a benefit because it increases the size of the market. Describing this area as a mature, solved problem is another marketing benefit.

The question of whether this is a good thing still needs to be asked. How true are these marketing pitches?

It is relatively easy to solve problems today. Computer power allows the definition of seemingly highly detailed models and fine computational grids as well as stunning visual representations. All of these characteristics give users the feeling of simulation quality. The rise of verification and validation (V&V) should allow users to determine whether these feelings are actually justified. Generally V&V undermines one’s belief in how good the results are. On the other hand, people like to feel that their analysis is good. As a result, much of the negative evidence is discounted or even dismissed when conducting V&V. The real effect of this slipshod V&V is to avoid the sort of deep feedback that the quality of results should have on the codes.
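To make concrete the kind of feedback V&V can provide, here is a minimal sketch of one standard verification check: estimating the observed order of convergence from errors on systematically refined grids. The error values and names below are purely hypothetical illustrations, not results from any particular code.

```python
import numpy as np

def observed_order(err_coarse, err_medium, err_fine, refinement_ratio=2.0):
    """Estimate the observed order of convergence p from errors on three
    systematically refined grids with a constant refinement ratio r:
        p = log(e_coarser / e_finer) / log(r)
    The two available pairs are averaged for a slightly sturdier estimate."""
    p1 = np.log(err_coarse / err_medium) / np.log(refinement_ratio)
    p2 = np.log(err_medium / err_fine) / np.log(refinement_ratio)
    return 0.5 * (p1 + p2)

# Hypothetical discretization errors (e.g., L2 norms against a manufactured
# solution) on grids with spacing h, h/2 and h/4.
errors = {"h": 4.0e-2, "h/2": 1.1e-2, "h/4": 2.9e-3}

p = observed_order(errors["h"], errors["h/2"], errors["h/4"])
print(f"Observed order of accuracy: {p:.2f}")
# If the code's nominal order is 2 and p comes out well below it, the warm
# feeling produced by a fine grid and pretty pictures is not justified --
# exactly the feedback that slipshod V&V lets people avoid.
```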

When you fail, that is when you get closer to success.

― Stephen Richards

At this juncture it’s important to talk about current codes and the models and methods contained in them. The core philosophy of code-based modeling goes all the way back to the 1960s and has not changed much since. This is a problem. The methods used in the codes to solve the models are nearly as old; in many cases they were largely perfected during the 1970s and 1980s. Little or no effort is presently being put toward advancing the solution techniques. In summary, most effort is being applied to simply implementing the existing solution techniques on the next generation of computers.

Remember the two benefits of failure. First, if you do fail, you learn what doesn’t work; and second, the failure gives you the opportunity to try a new approach.

― Roger von Oech

Almost certainly the models themselves are even more deeply ensconced and effectively permanent. No one even considers changing the governing equations being solved. Models, of course, have a couple of components: the basic governing equations, which are generally quite classical, and their closure, which is the part that slowly evolves. These equations are the product of 17th-19th century science and a philosophical mindset that would be questioned if science itself were healthy. If one thinks about the approach we take today, the ability to resolve new length and time scales has changed monumentally. We should be able to solve highly nonlinear systems of equations (in practice we really can’t in a robust manner). Is it even appropriate to use the same equations? Or should the nature of the equations change as a function of the characteristic scales of resolution? Closure modeling evolves more readily, but only within the philosophical confines defined by the governing equations. Again, we are woefully static, and the lack of risk taking is undermining any chance of delivering on the promise.
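To make the split between governing equations and closure concrete, here is a small, hypothetical sketch (not the structure of any production code): a one-dimensional advection-diffusion equation is held fixed while the diffusive closure is a swappable function. The coefficients and closure forms are illustrative assumptions; the point is that the closure is roughly the only degree of freedom current practice allows.

```python
import numpy as np

def step(u, dx, dt, a, closure):
    """One explicit step of the fixed governing equation
        du/dt + a du/dx = d/dx( nu * du/dx ),  a > 0 assumed,
    where only the closure nu(u, du/dx) is allowed to vary."""
    dudx = np.gradient(u, dx)
    nu = closure(u, dudx)                        # the swappable closure piece
    adv = -a * (u - np.roll(u, 1)) / dx          # first-order upwind advection
    diff = np.gradient(nu * dudx, dx)            # divergence of the closure flux
    return u + dt * (adv + diff)

# Two closures living inside the same, unquestioned governing equation:
constant_nu = lambda u, dudx: 1.0e-3 * np.ones_like(u)           # classical constant coefficient
gradient_nu = lambda u, dudx: 1.0e-3 + 1.0e-4 * np.abs(dudx)     # an "evolved" gradient-dependent closure

x = np.linspace(0.0, 1.0, 201)
u = np.exp(-200.0 * (x - 0.3) ** 2)              # initial pulse
for _ in range(200):
    u = step(u, dx=x[1] - x[0], dt=1.0e-3, a=1.0, closure=gradient_nu)
```

Swapping `gradient_nu` for `constant_nu` changes the physics modestly; changing the governing equation itself is the option nobody on the menu offers.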

Take the practice of how material properties are applied to a problem as a key example. The standard way to apply material properties is to “paint” the properties into regions containing a material. For example, if aluminum exists in the problem, a model defines its properties and its response to forces, and the aluminum is defined as being the same everywhere aluminum appears. As the scale gets smaller, aluminum (or any material) becomes less and less homogeneous. Significant differences arise in the structure, typically defined by the grain structure of the material and any imperfections. The model systematically ignores these heterogeneous features. Usually their collective effects are incorporated in an average way in the model, but the local effects of these details are ignored. Modern application questions are more and more focused on exactly the sort of unusual effects that arise from these local defects.
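A small, hypothetical sketch of the contrast (the grain counts, the 5% scatter and the handbook modulus are illustrative assumptions, not measured data): the usual “painted” approach assigns one Young’s modulus everywhere aluminum sits, while a heterogeneous assignment gives each grain its own value, which is precisely the local structure the painted model averages away.

```python
import numpy as np

rng = np.random.default_rng(0)

n_cells = 10_000                       # cells in the region painted "aluminum"
E_nominal = 69.0e9                     # handbook Young's modulus for aluminum [Pa]

# Standard practice: paint one property value over the whole region.
E_painted = np.full(n_cells, E_nominal)

# Heterogeneous alternative: group cells into grains and scatter each grain's
# modulus a few percent about the handbook value (illustrative variability).
n_grains = 200
grain_of_cell = rng.integers(0, n_grains, size=n_cells)
E_grain = E_nominal * (1.0 + 0.05 * rng.standard_normal(n_grains))
E_hetero = E_grain[grain_of_cell]

# The averages nearly agree -- that is the part the painted model captures...
print(f"mean painted : {E_painted.mean():.3e} Pa")
print(f"mean hetero  : {E_hetero.mean():.3e} Pa")
# ...but the painted model has no weakest grain at all, and it is the local
# extremes that drive the unusual, failure-type questions applications now ask.
print(f"weakest grain: {E_hetero.min():.3e} Pa "
      f"({100.0 * (1.0 - E_hetero.min() / E_nominal):.1f}% below nominal)")
```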

Experimental observations are only experience carefully planned in advance, and designed to form a secure basis of new knowledge.

― Sir Ronald Fisher

Let’s be perfectly blunt and clear about the topic of modeling. The model we solve in a simulation is the single most important and valuable aspect of a computation. A change in the model that opens new physical vistas is more valuable than any computer, including one with limitless power. A computer is no better than the model it solves. This point of view is utterly lost today.

More dangerously, we are continuing to write the codes of the future in the same manner today. In other words, we have held the same philosophy in computational modeling for the last 50 years or more. The same governing equations and closure philosophy are being used today. How much longer will we continue to do the same thing? I believe we should have changed a while ago. We can begin to study the impact of material and solution heterogeneity already, but the models and methods to do so are not being given any priority.

The reason is that it would be disruptive and risky. It would require changing our codes and practices significantly. It would undermine the narrative of computer power as the tonic for what ails us. It would be a messy and difficult path. It would also be consistent with the scientific method instead of following a poorly thought-through, intellectually empty article of faith. Because risk taking is so out of favor today, this path has been avoided.

Our most significant opportunities will be found in times of greatest difficulty.

― Thomas S. Monson

The investments in faster computers are valuable and beneficial, but only if they are balanced with other investments. Modeling is the aspect of computation that is closest to reality and holds the greatest leverage and value. Methods for solving models and their associated algorithms are next closest and have the next highest leverage. Neither of these areas is being invested in at a healthy level. Implementing these algorithms and models is next most important; here there is a little more effort because existing models need to work on the computers. The two areas with the highest level of effort are system software and hardware. Ironically, these two areas have the least value in terms of affecting reality. No one in a position of power seems to recognize how antithetical to progress this state of affairs is.

Sometimes it’s the mistakes that turn out to be the best parts of life.

― Carrie Ryan
