The measure of intelligence is the ability to change.

― Albert Einstein

In looking at the codes we work with today, one thing stands out. The methods used in production software are generally much simpler than they should be. Advances that should have been commonplace by now aren’t present. There seems to be a good reason for this; the complexity of implementing algorithms on modern computers biases choices toward the simple. The result is a relative stagnation in algorithms, with the telltale sign of utilizing adaptive concepts far less than one would have imagined.

Extraordinary benefits also accrue to the tiny majority with the guts to quit early and refocus their efforts on something new.

― Seth Godin

The types of adaptivity most commonly seen are associated with adaptive grids (or “h” refinement). Grids lend themselves to straightforward understanding and impressive visualization. Even with its common presence, this form of adaptivity is seen far less than one might have expected looking forward from twenty years ago. Adaptivity takes other forms far less common than h-refinement, such as p-adaptivity, where the order of an algorithm is adjusted locally. A third classical form is r-adaptivity, where the mesh is moved locally to improve solutions. This is the second most common approach, in the guise of remesh-remap methods (or ALE codes). I’d like to chat about a handful of other approaches that could be big winners in the future, especially if combined with the classical approaches.

To improve is to change; to be perfect is to change often.

― Winston S. Churchill

One of the really big options to exercise with adaptivity is the algorithm itself. Simply changing the algorithm based on local solution characteristics should yield great enhancements in accuracy and robustness. Taken broadly, the concept has been around a long time even if it isn’t recognized as such. Right from the beginning, with Von Neumann and Richtmyer’s artificial viscosity, the addition of nonlinear dissipation renders the method adaptive. The dissipation is effectively zero if the flow is smooth, and dominant if the flow is discontinuous. Upwinding is another such approach, where the support (or stencil) for a method is biased by the physics for better (less accurate, but physical) results.
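To make the idea concrete, here is a minimal sketch of a Von Neumann–Richtmyer style quadratic artificial viscosity in Python. The coefficient c_q and the cell-wise velocity jump are conventional, illustrative choices, not the settings of any particular code.

```python
# Sketch of Von Neumann-Richtmyer style artificial viscosity: a quadratic,
# pressure-like term that switches on only in compression, so the added
# dissipation is effectively zero in smooth flow and dominant at shocks.
# Illustrative only; c_q and the cell-wise jump are assumed conventions.
import numpy as np

def vnr_artificial_viscosity(rho, u, c_q=2.0):
    """Cell-centered quadratic artificial viscosity.

    rho : cell densities, shape (n,)
    u   : node velocities, shape (n + 1,)
    """
    du = u[1:] - u[:-1]                  # velocity jump across each cell
    q = c_q * rho * du**2                # quadratic in the jump
    return np.where(du < 0.0, q, 0.0)    # active only in compression (du < 0)
```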

These are relatively simple ideas. More complex adaptation in algorithms can be associated with methods that use nonlinear stencils usually defined by limiters. These methods use a solution quality principle (typically monotonicity or positivity) to define how a computational stencil is chosen (FCT, MUSCL, and TVD are good examples). More advanced methods such as essentially non-oscillatory (ENO) or the elegant Weighted ENO (WENO) method take this adaptivity up a notch. While algorithms like FCT and TVD are common in codes, ENO hasn’t caught on in serious codes largely due to complexity and lack of overall robustness. The robustness problems are probably due to the overall focus on accuracy over robustness as the key principle in stencil selection.
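As a concrete illustration, here is a minimal sketch of the kind of nonlinear stencil selection a limiter performs, in the spirit of MUSCL/TVD reconstruction. Purely illustrative; a real code applies this inside a full Godunov-type update.

```python
# Sketch of limiter-based stencil selection: each cell has two candidate
# one-sided slopes, and the minmod limiter picks the one smaller in magnitude
# (or zero at an extremum). The choice of stencil adapts to the local solution.
import numpy as np

def minmod(a, b):
    """Return the argument smaller in magnitude when signs agree, else zero."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u):
    """Limited slopes for the interior cells of a 1-D array of cell averages."""
    left = u[1:-1] - u[:-2]    # backward-difference candidate stencil
    right = u[2:] - u[1:-1]    # forward-difference candidate stencil
    return minmod(left, right)
```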

One area where adaptivity may be extremely useful is the construction of composite algorithms. The stencil selection in ENO or TVD is a good example, as each individual stencil is a consistent discretization itself. It is made more effective and higher quality through the nonlinear procedure used for selection. Another good example of this principle is the compositing of multigrid methods with Krylov iterations. Neither method is as effective on its own; they suffer from either a lack of robustness (multigrid) or suboptimal scaling (Krylov). Together the methods have become the standard. Part of the key to a good composite is the complementarity of the properties. In the above case multigrid can provide optimal scaling and Krylov offers stability. This isn’t entirely unlike TVD methods, where upwinding offers the stability and one of the candidate stencils offers optimal accuracy.
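A minimal sketch of this composite, under illustrative assumptions (a 1-D Poisson model problem, a single symmetric two-grid cycle, weighted Jacobi smoothing), is shown below: the grid hierarchy supplies the near-optimal scaling while the conjugate gradient iteration supplies the stability.

```python
# Sketch of a multigrid-Krylov composite: a symmetric two-grid cycle used as
# the preconditioner inside conjugate gradients. Problem, grid sizes, and
# smoother settings are illustrative choices, not a production configuration.
import numpy as np

def poisson_1d(n):
    """Standard 1-D Poisson matrix tridiag(-1, 2, -1) on n interior points."""
    return (np.diag(2.0 * np.ones(n))
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1))

def interpolation(nc, nf):
    """Linear interpolation from nc coarse points to nf = 2*nc + 1 fine points."""
    P = np.zeros((nf, nc))
    for j in range(nc):
        P[2 * j, j] += 0.5
        P[2 * j + 1, j] += 1.0
        P[2 * j + 2, j] += 0.5
    return P

def two_grid_cycle(A, r, P, Ac, omega=2.0 / 3.0):
    """One symmetric two-grid cycle applied to residual r (zero initial guess)."""
    D = np.diag(A)
    x = omega * r / D                       # pre-smooth (weighted Jacobi)
    rc = 0.5 * P.T @ (r - A @ x)            # restrict residual (full weighting)
    x = x + P @ np.linalg.solve(Ac, rc)     # coarse-grid correction
    x = x + omega * (r - A @ x) / D         # post-smooth
    return x

def mg_preconditioned_cg(A, b, P, Ac, tol=1e-8, maxit=200):
    """Conjugate gradients with the two-grid cycle as preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = two_grid_cycle(A, r, P, Ac)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = two_grid_cycle(A, r, P, Ac)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

if __name__ == "__main__":
    nc = 31                     # coarse interior points
    nf = 2 * nc + 1             # fine interior points
    A = poisson_1d(nf)
    P = interpolation(nc, nf)
    Ac = 0.5 * P.T @ A @ P      # Galerkin coarse operator (R = 0.5 * P^T)
    x, iters = mg_preconditioned_cg(A, np.ones(nf), P, Ac)
    print("converged in", iters, "CG iterations")
```

The symmetry of the cycle (matching pre- and post-smoothing, restriction proportional to the transpose of interpolation) is what makes it a legitimate preconditioner for CG; that is the complementarity in miniature.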

A third area to consider is adaptive modeling approaches. One example can be found with multiscale methods, where a detailed (assumed more accurate) model is used for the physics to make up for a crude baseline model. In many cases multiple models might be considered valid or applicable, such as in turbulence, failure, or fracture modeling. In other cases none of the available models might be applicable. It might make sense to solve all the models and establish conditions for choosing or compositing their effect on the solution. If done correctly, the limitations of a single method might be overcome through the selection procedure. In each of the cases mentioned above the current approaches are woefully inadequate.
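One way to read “compositing their effect” is a weighted blend driven by a local applicability indicator. The sketch below is only schematic; model_a, model_b, and the indicator are hypothetical placeholders, not specific physics models.

```python
# Schematic compositing of two candidate models: evaluate both, then blend
# their outputs with a weight derived from a local applicability indicator.
# All names here are hypothetical placeholders for illustration.
import numpy as np

def composite_model(state, model_a, model_b, indicator):
    """Blend two model evaluations based on a local validity indicator.

    model_a, model_b : callables mapping state -> model output arrays
    indicator        : callable mapping state -> weights in [0, 1]
                       (1 favors model_a, 0 favors model_b)
    """
    w = np.clip(indicator(state), 0.0, 1.0)
    return w * model_a(state) + (1.0 - w) * model_b(state)
```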

A general issue with adaptivity that, in my estimation, is holding it back is the relative balance of focus on accuracy over robustness. I believe great progress could be made by tipping the balance toward the robustness demanded by applications. In academic research accuracy is almost always the focal point, often at the cost of robustness. Efficiency is the second focal point that undermines adaptivity’s adoption by codes.
As I noted in an earlier post, https://wjrider.wordpress.com/2014/11/21/robust-physical-flexible-accurate-and-efficient/, the emphasis is often the opposite of what applications demand. The combination of robustness-physicality-flexibility might do well to replace the typical accurate-efficient focus. The efficiency focus has hamstrung methods development for the whole of the MPP era, and the next generation of computers promises to make this worse. Combined with the research focus on accuracy, this has spurred outright stagnation in the deployment of the adaptive approaches that ought to be dominating computation today.

The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.

― Albert Einstein

Despite our massive advances with the raw power of computers, we have missed immense opportunities to unleash their full potential. The mindset that has created this environment is still dominant; more emphasis is placed on running old methodology on new computers than inventing new (better) methodologies optimal for the new computers. The result of this pervasive mismanagement is a loss of opportunity and a loss of potential. The end result is also a lack of true capability and problem-solving capacity on these computers. Over time this stagnation has cost us more problem-solving capability than we have gained from faster computers.

It’s never too late

…to start heading in the right direction.

― Seth Godin
