Divide each difficulty into as many parts as is feasible and necessary to resolve it.

― René Descartes

In numerical methods for partial differential equations (and integral equations too) there are two major routes to solving problems “well”. One is resolving the solution (the physics) with some combination of accuracy and discrete degrees of freedom. This is the path of brute force, and much of the impetus to build ultra-massive computers comes from it. It is the tool of pure scientific utilization of computing. The second path is capturing the physics, where ingenious methods allow important aspects of the solution to be found without every detail being known. This methodology forms the backbone of, and enables, most useful applications of computational modeling. Shock capturing is the archetype of this approach: instead of demanding infinite resolution, the actual singular shock is smeared (projected) onto the macroscopic grid through the application of a carefully chosen dissipative term.

In reality both approaches are almost always used together in modeling, but their differences are essential to recognize. In many cases we resolve unique features in a model, such as the geometric influence on the result, while capturing more universal features like shocks or turbulence. In practice, modelers are rarely so intentional about this, or about which aspect of numerical modeling practice is governing their solutions. If they were, the practice of numerical modeling would be so much better. Notions of accuracy, fidelity and general intentionality in modeling would improve greatly. Unfortunately we appear to be on a path toward an almost intentional “dumbing down” of numerical modeling by dulling the level of knowledge associated with the details of how solutions are achieved. This is the black box mentality that dominates modeling and simulation in the realm of applications.

Nowhere does the notion of resolving physics come into play like direct numerical simulation (DNS) of turbulence, or the direct simulation of any other physics for that matter. Turbulence is the archetype of this approach. It also serves as a warning to anyone interested in attempting it. In DNS there is a conflict between fully resolving the physics and computing the most dynamic physics possible given existing computing resources. As a result, the physics computed by DNS rarely uses a mesh in the “asymptotic” range of convergence. Despite being nominally fully resolved, DNS is rarely if ever subjected to numerical error estimation. In the cases where this has been done, the accuracy of DNS falls short of expectations for “resolved” physics. In all likelihood, truly resolving the physics would require far more refined meshes than current practice dictates, and would undermine the depth of scientific exploration (lower Reynolds numbers). The balance between quality and exploration in science is a tension that remains, ironically, unresolved.
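To make the resolution burden concrete, a standard back-of-the-envelope estimate (textbook Kolmogorov scaling, not anything specific to this post) says the ratio of the integral scale to the dissipation scale grows like Re^(3/4), so a three-dimensional DNS mesh needs roughly Re^(9/4) cells. A minimal sketch, with purely illustrative Reynolds numbers:

```python
# Back-of-envelope DNS cost from Kolmogorov scaling (standard estimate):
# L / eta ~ Re^(3/4), so a 3-D mesh resolving eta needs ~ Re^(9/4) cells.

def dns_cost(reynolds: float) -> tuple[float, float]:
    """Return (cells per direction, total 3-D cells) at a given
    integral-scale Reynolds number."""
    n_1d = reynolds ** 0.75          # L / eta ~ Re^(3/4)
    return n_1d, n_1d ** 3           # total cells ~ Re^(9/4)

for re_number in (1e3, 1e4, 1e5):
    n, total = dns_cost(re_number)
    print(f"Re = {re_number:8.0e}: ~{n:,.0f} cells/direction, ~{total:.2e} cells")
```

Each factor of ten in Reynolds number costs roughly a factor of 180 in cells alone, before the shrinking time step is even counted, which is the tension with “the depth of scientific exploration” described above.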

Perhaps more concerning is the tendency to measure only the integral response of systems subjected to DNS. We rarely see a specific verification of the details of the small scales being resolved. Without the explicit and implicit work to assure the full resolution of the physics, one might be right to doubt the whole DNS enterprise and its results. It remains a powerful tool for science and a massive driver for computing, but due diligence on its veracity remains a sustained shortcoming in its execution. The greater degree of faith in DNS results should be earned as an endeavor of science rather than simply granted by fiat, as we tend to do today.
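As a sketch of the kind of small-scale check that is rarely performed, one can at least verify that the energy spectrum has decayed by many decades before the grid cutoff. The synthetic field, the band-limiting, and the 10⁻⁶ threshold below are all illustrative assumptions, not a real DNS dataset or an accepted standard:

```python
import numpy as np

# Illustrative small-scale resolution diagnostic: is there meaningful
# energy piled up at the grid cutoff, or has the spectrum decayed?

def spectrum(u):
    """One-dimensional energy spectrum E(k) of a periodic sample."""
    uhat = np.fft.rfft(u) / u.size
    return np.abs(uhat) ** 2

n = 4096
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
rng = np.random.default_rng(0)
# Synthetic multi-scale signal loosely mimicking a -5/3 spectrum;
# band-limited by construction, so this check passes trivially here.
u = sum(k ** (-5.0 / 6.0) * np.sin(k * x + rng.uniform(0.0, 2.0 * np.pi))
        for k in range(1, n // 4))

E = spectrum(u)
ratio = E[-10:].mean() / E[1:11].mean()   # tail energy vs. large scales
print(f"tail-to-peak spectral ratio: {ratio:.2e}")
print("resolved" if ratio < 1e-6 else "under-resolved near the grid cutoff")
```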

Given the issues with fully resolving physics, where does “capturing” fit? In principle, capturing means that the numerical method contains a model that allows it to function properly when the physics is not resolved. It usually means that the method will reliably produce the integral properties of the solution. This is achieved by building the right asymptotic properties into the method. The first and still archetypical example is shock capturing with artificial viscosity. The method was developed to marry a shock wave to a grid by smearing it across a small number of mesh cells and adding the requisite entropy production to the method. Closely related to this methodology is large eddy simulation (LES), which allows under-resolved turbulence to be computed. The subgrid model in its simplest form is exactly the artificial viscosity from the first shock capturing method, and allows the flow to dissipate at the large scale without computing the small scales. It also stabilizes what would otherwise be an unstable computation.
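For concreteness, here is a minimal sketch of the classic von Neumann–Richtmyer quadratic artificial viscosity, the original shock capturing device referred to above. The coefficient value and the toy states are conventional or invented choices, not anything prescribed here:

```python
import numpy as np

# Minimal sketch of von Neumann-Richtmyer artificial viscosity in 1-D.
# c_q = 2.0 is a conventional illustrative coefficient.

def vnr_q(rho, u, c_q=2.0):
    """Quadratic artificial-viscosity pressure per cell.

    rho : cell densities, shape (n,)
    u   : node velocities, shape (n + 1,)
    Only compressing cells (du < 0) get a nonzero q; that is what
    produces entropy at a captured shock while leaving smooth
    expanding flow untouched.
    """
    du = np.diff(u)                        # velocity jump across each cell
    q = c_q * rho * du ** 2                # quadratic in the jump
    return np.where(du < 0.0, q, 0.0)      # active only in compression

# Toy state: a compression in the middle of the domain.
u = np.array([1.0, 1.0, 0.2, 0.0, 0.0])
rho = np.ones(4)
print(vnr_q(rho, u))    # nonzero only where the flow compresses
```

The connection to LES is direct: the simplest Smagorinsky-type subgrid viscosity is likewise a mesh-scale quantity quadratic in velocity differences, dissipating exactly where the grid cannot resolve the motion.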

Another major class of physics capturing is interface or shock tracking. Here a discontinuity is tracked with the presumption that it is a sharp transition between two states. These states could be the interface between two materials, or pre- and post-shock values. In any case, a number of assumptions are encoded into the method about the evolution of the interface and how the states change. Included are rules for the representation of the solution, which define the method’s performance. Of course, the stability of the method is of immense importance, and the assumptions made in the solution can have unforeseen side-effects.
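As a minimal illustration of tracking, consider the inviscid Burgers equation, where the Rankine–Hugoniot condition gives the shock speed in closed form as the average of the two states. The constant left and right states below are an illustrative assumption; a real tracker would couple the moving front to a full flow solver on either side:

```python
# Front tracking sketch for inviscid Burgers: the discontinuity is a
# tracked object with its own evolution law, never smeared onto cells.

def track_shock(x0, u_left, u_right, t_end, dt=0.01):
    """Advance the tracked shock position; the two states are assumed
    constant in time for this toy problem."""
    s = 0.5 * (u_left + u_right)   # Rankine-Hugoniot speed for Burgers
    x, t = x0, 0.0
    while t < t_end:
        x += s * dt                # the front moves as a sharp entity
        t += dt
    return x

print(track_shock(x0=0.0, u_left=2.0, u_right=0.0, t_end=1.0))  # ~1.0
```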

One of the key pragmatic issues associated with resolved versus captured solutions is that most modern methods blend the two concepts. It is a best-of-both-worlds strategy. Great examples exist in the world of high-order shock capturing methods. Once a shock exists, all of the rigor in producing high-order accuracy becomes a serious expense compared to the gains in accuracy. The case for using higher than second-order methods remains weak to this day. The question to be answered more proactively by the community is “how can high-order methods be used productively and efficiently in production codes?”
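A minimal sketch of this blending, assuming a standard minmod limiter (one of many possible choices): the reconstruction is second-order where the solution is smooth and collapses to first order, i.e., pure capturing, at a jump:

```python
import numpy as np

# Minmod-limited slopes: high-order resolution in smooth regions,
# automatic fallback to first order (capturing) at discontinuities.

def minmod(a, b):
    """Zero if the neighboring slopes disagree in sign,
    else the one with smaller magnitude."""
    return np.where(a * b > 0.0,
                    np.where(np.abs(a) < np.abs(b), a, b),
                    0.0)

def limited_slopes(u):
    d = np.diff(u)
    return minmod(d[:-1], d[1:])   # one slope per interior cell

u_smooth = np.sin(np.linspace(0.0, np.pi, 9))
u_shock = np.where(np.arange(9) < 4, 1.0, 0.0)
print(limited_slopes(u_smooth))  # nonzero away from the extremum: resolved
print(limited_slopes(u_shock))   # zero at the jump: the shock is captured
```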

Combining the concepts of resolving and capturing is often done without any real thought about how this impacts accuracy and modeling. The desire is to have the convenience, stability and robustness of capturing with the accuracy associated with efficient resolution. Achieving this for anything practical is exceedingly difficult. A deep secondary issue is the modeling inherent in capturing physics. The capturing methodology is almost always associated with embedding a model into the method. People will then unthinkingly model the same physical mechanisms again, resulting in a destructive double counting of physical effects. This can confound any attempt to systematically improve models. The key questions to ask about any solution are “is this feature being resolved? Or is this feature being captured?” The demands on the computed solution are far different depending on the answers to these simple questions.
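A toy illustration of the double counting, with all viscosity values invented for the purpose: if the capturing scheme’s implicit numerical dissipation already acts like an eddy viscosity, adding an explicit subgrid model on top dissipates the same mechanism twice and silently lowers the effective Reynolds number:

```python
# Illustrative numbers only: the same dissipative mechanism modeled
# twice, once implicitly by the scheme and once explicitly by a model.

nu_molecular = 1.0e-6   # physical viscosity
nu_sgs = 4.0e-4         # explicit subgrid-model contribution
nu_numerical = 5.0e-4   # implicit dissipation of the capturing scheme

U, L = 1.0, 1.0         # characteristic velocity and length
re_intended = U * L / (nu_molecular + nu_sgs)
re_actual = U * L / (nu_molecular + nu_sgs + nu_numerical)
print(f"intended effective Re: {re_intended:.3e}")
print(f"actual effective Re:   {re_actual:.3e}  # dissipation counted twice")
```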

These distinctions and differences all become critical when assessing the credibility of computational models. The modeling aspects and the numerical error aspects, along with the overall philosophy of physical representation (i.e., meshing), become critical in defining the credibility of a model. Too often those using computer codes to conduct modeling in scientific or engineering contexts are completely naïve about, and oblivious to, the subtleties discussed here.

In many cases they are encouraged to be as oblivious as possible about many of the details important in numerical modeling. In those cases the ability to graft any understanding of the dynamics of the numerical solution of the governing equations onto their analysis becomes futile. This is common when the computer code solving the model is viewed as a turnkey, black box sort of tool. Customers accepting results presented in this fashion should be inherently suspicious of their quality. Of course, the customers are often encouraged to be even more naïve and nearly clueless about any of the technical issues discussed above.

Resolve, and thou art free.

― Henry Wadsworth Longfellow

 
