Complexity, therefore, results in flexibility. Increasing complexity always increases capability and adaptability.

― Jacob Lund Fisker

One of the more revealing aspects of a modeling and simulation activity is the character of each of its parts in terms of complexity, sophistication and emphasis. Examining the balance between simplicity and complexity, along with the overall level of sophistication, is immensely revealing. Typically the level of complexity devoted to each aspect of an activity shows the predispositions of those involved, and it varies greatly with the philosophical groundings of the investigators. Quite often people have innate tendencies that contradict the best interests of the modeling and simulation activity. It is useful to break up the modeling and simulation activity into a set of distinct parts to understand this texture more keenly.

Modeling and simulation is a deep field requiring the combination of a great number of different disciplines to be successful. It invariably requires computers, so software, computer science and computer engineering are involved, but the core of its value arises from the domain sciences and engineering. At the level of practical use we see a need for an emphasis on physics and engineering with a good bit of application-specific knowledge thrown in. Modeling activities can run the gamut from very specific technology applications to general topics like fluid or solid mechanics. The activities can be focused on the governing equations, on the closure of these equations with measured physical data, or on elaborate modeling that coarse grains phenomenology into a lower computational cost form. It is the difference in modeling between an equation of state or a coefficient of viscosity and a turbulence model, or deriving a model of a solid from a molecular dynamics simulation.

In between we see a blend of mathematics, engineering and physics providing the glue between the specific application-based modeling and the computers needed to run the calculations. As I said before, the emphasis in all of this reveals a great deal about the intentions of the work. Today, the emphasis in modeling and simulation has been drawn away from this middle ground between the utility of modeling and simulation in applications and the powerful computers needed to conduct the calculations. This middle ground defines the efficiency, correctness and power of modeling and simulation. A closer examination of current programs shows clearly that the applications are merely a marketing tool for buying super-powerful computers, and a way of fooling people into believing their purchase has real world value. Lost in the balance is any sense that modeling and simulation is a holistic body of work, succeeding or failing on the degree of synergy derived from successful multidisciplinary collaborations. The result of the current programs’ composition is a lack of equilibrium that is sapping the field of its vitality.

The current exascale emphasis is almost entirely focused on computer hardware, while the real world drivers are contrived and vacuous. Aside from using applications to superficially market the computers, the efforts receive support in proportion to their proximity to the computer hardware. As a result large parts of the vital middle ground are languishing without effective support. Again we lose the middle ground that is the source of efficiency and that enables the quality of the overall modeling and simulation. The creation of powerful models, solution methods, algorithms, and their instantiation in software all lack sufficient support. Each of these activities has vastly more potential than hardware to unleash capability, yet each remains without effective support. When one makes a careful examination of the program, all the complexity and sophistication is centered on the hardware. The result is a simpler-is-better philosophy for the entire middle ground and for those applications drawn into the marketing ploy.

Mathematics is the cheapest science. Unlike physics or chemistry, it does not require any expensive equipment. All one needs for mathematics is a pencil and paper.

― George Pólya

Examining the emphasis on verification and validation leads to the same conclusions. There is none; support for V&V is non-existent. As I’ve said on several occasions, V&V is the scientific method embodied. If V&V is absent from the overall activity there is a lack of seriousness about scientific (or engineering) credibility and the scientific method in general. The lack of support and emphasis on V&V is extremely telling with respect to exascale. Any scientific or applied credibility in the resulting simulations is purely coincidental and not part of programmatic success. V&V spans the scientific enterprise and underpins the true seriousness of applicability and quality in the overall endeavor. If an activity lacks any sort of V&V focus, its true commitment to either application impact or quality results should be questioned strongly.

There is no discovery without risk and what you risk reveals what you value.

― Jeanette Winterson

Within any of these subsets of activities, the emphasis on simplicity can be immensely revealing regarding the philosophy of those involved. Turbulence modeling is a good object lesson in this principle. One can look at several approaches to studying turbulence that focus great complexity in a single area: modeling for Reynolds-averaged (RANS) flows, solution methods for astrophysics with the PPM method, or direct numerical simulation (DNS) using vast amounts of computer power; but in each case the rest of the study is kept simple. With RANS the sophistication of the methods and the computing is usually quite limited. Alternatively, the PPM method is an immensely successful and complicated numerical method run with relatively simple models and simple meshes. DNS uses vast amounts of computer power on leading edge machines, but uses no model at all aside from the governing equations and very simple (albeit high-order) methods. As demands for credible simulations grow, we need to embrace complexity in several directions at once for progress to be made.

Underpinning each of these examples are deep philosophical conclusions about the optimal way to study the difficult problem of turbulence. With RANS the desire for practical engineering results drives a focus on modeling. With PPM, difficult flows with shock waves drive a need for methods with good accuracy and great robustness tailored to the precise difficulties of these flows. DNS is focused on numerical accuracy through vast meshes, computer power, and accurate numerical methods (which can be very fragile). In each case a single area is the focal point of complexity and the rest of the methodology pushes for simplicity. It is quite uncommon to find cases where complexity is embraced in several aspects of a modeling and simulation study at once. There may be great benefits in doing so, and current research directions are undermining the necessary progress.

Another important area in modeling and simulation is the analysis of the results of calculations. This rapidly gets into the business of verification, validation and uncertainty quantification (VVUQ). Again, the typical study that produces refined results in one area tends to eschew complexity in the others. This failure to embrace complexity is holding modeling and simulation back. Some aspects of complexity are unnecessary for some applications, or may detract from an emphasis on a more valuable complexity; for example, simple meshes may unleash more complex and accurate numerical methods where geometric complexity in meshing has less value. Nonetheless, combined complexity may allow levels of quality in simulations that currently elude us. A large part of the inhibition to embracing complexity is the risk it entails in project-based work. Again we see that the current tendency to avoid project risk diminishes progress by shunning complexity where it is necessary. Put differently, it is easy to saturate the tolerance for risk in the current environment and to design programs that fail precisely because they do not attack problems with sufficient aggression.

The greatest risk is not taking any.

― Tim Fargo

For VVUQ, various aspects of complexity can detract significantly from focus. For example, great depth in meshing detail can completely derail the verification of calculations. Quite elaborate meshes are created with immense detail, effectively using all the reasonable computing resources. Often such meshes cannot be trivially or meaningfully coarsened to provide well-grounded simulations connected to the finer mesh. To make matters worse, refining the base mesh produces a calculation too expensive to conduct. The end result is a lack of verification and error estimation, or more colloquially, a “V&V fail”. This state of affairs is so common as to transition from comedy to outright tragedy. The same dynamic often plays out with UQ work, where the expense of solving the model of interest squeezes out the computations needed to estimate uncertainty. A better course of action would view uncertainty estimation holistically and balance numerical, modeling, and experimental error to find the best overall estimate of uncertainty. More importantly, we could more easily produce assessments that are complete and don’t cut corners.
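To make the mesh argument concrete, here is a minimal sketch of what solution verification asks for: solutions on (at least) three systematically related meshes, an observed order of convergence from Richardson extrapolation, and a numerical error estimate that can be folded into a total uncertainty budget alongside modeling and experimental terms. The function names, the sample numbers, the safety factor, and the simple root-sum-square combination are illustrative assumptions, not a prescription.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of convergence from three solutions on meshes
    related by a constant refinement ratio r (Richardson extrapolation)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def numerical_uncertainty(f_medium, f_fine, r, p, safety=1.25):
    """Grid-convergence-style estimate of the numerical error in the
    fine-mesh result; the safety factor is a common (illustrative) choice."""
    return safety * abs(f_fine - f_medium) / (r**p - 1.0)

# Illustrative numbers only: a single output functional (say, a peak pressure)
# computed on three meshes, each a factor of 2 finer than the last.
f_c, f_m, f_f, r = 10.8, 10.2, 10.05, 2.0

p = observed_order(f_c, f_m, f_f, r)           # observed convergence rate
u_num = numerical_uncertainty(f_m, f_f, r, p)  # numerical error estimate

# Fold the numerical term into a total uncertainty budget together with
# (assumed) modeling and experimental contributions; root-sum-square is one
# simple convention for the combination, not the only defensible one.
u_model, u_exp = 0.3, 0.2
u_total = math.sqrt(u_num**2 + u_model**2 + u_exp**2)

print(f"observed order p = {p:.2f}")
print(f"numerical uncertainty = {u_num:.3f}")
print(f"total uncertainty = {u_total:.3f}")
```

The particular formulas matter less than the resource implication: the coarse and medium meshes are exactly the kind of smaller, supporting calculations that the error estimate cannot be produced without.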

Another key aspect of current practice in high performance computing is the tendency of computer use policies to highlight only the most expensive and largest calculations. As a result the numerous smaller calculations necessary for the overall quality of simulation-based studies are discouraged. Often someone seeking to do a good, credible job of simulating needs to conduct a large number of small calculations to support a larger calculation, yet the usage policies of the big computers punish such use. The results are large (impressive) calculations that lack any credibility. This problem is absolutely rampant in high performance computing. It is a direct result of a value system that prizes large meaningless calculations over small meaningful ones. The credibility and meaning of the simulation-based science and engineering is sacrificed on the altar of bigger is better. This value system has perverted large swaths of the modeling and simulation community, undermines VVUQ, and ultimately leads to false confidence in the power of computers.

The same issue wreaks havoc on scenario uncertainty, where the experimental result has intrinsic variability and no expectation of uniqueness should exist. For many such cases single experiments are conducted and viewed as the “right” answer. Instead such experiments should be viewed as a single sample from an ensemble of potential physical results. To compound matters, these experiments are real world events that are terribly expensive, or dangerous, or both; doing replicate experiments is simply not going to happen. Modeling and simulation should be leaping into this void to provide the information and analysis that cover this gap. Today our modeling and simulation capability is utterly and woefully inadequate to fill this role, and the reasons are multiple. A great degree of the blame lies in the basic philosophy of the modelers: the solution of a single well-posed problem, where the reality is an ensemble of ill-posed problems and a distribution of answers.
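As a concrete illustration of the ensemble view, the sketch below treats a single experiment as one draw from a distribution and compares it against an ensemble of simulations with perturbed inputs, rather than asking one calculation to reproduce one “right” answer. The toy model, the perturbation magnitudes, and the comparison statistic are all hypothetical placeholders standing in for a real production code and real variability.

```python
import random
import statistics

def simulate(ic_perturbation, model_parameter):
    """Stand-in for a full simulation: maps a perturbed initial condition and
    an uncertain closure parameter to a scalar outcome of interest.
    Purely illustrative; a real study would invoke the production code here."""
    return 1.0 + 0.5 * ic_perturbation + 0.2 * model_parameter + random.gauss(0.0, 0.05)

random.seed(42)

# Build an ensemble by sampling the ill-posed inputs we cannot control or
# measure exactly (initial-condition variability, uncertain model parameters).
ensemble = [
    simulate(random.gauss(0.0, 1.0), random.uniform(-1.0, 1.0))
    for _ in range(1000)
]

mean = statistics.mean(ensemble)
spread = statistics.stdev(ensemble)

# A single experiment is then judged as a sample from this distribution,
# not as a unique truth the calculation must hit exactly.
experiment = 1.35  # hypothetical measured value
z = (experiment - mean) / spread
print(f"ensemble mean = {mean:.2f}, spread = {spread:.2f}")
print(f"experiment sits {z:+.1f} standard deviations from the ensemble mean")
```

Even this toy version makes the cost pressure plain: a credible distribution of outcomes requires many runs, not one hero calculation.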

Deeper issues exist with respect to the nature of the governing equations being solved as a mean field theory. This mean field treatment effectively removes many of the direct sources of solution variability from the simulation. Each of these complexities has tremendous potential for enhancing the value of modeling and simulation, but is virtually unsupported by today’s research agenda. To support such an agenda we need a broad multidisciplinary focus, including a complementary experimental program centered on understanding these distributional solutions. Physics and engineering modeling would need to evolve to support closing the equations, and the governing equations themselves would need to be fundamentally altered. Finally, a phenomenal amount of applied mathematics would be needed to support appropriate rigor in the analysis of the models (equations), the methods of solution, and the algorithms.

Instead of this forward looking program that might transform modeling and simulation, we have a backwards looking program obsessed with the computers and slighting everything that produces true value from their use. The highest value and most impactful activities for the real world receive almost no support. The program is simply interested in putting yesterday’s models, methods, algorithms and codes on tomorrow’s computers. Worse yet, the computer hardware focus is the least effective and least efficient way to increase our grasp on the world through modeling and simulation. For an utterly naïve and uninformed person, the supercomputer is the obvious product of modeling and simulation. For the sophisticated and knowledgeable person, the computer is merely a tool, and the real product is the complete and assessed calculation tied to a full V&V pedigree.

To put this conclusion differently, high performance computing hardware is merely necessary for scientific computing that impacts the world; it is far from sufficient. The current programs focus on one important necessary element of modeling and simulation while virtually ignoring a host of the sufficient activities. The consequence is a program that is wholly inadequate to provide the value for society that it should promise.

Greatness and nearsightedness are incompatible. Meaningful achievement depends on lifting one’s sights and pushing toward the horizon.

― Daniel H. Pink

 
