I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.
― Abraham Maslow
A collection of my thoughts on issues with modeling and simulation for the future, with an emphasis on modeling. Without improvements in modeling, the promise of simulation cannot be achieved. Our current approach to high performance computing, focused on faster computers without balance across the enterprise, is intellectually bankrupt. Without a change in our fundamental philosophy of modeling in computational simulation, the investments in hardware will yield little benefit for society.
A common point of view these days regards the existing code base as a massive investment to be preserved and ported to the new generation of computers. What is rarely articulated is the antiquated nature of the modeling available in these codes. The approach used to model materials has been in use for over 50 years and has a great deal in common with the "textbook" material descriptions found in undergraduate engineering courses. Those courses define the basic approach to the analysis of systems, which are by their very nature macroscopic. Are these descriptions appropriate for highly detailed computer models? And at what point must the description change to reflect the physics of the scales being explored?
Computers allow a far more detailed description of these systems by discretizing the macroscopic system into pieces that should resolve ever more of the microstructure and its response. The problem is that the description of the materials remains almost isomorphic to the philosophy expressed in the undergraduate textbook. The material description is invariant all the way from the macroscopic view down to scales that are clearly uncovering microscopic details. This highlights several clear scientific problems that extending the current code base to exascale computation will only exacerbate.
A key to moving forward is the recognition that the class of problems that can be simulated has grown. The older homogeneous, average-response modeling is still useful and valid, but only for a restricted class of problems. New capabilities and models will enrich simulation's impact by providing avenues to new classes of problem solving. The new class of problems is defined by simulations that more faithfully fill the role of experiments and device testing. The simulations should be able to selectively probe the cases where off-normal response of devices arises. This will allow analysis to assist in determining the limits of operation and safety for engineered systems.
- At a macroscopic level, systems are not deterministic, yet the models we rely upon are deterministic and are exercised in an overly deterministic manner.
- The material descriptions are invariant to the scales at which they are applied.
- The questions answered by the codes no longer match the questions being asked about these systems.
- A scientifically vibrant field would not tolerate the level of inflexibility implied by current modeling practice. Vibrant science would demand that the models evolve to better match reality.
Addressing this set of issues is going to require deep scientific investigation, and perhaps an even deeper cultural evolution. We have a wealth of approaches for investigating and solving multiscale, multiphysics problems that bridge detailed microstructural simulation to macroscopic scales. The problem is that these approaches are not being used to solve the applied problems we currently tackle with our code base. None of these methods are being forced to displace the ancient techniques we rely upon today. As a result the state of practice is stuck in quicksand and remains static.
The importance of modeling as a driver for simulation capability should be obvious, as should its role as the essence of the entire enterprise's utility. This importance is not so obvious when looking at the balance of efforts in simulation science. For example, no amount of accuracy, computer power, or software quality can rescue a model that is inadequate or wrong. Only a focus on improving the model itself can rescue it. Today, improving models is far down the list of priorities for simulation despite its primal role in the quality of the enterprise. The nearby issues of solution methods and algorithms for models are also poorly funded. Most of the emphasis is tilted toward high performance computing and is implicitly predicated on the models themselves being correct.
Even if the models were judged to be correct, advances in experimental science should be providing pressure to improve them. Improvements in detection, information and analysis will all yield ever better experimental measurements and access to uniquely innovative experimental investigations. These should provide a constant impetus to advance models beyond their current state. This tension is essential to the conduct of high quality science. If science is healthy there is a push and pull between theory and experiment, where a theoretical advance will drive experiments, or new experimental observations will drive theory to explain them. Without modeling being allowed to advance in response to experimental evidence, the fundamental engine of science is broken.
Furthermore, the culture of analysis in engineering and science reinforces these approaches. First and foremost is the commitment to deterministic outcomes in simulation. Experimental science makes it very clear that our everyday macroscopic world has stochastic elements. There is a deterministic aspect to events, but the non-deterministic aspects are equally essential. By and large our analysis of experiments and simulations works steadfastly to remove the stochastic, usually through averaging (or regression fits to data). These average properties or events then become the working model of our systems. In the past this approach allowed great progress, but increasingly our engineered systems are more properly defined by the extremes of behavior they can exhibit.
Our entire modeling approach, especially that used in simulation, is completely ill-suited to address these extreme behaviors. A fundamental change in modeling and simulation philosophy is necessary to advance our understanding. Our models do not produce physically realizable simulations because no system actually behaves like an average system everywhere. Instead, the average behavior results from variations in behavior throughout the system. Sometimes these variations produce effects that are exactly associated with the newer questions being asked about extreme behavior.
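A small Monte Carlo sketch makes the point concrete. All names and numbers below are invented for illustration (a hypothetical population of parts with normally distributed strength), not data from any real system; the point is only that a model of the average part predicts zero failures at loads the real population's tail cannot survive.

```python
import random
import statistics

random.seed(42)

# Hypothetical population of 10,000 nominally identical parts whose
# strength varies around a nominal value; the numbers are illustrative
# assumptions, not measurements.
nominal_strength = 100.0
strengths = [random.gauss(nominal_strength, 8.0) for _ in range(10_000)]

mean_strength = statistics.mean(strengths)
weakest = min(strengths)

# A homogenized, deterministic model predicts every part carries the
# mean strength, and so predicts zero failures at any load below it.
# The tail of the population tells a different story.
load = 80.0
fail_fraction = sum(s < load for s in strengths) / len(strengths)

print(f"mean strength:  {mean_strength:.1f}")
print(f"weakest sample: {weakest:.1f}")
print(f"fraction failing at load {load}: {fail_fraction:.4f}")
```

The average-strength model and the population agree almost perfectly about the mean, and disagree completely about the extreme behavior, which is exactly where the new questions live.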
The new methods do not displace the need for the old methods; indeed, the new methods should appropriately limit to the solutions found by the old methods. The new methods allow the resolution of scale-dependent behavior and off-average behavior of the system, but need to be self-consistent with traditional methods for simulation. Perhaps just as importantly, those conducting simulations should be deeply aware of when the old methods lose validity, both in terms of scale-dependent behavior and the questions being addressed through the simulation.
This brings the idea of questions to the forefront of the discussion. What questions are being addressed via simulation? There is a set of questions that older simulation methods are distinctly capable of answering. These questions are not the same questions driving the need for simulation capability today. In providing new models for simulation, the proper questions are primal in importance.
The current simulation capability is tied to answering old questions, which remain valid today but are becoming less important as new topics crowd them out. Examples of the older questions are: "What is the performance of this system under average conditions?" "What is the yield of this production process?" "How large is the average margin of performance beyond the requirements for the system?" The key aspect of these questions is that answering them only requires the modeling to attack the average properties and performance of engineered systems. By the same token, the uncertainty we can assess via simulation today is the lack of knowledge about the average behavior of these systems, which is not the same as the uncertainty in the behavior of the actual system.
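The distinction in the last sentence can be sketched numerically. In the hypothetical example below (invented numbers, no real system), more data shrinks our uncertainty about the *average* behavior toward zero, while the spread of the *actual* behavior, which is what the new questions ask about, does not shrink at all.

```python
import random
import statistics

random.seed(0)

# Illustrative only: 2,500 measurements of a system response whose
# true mean is 50 and whose true spread is 10 (hypothetical numbers).
responses = [random.gauss(50.0, 10.0) for _ in range(2_500)]

mean_response = statistics.mean(responses)
spread = statistics.stdev(responses)

# Uncertainty in our knowledge of the average shrinks like 1/sqrt(N)
# as data accumulates...
uncertainty_of_mean = spread / len(responses) ** 0.5

# ...but the variability of the actual system stays fixed at ~10.
print(f"estimated average behavior: {mean_response:.2f}")
print(f"uncertainty of the average: {uncertainty_of_mean:.2f}")
print(f"spread of actual behavior:  {spread:.2f}")
```

A validation exercise that only checks the first two numbers says nothing about the third, yet the third is what governs extreme behavior in the field.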
This mindset influences the experimental comparisons done for the purposes of validation as well. Experimental data is often processed into an average and then compared to the simulation. No single experiment is actually simulated; rather, the simulation models the average of the experiments. As such, the simulations are not truly modeling reality because for many physical systems, the average response of the system is never produced in any single experiment. As discussed below, this mindset infects the interpretation of experiments in a deeply pernicious manner.
The new questions being asked of simulations are subtly different, and require different models, methods, algorithms and codes to answer. Key among these questions is “how much variation in the behavior of the system can be expected?” “How often will the system manifest certain extreme behavior?” “How will the entire population of the system behave under certain conditions?” “What is the worst behavior to expect from the system and how likely is it to happen?”
Ideally, a calculation should match the observations from a single physical experiment (validation), not the average of all experiments. By this measure our simulations do not model any reality today because they are almost invariably too homogeneous and deterministic in character. Experiments, on the other hand, are heterogeneous and variable, yielding some degree of stochastic response. Systems truly have both characters: a variable stochastic component that usually acts as a non-deterministic perturbation around a major homogeneous, deterministic aspect of the system. Today our models are predominantly focused on the homogeneous, deterministic aspect of these systems. This aspect is the focus of traditional models and the older questions. The new questions are clearly focused on the secondary stochastic aspects that we average away today. The result is a strong tendency to treat single experiments inappropriately as instances of average response when they are simply single instances drawn from a population of possible experiments. When a deterministic calculation is forced to compare too closely to the non-deterministic aspects of an experiment, problems ensue.
Of course this decomposition is only approximate. For nonlinear systems the separation between stochastic and deterministic depends on the circumstances and on the nature of the system itself. Some instances of the system will yield a different decomposition because of the coupling of the system's response to variability. Examples of the newer questions to be addressed by simulation abound in areas such as device engineering, stockpile stewardship and weather/climate modeling. For example, a key property of an engineered device is the portion of the population of devices that can be expected to fail under the extreme conditions associated with normal use. This may have significant reliability consequences and economic side effects. Similar questions are key in stockpile stewardship, in part to address shortcomings in the degree of field testing as populations of devices diminish and reduce the effectiveness of statistical methods. Extreme weather events such as rain, wind or snowfall have severe consequences for mortality and the economic impacts on society. The degree to which climate change increases such occurrences has significant policy consequences. Simulations are being relied upon to an ever-greater degree to estimate this issue.
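The coupling of a nonlinear response to input variability can be shown in a few lines. The quadratic response and the input statistics below are assumed purely for illustration; the point is the classic one that for a nonlinear system, running the model once at the average input does not reproduce the average of the population's responses.

```python
import random
import statistics

random.seed(1)

def response(x):
    # A hypothetical nonlinear system response; the quadratic form is
    # chosen purely for illustration.
    return x ** 2

# Inputs vary around a nominal value of 10 with spread 2 (assumed numbers).
inputs = [random.gauss(10.0, 2.0) for _ in range(100_000)]

# Deterministic practice: run the model once at the average input.
answer_from_average_input = response(statistics.mean(inputs))

# Reality: average the responses of the whole varying population.
average_of_responses = statistics.mean(response(x) for x in inputs)

# For this response the two disagree by roughly the input variance
# (about 4 here), so the "average system" is not a faithful stand-in
# for the population, and the gap itself depends on the variability.
print(f"response of the average input: {answer_from_average_input:.1f}")
print(f"average of actual responses:   {average_of_responses:.1f}")
```

The size of the disagreement grows with the input variability, which is why the deterministic/stochastic decomposition shifts with circumstances rather than being a fixed property of the system.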
In many cases the modeling in our workhorse engineering analysis codes is quite recognizable from our undergraduate engineering textbooks. Rather than forming a distinct field of study as the modeling unveils more mesoscopic and ultimately microscopic details, the modeling is still couched in terms of the macroscopic methods used in classical hand calculations. The modeling does not account for the distinct aspects of being applied to a discretized system where smaller scales are available. Many of these models are clearly associated with the average, steady-state behavior of the full macroscopic system. Multiscale modeling is simply short-circuited by the traditional view of modeling embedded in many codes. For example, continuum codes for fluids, solids, heat transfer and mechanics all use uniform, homogenized properties for solving problems. The philosophy is virtually identical to the macroscopic material description that would be familiar to undergraduate engineering students.
This is madness! It was reasonable fifty years ago, when these methods first came into use, the number of computational elements was small, and the elements were large. Today these methods are quite mature, the number of elements is huge, and their size is clearly separated from the large scale. This scale separation dictates that a model that properly describes the material at the scale of the simulation should overturn the homogenized models. A homogenized material can only describe the homogenized outcome, or the average solution for the material. Furthermore, this homogeneous model will not match any actual circumstances from reality.
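Even the averaging step itself depends on the scale being resolved. A minimal sketch, assuming a made-up 1D bar of springs in series with invented stiffness statistics: assigning every element the arithmetic-mean stiffness (the homogenized model) does not even reproduce the effective stiffness of the resolved heterogeneous bar, because series elements combine through their compliances.

```python
import random
import statistics

random.seed(7)

# A 1D bar discretized into 1,000 elements in series, each with a
# stiffness drawn from a distribution; the microstructure and numbers
# are invented for illustration.
n = 1_000
stiffness = [random.uniform(50.0, 150.0) for _ in range(n)]

# Homogenized model: every element is assigned the average stiffness,
# so the effective per-element stiffness is the arithmetic mean.
k_homogenized = statistics.mean(stiffness)

# Resolved model: springs in series add compliances (1/k), so the
# effective per-element stiffness is the harmonic mean instead.
k_resolved = n / sum(1.0 / k for k in stiffness)

# The homogenized bar is systematically stiffer than the actual
# heterogeneous microstructure it was averaged from.
print(f"homogenized stiffness: {k_homogenized:.1f}")
print(f"resolved stiffness:    {k_resolved:.1f}")
```

The naive average is not merely incapable of producing off-average realizations; here it gets the average itself wrong, which is one concrete sense in which a material description must change with the scale at which it is applied.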
One of the key aspects of real experiments is the ever-present random component of the results. The initial and boundary conditions all have a random, uncontrolled variability that yields the variation in results. In homogenized simulations this aspect of reality is washed out, and for this reason the simulation is unrealizable in the real world. At times the random component is significant enough that the result of the experiment will radically depart from the average response. In these cases, however small in probability, the simulations fall completely short of serving to replace experiments and testing. This aspect of simulation is woefully lacking from current plans despite its centrality to the role of simulation as a transformative scientific tool.
Another place where current simulation approaches fall demonstrably short of modeling reality is in ideal models. These models are often mathematically beautiful, evoking Hamiltonian structure and deep provable properties that breed devotion among the mathematically inclined. All of this simply distracts from the lack of physical reality bound up in the idealization. These models lack dissipative forces, which embody the second law of thermodynamics, a necessary element for continua associated with reality. By focusing too greatly on the beauty and majesty of the ideal model, the primal goal of modeling reality is ultimately sacrificed. This is simply too great a price to pay for beauty. More perniciously, the approach produces models with seemingly wonderful properties and rigor that seduce the unwary into modeling the world in utterly unphysical ways. In many cases the modeling is constructed as the solution to the ideal model plus an explicit model for the non-ideal effects. It should be a focus of modeling to assess whether the intrinsically unphysical aspects of the ideal model are polluting the objective of modeling reality.
In computing there is a chain of activities that provides value to the world. Modeling is the link closest to reality. No amount of computing speed, algorithmic efficiency, or methodological accuracy can rescue a model that is inadequate. Once a model is defined it needs to be solved on the computer via a method. The method can be streamlined and made more efficient via algorithmic advances. Finally, all of these need software for implementation as well as mapping to the computing hardware. At the end of the chain, the computing hardware depends on everything above it for its capacity to impact our reality. Again, modeling is the absolute key to any value at all in simulation.
Your assumptions are your windows on the world. Scrub them off every once in a while, or the light won’t come in.
― Isaac Asimov