We are deeply biased by our perceptions and preconceptions all the time. We constantly make decisions without even knowing we are making them. Any honest recognition of this would probably terrify most rational people. We often frame our investigations to prove the conclusions we have already reached. Computer modeling and simulation has advanced to the point where it now shapes such biases. If one’s most vivid view of an unseeable event is a simulation, a deep bias can form in favor of the simulation that unveiled the unseeable. We are now at the point where we need to consider whether such biases can block further improvement in modeling and simulation.

For example, one high-explosives modeling effort favored a computer code that is Lagrangian (meaning the mesh moves with the material). The energy release from explosives causes the fluid to rotate vigorously, and this rotation can render the mesh into a tangled mess. Besides becoming inaccurate, a tangled mesh will invariably endanger the entire simulation. To get rid of the problem, this code converts tangled mesh elements into particles. This is a significant upgrade over the practice of “element death,” where a tangled element is simply removed when it becomes a problem, along with its mass, momentum and energy… Conservation laws are laws, not suggestions! The conversion to particles instead allows the simulation to continue, but it brings all the problems with accuracy and ultimately conservation that particles bring along (I’m not a fan of particles).
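
To make the distinction concrete, here is a minimal Python sketch of the two options; the element and particle data structures, the triangle-area test, and the function names are my own illustrative assumptions, not the actual code’s implementation.

```python
# Minimal sketch (illustrative, not any production code): contrast "element death"
# with converting a tangled element into a particle that keeps its conserved quantities.
import numpy as np

def signed_area(v):
    """Signed area of a triangle with 2-D vertices v[0], v[1], v[2]; negative when inverted."""
    return 0.5 * ((v[1, 0] - v[0, 0]) * (v[2, 1] - v[0, 1])
                  - (v[1, 1] - v[0, 1]) * (v[2, 0] - v[0, 0]))

class Element:
    def __init__(self, verts, mass, velocity, energy):
        self.verts = np.asarray(verts, dtype=float)      # 3x2 vertex coordinates
        self.mass = mass                                 # conserved quantities carried
        self.velocity = np.asarray(velocity, dtype=float)
        self.energy = energy

def handle_tangled(elements):
    """Return (surviving elements, particles): tangled elements become particles."""
    keep, particles = [], []
    for e in elements:
        if signed_area(e.verts) > 0.0:                   # healthy element, keep it on the mesh
            keep.append(e)
        else:                                            # inverted/tangled element
            particles.append({                           # particle inherits mass, momentum, energy
                "x": e.verts.mean(axis=0),
                "mass": e.mass,
                "momentum": e.mass * e.velocity,
                "energy": e.energy,
            })
            # "Element death" would simply drop e here, discarding its
            # mass, momentum, and energy from the calculation.
    return keep, particles
```

The point of the particle branch is only that mass, momentum, and energy stay on the books; the accuracy questions raised below begin once those particles have to interact with the rest of the flow.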

More tellingly, competitor codes and alternative simulation approaches will add particles to their simulations. The only reason the particles are added is to give the users something that looks more like what they are used to. In other words, the users expect to see particles in the interesting parts of the flow, and the competitors are eager to oblige whether it is a good idea or not (it really isn’t!). Rather than develop an honestly and earnestly better capability, the developers focus on providing the familiar particles.

Why? The analysts running the simulations have come to expect particles, and particles are common where the simulations are the most energetic and interesting. So, to help persuade the analysts solving the problems to believe the new codes, the particles come along for the ride. I, for one, think particles are terrible. Particles are incredibly seductive and appealing for simulation, but ultimately terrible because of their inability to satisfy more important physical principles or to provide sufficient smoothness for stable approximations. Their discrete nature forces an unfortunate trade space to be navigated without sufficiently good alternatives. In some cases you have to choose between the smoothness needed for accuracy and conservation. Particle integration is often chosen because it can be done without dissipation, but dissipation is fundamental to physical, causal events. Causality, dissipation and conservation all trump a calculation with particles that lacks these characteristics. In the end the only reason for the particles is the underlying bias of the analysts who have grown to look for them. Nothing else, no reason based on science; it is based on providing the “customer” what they want.
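
A small sketch of the smoothness/accuracy problem alluded to here, assuming an SPH-style kernel sum; the kernel choice, particle count, and jitter level are my own illustrative assumptions.

```python
# Illustrative sketch: a smoothed-particle (SPH-style) kernel sum should reproduce
# a constant field exactly, but on irregularly spaced particles the partition of
# unity sum_j W(x_i - x_j, h) * V_j drifts away from 1 -- the loss of smoothness
# and consistency that makes particle approximations troublesome.
import numpy as np

rng = np.random.default_rng(0)
n, h = 200, 0.02
x = np.sort(np.linspace(0.0, 1.0, n) + rng.normal(0.0, 0.003, n))  # jittered particle positions
V = np.gradient(x)                                                  # crude per-particle "volume"

def W(r, h):
    """Normalized 1-D Gaussian kernel."""
    return np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))

# Partition of unity at each particle: the exact value would be 1.0 everywhere.
unity = np.array([np.sum(W(xi - x, h) * V) for xi in x])
interior = unity[20:-20]                                            # ignore boundary deficiency
print("max deviation from 1:", np.max(np.abs(interior - 1.0)))
```

On a perfectly regular lattice the deviation nearly vanishes; disturb the spacing, as any energetic flow will, and the approximation error shows up immediately even for the simplest possible field.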

“If I had asked people what they wanted, they would have said faster horses.”– Henry Ford.

There you have it: give people what they don’t even know they need. This is a core principle of innovation. If we just keep giving people what they think they want, improvement will be killed. This is exactly the dynamic that code-related biases create. People are biased strongly toward what they already have instead of what is possible.

Modeling and simulation has been outrageously successful over the decades. This success has spawned the ability to trick the human brain into believing that what it sees is real. The fact that simulations look so convincing is a mark of the massive progress that has been made. This is a rather deep achievement, but it is fraught with the danger of coloring perceptions in ways that cannot be controlled. The anchoring bias I spoke of above is part of that danger. The very success now provides a barrier to future advances. In other words, enough success has been achieved that the human element in judging quality may itself become a barrier to future improvements.

It might not come as a surprise that I’ll say V&V is part of the answer.

V&V has a deep role to play in improving upon this state of affairs. In a nutshell, the standard for accepting and using modeling and simulation must improve in order to allow the codes to improve. A colleague of mine has the philosophy, “you can always do better.” I think this is the core of innovation, success and advances. There is always a way to improve. This needs to be a steadfast belief that guides our choices, and provides the continual reach toward bettering our capabilities.

What can overcome this very human reaction to the visual aspects of simulation?

First, the value of a simulation needs to be based upon comparisons with experimental measurements, not human perceptions. This is easier said than done. Simulations are prone to being calibrated to remove differences from experimental measurements. Most simulations cannot match experimental observables without calibration, or cannot meet the quality standards without it. The end result is the inability to assess the proper value of a simulation without the bias that calibration brings. An unambiguously better simulation will require a different calibration, and potentially a different calibration methodology.
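
A toy sketch of the bias calibration introduces; the “truth” and “model” functions below are arbitrary choices of my own, picked only to show how a tuned parameter can hide model-form error inside the calibration range.

```python
# Toy sketch (my own construction): calibrating a wrong model to data can make the
# comparison look excellent, hiding the model-form error until you leave the
# calibrated range and try to predict something new.
import numpy as np
from scipy.optimize import curve_fit

def truth(x):
    return np.exp(-x)                      # stand-in for the real physics

def model(x, a):
    return 1.0 / (1.0 + a * x)             # wrong functional form with a tunable knob

x_cal = np.linspace(0.0, 1.0, 20)          # "experimental" range used for calibration
y_cal = truth(x_cal)

a_fit, _ = curve_fit(model, x_cal, y_cal, p0=[1.0])

print("calibrated a =", a_fit[0])
print("max error inside calibration range:",
      np.max(np.abs(model(x_cal, a_fit[0]) - y_cal)))
x_new = 3.0                                # a prediction outside the calibrated range
print("error at x = 3:", abs(model(x_new, a_fit[0]) - truth(x_new)))
```

The in-range agreement looks splendid after calibration; the extrapolated prediction does not, and nothing in the calibrated comparison warned you.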


In complex simulations, the full breadth of calibration is quite difficult to grapple with. There are often multiple sources of calibration in a simulation, including any subgrid physics or closure relations associated with physical properties. Perhaps the most common place to see calibration is the turbulence model. Being an inherently poorly understood area of physics, turbulence modeling is prone to becoming a dumping ground for uncertainty. For example, ocean modeling often uses a value for the viscous dissipation that far exceeds reality. As a friend of mine likes to say, “if the ocean were as viscous as we model it, you could drive to England (from the USA).” Without strong bounds on the form and value of the parameters in the turbulence model, those values can be modified to give better matches to more important data. This is the essence of the heavy-handed calibration that is so common. Another example is the detailed equation of state for a material. Often the simulation code itself has been used in determining various aspects of the material properties, or in analyzing the experimental data behind them.
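
To put a rough number behind the quip, here is a back-of-the-envelope sketch; the model eddy viscosity below is an assumed, representative order of magnitude for a coarse ocean model, not a value from any particular code.

```python
# Back-of-the-envelope sketch of the "drive to England" quip. Seawater's molecular
# kinematic viscosity is roughly 1e-6 m^2/s; the horizontal eddy viscosity used in
# coarse ocean models is an *assumed* representative order of magnitude here.
molecular_viscosity = 1.0e-6      # m^2/s, real seawater (approximate)
model_eddy_viscosity = 1.0e3      # m^2/s, assumed coarse-model horizontal eddy viscosity

ratio = model_eddy_viscosity / molecular_viscosity
print(f"The modeled ocean is roughly {ratio:.0e} times more viscous than the real one.")

# That single knob controls how quickly resolved motions are damped, which is
# exactly why it makes such a convenient dumping ground for every other error.
```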


I have witnessed several difficult areas of applied modeling and simulation become overwhelmed by calibration. Calibration is so commonly accepted that the communities engage in it without thinking. If one isn’t careful, truly validating the state of “true” modeling knowledge becomes nearly impossible. The calibration becomes intimately intertwined with what seems to be fundamental knowledge. For example, a simulation code might be used to help make sense of experimental data. If one isn’t careful, errors in the simulation used to reduce the experimental data can be transferred over to the data itself. Worse yet, the code used in interpreting the data might itself utilize a calibration (it almost certainly does). At that point you are deep down the proverbial rabbit hole. Deep. How the hell do you unwind this horrible knot? You have calibrated the calibrator. An even more pernicious error is the failure to characterize the uncertainties in the modeling and simulation used to help interpret the experiment. In other cases calibrations are used so frequently that they simply get transferred over into what should be fundamental physical properties. If these sorts of steps are allowed to proceed, the original intent can be lost.

These steps are in addition to much of my professional V&V focus: code verification and numerical error estimation. These practices can provide unambiguous evidence that a new code is a better solution on analytical problems and real applications. Too often code verification focuses solely upon the correctness of implementations as revealed by the order of convergence. The magnitude of the numerical error can be revealed as well, and it is important to provide this evidence along with the proof of correctness usually associated with verification. What was called solution verification should be called numerical error estimation, and it provides important evidence on how well real problems are solved numerically. Moreover, if part of a calibration is compensating for numerical error, the error estimation will unveil this issue clearly.
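
A minimal sketch of what such an estimate looks like, assuming a standard grid-refinement study with a constant refinement ratio; the three solution values are invented purely for illustration, not results from any real calculation.

```python
# Minimal sketch of a grid-refinement study: from solutions on three grids with a
# constant refinement ratio, estimate the observed order of convergence and the
# numerical error on the finest grid via Richardson extrapolation.
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed convergence order from three solutions with refinement ratio r."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def richardson_error(f_medium, f_fine, r, p):
    """Estimated numerical error remaining in the fine-grid solution."""
    return (f_fine - f_medium) / (r**p - 1.0)

# Illustrative numbers only: a quantity computed on grids with spacing h, h/2, h/4.
f_coarse, f_medium, f_fine = 0.9700, 0.9925, 0.9981
r = 2.0

p = observed_order(f_coarse, f_medium, f_fine, r)
err = richardson_error(f_medium, f_fine, r, p)
print(f"observed order  ~ {p:.2f}")
print(f"fine-grid error ~ {err:.2e}  (extrapolated value ~ {f_fine + err:.4f})")
```

The observed order tells you whether the implementation behaves as designed; the error estimate tells you how well the actual problem is being solved, and whether a calibration is quietly soaking up that error.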

The bottom line is to ask questions. Ask lots of questions, especially ones that might seem to be stupid. You’ll be surprised how many stupid questions actually have even stupider answers!
