“It is not the answer that enlightens, but the question.” – Eugène Ionesco, Découvertes

Research is all about answering questions. The nature and quality of the question determines the power of the answers. I’ll just assert that we haven’t been asking very good questions lately, and the quality of the research is showing the shortcomings. A lack of real risk, coupled with an intolerance of failure in research agendas, is a major problem today. Together these tendencies are tantamount to choosing a research agenda that produces little or nothing of value. These twin ills are reflected in the quality of the research questions. Poor questions that fail to probe the boundaries of knowledge lead to poor research that keeps those boundaries fixed. It is easier to continue to ask the same questions as before. There is a distinct predisposition toward asking softball questions because you can be sure of the answer. If I’m sure of the answer, I haven’t asked a good question. The answer will do little to enlighten me beyond what is already self-evident.

For example, I realize now that major opportunities were missed in my previous life in Los Alamos. Up there, the nuclear weapons designers are kings. They also project a certain disdain for computer codes despite using them virtually every day in the conduct of their work. I missed some really good questions that might have opened doors to deeper discussions that are sorely necessary for progress. Instead, we just beat around the proverbial bush and avoided the issues that hold back progress. I can imagine a dialog (past the third line it’s not clear where it would actually lead):

Me: “Why do you believe your calculation is right?”

Designer: “I don’t, the code always lies to me”

Me: “Then why do you use it?”

Designer: “It helps me solve my problems”

Me: “Even if it lies?”

Designer: “I know how to separate the truth from the lies”

Me: “So it does contain some useful information?”

Designer: “Yes.”

Me: “How do you know where the utility ends, and the lies begin?”

Designer: “My judgment”

Me: “How do you know your judgment is sound?”

Designer: “I match the calculations against a lot of experimental data”

Me: “Do you know that the path taken to solution is unique, or can it be done multiple ways?”

Designer: “There is probably more than one way, but lots of experiments provide more confidence.”

Me: “What are the implications of this non-uniqueness?”

Designer: “I haven’t thought about that.”

Me: “Why? Isn’t that important or interesting?”

Designer: “It is a little frightening.”

This is the point where the discussion starts to veer into interesting and essential territory. We are confronted with systems dripping with uncertainty of all sorts. Many scientists are inherently biased toward solving well-posed initial value problems. For instance, they will generally interpret an experiment as a unique instantiation of the physical system and expect the simulation to reproduce that precise answer. This is reasonable for a stable system, but completely unreasonable for an unstable one. Remarkably, almost every technological and natural system of great interest has instabilities in it. Even more remarkably, these systems often have a large enough ensemble of unstable events for them to average out to reliable behavior. Nonetheless, they are not, and should not be, simulated as well-posed problems. Dealing with this situation rationally is a huge challenge that we have not stood up to as a community despite its pervasive nature.
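
To make this concrete, here is a minimal sketch using a toy chaotic map (my own illustration, nothing like a production code): perturb the initial condition in the twelfth digit and pointwise agreement is destroyed after a few dozen steps, yet the time-averaged statistics of the two runs agree to within sampling error. That is the sense in which instability-ridden systems can still behave reliably, and why treating them as well-posed, pointwise predictions misses the mark.

```python
# Toy illustration (a chaotic logistic map, not a production simulation):
# pointwise prediction fails under a tiny perturbation, while the
# statistical (time-averaged) behavior remains reliable.

def logistic_orbit(x0, n, r=4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x) and return the orbit."""
    orbit = [x0]
    for _ in range(n - 1):
        orbit.append(r * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

n = 100_000
a = logistic_orbit(0.2, n)          # stand-in for "the experiment"
b = logistic_orbit(0.2 + 1e-12, n)  # stand-in for "the simulation", off by 1e-12

# Pointwise agreement is gone after a few dozen iterations...
print("x[100]  :", a[100], "vs", b[100])

# ...but the long-run statistics agree to within sampling error, because both
# chaotic orbits sample the same invariant distribution.
print("mean(x) :", sum(a) / n, "vs", sum(b) / n)
```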

Recently, I picked up a Los Alamos glossy (the LANL publication National Security Science) that discussed the various issues associated with nuclear weapons in today’s world. The issues are complex and tinged with deep geopolitical and technical concerns. Take, for instance, the question of what the role of nuclear weapons in national security is today. Maybe a better question would be: “Imagine a world where the USA didn’t have nuclear weapons, but other nations did. What would it be like? Would you be comfortable in that world?”

The importance and training of a new generation of weapons designers was also highlighted in the glossy. In the dialog associated with that discussion, the “codes lie” gem shows up. This is a slightly more pejorative version of George Box’s quote “All models are wrong” without the positive retort “but some are useful.” I strongly suspect that “the codes lie” would be followed by “but they were useful” if the article had probed a bit deeper, but glossy publications don’t do that sort of thing. The discussion in the LANL glossy didn’t go there, and lost the opportunity to get to the deeper issues. Instead it was purely superficial spin. My retort is that codes don’t lie, but people sure do. Codes have errors. Some of these errors result from the omission of important, but unknown, physical effects. Others are committed out of necessity, such as numerical integration, which is never exact. Still others reflect the finite nature of our knowledge and understanding, such as the use of mathematical models as governing equations, or imperfect knowledge of initial conditions. The taxonomy of error is the business of verification and validation (V&V) with uncertainty quantification. The entire V&V enterprise is devoted to providing evidence for the quality (or lack thereof) of simulation.
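
To give a flavor of what one piece of that evidence looks like, here is a hypothetical verification exercise in the simplest possible setting (my own toy sketch, not any laboratory’s workflow): integrate a problem with a known exact answer, refine the step size, and check that the observed order of accuracy matches the theoretical one. It quantifies exactly one entry in the taxonomy, the discretization error, and any mismatch between the observed and theoretical orders is itself evidence about what we do not yet understand.

```python
# Hypothetical verification sketch: measure the discretization error of a
# simple integrator and its observed order of accuracy.
import math

def forward_euler(f, y0, t_end, n_steps):
    """Integrate dy/dt = f(t, y) with forward Euler and return y(t_end)."""
    dt, y, t = t_end / n_steps, y0, 0.0
    for _ in range(n_steps):
        y += dt * f(t, y)
        t += dt
    return y

# A problem with a known exact answer: dy/dt = -y, y(0) = 1, so y(1) = exp(-1).
exact = math.exp(-1.0)
errors = {n: abs(forward_euler(lambda t, y: -y, 1.0, 1.0, n) - exact)
          for n in (100, 200, 400)}

# Successive refinements should reveal the theoretical first-order accuracy
# of forward Euler; the observed order is the quantitative evidence.
p = math.log(errors[100] / errors[200]) / math.log(2.0)
print("errors:", errors)
print("observed order of accuracy ~", round(p, 3))
```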

We analyze systems with computer codes because those systems are deeply nonlinear and complex. Their complexity and nonlinearity exceed our capacity to fully understand them. The computer code bridges the gap between our human capacity for comprehension and these cases. Over time, intuition can be developed, and when combined with concrete observation it leads to confidence. This confidence is an illusion. Once the circumstances depart from where the data and simulations have taken us, we encounter a rapid degradation in predictive intuition. That is where danger lies. The fact is that the codes have errors, but people lie. People lie to gain advantage, or, more commonly, they lie to themselves, because answering truthfully requires them to stare into the abyss of ignorance. In that abyss we can find the research questions worth answering, the ones that allow mankind’s knowledge to advance.

The key is to get to a better question. It is about pulling a thread, interrogating the topic in a way that peels away the layers of triviality and gets to something with depth. First, the codes are more powerful than the designers will admit; more deeply, the path to solution is not unique. Both aspects are deeply important to the entire enterprise. I might imagine having the same dialog with regard to climate science, where similar issues naturally arise. Answers to these questions get to the heart of computational science and its ability to contribute to knowledge.

The punch line is to push you to get at better, deeper questions as the route to better research. We need to ask questions that are uncomfortable, even unsettling. Not uncomfortable because of their personal nature (those are the “you idiot” questions, where adding that phrase at the end of the question makes sense), but uncomfortable because they push us up to the chasm of our knowledge and understanding. These are questions that cause us to rethink our assumptions and, if answered, expand our knowledge.

I had an episode the other day that provided such a thread to pull. The issue revolves around the perniciousness of calibration and the false confidence that it produces. People looking at reactor criticality hold their calculations to a withering standard, demanding five digits of accuracy. When I saw how they did this, my response was “I don’t believe that.” This was a sort of question: “Can you justify those five digits?” The truth is that this answer is highly calibrated; the physical data is adjusted (homogenized) to allow this sort of accuracy, but it is not “accuracy” in the sense that numerical modeling is built upon. It is precision, a calibrated precision where the impact of data and numerical uncertainty has been compensated for. The procedure lacks virtually any predictive capability at the level of accuracy asserted. The problem is that reactor criticality is a horribly nonlinear problem, and small deviations are punished with an exponential effect. Practically speaking, the precision of getting the criticality correct (it’s an eigenvalue problem) is enormously important, and this importance justifies the calibration.
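
A toy example makes the distinction between calibrated precision and predictive accuracy plain. The sketch below uses a one-group, infinite-medium multiplication factor with made-up numbers (nothing resembling real cross sections or real reactor analysis): tuning the absorption data reproduces a measured k to five digits, but as soon as the system changes, the tuned model and the true system part company by far more than the advertised precision.

```python
# Toy sketch (not real reactor physics): one-group, infinite-medium
# k_inf = nu * Sigma_f / Sigma_a, used to show that calibration buys
# precision without buying predictive accuracy. All numbers are made up.

def k_inf(nu, sigma_f, sigma_a):
    """Toy multiplication factor for a one-group, infinite medium."""
    return nu * sigma_f / sigma_a

# "True" (unknown) physics versus the imperfect data in our library.
nu = 2.43
sigma_f_true, sigma_a_true = 0.0105, 0.0255
sigma_f_lib = 0.0100                  # library fission data is slightly off

k_measured = k_inf(nu, sigma_f_true, sigma_a_true)   # the benchmark experiment

# Calibration: adjust (homogenize) the absorption data so the library
# reproduces the measured k essentially exactly.
sigma_a_cal = nu * sigma_f_lib / k_measured
print(f"calibrated k            : {k_inf(nu, sigma_f_lib, sigma_a_cal):.5f}")
print(f"measured k              : {k_measured:.5f}")

# Now change the system: add a hypothetical absorber to both the real world
# and the model. The compensating adjustment was tied to the benchmark, not
# to the physics, so the calibrated model drifts by hundreds of pcm.
delta_a = 0.002
print(f"true k, new system      : {k_inf(nu, sigma_f_true, sigma_a_true + delta_a):.5f}")
print(f"calibrated k, new system: {k_inf(nu, sigma_f_lib, sigma_a_cal + delta_a):.5f}")
```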

A similar issue arises in climate science, where the global energy balance must be nailed lest the Earth heat or cool unphysically. There, a calibration is conducted that applies only to the specific mesh, numerical integration, and subgrid models. If any of these things change, the calibration must change as well to maintain the proper energy balance. The issue is whether the overall approach can be trusted at all as the system being modeled departs from the observed system against which it has been calibrated. For computational science this may be one of the most important questions to answer: “How far can a calibrated model be trusted?” “How can a calibrated model be trusted to assist in decisions?” Without the calibration the model is functionally useless, but with the calibration is it useful?
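
Again, a zero-dimensional caricature (not any actual climate model, and with purely illustrative numbers) shows how tightly a calibration is welded to the configuration it was tuned against: change the albedo “parameterization” by a percent and the old tuning no longer closes the energy budget; restoring balance means re-tuning.

```python
# Zero-dimensional caricature (not any real climate model) of a calibration
# that is tied to the model configuration it was tuned against.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
T_OBS = 288.0      # observed global mean surface temperature, K

def equilibrium_T(albedo, emissivity):
    """Equilibrium of S0*(1 - albedo)/4 = emissivity * SIGMA * T**4."""
    return (S0 * (1.0 - albedo) / (4.0 * emissivity * SIGMA)) ** 0.25

def calibrate_emissivity(albedo):
    """Tune the effective emissivity so the model reproduces T_OBS exactly."""
    return S0 * (1.0 - albedo) / (4.0 * SIGMA * T_OBS**4)

# Calibrate against configuration A (albedo parameterization gives 0.30).
eps_A = calibrate_emissivity(0.30)
print(f"config A, tuned     : {equilibrium_T(0.30, eps_A):.2f} K")

# Change the "subgrid model" (albedo now 0.31) but keep the old tuning:
# the energy balance no longer closes and the model drifts by about 1 K.
print(f"config B, old tuning: {equilibrium_T(0.31, eps_A):.2f} K")

# Restoring the balance requires re-calibrating against the new configuration.
eps_B = calibrate_emissivity(0.31)
print(f"config B, re-tuned  : {equilibrium_T(0.31, eps_B):.2f} K")
```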

Questions are also a way of encapsulating the core of what is wrong with computational science’s obsession with high performance computing. The better question to ask is, “Are we focused on leveraging the right technological trends to maximize the impact of computational science on society at large?” I believe that we are not. We are missing the mark by a rather large margin. We are in the process of “doubling down” on the emphases of the past while largely ignoring how the world has changed. The change we see today is merely the beginning of even bigger things to come. The approaches of the past will not suffice moving forward. The hard truth is that the secrets of the physical systems we are interested in will not simply submit to brute-force computational power. Rather, we need to spend some time thinking deeply about the questions we are trying to answer. With a little bit of deep thought we might actually start asking better questions and start down the path of getting more useful answers.

Scientific computing was once a major player in the computing industry. Now it is merely a gnat on a whale’s back. The scientific computing community seems to be trying to swim against the incoming tidal wave instead of riding it. Opportunity lies in front of us; can we muster the bravery to grasp it?

“The uncreative mind can spot wrong answers, but it takes a very creative mind to spot wrong questions.” – Anthony Jay

 
