tl;dr: VVUQ injects the fundamentals of the scientific method into modeling and simulation. The general lack of VVUQ in HPC should cause one to question how much actual science is being done.

Modeling and simulation has been hailed by many as a third way to do science, taking its place next to theory and observation as one of the pillars of practice. I strongly believe that this proposition does not bear up to scrutiny. For it to be true, the advent of modeling and simulation would need to change the scientific method in some fashion; it does not. This does not minimize the importance of scientific computing, but rather puts it into the proper context. Instead of being a new way to do science, it provides tools for doing parts of science differently. First and foremost, modeling and simulation enhances our ability to make predictions and test theories. As with any tool, it needs to be used with care and skill. My proposition is that the modeling and simulation practice of verification and validation combined with uncertainty quantification (VVUQ) defines this care and skill. Moreover, VVUQ provides an instantiation of the scientific method for modeling and simulation. An absence of emphasis on VVUQ in modeling and simulation programs should cast doubt and scrutiny on the level of scientific discourse involved. To see this, one needs to examine the scientific method in a bit more detail.

The Scientific Method is a wonderful tool as long as you don’t care which way the outcome turns; however, this process fails the second one’s perception interferes with the interpretation of data. This is why I don’t take anything in life as an absolute…even if someone can “prove” it “scientifically.”

― Cristina Marrero

To continue our conversation, we need a serious discussion of the scientific method itself. What is it? What are its parts? Who does it, and what do they do? We can then map all the activities of VVUQ onto the scientific method, proving my supposition.

In science and society, the scientific method commands a large degree of reverence. Few basic processes in human discourse carry the same confidence and power. The two basic activities in science are theory and observation (experiment), along with some basic actions that power each and drive the connection between these ways of doing science. We devise theories to help explain what we experience in reality. These theories are the result of asking deep questions and proposing hypothesized mechanisms for our experience. Ultimately these theories usually take on the form of principles and mathematical structure. A theory that explains a certain view of reality can then be tested by making a prediction about some aspect of reality that has not been observed. The strength of the prediction is determined by the degree of difference between the observation that formed the basis of the theory and the test of the prediction. The greater the difference in circumstance for the experiment, the stronger the test of the theory. Ultimately there are a great number of details and quality assessments needed to put everything in context.

One thing modeling and simulation does for science is expand the ability to make predictions from complex and elaborate mathematical models. Many theories produce elaborate and complex mathematical models that are difficult to solve, which inhibits the effective scope of predictions. Scientific computing relaxes these limitations significantly, but only if sufficient care is taken to assure the credibility of the simulations. The entire process of VVUQ serves to provide this assessment, so that simulations may confidently be used in the scientific process. Nothing about modeling and simulation changes the process of posing questions and accumulating evidence in favor of a hypothesis. It does change how that evidence is arrived at, by relaxing limitations on the testing of theory. Theories that were not fully testable are now open to far more complete examination, as they may now make broader predictions than classical approaches allowed.

Science has an unfortunate habit of discovering information politicians don’t want to hear, largely because it has some bearing on reality.

― Stephen L. Burns

The first part of VVUQ, verification, is necessary to be confident that the simulation is a proper solution of the theoretical model, and suitable for further testing. The other element of verification is error estimation for the approximate solution. This is a vastly overlooked aspect of modeling and simulation: the degree of approximate accuracy is rarely included in the overall assessment, and in many cases the level of error is never addressed or studied as part of the uncertainty assessment. Thus verification plays two key roles in scientific study using modeling and simulation: it establishes the credibility of the approximate solution to the theory being tested, and it provides an estimate of the approximation quality. Without an estimate of the numerical approximation error, we risk conflating this error with modeling imperfections and obscuring the assessment of the validity of the model. One should be aware of the pernicious practice of simply avoiding error estimation through declarative statements of being mesh-converged. Such a declaration should be coupled with direct evidence of mesh convergence and the explicit capacity to provide estimates of actual numerical error. Without such evidence the declaration should be rejected.
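Such evidence can be quite simple to produce. Below is a minimal sketch, in Python with invented numbers, of how a mesh-convergence claim can be backed with an observed order of convergence and a numerical error estimate via Richardson extrapolation on a quantity of interest from three systematically refined meshes; the function name and sample values are hypothetical, not taken from any particular code.

```python
import math

def observed_order_and_error(f_h, f_2h, f_4h, refinement_ratio=2.0):
    """Estimate the observed order of convergence and the numerical error
    in the finest-mesh solution via Richardson extrapolation.

    f_h, f_2h, f_4h: a scalar quantity of interest computed on meshes with
    spacing h, 2h, and 4h (three systematically refined grids).
    """
    # Observed order p from the ratio of successive solution differences.
    p = math.log(abs(f_4h - f_2h) / abs(f_2h - f_h)) / math.log(refinement_ratio)
    # Richardson-extrapolated estimate of the exact (mesh-free) value.
    f_exact = f_h + (f_h - f_2h) / (refinement_ratio**p - 1.0)
    # Estimated numerical error in the finest-mesh result.
    error_h = abs(f_exact - f_h)
    return p, f_exact, error_h

# Hypothetical quantity of interest from three mesh levels.
p, f_star, err = observed_order_and_error(f_h=0.9501, f_2h=0.9412, f_4h=0.9050)
print(f"observed order ≈ {p:.2f}, extrapolated value ≈ {f_star:.4f}, "
      f"estimated error ≈ {err:.1e}")
```

If the observed order matches the scheme’s formal order, the mesh-converged claim gains credibility, and the error estimate then feeds directly into the uncertainty budget discussed next.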

Verification should be a prerequisite for then examining the validity of the model, or validation. As mentioned, validation without first going through verification is prone to false positives or false negatives, with the risk that numerical error will be confused with the true assessment of the theoretical model and its predictions. The issue of counting numerical error as modeling error is deep and broad in modeling and simulation, and a proper VVUQ process with a full breadth of uncertainty quantification must account for it. Like any scientific endeavor, the uncertainty quantification is needed to place the examination of models in a proper perspective. When the VVUQ process is slipshod and fails to account for the sources of error and uncertainty, the scientific process is damaged and the value of the simulation is shortchanged.
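One simple way to keep numerical error from being booked as model error is to carry it as a separate term in the comparison. Here is a minimal sketch in the spirit of the ASME V&V 20 style of comparison, where a comparison error and a combined uncertainty are tracked explicitly; the function and numbers are illustrative, not an implementation of the standard.

```python
import math

def validation_comparison(sim, exp, u_num, u_input, u_exp):
    """Compare a simulation prediction to an experimental value while
    carrying numerical, input, and experimental uncertainties separately,
    so numerical error is not silently booked as model-form error.
    """
    E = sim - exp  # comparison error between simulation and experiment
    # Combined (root-sum-square) uncertainty of the comparison.
    u_val = math.sqrt(u_num**2 + u_input**2 + u_exp**2)
    # If |E| lies within u_val, the evidence cannot isolate model-form error.
    return E, u_val, abs(E) > u_val

E, u_val, resolved = validation_comparison(
    sim=0.953, exp=0.940, u_num=0.003, u_input=0.008, u_exp=0.010)
print(f"E = {E:+.3f}, u_val = {u_val:.3f}, model-form error resolved: {resolved}")
```

In this made-up case the comparison error sits inside the combined uncertainty, so no conclusion about the model is warranted; shrink the numerical or experimental uncertainty and the same discrepancy would become meaningful evidence.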

Science, my boy, is made up of mistakes, but they are mistakes which it is useful to make, because they lead little by little to the truth.

― Jules Verne

Of course, validation requires data from reality. This data can come from experiments or observation of the natural world. In keeping with the theme, an important element of the data in the context of validation is its quality and a proper uncertainty assessment. Again, this assessment is vital for its ability to put the whole comparison with simulations in context and to help define what a good or bad comparison might be. Data with small uncertainty demands a completely different comparison than data with large uncertainty. The same holds for simulations, where the level of uncertainty has a large impact on how to view results. When the uncertainty is unspecified, both data and simulation are untethered, and scientific conclusions or engineering judgments are threatened.
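On the simulation side, the simplest way to attach such an uncertainty is forward propagation of input uncertainty. Below is a minimal Monte Carlo sketch; the toy simulator and the Gaussian input distribution are invented for illustration, standing in for an expensive code and a characterized input.

```python
import math
import random
import statistics

def propagate_input_uncertainty(simulator, nominal, spread, n_samples=1000):
    """Minimal Monte Carlo forward propagation: sample an uncertain input,
    run the simulator on each sample, and summarize the output spread."""
    outputs = [simulator(random.gauss(nominal, spread))
               for _ in range(n_samples)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# Hypothetical stand-in for an expensive simulation code.
def toy_simulator(k):
    return 1.0 - math.exp(-k)

mean, stdev = propagate_input_uncertainty(toy_simulator, nominal=2.0, spread=0.2)
print(f"prediction = {mean:.3f} ± {stdev:.3f} (1-sigma)")
```

A prediction reported this way, as a value with a spread rather than a bare number, is what gives the comparison with uncertain data any meaning at all.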

It is no overstatement to note that this perspective is utterly missing from the high performance computing world today and the foolish drive to exascale we find ourselves on. Current exascale programs are almost completely lacking any emphasis on VVUQ. This highlights the lack of science in our current exascale programs; they are naked, hardware-centric efforts that show little or no interest in actual science or applications. The holistic nature of modeling and simulation is ignored, and the activities connecting modeling and simulation with reality are systematically starved of resources, focus and attention. It is not too hyperbolic to declare that our exascale programs are not about science.

The quest for absolute certainty is an immature, if not infantile, trait of thinking.

― Herbert Feigl

The biggest issue with VVUQ in the modern view of project management is the risk it injects into work. We live in a world where spin and BS can easily be substituted for actual technical achievement. Doing VVUQ often results in failures by highlighting problems with modeling and simulation. One of the greatest skills in being good at VVUQ is honesty. Today it is frequently impossible to be honest about shortcomings because honesty is perceived as vulnerability. Stating weaknesses or limitations of anything cannot be tolerated in today’s political environment, and risks a project’s existence because it is perceived as failure. Instead of an honest assessment of the state of knowledge and the level of theoretical predictivity, today’s science prefers to make over-inflated claims and publish via press release. VVUQ, done correctly, runs counter to this practice. Done properly, VVUQ provides people using modeling and simulation for scientific or engineering work with a detailed assessment of credibility and fitness for purpose.

Scientific objectivity is not the absence of initial bias. It is attained by frank confession of it.

― Mortimer J. Adler

Just as science has a self-correcting nature in how the scientific method works, VVUQ is a means of self-correction for modeling and simulation. A proper and complete VVUQ assessment will produce good knowledge of strengths and weaknesses in modeling, and of where opportunities for improvement lie. A lack of VVUQ highlights both a project’s lack of commitment to science and its unsuitability for serious work. This assessment is quite damning to current HPC efforts that have failed to include VVUQ at all, much less emphasize it. It is basically a declaration of intent by the program to seek results associated with spin and BS instead of serious scientific or engineering effort. This end state is signaled by far more than merely a lack of VVUQ: there is also a lack of serious application and modeling support, which compounds the lack of method and algorithm support that also plagues the program. The most cynical part of all of this is the centrality of application impact to the case made for the HPC programs. The pitch to the nation and the world is the utility of modeling and simulation to economic or physical security, yet the programs are structured to make sure this cannot happen and will not be a viable outcome.

We may not yet know the right way to go, but we should at least stop going in the wrong direction.

― Stefan Molyneux

The current efforts seem to be under the impression that giant (unusable, inefficient, monstrous, …) computers will magically produce predictive, useful and scientifically meaningful solutions. I could easily declare those running these programs naïve and foolish, but this isn’t the case; the lack of breadth and balance in these programs is willful. People surely know better, so the reasons for the gaps are more complex. We have a complete and utter lack of brave, wise and courageous leadership in HPC. We know better; we just don’t do it.
