Integrity is telling myself the truth. And honesty is telling the truth to other people.

― Spencer Johnson

As tools, the frameworks of PIRT and PCMM are only as good as the work they are used to support. Using these tools will not necessarily improve your credibility, but rather help you assess it holistically. Ultimately the credibility of your modeling & simulation capability is driven by the honesty and integrity of your assessment. It is easy to discuss what you do well and where you have mastery over a topic. The real key to assessment is the willingness to articulate where an effort is weak and where the foundational knowledge in a field is the limiting factor in your capacity for solving problems. The goal in credibility assessment is not a demonstration of mastery over a topic, but rather a demonstration of the actual state of affairs, so that decisions can be made with full knowledge of the weight to place on modeling & simulation and the risks inherent in it.

A thing like quality is highly subjective and certainly relative: the problem being solved and the modeling & simulation capabilities of the relevant field or fields must both come into consideration. The frameworks therefore serve to provide a basic rubric and a consistent common foundation for considering modeling & simulation quality. In this sense PIRT and PCMM are blueprints for numerical modeling. The craft of executing the modeling & simulation within the confines of available resources is the work of the scientists and engineers. These resources include the ability to muster effort toward completing work, but also the knowledge and capability base that you can draw upon. The goal of high quality is an honest and holistic approach to guiding the assessment of modeling & simulation quality.

In the assessment of quality, the most important aspect to get right is honesty about the limitations of a modeling & simulation capability. This may be the single most difficult thing to accomplish. There are significant psychological and social factors that lead to a lack of honesty in evaluation and assessment. No framework or system can completely overcome such tendencies, but it can act as a hedge against the tendency to overlook critical details that do not reflect well on the effort. The framework assures that each important category is addressed. The ultimate test of the overall integrity and honesty of an assessment of modeling & simulation credibility depends upon deeper technical knowledge than any framework can capture.

Quite often an assessment will avoid dealing with systematic problems in a capability that have not been sufficiently solved. A few examples show where this can manifest itself. In fluid dynamics, turbulence remains a largely unsolved problem: it carries intrinsic and irreducible uncertainty, and no single model or modeling approach is adequate to elucidate the important details. In Lagrangian solid mechanics, the technique of element death is pervasively utilized for highly strained flows where fracture and failure occur; it is essential for many simulations, and it often renders a simulation non-convergent under mesh refinement. In both cases the communities that depend on modeling & simulation with these characteristics tend to under-emphasize the associated systematic issues. This produces systematically higher confidence and credibility than is technically justifiable. The general principle is to be intrinsically wary of unsolved problems in any given technical discipline.

Together, the PIRT and PCMM, adapted and applied to any modeling & simulation activity, form part of delivering a defined credibility for the effort. The PIRT gives context to the modeling efforts and records the level of importance of, and knowledge about, each part of the work. It is a structured manner for the experts in a given field to weigh in on the basis for model construction. The actual activities should strongly reflect the importance and knowledge basis assessed in the PIRT. Similarly, the PCMM can be used for a structured assessment of the specific aspects of the modeling & simulation.

The degree of foundational work providing the basis for confidence in the work is spelled out in the PCMM categories. These include the major areas of emphasis, some of which may be drawn from outside the specific effort. Code verification is an exemplar: its presence and quality provide a distinct starting point for estimating the numerical error of the specific modeling & simulation activity being assessed. Each of the assessed categories forms the starting point for the specific credibility assessment.

One concrete way to facilitate the delivery of results is to consider the uncertainty budget for a given modeling activity. Here the PIRT and PCMM serve as resource guides for the concrete assessment of an analysis and its credibility. This credibility is quantitatively defined by the uncertainty and by the intended application’s capacity to absorb that uncertainty for the sort of questions to be answered or decisions to be made. If the application is relatively immune to uncertainty, or needs only a qualitative assessment, then large uncertainties are not worrisome. If, on the other hand, an application is operating under tight constraints associated with other considerations (sometimes called a design margin), then the uncertainties need to be carefully considered in making any decision based on modeling & simulation.
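To make this concrete, here is a minimal sketch, in Python with purely hypothetical numbers, of weighing a modeling uncertainty against a design margin. The function name and the "prediction must stay below the requirement" convention are illustrative assumptions of ours, not part of either framework:

```python
def margin_consumed(prediction, uncertainty, requirement):
    """Fraction of the available design margin consumed by modeling uncertainty.

    Assumes a 'smaller is safer' requirement: the prediction must stay below
    the requirement even at the top of its uncertainty band.
    """
    margin = requirement - prediction          # available design margin
    if margin <= 0:
        raise ValueError("Prediction already violates the requirement.")
    return uncertainty / margin

# Qualitative 'what if' study: a loose requirement easily absorbs the uncertainty.
print(margin_consumed(prediction=350.0, uncertainty=25.0, requirement=500.0))  # ~0.17

# Tight design constraint: the same uncertainty eats most of the margin,
# so it must be carefully budgeted before a decision is made.
print(margin_consumed(prediction=350.0, uncertainty=25.0, requirement=380.0))  # ~0.83
```

The same uncertainty that is harmless against a loose requirement consumes most of a tight design margin, which is exactly where the frameworks earn their keep.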

This gets to the topic of how modeling & simulation is being used. Traditionally, modeling & simulation goes through two distinct phases of use. The first phase is dominated by “what if” modeling efforts, where the results are largely qualitative and exploratory in nature. The impact of decisions or options is considered on a qualitative basis and guides decisions in a largely subjective way. Here the standards of quality tend to focus on completeness and high-level issues. As modeling & simulation proves its worth for these sorts of studies, greater quantitative demands are placed on it. This marks a transition to a more demanding case, where design or analysis decisions are made on the basis of the simulations. Here the standards for uncertainty become far more taxing, and this is where these frameworks become vital tools in organizing and managing the assessment of quality.

This is not to say that these tools cannot assist in earlier uses of modeling & simulation. In particular, the PIRT can be a great tool for determining the modeling requirements of an effort. Similarly, the PCMM can be used to judge the appropriate level of formality and completeness for an effort. Nonetheless, these frameworks are far more important and impactful when utilized for more mature, “engineering”-focused modeling & simulation efforts.

Any high-level integrated view of credibility is built upon the foundation of the issues exposed in the PIRT and PCMM. The problem that often arises in a complex modeling & simulation activity is managing the complexity of the overall activity. Invariably, gaps, missing efforts, and oversights will creep into the execution of the work. The basic modeling activity is informed by the PIRT’s structure: are there important parts of the model that are missing, or poorly grounded in available knowledge? From the PCMM: are the important parts of the model tested adequately? The PIRT becomes fuel for assessing the quality of the validation and for planning an appropriate level of activity around important modeling details. Questions regarding the experimental support for the modeling can be explored in a structured and complete manner. While the credibility is not built on the PCMM and PIRT, the ability to manage its assessment is enabled by their mastery of the complexity of modeling & simulation.

In getting to a quantitative basis for the assessment of credibility, defining the uncertainty budget for a modeling & simulation activity can be enlightening. While the PCMM and PIRT provide a broadly encompassing, qualitative view of modeling & simulation quality, the uncertainty budget is ultimately a quantitative assessment of quality. Forcing the production of numerical values for quality is immensely useful and provides important focus. For this to be a useful and powerful tool, the budget must be determined with well-defined principles and disciplined decision-making.

One of the key principles underlying a successful uncertainty budget is the determination of unambiguous categories for assessment. Each of these broad categories can be populated with sub-categories, and finer and finer categorization. Once an effort has committed to a certain level of granularity in defining uncertainty, it is essential that the uncertainty be assessed broadly and holistically. In other words, it is important, if not essential, that none of the categories be ignored.

This can be extremely difficult because some areas of uncertainty are truly uncertain: no information may exist to enable a definitive estimation. This is the core difficulty of uncertainty estimation, the unknown value and basis of some quantitative uncertainties. Generally speaking, the unknown or poorly known uncertainties are more important to assess than the well-known ones. In practice the opposite happens: when something is poorly known, the value adopted in the assessment is implicitly “zero”. It is implicit because the uncertainty is simply ignored; it is not mentioned or assigned any value. This is dangerous. Again, the frameworks come in handy to help the assessment identify the major areas of effort.

A reasonable decomposition of the sources of uncertainty can be defined fairly generically at a high level: experimental, modeling, and numerical sources. We would suggest that each of these broad areas be populated with a finite uncertainty, and that each of the finite values assigned be supported by well-defined technical arguments. Of course, each of these high-level areas will have a multitude of finer-grained components describing the sources of uncertainty along with routes toward their quantitative assessment. For example, experimental uncertainty has two major components, observational uncertainty and natural variability, and each of these can in turn be analyzed into a host of more detailed aspects. Numerical uncertainty lends itself to many sub-categories: discretization, linear, nonlinear, parallel consistency, and so on.
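A minimal sketch of what such a budget might look like in code follows. The category names mirror the decomposition above; the numerical values and the root-sum-square combination rule (which assumes roughly independent sources) are illustrative assumptions, not prescriptions of PIRT or PCMM:

```python
import math

# Hypothetical three-level uncertainty budget, as fractions of the quantity
# of interest. Every sub-category carries a finite, non-zero value: a missing
# entry would be an implicit (and dangerous) claim of zero uncertainty.
budget = {
    "experimental": {"observational": 0.03, "natural variability": 0.05},
    "modeling":     {"model form": 0.08, "parameters": 0.04},
    "numerical":    {"discretization": 0.02, "nonlinear": 0.01,
                     "linear": 0.005, "parallel consistency": 0.002},
}

def rollup(subcategories):
    """Combine sub-category uncertainties by root-sum-square."""
    return math.sqrt(sum(u**2 for u in subcategories.values()))

for name, subs in budget.items():
    assert all(u > 0 for u in subs.values()), f"{name}: implicit zero!"
    print(f"{name:12s} -> {rollup(subs):.4f}")

total = math.sqrt(sum(rollup(subs)**2 for subs in budget.values()))
print(f"{'total':12s} -> {total:.4f}")
```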

The key is to provide a quantitative assessment for each category at a high level, with a non-zero value for the uncertainty and a well-defined technical basis. We note that the technical basis could very well be “expert” judgment, as long as this is explicitly stated. This gets to the core of the matter: the assessments should always be explicit and not leave essential content to implicit interpretation. A successful uncertainty budget defines the major sources of uncertainty for all three areas along with a quantitative value for each. Where the technical basis for the assessment is weak or non-existent, the uncertainty should necessarily be large to reflect that lack of basis. As in statistical sampling, the benefit of doing more work is a reduction in the magnitude of the uncertainty associated with the quantity. Enforcing this principle means that follow-on work producing larger uncertainties requires the admission that the earlier uncertainties were under-estimated. The assessment process and uncertainty budget are inherently learning opportunities for the overall effort; the assessment is simply an encapsulation of the current state of knowledge and understanding.
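One way to make the “explicit, non-zero, with a basis” principle mechanical is to carry the technical basis alongside each value. A small hypothetical sketch, with invented category names and numbers:

```python
from dataclasses import dataclass

@dataclass
class Entry:
    name: str    # uncertainty category
    value: float # quantitative uncertainty, same units as the quantity of interest
    basis: str   # explicit technical basis, even if only "expert judgment"

budget = [
    Entry("discretization",      0.02, "grid convergence study"),
    Entry("natural variability", 0.05, "replicate experiments"),
    # Weak basis, so the value is deliberately large rather than a silent zero.
    Entry("model form",          0.15, "expert judgment"),
]

for e in budget:
    assert e.value > 0 and e.basis, f"{e.name}: needs a non-zero value and a basis"
```

Follow-on work that strengthens a basis should shrink the associated value; if it grows instead, the earlier entry was, by the principle above, under-estimated.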

Too often in modeling & simulation, efforts receive a benefit from ignoring important sources of uncertainty: by doing nothing to assess uncertainty, they report no uncertainty associated with the quantity. Insult is added to this injury when the effort realizes that doing work to assess the uncertainty can then only increase its value. This dynamic becomes self-sustaining, and more knowledge and information results in more uncertainty. It is a common, often-seen consequence of uncertainty assessment, and it is pathological. The reality is that it indicts the earlier assessment: the estimate made earlier was actually too small. A vital principle is that more work in assessing uncertainty should always reduce uncertainty; if this does not happen, the previous assessment was too small. This is an all too common occurrence when a modeling & simulation effort is attempting to convey an inflated sense of confidence in its predictive capability. The value of assessed uncertainty should converge to the irreducible core of uncertainty associated with the true lack of knowledge or the intrinsic variability of the thing being modeled. In many cases the uncertainty interacts with an important design or analysis decision, where a performance margin needs to be balanced against the modeling uncertainty.

An ironic aspect of uncertainty estimation is the tendency to estimate large uncertainties where expertise and knowledge are strong, while under-estimating uncertainty in areas where expertise is weak. This is often seen with numerical error. A general trend in modeling & simulation is to treat computer codes as black boxes, so the level of expertise in the numerical methods being used can be quite low. This has the knock-on effect of lowering the estimate of numerical uncertainty while the standard methodology for solving the equations goes unquestioned. Quite often the numerical error is completely ignored in analysis. In many cases the discretization error should dominate the uncertainty, but aspects of the solution methodology can color this assessment. Key among these issues is the nonlinear error, which can compete with the discretization error if care is not taken.
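For the discretization component specifically, one standard technique is Richardson extrapolation, which uses solutions on systematically refined meshes to estimate both the observed order of convergence and the remaining error. A minimal sketch, with hypothetical solution values:

```python
import math

# Hypothetical solutions on three systematically refined meshes:
# f1 on the finest mesh, f3 on the coarsest, with refinement ratio r.
f1, f2, f3 = 0.9713, 0.9689, 0.9593
r = 2.0

# Observed order of convergence from the three solutions.
p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)

# Richardson estimate of the exact solution, and the fine-mesh error.
f_exact = f1 + (f1 - f2) / (r**p - 1)
error_estimate = abs(f1 - f_exact)

print(f"observed order p = {p:.2f}")
print(f"discretization error on the fine mesh ~ {error_estimate:.1e}")
```

If the observed order p falls well below the method’s formal order, something else, such as loose nonlinear solver tolerances, may be polluting the convergence, which is precisely the competition between nonlinear and discretization error noted above.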

This problem is compounded by a lack of knowledge of the explicit details of the numerical algorithm and of the aspects of the solution that can lead to issues. Here the PCMM can assist greatly in exposing these structural problems. The PCMM provides several key points that allow the work to proceed with greater transparency regarding the numerical solution. The code verification category provides a connection to the basis for confidence in any numerical method: are the basic features and aspects of the numerical solution being adequately tested? The solution verification category asks whether the basic error analysis and uncertainty estimation are being done. Again, the frameworks encourage a holistic and complete assessment of important details.

The final aspects to highlight in the definition of credibility are the need for honesty and transparency in the assessment. Too often, assessments of modeling & simulation lack the fortitude to engage in fundamental honesty regarding the limitations of the technology and science. If the effort is truly interested in not exposing its flaws, no framework can help. Much of the key value of the assessment is defining where effort can be placed to improve the modeling & simulation; it should help identify the areas that drive the quality of the current capability.

If the effort is interested in a complete and holistic assessment of its credibility, the frameworks can be invaluable. Their key value is in making certain that important details and areas of focus are not over- or under-valued in the assessment. Areas of strong technical expertise are often focused upon, while areas of weakness can be ignored. This can produce systematic weaknesses in the assessment that may yield wrong conclusions. More perniciously, the assessment can, willfully or not, ignore systematic shortcomings in a modeling & simulation capability. This can lead to a deep under-estimate of uncertainty while significantly over-estimating confidence and credibility.

For modeling & simulation efforts properly focused on an honest, high-integrity assessment of their capability, the frameworks of PCMM and PIRT can be an invaluable aid. The assessment can be more focused and complete than it would be in their absence. The principal good of the frameworks is to make the assessment explicit and intentional, and to avoid unintentional oversights. Their use can go a long way toward providing direct evidence of due diligence in the assessment and highlighting the quality of the credibility provided to whoever utilizes the results.

We should not judge people by their peak of excellence; but by the distance they have traveled from the point where they started.

― Henry Ward Beecher
