The people I work with like to think of themselves as smart, and they are. Most of all they like believing that they are right, and that they have solved important problems. The truth is more complicated than that. The truth is that we know a lot less than we would like to admit. Actually we know a hell of a lot less. Admitting this to ourselves is scary.
Denial can be beautiful
But only when you’re a fantastic liar
― Kim Holden
Still, many of us would like to convince ourselves of the opposite for the comfort it provides the soul. This is a selfish and self-serving mentality, which most clearly comes from the desire to have accomplished more than we have. Often the honesty in admitting our flawed knowledge and capability for understanding is too much to bear and we submerge it in falsehoods. This is basic human nature and it is inescapable.
Humankind cannot bear too much reality.
It seems to be a lot easier to metaphorically put our heads in the sand. A lot of the time we go to great lengths to convince ourselves of the opposite of the truth, to convince ourselves that we are the masters of the universe. Instead we can only achieve the mastery we crave through the opposite. We should never consider our knowledge and capability to be flawless, but flawed and incomplete.
Integrity is telling myself the truth. And honesty is telling the truth to other people.
This comes up all the time when you’re doing V&V. When V&V gets into assessment mode we constantly butt heads with the people putting their heads in the sand. They want to think that everything is OK, that mastery is at hand, and that the problems that exist are under control. The reaction to finding problems is often full of emotion and anger because the truth is so unpleasant. It confronts the sense of control over reality they have worked hard to build for themselves.
It takes strength and courage to admit the truth.
Often the control they have is local, and locally it really is OK. For a lot of modeling the results are heavily calibrated, and as long as the analysis is done close to where the calibrating data was taken there is credibility. In the long run this is a difficult and unsustainable circumstance. The problem is that calibrated modeling is often quite successful, and the people applying calibrated models are often lauded as models of success. The problems with this are deep and pernicious. We want to do much more than calibrate results; we want to understand and explore the unknown. The only way to do that is to systematically uncover our failings and shortcomings with a keen focus on exposing the limits we have. The practical success of calibrated modeling stands squarely in the way of pushing the bounds of knowledge.
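The local success and global failure of calibration can be shown with a toy sketch (mine, not from any particular modeling code): fit a simple surrogate model to data taken in a narrow window, then ask it about conditions far from where it was calibrated. The "true" process and the linear surrogate here are purely illustrative assumptions.

```python
import math

# Hypothetical "true" process the model is meant to capture.
def truth(x):
    return math.exp(x)

# Calibration data taken only in a narrow window, x in [0.0, 0.5].
xs = [i * 0.05 for i in range(11)]
ys = [truth(x) for x in xs]

# Calibrate a linear surrogate y ~ a + b*x by ordinary least squares.
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

def model(x):
    return a + b * x

# Near the calibration data the surrogate looks credible...
err_inside = abs(model(0.25) - truth(0.25)) / truth(0.25)
# ...but extrapolated well outside the window, the error explodes.
err_outside = abs(model(3.0) - truth(3.0)) / truth(3.0)
print(f"relative error at x=0.25: {err_inside:.1%}")
print(f"relative error at x=3.00: {err_outside:.1%}")
```

Inside the calibration window the relative error is about a percent; at x=3 the surrogate misses by the better part of the true value. The point is not the specific numbers but the pattern: calibration buys local credibility while hiding the model's real limits.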
A program that produces incorrect results twice as fast is infinitely slower.
A close analog to this process exists in code development. In debugging code, finding a bug is to be celebrated, but one should never believe that all the bugs have been located. A healthier and more reasonable philosophy is to assume that more bugs are hiding, waiting to be discovered. Anyone who thinks their code is bug-free is delusional, and such attitudes are rightly greeted with skepticism. The same skepticism should be offered to those who think their modeling is similarly “bug-free”.
Most men would rather deny a hard truth than face it.
This honesty is so very difficult to achieve because of the human element; people would rather think that things are better than they actually are. It takes leadership to overcome these issues. An environment that provides the impetus for improving modeling quality means confronting issues that make people uncomfortable. Specific measures need to be taken that reward people for finding “bugs” in the modeling capability. What should be expected is finding problems in the modeling, not confirmations of mastery.
Quality means doing it right when no one is looking.