Testing the limits of our knowledge

The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.

― Daniel J. Boorstin

All knowledge hurts.

― Cassandra Clare

Science is an important source and process for generating knowledge for humanity. Much of science is a core of well-known and well-tested knowledge about the universe. Most of the activity of working scientists is at the boundaries of our knowledge, working to push back the current limits on what is known. The scientific method provides structure and order to the expansion of knowledge, along with well-chosen and well-understood ways to test proposed knowledge. One method of using and testing our theoretical knowledge in science is computational simulation. Within computational work, verification and validation with uncertainty quantification is essentially the scientific method in action (https://wjrider.wordpress.com/2016/12/22/verification-and-validation-with-uncertainty-quantification-is-the-scientific-method/). One of the key activities in building our understanding of theory is the determination of uncertainties.

Unfortunately, what we call “uncertainty quantification” is only a small piece of the uncertainty that needs to be evaluated in testing knowledge. Too often people do only this narrow part of uncertainty quantification and falsely believe it is sufficient for science.

The basic premise is that we can test our knowledge via the systematic understanding of uncertainties. If we examine the uncertainty in an honest and complete manner, the limits of knowledge can be explored. Some uncertainty can be reduced through greater computational effort, some can be reduced with improved modeling, and some is irreducible. It is absolutely essential to understand systematically the nature of what we do and don’t know. The strong tendency of people is to view all uncertainty as bad, but careful and honest study makes clear this is not necessarily true. Uncertainty is completely unavoidable, and understanding the degree to which this is true can be a great unburdening. If an uncertainty is unavoidable, one can dispense with attempting to reduce it and simply figure out how to live with it.

If the uncertainty is irreducible and unavoidable, the problem of not assessing uncertainty and taking an implied value of ZERO becomes truly dangerous (https://wjrider.wordpress.com/2016/04/22/the-default-uncertainty-is-always-zero/). In this case there is an uncertainty that should be there, and instead of building this knowledge into our work, we mindlessly ignore it. Sometimes it is actually mindfully ignored, which is utterly contemptible. This situation is so common as to be laughable, but actually provides the source for tragedy. Weather and climate provide innumerable situations where ignorance of uncertainty may prove deadly in commonly encountered circumstances. As systems become more complex and energetic, chaotic character becomes more acute and common. This chaotic character leads to solutions with natural variability. Understanding this natural variability is essential to understanding the system. Building this knowledge is the first step in moving toward a capability to control and engineer it, and perhaps, if wise, reduce it. If one does not understand what the variability is, it cannot be addressed via systematic engineering or accommodation.

Courage doesn’t happen when you have all the answers. It happens when you are ready to face the questions you have been avoiding your whole life.

― Shannon L. Alder

This entire issue comes up frequently. In the case of experiments on complex systems, the problem arises because of the lack of repeated experiments. Such systems are often complex and expensive, so tests are carried out once. Even though any thoughtful examination of the situation would conclude that the results of the experiment are likely – almost certainly – highly variable, the experiment is treated as a unique event. Computational simulations of the experiment are viewed the same way; the calculation should try to reproduce this single experiment. This is potentially a recipe for disaster. A healthier point of view would look at this experiment as a single instance drawn from a probability distribution. If the experiment were repeated, there would be different results. A truly predictive computational simulation would do exactly the same thing. Of course, the simulations themselves are designed to compute the average response of such an experiment (being based on mean field equations). All of this conspires to create big problems should the experiment actually draw a low probability outcome (i.e., from the tails of the distribution).
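This single-draw view can be sketched in a few lines of Python. Everything here is purely illustrative – the system, the noise level, and the names are toy assumptions, not any real test campaign – but it shows how a one-shot result can sit far from the mean response a mean-field simulation would compute:

```python
import random
import statistics

random.seed(0)

# Toy "experiment": a complex system whose measured outcome varies
# shot-to-shot.  We model that natural variability as a distribution:
# a mean response (what a mean-field simulation targets) plus
# irreducible scatter.  All numbers are illustrative assumptions.
def run_experiment(mean_response=10.0, variability=1.5):
    return random.gauss(mean_response, variability)

# What a statistically honest campaign would characterize:
outcomes = [run_experiment() for _ in range(1000)]
mean = statistics.mean(outcomes)
spread = statistics.pstdev(outcomes)

# What a one-shot campaign reports: a single draw, which may well
# come from the tail of the distribution rather than near the mean.
single_shot = outcomes[0]
print(f"mean ~ {mean:.2f}, spread ~ {spread:.2f}, one-shot = {single_shot:.2f}")
```

Comparing a simulation against `single_shot` alone, with no notion of `spread`, is exactly the trap described above.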

To address this systemic issue we need to reframe both experimental and theoretical practice. If an experiment fails to give repeatable results, we need to work to unveil the statistical nature of the results, especially for complex, high consequence systems. Theoretical models need to have the same properties more mindfully, rather than producing variability by happenstance (mean field models produce variability almost serendipitously). Computational simulations follow the theory by construction, and there would be great benefits to our knowledge and understanding from more structurally and mindfully building the variability into the theory. Failure to address this issue systematically is an ever-growing limit for science. We have a major scientific gap open in front of us and we are failing to acknowledge and attack it with our scientific tools. It is simply ignored almost by fiat. Changing our perspective would make a huge difference in experimental and theoretical science, and remove our collective heads from the sand on this matter.

Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less.

― Marie Curie

At a deeper and more fundamental level, the whole exploration of the true uncertainty is a search for understanding. We seek to define our level of precision in modeling something by systematically examining the levels of certainty (by proxy of studying uncertainty). We need to understand the quality of our knowledge, and a complete survey of uncertainty is a path to this end. For computational modeling there are three broad categories of uncertainty: the model itself, the model’s numerical solution, and the experimental fidelity used to grant confidence to the model. Each of these uncertainties can in turn be broken down into more detailed pieces. For example, we must compare to the model itself rather than to the error in the solution of the model. It is important to choose some structure for the uncertainty and commit to an estimate of all portions of the structure. One should never take a zero magnitude for the uncertainty of any structural element by ignoring it.
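One minimal way to honor that commitment is an explicit uncertainty budget in which every structural element carries a nonzero estimate. The sketch below rolls the pieces up by root-sum-square, which itself assumes the sources are independent – an assumption that should be stated as part of the assessment. The categories and magnitudes are hypothetical:

```python
import math

# Hypothetical uncertainty budget: every element of the structure gets
# an explicit, nonzero estimate (same units as the predicted quantity).
# The magnitudes here are illustrative only.
budget = {
    "model form": 0.08,
    "model coefficients (parametric)": 0.03,
    "numerical (discretization) error": 0.04,
    "experimental measurement": 0.02,
}

# Root-sum-square combination presumes independent sources; that
# assumption should itself be disclosed with the result.
total = math.sqrt(sum(u ** 2 for u in budget.values()))

for source, u in budget.items():
    print(f"{source:35s} {u:6.3f}")
print(f"{'combined (RSS)':35s} {total:6.3f}")
```

The point is not the arithmetic but the discipline: an element left out of the table is an element silently assigned ZERO.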

Any fool can know. The point is to understand.

― Albert Einstein

One of the clearest ways to undermine this quest for understanding the boundaries of our knowledge is ignoring a key uncertainty. There are several usual suspects for willful uncertainty ignorance. Probably the most common uncertainty to be willfully ignorant of is numerical error. The key numerical error is discretization error, which arises from the need to make a continuous problem discrete and computable. The basic premise of computing is that more discrete degrees of freedom should produce a more accurate answer. By examining the rate at which this happens, the magnitude of the error can be estimated. Other estimates can be had through making some assumptions about the solution and relating the error to the nature of the solution (like the magnitude of estimated derivatives). Other, generally smaller, numerical errors arise from solving systems of equations to a specified tolerance, parallel consistency error and round-off error. In most circumstances these are much smaller than discretization error, but they are still non-zero.
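The rate-based estimate described above is, in essence, Richardson extrapolation. A minimal sketch (with synthetic grid "solutions", since no actual code results appear here) looks like this:

```python
import math

def richardson_estimate(f_coarse, f_medium, f_fine, r):
    """Estimate the observed convergence order and the discretization
    error on the fine grid from three solutions with a constant
    refinement ratio r.  A standard sketch; names are illustrative."""
    p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
    err = abs(f_fine - f_medium) / (r ** p - 1.0)   # fine-grid error estimate
    return p, err

# Synthetic solutions obeying f(h) = exact + C * h**2 (exact answer 1.0),
# on grids h = 0.4, 0.2, 0.1 so the refinement ratio is r = 2.
exact, C, p_true = 1.0, 0.5, 2
h = [0.4, 0.2, 0.1]
f = [exact + C * hh ** p_true for hh in h]

p_est, err_est = richardson_estimate(f[0], f[1], f[2], r=2.0)
print(f"observed order ~ {p_est:.3f}, fine-grid error ~ {err_est:.4f}")
```

For these manufactured values the estimated order recovers 2 and the error estimate matches the true fine-grid error of 0.005; on real calculations the same procedure gives a nonzero, defensible number to put in the uncertainty budget.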

Experimental observations are only experience carefully planned in advance, and designed to form a secure basis of new knowledge.

― Sir Ronald Fisher

A second category of uncertainty that is highly prone to being ignored is experimental variability. Often this is the direct consequence of doing only a single experiment. Rather than knowing the physics of the problem well enough to conclude that the experiment will be highly variable, we ignore this and endeavor to model the single experiment as a unique, well-determined event. The result of this set of unjustified assumptions is wholesale ignorance of systematic and irreducible uncertainty. This is truly scientific malpractice. Addressing this shortcoming should be the focus of significant effort experimentally, as well as in modeling and its numerical solution. It is a very large and largely unaddressed issue in science.

In addition to ignoring the intrinsic variability in experiments, the more classical and even less excusable uncertainty often ignored is measurement error. This error is always present, even in cases where the experiment is well posed and utterly reproducible. Measurements have finite precision and carry some degree of error and uncertainty. This finite value should always be reported as part of the experimental comparison, even when the uncertainty is small. In a deep sense a small uncertainty is the more important one to report, because it lends credence to the claim that the data is high quality. Not reporting the uncertainty simply leaves this vague and undefined.
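Reporting finite measurement precision also means propagating it into any derived quantity. A standard first-order sketch, with hypothetical instrument readings (the quantities and precisions are made up for illustration), is:

```python
import math

# Hypothetical readings: density derived from mass and volume, each
# with a finite instrument precision that must be carried along.
mass, u_mass = 12.50, 0.05      # grams, +/- instrument precision
volume, u_vol = 4.80, 0.02      # cm^3, +/- instrument precision

density = mass / volume

# First-order (root-sum-square) propagation of relative errors, valid
# for a product/quotient with independent error sources.
u_density = density * math.sqrt((u_mass / mass) ** 2 + (u_vol / volume) ** 2)

print(f"density = {density:.4f} +/- {u_density:.4f} g/cm^3")
```

The derived value should never be quoted without the `+/-` term; leaving it off is exactly the implied-zero uncertainty criticized above.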

The last area of uncertainty is modeling uncertainty. In the vast majority of cases this will be the largest source of uncertainty, though of course there will be exceptions. It has three major components: the choice of the overall discrete model, the choice of the models or equations themselves, and the coefficients defining the specific model. The first two areas are usually the largest part of the uncertainty, and unfortunately the most commonly ignored in assessments. The last area is the most commonly addressed because it is amenable to automatic evaluation. Even in this case the work is generally incomplete and lacks full disclosure of the uncertainty.

Today, in examining modeling uncertainty, we most often attack the least important piece systematically while the more important modeling uncertainties are ignored. The “easy” uncertainty to attack is the coefficients in the model. This can be achieved using well-developed methods such as MCMC (Markov chain Monte Carlo). One defines a set of parameters to be varied and ranges for the variation. The calculations can be repeated using values drawn to efficiently sample the probability space of the calculation and produce the uncertainty. This sampling is done over a very high-dimensional space and carries significant errors. More often than not, the degree of error associated with the undersampling is not included in the results. It most certainly should be.
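A minimal sketch of this parametric approach uses a hand-rolled random-walk Metropolis sampler on a toy one-parameter model. Everything here – the linear model, the assumed-known noise level, the proposal width – is an illustrative assumption, not a recipe for production UQ:

```python
import math
import random
import statistics

random.seed(1)

# Toy "experiment": noisy observations of a linear model y = a * x.
a_true, sigma = 2.0, 0.1        # sigma: assumed-known measurement noise
xs = [0.1 * i for i in range(1, 21)]
data = [a_true * x + random.gauss(0.0, sigma) for x in xs]

def log_likelihood(a):
    return -0.5 * sum(((y - a * x) / sigma) ** 2 for x, y in zip(xs, data))

# Random-walk Metropolis: propose a step, accept with probability
# min(1, likelihood ratio).  A flat prior on `a` is implied.
a = 1.0
ll = log_likelihood(a)
chain = []
for step in range(6000):
    proposal = a + random.gauss(0.0, 0.05)
    ll_prop = log_likelihood(proposal)
    if math.log(random.random()) < ll_prop - ll:
        a, ll = proposal, ll_prop
    if step >= 1000:            # discard burn-in
        chain.append(a)

a_mean = statistics.mean(chain)
a_std = statistics.pstdev(chain)
print(f"a = {a_mean:.3f} +/- {a_std:.3f}")
```

Even this toy chain illustrates the closing caveat: the reported spread depends on how well the chain has sampled the space, and for a realistic high-dimensional parameter set that sampling error should itself be estimated and disclosed.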

The other two uncertainties are generally larger and thus more important to characterize. Unfortunately, neither is amenable to the sort of turnkey, black box approach that parametric uncertainty allows. As a result these uncertainties are usually completely ignored. The two areas are closely related. Any complex problem can be modeled in a variety of ways. An analyst ends up making innumerable assumptions and choices in the course of modeling a complex problem. One choice is the code used for the analysis and the specific settings within the code. Beyond this there are choices about how the problem is meshed, boundary conditions, initial conditions, submodels to close the equations, and the analysis of results. Each and every one of these choices can produce changes in the results. It is very uncommon to see a problem solved in different or remotely independent ways. As a result the uncertainty from the modeling and analysis is usually completely hidden.

Science, my boy, is made up of mistakes, but they are mistakes which it is useful to make, because they lead little by little to the truth.

― Jules Verne

To truly test our knowledge in a topic we must be open to a full and honest evaluation of the uncertainties. Knowledge and uncertainty are two sides of the same coin. If you don’t know the uncertainty, you don’t know the extent of your knowledge. Too often we only do the assessment that is easy. In this assessment strategy we also implicitly choose uncertainty estimates of ZERO for information we know is uncertain, but difficult to assess. This is a pernicious and dangerous practice. Due diligence and responsibility should dictate that some sort of uncertainty be estimated for all sources. If we cannot estimate these uncertainties in a credible and knowledgeable manner, we have no business in modeling and our experiments are not yielding their full value. The only responsible act in this case would be to produce work that would make such a bounding assessment possible.

Negative results are just what I want. They’re just as valuable to me as positive results. I can never find the thing that does the job best until I find the ones that don’t.

― Thomas A. Edison


The Inspirational Promise of TeD Talks

Your number-one mission as a speaker is to take something that matters deeply to you and to rebuild it inside the minds of your listeners. We’ll call that something an idea.
― Chris J. Anderson

Every September my wife and I attend the local TeDx event here in Albuquerque. It is a marvelous way to spend the day, and leaves a lasting impression on us. We immerse ourselves in inspiring, fresh ideas surrounded by like-minded people. It is empowering and wonderful to see the local community of progressive people together at once, listening, interacting and absorbing a selection of some of the best ideas in our community. This year’s event was great, and as always several talks stood out, particularly Jannell MacAulay (Lt. Col, USAF) talking about applying mindfulness to work and life, and Olivia Gatwood’s inspiring poetry about the seemingly mundane aspects of life that speak to far deeper issues in society. The smallest details are illustrative of the biggest concerns. Both of these talks made me want to think deeply about applying their lessons in some fashion to myself and improving my life consequentially.


That’s part of the point of TeD: the talks are part gospel of progress, part marketing of great ideas and part performance art. All of these things have great use to society in lifting up and celebrating a drive to be better and to progress toward a better future. Humanity has immense power to change the world around it for the better. We can look across the globe and witness the collective power of humanity to change its environment. A great deal of this change is harmful or thoughtless, but much of it is a source of wonder. Our understanding of the World around us and the worlds within us has changed our biological destiny.

We have transitioned from an animal fighting for survival during brief, violent lives to beings capable of higher thought and aspiration during unnaturally long and productive lives. We can think and invent new things instead of simply fighting to feed ourselves and reproduce a new generation of humans to struggle in an identical manner. We can also produce work whose only value is beauty and wonder. TeD provides a beacon for humanity’s best characteristics, along with a hopeful, forward-looking community committed to positive common values. It is a powerful message that I’d like to take with me every day. I’d like to live out this promise with my actions, but the reality of work and life comes up short.

There was a speaker from my employer this year, and there always is. There wasn’t anyone from my former employer, the other major scientific Lab in our state (what was once one of the premier scientific institutions in the World, but that’s a thing of the past). Also noticeable is the lack of support for the local TeD organization by either employer. I’ll grant you that Los Alamos has supported it in the past, but no longer. There’s probably some petty and idiotic reason for the withdrawal of support. My employer, Sandia, doesn’t support it, and hasn’t ever. It looks like our local University doesn’t support it either. I know that Los Alamos did their own local TeD conference and perhaps they thought that was enough TeD for them. That’s the sad best-case scenario, and I don’t know what the full story is.

For Sandia it’s not particularly surprising, as it’s not exactly a progressive, idea-centered place, and these days no place is anyway. The University should be, but the lack of financial support from the state could explain it (it’s a common characteristic of GOP governance to eviscerate universities). It is quite hard for me to express my level of disappointment in these institutions’ lack of civic support for progressive thought. It is stark testimony on the current state of affairs that two National Laboratories and a University cannot be supportive of a major source of progressive thought in the community they are embedded within. An active progressive and intellectual community in the areas where these institutions are located should be beneficial for the recruiting and retention of progressive and intellectual staff. That this sort of long view isn’t at work is a sign of the times.

TeD talks are often the focus of criticism for their marketing approach and performance-art character. These critiques are valid and worth considering, including the often-superficial way difficult topics are covered. In many ways, where research papers can increasingly be criticized as merely marketing the actual work, TeD talks are simply the 30-second mass market advertisement of big ideas for big problems. Still, the talks provide a deeply inspiring pitch for big ideas that one can follow up on, and provide the entry to something much better. I find a talk is a perfect opening to learning or thinking more about a topic, or merely being exposed to something new.

Control leads to compliance; autonomy leads to engagement.

– Daniel H. Pink

One prime example is one of my favorite talks of all time, by Daniel Pink (https://www.ted.com/talks/dan_pink_on_motivation). This talk is basically a pitch for the book “Drive” and touches only superficially on the topic. The book itself is a distillation of very complex topics. All of this is true, but none of it undermines the value of the ideas. TeD provides a platform to inspire people to do more and get closer to the actual application of the ideas to their lives (not just buy Pink’s book, the true cynic’s take on its purpose). Interestingly, the managers at work were also reading Pink’s book and discussing the ideas therein. The rub was the observation that I could not identify a single thing recommended in Pink’s book that made it to the workplace. It seemed to me that the book simply inspired the management to a set of ideals that could not be realized. The managers aren’t really in charge; they are simply managing corporate compliance instead of managing in a way that maximizes the performance of their people. The Lab isn’t about progress any more; it is about everything but progress. Compliance and subservience have become the raison d’être.

For artists, scientists, inventors, schoolchildren, and the rest of us, intrinsic motivation (the drive to do something because it is interesting, challenging, and absorbing) is essential for high levels of creativity.

– Daniel H. Pink


Intrinsic motivation is conducive to creativity; controlling extrinsic motivation is detrimental to creativity.

–Daniel H. Pink

This deep frustration isn’t limited to TeD talks; it applies to almost every source of great advice or inspiration available. Almost every manager I know reads the Harvard Business Review. I read it too. It is full of wonderful ideas and approaches to improving the way we work. Yet it is impossible to see anything ever done with all the great advice or inspiration. My workplace looks like all the “before” case studies in HBR, and more like them every day, not less. Nothing ever recommended happens at work, nothing is tried, nothing changes in a positive direction; it’s like we are committed to moving backwards. HBR is progressive in terms of the business world. The problem is that the status quo and central organizing principle today is anti-progressive. Progress is something everyone is afraid of, and the future appears to be terrifying and worth putting off for as long as possible. We see a genuinely horrible lurch toward an embrace of the past, along with all its anger, bigotry, violence and fear. Fear is the driving force for avoiding anything that looks progressive.

Management isn’t about walking around and seeing if people are in their offices, he told me. It’s about creating conditions for people to do their best work.

– Daniel H. Pink

Now that I’ve firmly established the lack of relevance of TeD and progressive thought in my workplace, I can at least appreciate and apply it at a personal level. I’d love for work to reflect a place of genuine progress, but this seems a bridge too far today. Work is a big part of life, and these observations are rather dismaying. Ideally, I’d like a workplace that reflects my own values. The truth of the matter is that this is nearly impossible for a progressive-minded person in America today. Even the bastions of progressive thought like universities are not working well. Society at large seems to be at war with elites, and progressive thought is far more under siege than whites or Christians. I can ask the serious question: how many atheists are in Congress? How much well proven and accepted science does our government already reject? Don’t get me started on our judicial system, or the war on drugs, both of which focus far more on oppressing minorities than on crime or drug abuse. The bottom line is the sense that we are in a societal backlash against change, so more progress seems to be impossible. We will be fighting to hold onto the progress we’ve already made.

Still, I can offer a set of TeD talks that have both inspired me and impacted my life for the better. They have either encouraged me to learn more, or to make a change, or simply to change perspective. I’ll start with a recent one where David Baron gave us an incredibly inspiring call to see the total eclipse in its totality (https://www.ted.com/talks/david_baron_you_owe_it_to_yourself_to_experience_a_total_solar_eclipse). I saw the talk and concluded that I simply had to go, and then I showed it to my wife to convince her. It did! We hopped into the car at midnight the day of the eclipse and drove eight hours to get from Northern Idaho to Eastern Oregon. We got off I-82 at Durkee, finding a wonderful community center with a lawn, and watched it with 50 people from all over the local area plus a couple from Berlin! The totality of the eclipse lasted only two minutes. It was part of a 22-hour day of driving over 800 miles, and it was totally and completely worth every second! Seeing the totality was one of the greatest experiences I can remember. My life was better for it, and my life was better for watching that TeD talk.

Another recent talk really provoked me to think about my priorities. It is a deep consideration of what your priorities are in terms of your health. Are you better off going to the gym, or going to a party, or the bar? Conventional wisdom says the gym will extend your life the most, but perhaps not. Susan Pinker provides a compelling case that social connection is the key to a longer life (https://www.ted.com/talks/susan_pinker_the_secret_to_living_longer_may_be_your_social_life). This gets at the disparity between men and women, since women tend to form long, life-affirming friendships with greater ease than men. The talk is backed up by data, and by visits to places where people live long lives. These people live in communities where they are entangled in each other’s lives almost by design. It speaks to the priorities associated with health care and self-care, along with the benefit of action. Focusing on your social life is a genuinely beneficial act for prolonging your life.

Our modern computing world is a marvel, but it also has some rather pronounced downsides. In many ways our cell phones are making us far unhappier people. The phones and their apps are designed to grab and demand our attention. They can become sources of deep and pervasive anxiety. This is exactly what they are designed to do. As Adam Alter explains, an entire industry is set up to get as much of our attention as possible because our attention equals money, big money (https://www.ted.com/talks/adam_alter_why_our_screens_make_us_less_happy). He also explains that it doesn’t have to be like this. The same social engineering that has gone into making the phones so demanding could be harnessed to help us be better. If we balanced the naked profit motive with some measure of social responsibility, we might turn this problem into a benefit. This is a wonderfully inspiring idea; it is also terribly progressive and dangerous to the unfettered capitalism fueling this growing societal crisis.


Love rests on two pillars: surrender and autonomy. Our need for togetherness exists alongside our need for separateness.

– Esther Perel

The power of TeD extends to far deeper personal matters as well. A couple of talks by Esther Perel speak to reframing our love lives (https://www.ted.com/talks/esther_perel_the_secret_to_desire_in_a_long_term_relationship, https://www.ted.com/talks/esther_perel_rethinking_infidelity_a_talk_for_anyone_who_has_ever_loved). Perel defies conventional thought on love, marriage and infidelity, providing a counter-theory to all these matters. Her first talk is an accompaniment to her first book and tackles the thorny issue of keeping your long-term relationship hot and steamy. It is a challenge many of us have tackled, and no doubt struggled with. This struggle exists for good reasons, and knowing the reasons provides insight into solutions. Perel powerfully explains the problem and speaks to working toward solutions.

The thornier issue of infidelity is the subject of the second talk (and her brand new book). As before, she tackles the topic from a totally different perspective. Her approach is unconventional and utterly refreshing. The new perspectives provide an alternative narrative for handling this all too common human failing. Explaining and understanding the complex roots of this all-too-common relationship problem can improve our lives. It is an alternative to the moral perspective that has failed to provide any solutions. Among the threads to concentrate on are the relatively recent character of modern marriage in the history of humanity, and the consequences of the deep changes in the institution. One of the beauties of TeD is the exposure to fresh perspectives on old ideas alongside completely new ideas.

The very ingredients that nurture love – mutuality, reciprocity, protection, worry, and responsibility for the other – are sometimes the very ingredients that stifle desire.

– Esther Perel

Truth and courage aren’t always comfortable, but they’re never weakness.

– Brene Brown

The last talk I’ll highlight today is truly challenging to most of us. Brene Brown is a gifted and utterly approachable speaker presenting a topic that genuinely terrifies most of us: vulnerability (https://www.ted.com/talks/brene_brown_on_vulnerability). Being vulnerable is an immensely valuable characteristic that almost everyone struggles with. Vulnerability often equates with being weak, but also with being open and honest. That openness and honesty is the key to being a better person and developing better relationships. In many cases the weakness and honesty is shared only with yourself. In either case vulnerability provides an avenue to connection and an embrace of humanity that both frees you and allows deeper relationships to flourish. The freedom you give yourself allows you to grow, learn and overcome bad experiences.

What would you be glad you did–even if you failed?

– Brene Brown

I always wish that I could focus on most of what I hear at a local TeD event, but one must make choices; time and effort are limited. While I do plan to more mindfully apply mindfulness to my life, right now I’ll hedge toward the artistic side of things, if for no other reason than that I usually don’t. I will close by honoring the inspirational gift of Olivia Gatwood’s talk on poetry about seeking beauty and meaning in the mundane. I’ll write a narrative of a moment in my life that touched me deeply.

The Best Gift

A night of enchanting companionship was drawing to a close,

and I was longing for one last kiss before parting

Those early autumn nights are so welcoming,


the crisp nights promised, but not yet arrived,

summer still alive, but fading

I hadn’t even bothered to fully dress for the goodbye,


Conventions and neighbors be damned

It was a warm evening and my skin wanted to drink it in,

drink her in too, one last time


We slowly made our way out to my driveway

talking, still flirting, our banter unabated

The moon full, bright, and peeking between the gaps in the single cloud


adorning the sky as it illuminates the night

It will light her way home as a warm beacon

“Good,” I think, “you’ll be safe” on your long drive home


We draw close to each other, pressing hard while

savoring the time spent together fun and friendship

with a depth that was unexpected, but welcome


You ask, “What would you like for your birthday?”

My mind goes to my elaborate tattoo to adorn me soon,

“I’m already getting what I want for myself”


“I always ask for more time,” she said longingly

Her words cut me to the core,

of course, what else would she want?


My head spins with the truth revealed by her breathtaking honesty,

with words failing me for a breath or two, … or three

My mind opens with the realization of her precious offering


“I just want good memories”

Realization washes over me, she just gave me the best gift I could have hoped for

We kiss deeply and parted until we next renew making good memories



You are not special; you are replaceable

You are not special. You’re not a beautiful and unique snowflake. You’re the same decaying organic matter as everything else. We’re all part of the same compost heap. We’re the all-singing, all-dancing crap of the world.

– Chuck Palahniuk

This post was inspired by twin events: a comment from a dear friend, and watching the movie “Fight Club” again. This is my 300th blog post here. It’s been an amazing experience; thanks for reading.

If you consider the prospect of retirement and you feel that your place of work does not need you and would not suffer from your departure, you aren’t alone. This is an increasing trend in work today. You are an eminently replaceable cog in the machine, which can be interchanged with another person without any loss to the workplace. Your personal imprint on the products of work is not essential, and someone else could do exactly what you do. If you work in one of the many service industry jobs, or provide the basic execution of tasks, the work is highly prescribed and you versus someone else doesn’t matter much. If you are reliable, show up and work hard, you are a good worker, but someone else with all the same characteristics is just as good.

What’s measured improves

–Peter F. Drucker

I didn’t use to feel this way, but times have changed. I felt this way when I worked at McDonald’s for my first job. I was a hard worker, and a kick-ass grill man, opener, closer, and whatever else I did. I became a manager and ultimately the #2 man at a store. Still, I was 100% replaceable and in no way essential; the store worked just fine without me. I was interchangeable with another hard-working person. It isn’t really the best feeling; you’d like to be a person whose imprint on the World means something. This is an aspiration worth having, and when your work is truly creative, you add value in a way that no one else can replicate.

When I started working almost 30 years ago at Los Alamos, this dynamic felt a lot different. People mattered a lot, and an individual was important. Every individual was important, unique and worth the effort. As a person you felt the warm embrace of an incubator for aspiring scientists. You were encouraged to think of the big picture, and the long term, while learning and growing. The Lab was a warm and welcoming place where people were generous with knowledge, expertise and time. It was still hard work and incredibly demanding, but all in the spirit of service and work with value. I repaid the generosity by learning and growing as a professional. It was an amazing place to work, an incredible place to be, an environment to be treasured, and it made me who I am today.

Never attribute to malevolence what is merely due to incompetence

–Arthur C. Clarke

It was also a place that was out of time. It was a relic. The modern World came to Los Alamos and destroyed it, creating a shadow of its former greatness. The sort of values that made it such a National treasure and one of the greatest institutions could not coexist with today’s culture. The individuals so treasured and empowered by the scientific culture there were relabeled as “butthead cowboys,” troublemakers, and failures. The culture that was generous, long term in thought, viewing the big picture and focused on National service was haphazardly dismantled. Empowerment was ripped away from the scientists and replaced with control. Caution replaced boldness, management removed generosity, all in the name of a formality of operations that removes anything unforeseen from outcomes. The modern world wants assured performance. Today Los Alamos is a mere shadow of itself, stumbling forward toward the abyss of mediocrity. Witnessing this happen was one of the greatest tragedies of my life.

People who don’t take risks generally make about two big mistakes a year. People who do take risks generally make about two big mistakes a year.

–Peter F. Drucker

Along with assured performance we lose serendipity and discovery. We lose learning and surprises, good and bad. We lose the value in the individual, and the ability to have one person make a positive difference. All of this is to keep one person from making a negative difference, or to avoid mistakes and failures. The removal of mistakes and failures removes the engine of learning and real scientific discovery from the table as well. Each and every one of these steps is directly related to the fear of bad things happening. Every good is the flip side of a bad thing, and when you can’t accept the bad, you can’t have the good either. In the process the individual has been removed from importance. Everything is process today, and anything bad can be managed out of existence. No one looks at the downside to this, and the downside is sinister to the quality of the workplace.

Let’s be clear about what I’m talking about. This isn’t about being cavalier and careless. It isn’t an invitation to be dangerous or thoughtless. This is about making an earnest best effort at something, and still failing. This is about doing difficult things that may not succeed, putting your best effort forward even if it falls short. In many ways we have lost the ability to distinguish between good and bad failure, with all failure viewed as bad, and punished. We have made the workplace an obsessively cautious and risk-averse place that lacks the soul it once embraced. We have lost the wonder and power of the supremely talented person in the prime of their creative powers to create game-changing things or knowledge.

The core problem is the unwillingness to deal with the inevitable risks and failures that come with empowering people. Instead of seeing risks and failures as a necessary element in enabling success, we have fallen victim to the fiction that we can manage risk and failure out of existence, all while assuring productivity. This is utterly foolish and antithetical to reality. The risks are necessary to strive to achieve difficult and potentially great things. If one is working at the limit of one’s capability the result is frequently failure, and the ensemble of failures paves the way for success. It tells us clearly what does not work, and provides the hard lessons that educate us. Somehow we have allowed the delusion that achievement can be had without risk and failure to creep into our collective consciousness.

Instead of encouraging and empowering our people to take risks while tolerating and learning from failure, we do the opposite. We steer people away from risky work, punish failure and discourage learning lessons. It is as if we had suddenly become believers in the “free lunch.” True achievement is extremely difficult, and it is powered by the willingness to attempt risky, almost impossible things. If failure is not used as an opportunity to learn, people will become disempowered and avoid the risks. This in turn will kill achievement before it can even be thought of. The entire system would seem to be designed to disempower people and lower their potential for achievement.

The other aspect of this truly vicious cycle is the dismantling of expertise. Expertise is built on the back of years and years of failure. Of course, this happens only if the failures are actively engaged as educational opportunities that empower the expert to take on more thoughtful risks. These thoughtfully taken risks still need to fail, and perhaps fail most of the time. Gradually the failures of today begin to look like the achievements of yesterday. What we see as a failure today would have been a monumental achievement a decade ago. This is completely built on seeing the failures of yesterday in the right light, and learning the lessons available from the experience.

When we empower people to take risks and grow them into experts, they also provide the knowledge necessary to mentor others. This was a key aspect of my early career experience at Los Alamos. At that time the Lab was teeming with experts who were generous with their time and knowledge. All you had to do was reach out and ask, and people helped you. The experts were eager to share their experience and knowledge with others in a spirit of collective generosity. Today we are managed to completely avoid this, with managed time and managed focus. We are trained not to be generous because that generosity would rob our “customers” of our effort and time. The flywheel of the experts of today helping to create the experts of tomorrow is being undone. People are trained to neither ask for, nor provide, expertise freely.

What we are moving toward is a system that is less than the sum of its parts. What I started with was a system that added great value to every person, and was effectively far greater than the sum of its parts. The generosity that characterized my early career added immense value to every hour spent at work. Today this entire way of working is being torn apart by how we are managed. People can’t be generous if they have to account for all their time and charge it to a specific customer. The room for serendipity, discovery and the addition of personal value to activities is being removed to satisfy bean counters and small-minded people. We have allowed an irrational fear of one misspent dollar to waste billions of dollars and the productive potential of people’s lives. Worse yet, the whole apparatus erected to produce formal operations is ripping the creative force from the workplace and replacing it with soulless conformity. It matters less and less who we are each day; we are simply replaceable parts in a mindless machine.

I might be tempted to simply end the discussion here, but this conclusion is rather dismal. It is where we find ourselves today. We also know that the state of affairs can be significantly better. How can we get there from here? The first step would be some sort of collective decision that the current system isn’t working. From my perspective, the malaise and lack of effectiveness of our current system are so pervasive and evident that action to correct it is overdue. On the other hand, the current system serves the purposes of those in control quite well, and they are not predisposed to be agents of change. As such, the impetus for change is almost invariably external. It is usually extremely painful because the status quo does not want to be rooted out unless it is forced to be. The circumstances need to demand performance that the current system cannot produce, and as systems degrade this becomes ever more likely.

At the time, my life just seemed too complete, and maybe we have to break everything to make something better out of ourselves.

–Chuck Palahniuk

The current system is thoroughly disempowering and oriented toward explicit control of people’s actions. Keeping order and people in line while avoiding risk and failure are the core principles. The key to any change is enabling trust for the individual to move to centrality in the system. The upside to the trust is the degree of efficiency and effectiveness that is born from trust; the downside is the possibility of failure, poor performance and various human failings. The system needs to be resilient to these inevitable problems with people. The negative impact of trying to control and manage these failings results in destroying most of the positive things individuals can provide. Empowerment needs to trump control and allow people’s natural inclination toward success to be central to organizational design.

In most cases being a good boss means hiring talented people and then getting out of their way.

–Tina Fey

We need to completely let go of the belief that we can manage all the bad things away and not lose all the good things in the process. Bad things, bad outcomes and bad behavior happen, and perhaps need to happen to have all the good (in other words, “shit happens”). Today we are gripped by a belief that negative outcomes can be managed away. In the process of managing away bad outcomes, we destroy the foundation of everything good. To put it differently, we need to value the good and accept the bad as a necessary condition for enabling good outcomes. If one looks at failure as the engine of learning, we begin to realize that the bad is the foundation of the good. If we do not allow bad things to happen, if we do not let people fuck things up, we can’t have really good things either. One requires the other, and our attempts to control bad outcomes remove a lot of good or even great outcomes at the same time.

An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.

– Werner Heisenberg

So to sum up, let’s trust people again. Let’s let them fail, fuck up and do bad things. Let’s let people learn from these failures, fuck-ups and painful experiences. These people will learn a lot, including very painful lessons, and get hurt deeply in the process. They will become wise, strong, and truly expert at things. People who are entrusted are empowered and love their work. They are efficient, productive and effective. They have passion for what they do, and give their work great loyalty. They will take risks in a fearless manner. They will be allowed to fail spectacularly because spectacular success and breakthroughs only come from these fearlessly taken risks.

May I never be complete. May I never be content. May I never be perfect.

–Chuck Palahniuk



If you don’t know uncertainty, bounding is the first step to estimating it

Sometimes the hardest thing any of us can hope for is finding the courage to be honest with ourselves.

― Kira Saito

Today I’m writing about the unfortunate practice of failing to address uncertainty, and implicitly setting its value to zero, the smallest possible value. This approach is pernicious and ubiquitous in computational science (and a lot of other science). It is a direct threat to progress and far too acceptable in practice. I wrote about this at length, decrying this standard practice, but it remains the most common practice in uncertainty quantification (https://wjrider.wordpress.com/2016/04/22/the-default-uncertainty-is-always-zero/). In a nutshell, when someone doesn’t know what the uncertainty is, they simply assign a value of zero to it. We can do something better, but first this needs to be recognized for what it is: systematic and accepted ignorance.

The reasons for not estimating uncertainties are legion. Sometimes it is just too hard (or people are lazy). Sometimes the way of examining a problem is constructed to ignore the uncertainty by construction (a common route to ignoring experimental variability and numerical error). In other cases the uncertainty is large, and it is far more comfortable to be delusional about its size. Smaller uncertainty is comforting and implies a level of mastery that exudes confidence. Large uncertainty is worrying and implies a lack of control. For this reason, getting away with choosing a zero uncertainty is a source of false confidence and unfounded comfort, but a deeply common human trait.

If we can manage to overcome the multitude of human failings underpinning the choice of the default zero uncertainty, we are still left with the task of doing something better. To be clear, the major impediment is recognizing that the zero estimate of uncertainty is not acceptable (most “customers” like the zero estimate because it seems better even though it’s assuredly not!). Most of the time we have a complete absence of information to base uncertainty estimates upon. In some cases we can avoid zero uncertainty estimates by being more disciplined and industrious; in other cases we can think about estimation from the beginning of the study and build it into the work. In many cases we only have expert judgment to rely upon for estimation. In this case we need to employ a very simple and well-defined technique for providing an estimate.

Learning is not the accumulation of knowledge, but rather, one thing only: understanding

― Donna Jo Napoli

The best way to explore estimates is the time-honored approach of bounding the uncertainty. One should be able to provide clear evidence that the uncertainty is both smaller and larger than certain known values. This provides bounds for the magnitude of the uncertainty. Depending on the purpose of the study, these magnitudes can be used to apply the results more appropriately. This can then provide some sort of reasonable, evidence-based uncertainty to energize progress and underpin credibility. If the estimate of the smallest possible uncertainty is that ubiquitous zero, the estimate should be rejected out of hand. The uncertainty is never ZERO, not ever. Nothing is known with absolute certainty. If the uncertainty is very small, there should be very strong evidence to support the bold assertion. We do know some things extremely well, like Planck’s constant, but it still has an uncertainty of finite size.

The flip side of the lower bound is the upper bound for the uncertainty. Generally speaking, there will be a worst case to consider, or something more severe than the scenario at hand. Such large uncertainties are likely to be quite uncomfortable to those engaging in the work. This should be uncomfortable if we are doing things right. The goal of this exercise is not to minimize uncertainties, but to get things right. If such bounding uncertainties are unavailable, one does not have the right to do high-consequence decision-making with the results. This is the unpleasant aspect of the process; it requires delivering the worst case. To be more concrete about the need for this part of the bounding exercise: if you don’t know how bad the uncertainty is, you have no business using the results for anything serious. As stated before, the bounding process needs to be evidence-based; the assignment of lower and upper bounds for uncertainty should have a specific and defensible basis.

Belief can be manipulated. Only knowledge is dangerous.

― Frank Herbert

Once the bounds for the uncertainty are established along with the associated evidence, some choices need to be made to use the information. To a large extent the most conservative choice is the easiest to defend, meaning that the upper bound for uncertainty should be used. If the work is being engaged in honestly, this would be pessimistic, perhaps in the extreme. If one thinks about things in a probabilistic sense, the bounds establish an interval for the potential uncertainty. This interval is most defensibly treated with a uniform distribution. For most cases, using the midpoint, averaging the lower and upper bounds, is a reasonable choice. If the application associated with the decision-making is extremely important, the upper bound, or something skewed in that direction, is probably advisable.
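The bounding logic above can be sketched in a few lines of Python. This is a minimal illustration, not any standard library routine; the function names and the choice of a uniform distribution over the interval are assumptions that simply follow the recipe described in the text.

```python
import random

def collapse_bounds(lower, upper, conservative=False):
    """Turn an evidence-based uncertainty interval [lower, upper] into a
    single working estimate.

    Rules from the text: a zero lower bound is rejected out of hand (the
    uncertainty is never zero); the midpoint of a uniform distribution
    over the interval is a reasonable default; high-consequence
    decisions should carry the upper bound instead.
    """
    if lower <= 0.0:
        raise ValueError("uncertainty is never zero; give a positive lower bound")
    if upper < lower:
        raise ValueError("upper bound must be at least the lower bound")
    return upper if conservative else 0.5 * (lower + upper)

def sample_uncertainty(lower, upper, n=1000, seed=42):
    """Draw samples from the uniform distribution over the bounded
    interval, e.g. for propagating the uncertainty through a study."""
    rng = random.Random(seed)
    return [rng.uniform(lower, upper) for _ in range(n)]
```

For example, defensible bounds of 2% and 10% give a midpoint estimate of 6%, while a high-consequence analysis would carry the full 10%.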

To some extent this is a rather easy lift intellectually. Cultural difficulty is another thing altogether. The indefensible optimism associated with the default zero uncertainty is extremely appealing. It provides the user with a feeling that the results are good. People tend to feel that there is a single correct answer. The smaller the uncertainty is, the better they feel about the answer. Large uncertainty is associated with lack of knowledge and low achievement. The precision usually communicated with the default, standard approach is highly seductive. It takes a great deal of courage to take on the full depth of uncertainty along with the honest admission of how much is not known. It is far easier to simply do nothing and assert far greater knowledge while providing no evidence for the assertion.

Uncertainty is a discomforting concept for people. Certainty is easy and comfortable, while uncertainty is difficult and carries doubt. It is problematic to consider the role of chance in events, and the fickle nature of reality. A great many important events occur largely by chance and could easily have turned out quite differently. Consider how often you encounter a near miss in life, something where danger seemed far too close and just missed you. When these events turn out disastrously, they are tragedies. How often have similar tragedies been barely averted? This same dynamic plays out in experiments that are repeated. An attempt is made to make the experiment reproducible. Occasionally something completely different unfolds. The repeated results are never exactly the same; there is a small variation. These variations are the uncertainty and, depending on the experiment, they have a magnitude.

What happens when you do the experiment exactly once? The simplest thing to do is to consider this experiment a completely determined event with no uncertainty at all. The knee-jerk response is to treat this single event as utterly and completely deterministic, with no variation at all. If the experiment were repeated with every attempt to make it as perfect as possible, it would turn out slightly differently. This comes from the myriad of details associated with the experiment that determine the outcome. Generally, the more complex and energetic the phenomenon being examined, the greater the variation (unless there are powerful forces attracting a very specific solution). There is always a variation; the only question is how large it is. It is never, ever identically zero. The choice to view the experiment as perfectly repeatable is usually an unconscious choice that has no credible basis. It is an incorrect and unjustified assumption that is usually made without a second thought. As such, the choice is unquestionably bad for science or engineering. In many cases this unconscious choice is dangerous, and represents nothing more than wishful thinking.
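The contrast between repeated and one-shot experiments can be made concrete with a short Python sketch. The function name and the expert-judgment floor are illustrative assumptions; the point is only that a single measurement never justifies reporting zero variability.

```python
import statistics

def experimental_uncertainty(measurements, judged_floor):
    """Estimate the variability of an experiment.

    With two or more replicates, the sample standard deviation gives a
    direct, evidence-based estimate (never reported below the judged
    floor). With a single measurement there is no observable variation,
    so we fall back on an expert-judgment bound rather than the
    indefensible default of zero.
    """
    if judged_floor <= 0.0:
        raise ValueError("the uncertainty is never identically zero")
    if len(measurements) >= 2:
        return max(statistics.stdev(measurements), judged_floor)
    return judged_floor  # one-shot experiment: expert judgment, not zero
```

Three replicates of a nominal 10.0 measurement, say [9.8, 10.1, 10.0], yield a standard deviation near 0.15, while a single shot falls back on whatever floor the evidence and expert judgment can defend.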

to hope was to expect

― Jane Austen




Doing Work that Is Worthwhile

The purpose of life is not to be happy. It is to be useful, to be honorable, to be compassionate, to have it make some difference that you have lived and lived well.

― Ralph Waldo Emerson

As one gets older and enters into the heart of mid-life, it is natural to contemplate one’s place in the World. I’m deep into such contemplation. I’ve been blessed with meaningful work for much of my adult life, but that meaning seems to have leaked away recently. Part of my thinking is deciding whether this is a local or global condition. Are things worse where I am, or better than average? For most of my adult life, I’ve had far better conditions than average, and been able to find great meaning in my work. Is the steady erosion of the quality of the work environment a consequence of issues local to my institution or organization? Or is it part of the massive systemic dysfunction our society is experiencing?

If the problem is local, I could leave for another organization, or another institution that is functioning better. If it’s a global issue, then it’s not something I can likely influence (much), and it’s time to ride the storm out the best I can. Right now my money is on the issue being global, and we ought to all be ready for the shit to hit the fan. My guess is it has already happened; the shit storm is in effect and we are headed into deep trouble as a Nation and the World. We have a repugnantly dysfunctional National government, led by an incompetent narcissistic conman without a perceptible moral compass. Racial tensions are high, and a variety of white supremacist/right-wing ultra-Nationalists are walking the streets. Left-wing and anarchist groups are waking up as well. Open warfare may soon be upon us, making us long for the days when sporadic terrorist attacks were our biggest worry. A shit storm is actually a severe understatement; this is a fucking waking nightmare. I hope this is wrong, and I could simply find a better place to work and feel value in my labors. I wish the problem were simple and local, with a simple job change fixing things.

Work is an important part of life for a variety of reasons. It is how we spend a substantial portion of our time, and much of our effort goes into it. In work we contribute to society and assist in the collective efforts of mankind. As I noted earlier, I’ve been fortunate for most of my life, but things have changed. Part of the issue is a relative change in the degree of self-determination in work. The degree of self-determination has decreased over time. An aspect of this is the natural growth in scope of work as a person matures. As a person grows in work and is promoted, the scope of the work increases, and the degree of freedom in work decreases. Again, this is only part of the problem, as the system is working to strangle the self-determination out of people. This is control, fear of failure and a generic lack of trust in people. In this environment work isn’t satisfying because the system is falling apart, and the easiest way to resist this is controlling the little guy. My work becomes more of a job and a route to a paycheck every day. Earning a living and supporting your family is a noble achievement these days, and aspiring to more is simply a waking dream contracting in the rear-view mirror of life.

Creative autonomy is essential for the work I do to be satisfying. It is essential to the work being effective. I can’t be an effective problem solver if most of my best options are off the table. We exist in a system where the solutions are dictated in advance. No one is trusted to solve any real problems, just work toward the solutions that have been pre-ordained. Autonomy is threatening to the system because the trust in people is so intrinsically low. The result is the leaking away of meaning in work. The control that exists only calms the deep fears of a system that is failing. Inside the model where we are teetering on the edge of a societal shit storm, the attempt to control makes sense. The system is desperately trying to hold onto whatever control it has, fearing the unraveling about to unfold. Fear makes us do stupid things, and the fear is simply throwing fuel onto the fire by making everyone simply hate life.

The purpose of life is a life of purpose.

― Robert Bryne

I’m trying to grapple with what is happening in my own experience through the lens of the bigger picture. We see a contraction of the trust and autonomy necessary for me to enjoy work. This is in direct reaction to the fears unleashed by the changes in society, and the terror these changes have induced in much of the population. The old world is coming to an end, but not without a fight. People are genuinely frightened by change, and for most people the most comfortable place is the past. They are holding onto the past with a fervent passion, but the future is unstoppable. In between the two is conflict and pain. For someone like myself who demands work that makes progress, I might have to take a break and simply resign myself to defending the progress that has already been made. No new progress can happen without trust in an environment dominated by fear. We are simply trying to maintain the progress that has already been won.

Imagine what our story would look like if, rather than succumbing to the insistent voices of family or culture, we determined that our vocation was to be a better human.

― James Hollis, Ph.D.

The domination of fear has an extremely large impact on the appetite for risk; there isn’t any. Part of the fearful environment is the inability to accept anything that looks, smells or even hints at failure. Without failure you don’t have learning or achievement. Research depends on failure because research is basically learning in its rawest form. Let me be clear that I’m talking about good failure, where you try your best, making a best effort and coming up short. Most of the time a failure leads to learning something new. You tweak your approach or knowledge on the basis of the experience and grow. Without failure you short-circuit expertise. We need to energize failure in many small things to enable success in big things. All of this requires the sort of deep trust that our current World is almost devoid of. A combination of courage and trust can unleash people’s full potential by allowing them to fail spectacularly and then fully supporting the next step forward. Today, cowardice and mistrust dominate, and even marginal failure results in punishment. It is corroding the foundation of achievement. It makes work simply a job and life more survival than living.

Since our current World is so deeply arrayed against personal success and growth, it might be wise to seek other avenues of fulfillment. Perhaps work is most healthily viewed as simply a task of mere survival. The current environment is so rife with fear, and so patently incompetent, that no one can really reach their potential. This isn’t a conclusion I like reaching, but the evidence seems overwhelming. Fear and mistrust have led to overarching control issues that remove any degree of personal control over achievement, at least while staying inside the rules. If one is willing to completely ignore the rules, success can be had. If one plays by the rules, success is absolutely impossible. The rules of the game are written to avoid all of the acts necessary for success because these involve risk and danger. Fundamental to success is trust, and trusting someone is beyond our collective ken.

The purpose of life is to contribute in some way to making things better.

― Robert F. Kennedy


The Culture of Computation

We seldom realize, for example that our most private thoughts and emotions are not actually our own. For we think in terms of languages and images which we did not invent, but which were given to us by our society.

― Alan W. Watts

Culture pervades our lives as people and shapes how we connect to our World, Nation, Community, Jobs, and personal lives. Much of what we do is deeply influenced by the web of cultures our lives are embedded within. All of this highlights the importance of understanding how culture influences computation, as culture often defines what is comfortable and automatic. In many cases culture comprises the permanent habits of our social constructs, and often defines practices that impede progress. Accepted cultural practices are usually carried out without thinking and applied almost mindlessly. If these practices are wrong, they are difficult to dislodge or improve upon.

The imagination is the goal of history. I see culture as an effort to literally realize our collective dreams.

― Terence McKenna

Culture is a powerful thing. It defines so much about the collective activity of groups of people. Culture defines a set of beliefs, practices and habits that are naturally accepted and reinforced by the collective action of the people. Some cultures are driven by biology or fundamental human needs, but most are constructs to help regulate the structures our collective actions are organized around. The fundamental values, moral code and behaviors of people are heavily defined by culture. If the culture is positive, the effect is resonant and amplifies the actions of people toward much greater achievements. If the culture is negative, the effect can undo and overwhelm much of the best that people are capable of. Invariably cultures are a mixture of positive and negative. Cultures persist for extremely long times and outlive those who set the cultural tone for groups. Cultures are set, or change, slowly unless the group is subjected to an existential crisis. When a crisis is successfully navigated, the culture that arose in its resolution is enshrined, and tends to persist without change until a new crisis is engaged.

Every culture has its southerners — people who work as little as they can, preferring to dance, drink, sing, brawl, kill their unfaithful spouses; who have livelier gestures, more lustrous eyes, more colorful garments, more fancifully decorated vehicles, a wonderful sense of rhythm, and charm, charm, charm; unambitious, no, lazy, ignorant, superstitious, uninhibited people, never on time, conspicuously poorer (how could it be otherwise, say the northerners); who for all their poverty and squalor lead enviable lives — envied, that is, by work-driven, sensually inhibited, less corruptly governed northerners. We are superior to them, say the northerners, clearly superior. We do not shirk our duties or tell lies as a matter of course, we work hard, we are punctual, we keep reliable accounts. But they have more fun than we do … They caution[ed] themselves as people do who know they are part of a superior culture: we mustn’t let ourselves go, mustn’t descend to the level of the … jungle, street, bush, bog, hills, outback (take your pick). For if you start dancing on tables, fanning yourself, feeling sleepy when you pick up a book, developing a sense of rhythm, making love whenever you feel like it — then you know. The south has got you.

― Susan Sontag

We see all sorts of examples of the persistence of culture. The United States is still defined by the North-South divide that fractured during the Civil War. The same friction and hate that defined that war 150 years ago dominate our politics today. The culture of slavery persists in systematic racism and oppression. The white and black divide remains unhealed even though none of the people who enslaved or were enslaved are still alive, many generations having passed. The United States is still defined by the Anglo-Saxon Protestant beliefs of the founding fathers. Their culture is dominant even after being overwhelmed in numbers of people and centuries of history. The dominant culture was formed in the crucible of history by the originating crisis for the Nation, the Revolutionary War. Companies and Laboratories are shaped by their original cultures, and these habits and practices persist long after their originators have left, retired or died.

There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’

― Isaac Asimov

We all exist within a broad array of cultures, all the way from our family to the entirety of humanity. Our culture is set by our biology, history and arc through life. This web of cultures connects together and runs much of our lives. We all have free will, but the decision to go against the culture tends to carry high costs to us personally. There are a number of things that influence culture including events, technology and new modes of engagement. Some events are part of the natural world, such as disasters (earthquakes, floods, hurricanes, famines, …). These events can stress people and existing cultures, providing the sorts of crises that shape the future to be more resilient to future disasters. Human events such as wars, trade, and general political events provide both the impact of culture in causing or navigating events, as well as producing crises that shape cultural responses and evolution. We can continue down this line of thinking to ever-smaller cultures: organizations and businesses are influenced by crises induced by the larger systems (natural or political). This web of culture continues to smaller and smaller scales, all the way down to communities (towns, regions, schools, families), each having a culture shaped heavily by other cultures or events. In every case a crisis is almost invariably necessary to induce change; cultures are resistant to change unless something painful provides direct evidence of the incapacity of the existing culture to succeed.

Men build too many walls and not enough bridges.

― Joseph Fort Newton

The culture emerging in the World today is deeply stressing many subcultures. A combination of demographic changes, ethnic conflict, technology and economic systems are all spiraling toward crisis. People across the World sense the depth of the impending changes to the structure of society. In many cases the combination of demographics and economic changes is stressing large populations of people to an extent that they mount a wholesale revolt against existing cultures and systems. When this population is large enough it becomes a movement, and starts driving other populations toward crisis. These movements ultimately create an environment where other events are triggered such as wars. These in turn are a crisis that ultimately must be resolved, and induce enough pain that people willingly overthrow existing cultures and embrace new cultures that enable successful resolution. We may be spiraling toward this cascade of crises that are almost necessary for our cultures to adapt to the reality of today.

One of the most effective ways to learn about oneself is by taking seriously the cultures of others. It forces you to pay attention to those details of life which differentiate them from you.

― Edward T. Hall

Before plunging into the specifics of the culture of computation, we should discuss the culture of the broader scientific community. This culture exists within the broader network of cultures in society with give-and-take between them. In the past science has provided deep challenges to the prevailing culture, and induced changes in societal culture. Today the changes in mainstream societal culture are challenging science. One key aspect of today’s culture wars is lack of support for expertise. One of the key rifts in society is mistrust of the elite and educated. The broader society is attacking and undermining educational institutions across the board. Scientific laboratories are similar in makeup and similarly under assault. Much of this broader assault is related to a general lack of trust. Some of this is a reaction to the surplus of trust granted science in the wake of its massive contributions to the resolution of World War 2 and the Cold War. These successes are fading from memory, and science now finds its distinguished societal role contracting.

I work in a National Laboratory, and I have worked at a National Laboratory for my entire career. These Labs have strong cultures shaped by their history and work. Both Los Alamos and Sandia were born in the crucible of World War 2 and the Manhattan Project’s pursuit of an atomic bomb. The genesis of the third weapons Lab, Lawrence Livermore, was also present, albeit in an unexpressed form. During that war Los Alamos contained the origins of all three Labs. Los Alamos of course was the core of this, revolving around Oppenheimer’s scientists pursuing the nuclear explosive part of the bomb. Sandia was contained within the engineering portion of Los Alamos that remained under military control. These identities are still evident in the Labs’ cultures today. At Los Alamos there is a scientific identity and habit that colors all engagements. Conversely the engineering character of Sandia is evident, as is the shadow of General Groves’ priorities and approach on how the institution works today. Lawrence Livermore’s genesis was contained within a deep controversy associated with the follow-on to the atomic bomb, the hydrogen bomb. Many at Los Alamos opposed the hydrogen bomb, but Edward Teller was committed to it and ultimately created a Laboratory to pursue his vision. This adversarial, political and controversial nature still defines that Laboratory today.

The first step – especially for young people with energy and drive and talent, but not money – the first step to controlling your world is to control your culture. To model and demonstrate the kind of world you demand to live in. To write the books. Make the music. Shoot the films. Paint the art.

― Chuck Palahniuk

Each of these identities firmly expresses itself in the scientific-technical cultures of the Labs. Los Alamos has a core identity as an experimental physics Laboratory. Engineering and computational approaches are also taken at Los Alamos, but the experimental approach is the most natural and favored by the culture. Livermore is more favorable toward a theoretical-computational approach within its basic culture. Experimental approaches are also seriously engaged, but computation in particular is more naturally supported by its culture. Sandia is an engineering culture, and borrowing from Los Alamos, a predominantly test-based culture, testing being the engineering complement to experiments. As such theory, design and computation are a hard sell and culturally discouraged. None of these approaches is the “right” approach; each culture simply favors a certain approach over the others.

These characters graft themselves onto how computation is accomplished at each Lab. The computational side of things is favored at Lawrence Livermore, yielding better support from the institution. This comes in the form of support for research and prestige for those doing computation. At the same time the validation of computations suffers a bit relative to the other Labs, as does the rigor of computed results. Los Alamos was the birthplace of all three Labs and computational work, but always puts computation in a subservient role compared to experiments. This leads to a mighty struggle between validation and calibration. Often calibration wins out so that computed results are sufficiently close to experiment. Sandia excels at process and rigor in the conduct of calculations, but struggles at other aspects (at least in a relative sense). The whole verification and validation approach to simulation quality comes from Sandia, reflecting that rigor. At the same time institutional support and emphasis are weaker, leading to long-term effects.

All this texture is useful to think about because it manifests itself in every place computational science is done today. The scientific culture of any institution is reflected in its emphasis and approach to the conduct of science. The culture produces a natural set of priorities that define investments and acceptable quality. We can speak volumes about how computational work should be done, but how sharply the message lands is related to preconceived notions about these aspects. For example, some places are more prone to focus on computing hardware as an investment. In terms of the competition for resources, the purchase of hardware is a priority, and a typical route for enhancement. This becomes important when trying to move into new “hot” areas. If the opportunity falls in line with the culture, investments flow; if it is out of line, the institution will miss it.

Computational science is a relatively new area of endeavor. It is at most 70 years old as practiced at Los Alamos; it is a new area of focus in most places. Sometimes it arrives at an institution as a new, innovative way of doing work, added to the repertoire. In all these cases the computational work adopts the basic culture of the institution it exists within. It then differentiates based on local conditions, usually dominated by whatever the first acknowledged success is. One of the key aspects of a culture is its origin stories or mythological achievements. Origins are almost invariably fraught situations with elements of crisis. These stories pervade the culture and define what success looks like and how investments in the future are focused.

Where I work at Sandia, the origin story is dominated by early success with massively parallel computers. The greatest success was the delivery of a computer, Red Storm. As a result the culture is obsessed with computer hardware. The path to glory and success runs through hardware; a focus on hardware is culturally accepted and natural for the organization. It is a strong predisposition. At Lawrence Livermore the early stages of the Laboratory were full of danger and uncertainty. Early in the history of the Lab there was a huge breakthrough in weapons design. It used computational modeling, and the lead person in the work went on to huge professional success (Lab Director). This early success became a blueprint for others and an expected myth to be repeated. A computational study and focus was always expected and accepted by the Lab. At Los Alamos all roads culturally lead to the Manhattan Project. The success in that endeavor has defined the Laboratory ever since. The manner of operation and approach to science adopted then is the blueprint for success at that Laboratory. The multitude of crises starting with the end of the Cold War, spying, fires, and scandal have all weakened the prevailing culture, and undermined the future.

In each case the myths and legends of past success provide the basis for the culture and the means of understanding why a place is what it is. Old myths and legends have to be replaced to change the culture, and this can only happen in a crisis of sufficient magnitude to challenge the existing culture. We usually can’t manage to think about what culture arises from the resolution of a crisis; we are too busy surviving to make the best use of the opportunity.

Without culture, and the relative freedom it implies, society, even when perfect, is but a jungle. This is why any authentic creation is a gift to the future.

― Albert Camus


Credibility and Confidence

Leadership is fundamentally about credibility.

― Rick Crossland

Under the best of circumstances we would like to confidently project credibility for the modeling and simulation we do. Under the worst of circumstances we would have confidence in modeling and simulation without credibility. This is common. Quite often the confidence is the product of arrogance or ignorance instead of humility and knowledge. This always manifests itself in a lack of questioning in the execution of work. Both of these issues are profoundly difficult to deal with and potentially fatal to meaningful impact of modeling and simulation. These issues are seen quite frequently. Environments with weak peer review contribute to allowing confidence without credibility to persist. The biggest part of the problem is a lack of pragmatic acceptance of modeling and simulation’s intrinsic limitations. Instead we have inflated promises and expectations delivered by overconfidence and personality rather than hard-nosed technical work.

When confidence and credibility are both in evidence, modeling and simulation is empowered to be impactful. It will be used appropriately with deference to what is and is not possible and known. When modeling and simulation is executed with excellence and professionalism along with hard-nosed assessment of uncertainties, using comprehensive verification and validation, the confidence is well grounded in evidence. If someone questions a simulation’s results, answers can be provided with well-vetted evidence. This produces confidence in the results because questions are engaged actively. In addition the limits of the credibility are well established, and can confidently be explained. Ultimately, credibility is a deeply evidence-based exercise. Properly executed and delivered, the degree of credibility depends on honest assessment and complete articulation of the basis and limits of the modeling.

When you distort the truth, you weaken your credibility.

― Frank Sonnenberg

One of the dangers of hard-nosed assessment is the tendency for those engaged in it to lose confidence in the work. Those who aggressively pursue credibility assessment tend to be cynics and doubters. They are prone to pessimism. They usually project doubt and focus on the limitations of the modeling instead of the confidence with which it may be used. One of the hardest tricks of credibility assessment is pairing excellence in the execution of the work with an appropriate projection of confidence. The result is a mixed message where confidence is projected without credibility, and credibility is established without confidence. Neither serves the purpose of progress in the impact of modeling and simulation.

One of the major sins of overconfidence is flawed or unexamined assumptions. This can be articulated as “unknown knowns” in the famously incomplete taxonomy forwarded by Donald Rumsfeld in his infamous quote. He didn’t state this part of the issue even though it was the fatal flaw in the logic of the Iraq war in the aftermath of 9/11. There were basic assumptions about Hussein’s regime in Iraq that were utterly false, and these skewed the intelligence assessment leading to war. The assessors only looked at information that supported the conclusions they had already drawn or wanted to be true. The same faulty assumptions are always present in modeling. Far too many simulation professionals ignore the foundational and unfounded assumptions in their work. In many cases assumptions are employed without thought or question. They are assumptions that the community has made for as long as anyone can remember and simply cannot be questioned. This can include anything from the equations solved, to the various modeling paradigms applied as a matter of course. Usually these are unquestioned and completely unexamined for validity in most credibility assessments.

This is an immensely tricky thing to execute. The standard assumptions are essential to managing complexity and making progress. That said, it is a remarkably difficult and important task to detect when the assumptions become limiting. More succinctly put, the limitations of the standard assumptions need to be thought through and tested. Usually these assumptions can only be tested by removing everything else from the field and doing very hard work. It is so much easier to simply stay the course and make standard assumptions. In many cases the models have been significantly calibrated to match existing data, and new experiments or significantly more accurate measurements are needed to overturn or expose modeling limitations. Moreover the standard assumptions are usually unquestioned by peers. Questions are often met with ridicule. A deeply questioning assessment requires bravery and fortitude usually completely lacking from working scientists and utterly unsupported by our institutions.

Another manner for all of this to unfold is unwarranted confidence. Often this is couched in the form of arrogant perspectives where the proof of credibility is driven by personality. This proof by authority is incredibly common and difficult to dislodge. In many cases personal relationships with the consumers of simulations are used to provide confidence. People are entrusted with the credibility and learn how to give their customer what they want. Credibility by personality is cheap and requires so much less work, plus it doesn’t raise any pesky doubts. This circumstance creates an equilibrium that is often immune to scientific examination. It is easier to bullshit the consumers of modeling and simulation results than level with them about the true quality of the work.

The credibility of the teller is the ultimate test of the truth of a proposition.

― Neil Postman

More often than not honest and technically deep peer review is avoided like a plague. If it is imposed on those practicing this form of credibility, the defense of simulations takes the personal form of attacking the peer reviewers themselves. This sort of confidence is a cancer on quality and undermines any progress. It is a systematic threat to excellence in simulation, and must be controlled. It is dangerous because it is effective in providing support for modeling and simulation along with the appearance of real World impact.

One of the biggest threats to credibility is the loss of confidence that honesty generates. Engaging deeply and honestly in assessment of credibility is excellent at undermining confidence. Almost invariably the accumulation of evidence regarding credibility endows the recipients of this knowledge with doubt. These doubts are healthy, and often the most confident people are utterly ignorant of the shortcomings. The accumulation of evidence regarding credibility should benefit the confidence with which simulation is used. This is a problem when those selling simulation oversell what it can do. The promise of simulation has been touted widely as transformative. The difficulty is that the credibility of simulations is grounded by reality, and the uncertainty comes not only from modeling but also from the measurement and observation uncertainties in our knowledge of reality.

The dynamic tension between confidence and credibility should be deeply examined. When confidence is present without evidence, people should be deeply suspicious. A strong culture of (independent) peer review is an antidote to this. Too often these days peer review is heavily polluted by implicit conflicts of interest. The honesty of peer review is hampered by an unwillingness to deal with problems, particularly an unwillingness to revise expectations. Invariably modeling and simulation has been oversold, and any assessment will provide bad news. In today’s World we see a lot of bad news rejected, or repackaged (spun) to sound like good news. We are in the midst of a broader crisis of credibility with respect to information (i.e. fake news), so the issues with modeling and simulation shouldn’t be too surprising. We would all be well served by a different perspective and approach to this. The starting point is a re-centering of expectations, but so much money has been spent on the back of grossly inflated claims.

Belief gives knowledge credibility.

― Steven Redhead

So what should we expect from modeling and simulation?

Modeling and simulation is a part of the scientific process and subject to its limits and rules. There is nothing magic about simulation that unleashes modeling from its normal limitations. The difference that simulation makes is the ability to remove the limitations of analytical model solution. Far more elaborate and accurate modeling choices become available, but they carry other difficulties due to the approximate nature of numerical solutions. The intellectual tug-of-war is the balance of modeling flexibility, nonlinearity and generality against the effects of numerical solution. The bottom line is the necessity of assessing the uncertainties that arise from these realities. Nothing releases the modeling from its fundamental connection to validity grounded in real world observations. One of the key things to recognize is that models are limited and approximate in and of themselves. Models are wrong, and under a sufficiently resolved examination will be invalid. For this reason an infinitely powerful computer will ultimately be useless because the model will become invalid at some resolution. Ultimately progress in modeling and simulation is based on improving the model. This fact is ignored by computational science today and will result in wasting valuable time, effort and money chasing quality that is impossible to achieve.

Bullshit is a greater enemy of the truth than lies are.

—Harry Frankfurt

In principle the issue of credibility and confidence in modeling and simulation should be based on evidence. Ideally this evidence should be quantitative, with key indicators of its quality included. Ideally, the presence of the evidence should bolster credibility. Instead, paradoxically, evidence associated with the credibility of modeling and simulation seems to undermine credibility. This is a strong indicator that claims about the predictive power of modeling and simulation have been overstated. This is a nice way of saying it is usually a sign that the quality is actually complete bullshit! We can move a long way toward better practice by simply recalibrating our expectations about what we can and can’t predict. We should be in a state where greater knowledge about the quality, errors and uncertainty in modeling and simulation work improves our confidence.
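What would evidence that improves confidence look like in its simplest form? A minimal sketch in Python, in the spirit of the ASME V&V 20 validation comparison: a simulation-versus-experiment comparison error E is reported alongside a root-sum-square validation uncertainty. The numbers and the function name here are invented for illustration; only the structure of the comparison is the standard idea.

```python
import math

def validation_comparison(sim_value, exp_value, u_num, u_input, u_exp):
    """Compare a simulation to an experiment with an explicit uncertainty budget.

    E is the validation comparison error (simulation minus data); u_val
    combines the numerical, input and experimental uncertainties by
    root-sum-square, in the spirit of the ASME V&V 20 approach.
    """
    E = sim_value - exp_value
    u_val = math.sqrt(u_num**2 + u_input**2 + u_exp**2)
    return E, u_val

# Hypothetical numbers: the simulation over-predicts the measurement by 2 units.
E, u_val = validation_comparison(sim_value=102.0, exp_value=100.0,
                                 u_num=0.5, u_input=1.0, u_exp=1.5)
# |E| substantially larger than u_val signals model-form error beyond
# the uncertainties that were accounted for.
print(E, u_val, abs(E) > u_val)
```

The point of the exercise is that the claim travels with its evidence: a consumer can see at a glance whether the disagreement is explained by the declared uncertainties or points to an unexamined assumption.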

If you can’t dazzle them with brilliance, baffle them with bullshit!

– W.C. Fields

Part of the issue is the tendency of the consumers of modeling and simulation work not to demand evidence to support confidence. This evidence should always be present and available for scrutiny. If claims of predictive power are made without evidence, the default condition should be suspicion. The various sources of error and uncertainty should be drawn out and quantified. There should be estimates based on concrete evidence for the value of uncertainty from all sources. Any uncertainty that is declared to be zero or negligible must have very specific evidence to support this assertion. Even more important, any claims of this nature should receive focused and heavy scrutiny because they are likely to be based on wishful thinking, and often lack any evidentiary basis.
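For the numerical-error piece of that budget, one concrete form of evidence is grid refinement with Richardson extrapolation: compute the quantity of interest on three systematically refined grids, back out the observed order of convergence, and estimate the discretization error remaining on the finest grid. A minimal sketch (the function names and the manufactured data are mine, not from any particular code):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed convergence order p from a quantity of interest computed on
    three grids refined by a constant ratio r (coarse -> medium -> fine)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def richardson_error_estimate(f_medium, f_fine, r, p):
    """Estimated discretization error remaining in the fine-grid value."""
    return (f_fine - f_medium) / (r**p - 1.0)

# Manufactured example: a quantity converging at second order, f(h) = 1 + 4 h^2,
# sampled at h = 0.1, 0.05, 0.025 (refinement ratio r = 2).
f_c, f_m, f_f = 1.0 + 4 * 0.1**2, 1.0 + 4 * 0.05**2, 1.0 + 4 * 0.025**2
p = observed_order(f_c, f_m, f_f, r=2.0)              # recovers the design order, ~2
err = richardson_error_estimate(f_m, f_f, r=2.0, p=p)
print(p, f_f + err)                                    # extrapolated value, ~1.0
```

If the observed order disagrees badly with the design order of the method, that mismatch is itself evidence: something (a bug, an unresolved feature, a singularity) is undermining the claimed accuracy, and declaring the numerical error negligible would be unsupported.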

One of the issues of increasing gravity in this entire enterprise is the consumption of modeling and simulation results by people unqualified to judge the quality of the work. The whole enterprise is judged to be extremely technical and complex. This inhibits those using the results from asking key questions regarding the quality of the work. With the people producing modeling and simulation results largely driven by money rather than technical excellence, we have the recipe for disaster. Increasingly, false confidence accompanies results and snows the naïve consumers into accepting the work. Often the consumers of computational results don’t know what questions to ask. We are left with quality being determined more by flashy graphics and claims about massive computer use than by any evidence of prediction. This whole cycle perpetuates an attitude that treats reality more like a video game and less like a valid scientific enterprise. Overinflated claims of capability are met with money to provide more flashy graphics and quality without evidence. We are left with a field that has vastly over-promised, and the recipe for disaster is complete.

We now live in a world where counter-intuitive bullshitting is valorized, where the pose of argument is more important than the actual pursuit of truth, where clever answers take precedence over profound questions.

― Ta-Nehisi Coates

A Good Question Makes Everything Better

Judge a man by his questions rather than by his answers.

― Voltaire

In thinking about what makes work good for me, I explored an element of my creative process that revolves around answering questions. If one doesn’t have the right question, the work isn’t framed correctly and progress will stall. A thing to consider in this frame of reference is: what makes a good question? This itself is an excellent question! The quality of the question makes a great difference in framing the whole scientific enterprise, and can either lead to bad places of “knowledge cul-de-sacs” or open stunning vistas of understanding. Where you end up depends on the quality of the question you answer. Success depends far more on asking the right question than answering the question originally put to you (or that you put to yourself).

Truth, like gold, is to be obtained not by its growth, but by washing away from it all that is not gold.

― Leo Tolstoy

A great question is an achievement in itself, although rarely viewed as such. More often than not little of the process of work goes into asking the right question. Often the questions we ask are highly dependent upon foundational assumptions that are never questioned. While assumptions about existing knowledge are essential, finding the weak or invalid assumptions is often the key to progress. These assumptions are wonderful for simplifying work, but also inhibit progress. Challenging assumptions is one of the most valuable things to do. Heretical ideas are fundamental to progress; all orthodoxy began as heresy. If the existing assumptions hold up under the fire of intense scrutiny they gain greater credibility and value. If they fall, new horizons are opened up to active exploration.

If we have no heretics we must invent them, for heresy is essential to health and growth.

― Yevgeny Zamyatin

It goes without saying that important questions are good ones. Defining importance is tricky business. There are plenty of important questions that lead nowhere (“what is the meaning of life?”) or that we simply can’t answer using existing knowledge (“is faster-than-light travel possible?”). On the other hand we might do well to break these questions down into something more manageable that might be attacked: “is the second law of thermodynamics responsible for life?” or “what do subatomic particles tell us about the speed of light?” Part of the key to good scientific progress is threading the proverbial needle of important, worthy and possible to answer. When we manage to ask an important, but manageable question, we serve progress well. Easy questions are not valuable, but are attractive due to their lack of risk and susceptibility to management and planning. Sometimes the hardest part of the process is asking the question, and a well-defined and well-chosen problem can be amenable to almost trivial resolution. It turns out to be an immensely difficult task with lots of hard work to get to that point.

I have benefited mightily from asking some really great questions in the past. These questions have led to the best research, and most satisfying professional work I’ve done. I would love to recapture this spirit of work again; good questioning work feels almost quaint in today’s highly over-managed climate. One simple question occurred in my study of efficient methods for solving the equations of incompressible flow. I was using a pressure projection scheme, which involves solving a Poisson equation at least once per time step, if not more often. The most efficient way to do this involved using the multigrid method, because its cost scales linearly with the number of unknowns. The Poisson equation requires solving a large sparse system of linear equations, and the cost of solving linear equations generally scales with powers of the number of equations. Multigrid methods have the best scaling thought to be possible (I’d love to see this assumption challenged and sublinear methods discovered; I think they might well be possible).

As problems with incompressible flows become more challenging, such as involving large density jumps, the multigrid method begins to become fragile. Sometimes the optimal scaling breaks down, or the method fails altogether. I encountered these problems, but found that other methods like conjugate gradient could still solve them. The issue is that the conjugate gradient method scales less efficiently than multigrid. As a result, as problems become larger, the proportion of the solution time spent solving linear equations grows ever larger (the same thing is happening now to multigrid because of the cost of communication on modern computers). I posed the question of whether I could get the best of both methods, the efficiency along with the robustness. Others were working on the same class of problems, and all of us found the solution: combine the two methods, effectively using a multigrid method to precondition the conjugate gradient method. It worked like a charm; it was both simple and stunningly effective. This approach has become so standard now that people don’t even think about it; it’s just the status quo.

At this point it is useful to back up and discuss a key aspect of the question-making process essential to refining a question into something productive. My original question was much different; “how can I fix multigrid?” was the starting point. I was working from the premise that multigrid was optimal and fast for easier problems, and conjugate gradient was robust, but slower. A key part of the process was reframing the question. The question I ended up attacking was “can I get the positive attributes of both algorithms?” This changed the entire approach to solving the problem. At first, I tried switching between the two methods depending on the nature of the linear problem. This was difficult to achieve because the issues with the linear system are not apparent upon simple inspection.

The key was moving from considering the algorithms as wholly different options to combining them. The solution involved putting one algorithm inside the other. As it turns out, the most reasonable and powerful way to do this is to make multigrid a preconditioner for conjugate gradient. The success of the method is fully dependent on the characteristics of both algorithms. When multigrid is effective by itself, the conjugate gradient method is effectively innocuous. When multigrid breaks down, the conjugate gradient method picks up the pieces, and delivers robustness along with the linear scaling of multigrid. A key aspect of the whole development was embracing an assault on a philosophical constraint in solving linear systems. At the outset of this work these two methods were viewed as competitors. One worked on one or the other, and the two communities did not collaborate, or even talk to each other. They did not like each other. They had different meetings, or different sessions at the same meeting. Changing the question allowed progress, and was predicated on changing assumptions. Ultimately, the results won out and the former feud faded into memory. In the process I helped create something wonderful and useful, plus I learned a huge amount of numerical (and analytical) linear algebra.

The second great question I’ll point to involved the study of modeling turbulent flows with what has become known as implicit large eddy simulation. Starting in the early 1990s there was a stunning proposition that certain numerical methods seem to automatically (auto-magically) model aspects of turbulent flows. While working at Los Alamos and learning all about a broad class of nonlinearly stable methods, the claim that they could model turbulence caught my eye (I digested it, but fled in terror from turbulence!). Fast forward a few years, combine this observation with a newfound interest in modeling turbulence, and a question begins to form. In learning about turbulence I digested a huge amount of theory regarding the physics and our approaches to modeling it. I found large eddy simulation to be extremely interesting, although aspects of the modeling were distressing. The models that worked well performed poorly on the structural details of turbulence, and the models that matched the structure of turbulence were generally unstable. Numerical methods for solving large eddy simulation were generally based on principles vastly different from those I worked on, which were useful for solving Los Alamos’ problems.

Having the methods I worked on, in codes that solve our problems, also model turbulence is tremendously attractive. The problem is the seemingly magical nature of this modeling. Being magical does not endow the modeling with confidence. The question that we constructed a research program around was “can we explain the magical capability of numerical methods with nonlinear stability to model turbulence?” We started from the observation that a broad class of methods seemed to provide effective turbulence modeling (or the universal inertial range physics). Basically, the aspects of turbulence associated with the large-scale hyperbolic parts of the physics were captured. We found that it is useful to think of this as physics-capturing, an extension of shock-capturing. The explanation is technical, but astoundingly simple.

Upon studying the origins of large eddy simulation we discovered that they were the same as those of shock-capturing methods. Once the method was developed it evolved into its own subfield with its own distinct philosophy and underlying assumptions. These assumptions had become limiting and predicated on a certain point of view. Shock capturing had also evolved in a different direction. Each field focused on different foundational principles and philosophy, becoming significantly differentiated. For the most part they spoke different scientific languages. It was important to realize that their origins were identical, with the first shock-capturing method being precisely the first subgrid model for large eddy simulation. A big part of our research was bridging the divides that had developed over almost five decades and learning to translate from one language to the other.

We performed basic numerical analysis of nonlinearly stable schemes using a technique that produced the nonlinear truncation error. A nonlinear analysis is vital here. This uses a technique known as modified equation analysis. The core property of the methods empirically known to be successful in capturing the physics of turbulence is conservation (control volume schemes). It turns out that the nonlinear truncation error for a control volume method with a quadratic nonlinearity produces the fundamental scaling seen in turbulent flows (and shocks for that matter). This truncation error can be destabilizing for certain flow configurations, effectively being anti-dissipative. The nonlinear stability method keeps the anti-dissipative terms under control, producing physically relevant solutions (e.g., entropy solutions).
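The flavor of modified equation analysis can be shown on a deliberately simple linear example (the analysis described above is the nonlinear version): Taylor-expanding the first-order upwind scheme for u_t + a u_x = 0 reveals the dissipative truncation term the scheme silently adds.

```python
import sympy as sp

# Modified equation analysis of first-order upwind for u_t + a*u_x = 0.
a, h, k = sp.symbols('a h k', positive=True)
ut, utt, ux, uxx = sp.symbols('u_t u_tt u_x u_xx')

# Taylor expansion of (u(x,t+k)-u(x,t))/k + a*(u(x,t)-u(x-h,t))/h = 0,
# keeping terms through second derivatives:
scheme = (ut + k * utt / 2) + a * (ux - h * uxx / 2)

# Replace the time derivative using the leading-order PDE (u_t = -a*u_x,
# hence u_tt = a**2 * u_xx) to express everything in space derivatives:
scheme = scheme.subs(utt, a**2 * uxx)

# The scheme therefore actually solves u_t + a*u_x = nu*u_xx with a
# numerical viscosity nu = (a*h/2)*(1 - a*k/h):
nu = sp.simplify(-(scheme - ut - a * ux) / uxx)
```

For a control-volume discretization of a quadratic flux the same procedure yields a nonlinear truncation term rather than a linear viscosity, and that term is the connection to turbulence modeling made in the text.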

A key observation makes this process more reasoned and connected to the traditional large eddy simulation community. The control volume term matches the large eddy simulation models that produce good structural simulations of turbulence (the so-called scale similarity model). The scale similarity model is unstable with classical numerical methods. Nonlinear stability fixes this problem with aplomb. We use as much scale similarity as possible without producing unphysical or unstable results. This helps explain why a disparate set of principles used to produce nonlinear stability provides effective turbulence modeling. Our analysis also shows why some methods are ineffective for turbulence modeling. If the dissipative stabilizing effects are too large and competitive with the scale similarity term, the nonlinear stability is ineffective as a turbulence model.

It is dangerous to be right in matters on which the established authorities are wrong.

― Voltaire

I should spend some time on some bad questions as examples of what shouldn’t be pursued. One prime example is offered as a seemingly wonderful question: the existence of solutions to the incompressible Navier-Stokes equations. The impetus for this question is the bigger question of whether we can explain, predict or understand fluid turbulence. This problem is touted as a fundamental building block in that noble endeavor. The problem is the almost axiomatic belief that turbulence is contained within this model. The key term is incompressible, which renders the equations unphysical on several key accounts: it gives the system an infinite speed of propagation, and divorces the equations from thermodynamics. Both features sever the ties of the equations to the physical universe. The arguing point is whether these two aspects disqualify it from addressing turbulence. I believe the answer is yes.

In my opinion this question should have been rejected long ago based on the available evidence. Given that our turbulence theory is predicated on the existence of singularities in ideal flows, and the clear absence of such singularities in the incompressible Navier-Stokes equations, we should reject the notion that turbulence is contained in them. Despite this evidence, the notion that turbulence is contained whole cloth in these unphysical equations remains unabated. It is treated as axiomatic. This is an example of an assumption that has outlived its usefulness. It will eventually be tossed out, and progress will bloom along the path of its departure. One of the key things missing from turbulence is a connection to thermodynamics. Thermodynamics is such a powerful scientific concept that for it to be so absent from turbulence is a huge gap. Turbulence is a fundamental dissipative process, and the second law is grounded on dissipation. The two should be joined into a coherent whole, allowing unity and understanding to reign where confusion is supreme today.

Another poorly crafted question revolves around the current efforts toward exascale-class computers for scientific computing. There is little doubt that an exascale computer would be useful for scientific computing. A better question is: what is the most beneficial way to push scientific computing forward? How can we make scientific computing more impactful in the real world? Can the revolution of mobile computing be brought to science? How can we make computing (really modeling and simulation) more effective in impacting scientific progress? Our current direction is an example of crafting an obvious question, with an obvious answer, but failing to ask a more cutting and discerning question. The consequence of our unquestioning approach to science will be wasted money and stunted progress.

Trust is equal parts character and competence… You can look at any leadership failure, and it’s always a failure of one or the other.

― Stephen M.R. Covey

This gets at a core issue with how science is managed today. Science has never been managed more tightly, nor been more structurally mismanaged. The tight management of science, as exemplified by the exascale computing efforts, is driven by an overwhelming lack of trust in those doing science. Rather than ask people open-ended questions subject to refinement through learning, we ask scientists to work on narrowly defined programs with preconceived outcomes. The reality is that any breakthrough, or progress for that matter, will take a form not envisioned at the outset of the work. Any work that pushes mankind forward will take a form not foreseeable. By managing so tightly and constraining work, we are predestining the outcomes to be stunted and generally unworthy of the effort put into them.

Whether you’re on a sports team, in an office or a member of a family, if you can’t trust one another there’s going to be trouble.

― Stephen M.R. Covey

This is seeded by an overwhelming lack of trust in people and science. Trust is a powerful concept, and its departure from science has been disruptive and expensive. Today’s scientists are every bit as talented and capable as those of past generations, but society has withdrawn its faith in science. Science was once seen as a noble endeavor that embodied the best in humanity, but generally not so today. Progress in the state of human knowledge produced vast benefits for everyone and created the foundation for a better future. There was a sense of an endless frontier constantly pushing out and providing wonder and potential for everyone. This view was a bit naïve and overlooked the maxim that human endeavors in science are neither good nor bad, producing outcomes dependent upon the manner of their use. For a variety of reasons, some embedded within the scientific community, the view of society changed and the empowering trust was withdrawn. It has been replaced with suspicion and stultifying oversight.

When I take a look at the emphasis in currently funded work, we see narrow vistas. There is a generally myopic and tactical view of everything. Long-term prospects, career development and broad objectives are obscured by management discipline and formality. Any sense of investment in the long term is counter to the current climate. Nothing speaks more loudly to the overwhelming myopia than the attitude toward learning and personal development. It is only upon realizing that learning and research are essentially the same thing that it starts to become clear how deeply we are hurting the scientific community. We have embraced a culture that is largely unquestioning, with a well-scripted orthodoxy. Questions are seen as heresy against the established powers and punished. For most, learning is the acquisition of existing knowledge and skills. Research is learning new knowledge and skills. Generally speaking, those who have achieved mastery of their fields execute research. Since learning and deep career development are so hamstrung by our lack of trust, fewer people actually achieve the sort of mastery needed for research. The consequences for society are profound because we can expect progress to be thwarted.

Curiosity is more important than knowledge.

― Albert Einstein

One clear way to energize learning and research is encouraging questioning. After encouraging a questioning attitude and approach to conducting work, we need to teach people to ask good questions, going back and refining questions as better understanding becomes available. We need to identify and overcome assumptions, subjecting them to unyielding scrutiny. The learning, research and development environment is equivalent to a questioning environment. By creating an unquestioning environment we short-circuit everything leading to progress, and ultimately cause much of the creative engine of humanity to stall. We would be well served by embracing the fundamental character of humanity as a creative, progressive and questioning species. These characteristics are parts of the best that people have to offer and allow each of us to contribute to the arc of history productively.

Curiosity is the engine of achievement.

― Ken Robinson

Brandt, Achi. “Multi-level adaptive solutions to boundary-value problems.” Mathematics of Computation 31, no. 138 (1977): 333-390.

Briggs, William L., Van Emden Henson, and Steve F. McCormick. A multigrid tutorial. Society for Industrial and Applied Mathematics, 2000.

Kershaw, David S. “The incomplete Cholesky—conjugate gradient method for the iterative solution of systems of linear equations.” Journal of Computational Physics 26, no. 1 (1978): 43-65.

Melson, N. Duane, T. A. Manteuffel, and S. F. McCormick. “The Sixth Copper Mountain Conference on Multigrid Methods, part 1.” (1993).

Puckett, Elbridge Gerry, Ann S. Almgren, John B. Bell, Daniel L. Marcus, and William J. Rider. “A high-order projection method for tracking fluid interfaces in variable density incompressible flows.” Journal of Computational Physics 130, no. 2 (1997): 269-282.

Boris, J. P., F. F. Grinstein, E. S. Oran, and R. L. Kolbe. “New insights into large eddy simulation.” Fluid Dynamics Research 10, no. 4-6 (1992): 199-228.

Porter, David H., Paul R. Woodward, and Annick Pouquet. “Inertial range structures in decaying compressible turbulent flows.” Physics of Fluids 10, no. 1 (1998): 237-245.

Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39, no. 9 (2002): 821-841.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit Large Eddy Simulation: Computing Turbulent Fluid Dynamics. Cambridge University Press, 2007.

Fefferman, Charles L. “Existence and smoothness of the Navier-Stokes equation.” The millennium prize problems (2006): 57-67.

Question, Analyze, Understand and Create… Repeat

We don’t receive wisdom we must discover it for ourselves.

― Marcel Proust

Work is best when you start with a good question, analyze and learn until you discover and understand an answer to the question (questions often have many answers). Then you use this understanding to create something wonderful, so that you can find a new and better question to answer. This virtuous cycle leads to the best work and provides the foundation for excellence. It is precisely the recipe for the best work experiences I’ve had; it is how I built my expertise, and definitely how I’d prefer to keep doing work.

I’m on vacation this week (San Francisco is an amazing city!) and it is the perfect opportunity to think deeply about life and work. Work is an extremely important part of life, and I’ve concluded that some key things determine whether or not it is really good. The same things determine your ability to achieve excellence. What I’ve observed is a process that leads up to my happiness and satisfaction. More importantly, it leads to great work, productivity and excellence. The elements of this successful recipe are founded on attacking a question that needs to be answered. This question can come either from something larger than myself, or from simple innate personal curiosity. At the end of the process the question has been refined and answered, yielding new understanding, knowledge, learning and tools to create something better. For me, the act of creation is the ultimate in job satisfaction. This is a virtuous cycle that leads to deep knowledge and the ability to repeat this process with an even better question, using what has been learned and created.

Our real discoveries come from chaos, from going to the place that looks wrong and stupid and foolish.

― Chuck Palahniuk

The largest and most important part of this process is the analysis that allows us to answer the question. Often the question needs to be broken down into a series of simpler questions, some of which are amenable to easier solution. This process is hierarchical and cyclical. Sometimes the process forces us to step back and ask an even better or more proper question. In a sense this is the process working in full, with the better and more proper question being an act of creation and understanding. The analysis requires deep work and often study, research and educating oneself. A new question will force one to take the knowledge one has and combine it with new techniques, producing enhanced capabilities. This process is on-the-job education; it fuels personal growth, and personal growth fuels excellence. When you are answering a completely new question, you are doing research and helping to push the frontiers of science forward. When you are answering an old question, you are learning, and you might answer the question in a new way, yielding new understanding. At worst, you are growing as a person and professional.

This is an utterly noble endeavor and embodies the best of mankind. At times you are simply pushing yourself forward into areas others know very well already, but to you it is fresh and new. This is OK and even essential to get to the place where your work is unique. An underappreciated aspect of this sort of learning is that the path you take offers the potential to learn things in new ways. Your path is likely to be different from anyone else’s and grafts your own experience and understanding onto the topic anew. This is immensely valuable and can unveil new paths and depth to existing knowledge. Today this sort of thing is wholly unsupported and underappreciated. We need to make a new commitment to use this path to excellence.

The real voyage of discovery consists not in seeking new landscapes, but in having new eyes.

― Marcel Proust

Sometimes the question being answered has been well studied and one is simply discovering knowledge others have already mastered. This is important growth for a professional getting to the point where the frontier of knowledge exists. This is a necessary element in getting to research, which doesn’t happen automatically. One needs to climb up the mountain of human knowledge before getting to the apex. This is the process of education as a professional and an immensely exciting calling. The mastery of a topic requires that many essential elements be mastered, drawing together knowledge from diverse sources. Often the best research draws together rather pedestrian bits of knowledge from diverse fields in novel, heretofore unseen manners. When we don’t support this sort of endeavor, we smother important avenues of discovery and deny our society the most important discoveries. Charting new paths to knowledge is both a wondrous personal journey and an alternative way to understand.

Discovery consists of looking at the same thing as everyone else and thinking something different.

― Albert Szent-Györgyi

Ultimately the elements are drawn together and allow the question to be answered productively. This often produces a new kernel of understanding. This knowledge can often be harnessed to produce the wherewithal for something new. The understanding will allow a new and unique act of creation. Sometimes you are creating something that others already know about, but for you it is new. That is enough for excellence; it is the engine of personal excellence. If you complete this cycle often enough, eventually the creation will be genuinely original and new. The deep and powerful educational elements of this process lead to outstanding professionals well before one gets to genuinely new and unique research. It is essential to realize that very few creations are completely original, with most discoveries being the combination of elements that are well known in other applications. In many cases the analysis and study of the answer to the original question itself creates something new and wonderful of many forms.

What is wanted is not the will to believe, but the will to find out, which is the exact opposite.

― Bertrand Russell

Once this creation is available, new questions can be posed and solved. These creations allow new questions to be asked and answered. This is the way of progress, where technology and knowledge build the bridge to something better. If we support excellence and a process like this, we will progress. Without support for this process, we simply stagnate and wither away. The choice is simple: either embrace excellence by loosening control, or chain people to mediocrity.

Science is the process that takes us from confusion to understanding…

― Brian Greene

The Foundations of Verification: Solution Verification

A very great deal more truth can become known than can be proven.

― Richard Feynman

Solution verification involves examining error and results without knowledge of the exact solution. This makes it a more difficult task than code verification, where an exact solution is known, removing a major uncertainty. A secondary issue associated with not knowing the exact solution is its implications for the nature of the solution itself. With an exact solution, a mathematical structure exists allowing the solution to be achieved analytically. Furthermore, exact solutions are limited to relatively simple models that often cannot model reality. Thus, the modeling approach to which solution verification is applied is necessarily more complex. All of these factors are confounding and produce a more perilous environment in which to conduct verification. The key product of solution verification is an estimate of numerical error, and the secondary product is the rate of convergence. Both of these quantities are important to consider in the analysis.

The way to cope with this generally more hostile analysis environment involves improved analysis methods. One of the key elements in the analysis is contending with the lack of certainty about the solution, its nature and its mathematical character. For this reason the knowledge and guarantees about the results are missing. For instance, we don’t know what order of convergence to reasonably expect from the analysis and cannot use this to screen our results. Generally speaking, if the verification result shows convergence at the theoretical rate for the method, we can be sure we are solving a relatively simple “easy” problem. Usually the applied problems that modeling & simulation are attacking are mathematically difficult. Philosophically, the whole reason for modeling & simulation is solving problems that are beyond our analytical grasp. In a deep sense the complex and difficult character of problems is unavoidable for the practical use of modeling with computers. When we’ve successfully attacked the problem of verification for a problem without an exact solution, the same analysis methodology can improve our code verification practice.

It is important to understand solution verification within the broader context of computational modeling. Solution verification contributes to the overall enterprise of analysis and uncertainty quantification. The most classical investigation will involve comparing the modeled results with observations in the real world (ideally an experiment). There are many elements to the uncertainty in this case, including the model parameters, the constitutive properties, the experimental measurements and the numerical solution. Solution verification is the process for examining and estimating the numerical error and specifying its uncertainty. Sometimes this is applied in the use of computational modeling for purposes of decision-making or scenario testing where no real-world data exists. In this case the numerical error is an important element in the overall lack of certainty about the results. If the numerical error is well behaved it will be a bias from the exact continuum solution to the model. This bias is important to understand in terms of how it might skew the results and any advice.

There are two ways to do great mathematics. The first is to be smarter than everybody else. The second way is to be stupider than everybody else — but persistent.

― Raoul Bott

When one lays out the mathematical framework for solution verification, the immediate impression is that the added difficulty compared to code verification is the lack of direct knowledge of the precise solution. The full solution to the problem is inferred from the inaccurate numerical solutions. The equation to solve is the following: S_0 = S_k + C h_k^a, where the new unknown is the ostensible estimate of the exact solution S_0, that is, the solution where h=0. The solutions used to determine this estimate are S_k, the solutions found with mesh spacing h_k. We notice that we have three unknowns, S_0, C, a, meaning the well-determined solution requires three pieces of data, S_k. As we will discuss, this problem can be solved in a variety of ways including under-, fully and over-determined forms.

One of the key issues to recognize with solving this problem is an aspect of complexity arising from the general nonlinearity of the determination of the model. The solution to this coupled system of nonlinear equations is generally subtle, and necessarily solved numerically. As such, the solution can have its own errors, requiring some care and verification. The system of equations admits a simple analytical solution in the special case where the discrete solutions use a sequence of meshes with a constant refinement ratio r = h_{k-1}/h_k. In this case we can write the convergence rate in closed form, a = \log (E_{1,2}/E_{2,3}) / \log (r), where E_{k,k+1} = S_k - S_{k+1}. More generally we need to attack this with a coupled nonlinear solve. If we deal with an over-determined version of the problem we will use a nonlinear least squares solver (or this is the knee-jerk response). As we discuss next, thinking about this decision opens the door to some more interesting and robust choices.
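The constant-ratio special case can be sketched in a few lines (the manufactured data below is illustrative, constructed so the exact answer is known):

```python
import math

def convergence_rate(S1, S2, S3, r):
    """Closed-form rate from three grids h, h/r, h/r^2 with refinement
    ratio r > 1: a = log(E12/E23)/log(r), where E_{k,k+1} = S_k - S_{k+1}."""
    return math.log((S1 - S2) / (S2 - S3)) / math.log(r)

def extrapolate(S2, S3, r, a):
    """Richardson-style estimate of S_0 from the two finest solutions."""
    return S3 + (S3 - S2) / (r**a - 1)

# Manufactured data S(h) = 1 + 0.5*h^2 on h = 0.4, 0.2, 0.1 (r = 2):
S1, S2, S3 = (1 + 0.5 * h**2 for h in (0.4, 0.2, 0.1))
a_est = convergence_rate(S1, S2, S3, r=2)   # recovers a = 2
S0_est = extrapolate(S2, S3, 2, a_est)      # recovers S_0 = 1
```

When the mesh ratio is not constant, no such closed form exists and the coupled nonlinear solve described above takes over.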

The general over-determined version of the solution verification equation (i.e., more than three grids) is amenable to solution via a nonlinear least squares method. This is not the only choice, and consideration of this opens the door to other choices. The solution to the over-determined problem is not unique, and the solution bears the imprint of the method of solution. As such, the choice of least squares implies a number of implicit assumptions that the typical practitioner doesn’t even know they are making. For example, one may choose to solve the over-determined problem in a different norm than the two-norm (i.e., least squares). One may choose to solve a constrained problem instead of an unconstrained problem. In addition, one could consider solving an under-determined problem, adding either constraints or regularizing the solution. A classical example of regularization is the Tikhonov method, where a penalty is added to make the problem well determined. A popular recent approach focuses on a similar regularization, but in the one-norm (compressed sensing, LASSO, …).
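A hedged sketch of the knee-jerk over-determined fit, with bound constraints standing in for one of the “other choices” (the synthetic data and the specific bounds are illustrative; SciPy’s least_squares supports both):

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic grid study: five grids, S(h) = S0 + C*h^a with S0=2, C=0.3, a=1.9.
h = np.array([0.2, 0.1, 0.05, 0.025, 0.0125])
S = 2.0 + 0.3 * h**1.9

def residual(p):
    S0, C, a = p
    return S0 + C * h**a - S

# The bounds on the rate (here 0.5 <= a <= 3) encode the expert expectation
# for a nominally second-order method; the two-norm is the default loss.
fit = least_squares(residual, x0=[S[-1], 1.0, 1.0],
                    bounds=([-np.inf, -np.inf, 0.5], [np.inf, np.inf, 3.0]))
S0_est, C_est, a_est = fit.x
```

Swapping the loss function (e.g. `loss='soft_l1'`) changes the effective norm of the fit, which is exactly the kind of silent assumption the text warns about.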

There are several practical issues related to this whole thread of discussion. One often-encountered and extremely problematic issue is insanely high convergence rates. After one has been doing verification, or seeing others do verification, for a while, the analysis will sometimes provide an extremely high convergence rate. For example, a second-order method used to solve a problem will produce a sequence that yields a seeming 15th-order solution (this example is given later). This is ridiculous and results in woeful estimates of numerical error. A result like this usually indicates a solution on a tremendously unresolved mesh, and a generally unreliable simulation. This is one of those things that analysts should be mindful of. Constrained solution of the nonlinear equations can mitigate this possibility and exclude it a priori. This general approach, including the solution with other norms, constraints and other aspects, is explored in the paper on Robust Verification. The key concept is that the solution to the error estimation problem is not unique and highly dependent upon assumptions. Different assumptions lead to different results and can be harnessed to make the analysis more robust and impervious to issues that might derail it.

The techniques discussed in that paper were originally devised to deal with the all-too-common case where only one or two different grids are used and the error estimation problem is under-determined. The approach taken to solve this problem involves adding constraints to the solution based on expert knowledge and judgment. The overall approach was then generalized when it was realized that the under-, fully and over-determined cases should all be treated consistently. The verification problem is solved repeatedly using different assumptions, resulting in a natural variation in the results that provides uncertainty in the error estimation and the rate of convergence. If the data is self-consistent with a well-defined solution, the uncertainty in the error will itself be small and the convergence rate will also be certain. Conversely, if the data is conflicting or opposes expert expectations, the uncertainty will be large. This entire methodology produces a more robust numerical uncertainty that adapts to the data, and avoids using fixed-size safety factors. It turns out that this expert judgment is usually called into action with verification, but in an ad hoc manner and only when the issues are serious. Robust verification adds the expert judgment from the outset so that more subtle issues are subject to the same treatment.

Instead of solving the verification equation once using a nonlinear least squares approach, robust verification solves the problem in a multitude of ways. This involves solving the verification problem using other error norms in a constrained minimization framework. The data is also reused in different combinations. One standard assumption is that the solutions on the finer grids (smaller h) are closer to the exact solution, so this data is weighted more prominently in the solution. The end result of the analysis is a multitude of estimates of the numerical error and convergence rate. These results are then subjected to robust statistical examination using median statistics. We report the median of the estimates as the error and convergence rate. The median deviation is used to place an uncertainty on this estimate. One of the key benefits of this estimation is its lack of susceptibility to corruption by outliers in the analysis. Outliers are further suppressed in the analysis by the use of expert judgment as constraints. For example, the absurdly large convergence rates are removed if the rate of convergence is constrained to be below a given value.
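A minimal sketch of this idea (not the published algorithm) using the forward Euler data from the ODE example discussed later: solve the fit over every three-grid subset and over two loss functions, with a bounded rate, then report median statistics.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import least_squares

# Forward Euler data (h, computed solution at t=1) from the raw data table.
h = np.array([0.20, 0.10, 0.05, 0.02, 0.01])
S = np.array([0.3277, 0.3487, 0.3585, 0.3642, 0.3660])

estimates = []
for idx in combinations(range(len(h)), 3):      # every 3-grid subset
    hs, Ss = h[list(idx)], S[list(idx)]
    def res(p, hs=hs, Ss=Ss):
        S0, C, a = p
        return S0 + C * hs**a - Ss
    for loss in ('linear', 'soft_l1'):          # 2-norm and a robust norm
        fit = least_squares(res, x0=[Ss[-1], -0.2, 1.0], loss=loss,
                            bounds=([-np.inf, -np.inf, 0.25],
                                    [np.inf, np.inf, 4.0]))
        if fit.success:
            estimates.append(fit.x)

estimates = np.array(estimates)
S0_med = np.median(estimates[:, 0])             # extrapolated solution
a_med = np.median(estimates[:, 2])              # convergence rate
S0_mad = np.median(np.abs(estimates[:, 0] - S0_med))  # its uncertainty
```

The median and median deviation are far less sensitive to one bad subset or one pathological fit than a mean would be, which is the point of the robust statistics.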

Before moving to examples of solution verification we will show how robust verification can be used for code verification work. Since the error is known, the only uncertainty in the analysis is the rate of convergence. We can immediately notice that this technique will get rid of a crucial ambiguity in the analysis. In standard code verification analysis, the rate of convergence is never the exact formal order, and expert judgment is used to determine if the result is close enough. With robust verification, the convergence rate has an uncertainty, and the question of whether the exact value is included in the uncertainty band can be asked. Before showing the results for this application of robust verification, we need to note that the exact rate of convergence is only the asymptotic rate in the limit of h = 0. For a finite step size the rate of convergence should deviate from this value, and for simple cases the value can be derived using a modified version of classical numerical analysis.

Our first example of solution verification will repeat our examination of simple ODE integrators, but disregard our knowledge of the exact solution. It is a useful example because we can examine the efficacy of solution verification with a precise knowledge of the true errors. We can use the data from our code verification study to good effect here. Here is the raw data used for the forward Euler study.

h        Solution, t=1   Error, t=1
0.20     0.3277          0.0402
0.10     0.3487          0.0192
0.05     0.3585          0.0094
0.02     0.3642          0.0037
0.01     0.3660          0.0018
estimate 0.3678±0.0002

For the code verification part of the example, the estimated truncation error is E = 0.2030 h^{1.0245\pm0.0124}. The error bars do not reach down to the theoretical convergence rate of one; the data are consistent with the rate being above one (and this is expected analytically). Using this same data for solution verification yields the following model, S(h) = 0.3678 \pm 0.0002 - 0.2080 h^{1.0386 \pm 0.0207}. Close examination shows that this estimate is within 0.0001 of the exact solution and within the error bars. If we use the standard technique of simple least squares fitting, we get the following model, S(h) = 0.3677 - 0.2239 h^{1.0717}. The error estimate here is 0.0017, which ends up being rather overgenerous once the standard safety factor of 1.25 is applied. Using the robust verification technique we get a better estimate of the exact solution, the actual convergence rate, and a tighter error bound.
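For reference, the "standard" single least squares fit can be sketched as follows (a sketch, not the paper's implementation): since the model S(h) = S0 - C h^p is linear in (S0, C) once p is fixed, we can scan candidate rates and solve the linear subproblem at each one, keeping the best. The data are the forward Euler table above.

```python
import numpy as np

# Forward Euler solution data from the table above.
h = np.array([0.20, 0.10, 0.05, 0.02, 0.01])
S = np.array([0.3277, 0.3487, 0.3585, 0.3642, 0.3660])

best = None
for p in np.linspace(0.8, 1.4, 6001):
    # S(h) = S0 - C*h**p is linear in (S0, C) for fixed p.
    A = np.column_stack([np.ones_like(h), -h**p])
    coef, *_ = np.linalg.lstsq(A, S, rcond=None)
    sse = np.sum((A @ coef - S) ** 2)
    if best is None or sse < best[0]:
        best = (sse, p, coef[0], coef[1])

sse, p, S0, C = best
print(f"S(h) = {S0:.4f} - {C:.4f} h^{p:.4f}")
```

The fitted extrapolant lands close to the tabulated estimate of 0.3678, with a rate modestly above one, but it offers no uncertainty on either quantity — which is precisely what the robust analysis adds.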

Supposing is good, but finding out is better.

― Mark Twain

It is also useful to look at a pathological case where the rate of convergence is absurd and standard analysis would be prone to missing it. The case we have at our fingertips involved very coarse grid solutions to large eddy simulation in a complex geometry relevant to heat transfer and fluid flow in nuclear reactors. Early calculations were used to estimate the mesh required for well-resolved calculations. As we found out, this is a perilous enterprise. A couple of codes (one production and one research) were enlisted in this rod bundle study using some initial grids that were known to be inadequate. One of the codes was relatively well trusted for this class of applications and produced three solutions that by all appearances were reasonable. One of the key parameters is the pressure drop through the test section. Using grids of 664K, 1224K, and 1934K elements we got pressure drops of 31.8 kPa, 24.6 kPa, and 24.4 kPa, respectively. A standard curve fit against the effective mesh resolution gave an estimate of 24.3 kPa±0.0080 kPa for the resolved pressure drop and a convergence rate of 15.84. This is an absurd result and should simply be rejected immediately. Using the robust verification methodology on the same data set gives a pressure drop of 16.1 kPa±13.5 kPa with a convergence rate of 1.23, which is reasonable. Subsequent calculations on refined grids produced results that were remarkably close to this estimate, confirming the power of the technique even when given data that was substantially corrupted.
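To see how a naive fit produces such an absurd rate, here is a sketch that pushes the model S(h) = S0 + C h^p exactly through the three data points by bisecting on p. The effective mesh size is assumed to scale as h ∝ N^{-1/3} (an assumption for illustration; the original analysis used its own effective resolution, so the numbers differ slightly from those quoted above).

```python
def triple_fit(h, s, p_lo=0.5, p_hi=40.0, iters=200):
    """Exactly fit S(h) = S0 + C*h**p through three (h, S) points,
    coarsest grid first, by bisecting on the convergence rate p."""
    (h1, h2, h3), (s1, s2, s3) = h, s
    R = (s1 - s2) / (s2 - s3)
    f = lambda p: (h1**p - h2**p) / (h2**p - h3**p) - R
    lo, hi = p_lo, p_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    p = 0.5 * (lo + hi)
    C = (s1 - s2) / (h1**p - h2**p)
    return s1 - C * h1**p, p  # (extrapolated value, rate)

# Pressure drops (kPa) on grids of 664K, 1224K, and 1934K elements,
# with effective mesh size h ~ N**(-1/3) (an assumed scaling).
N = [664e3, 1224e3, 1934e3]
h = [n ** (-1.0 / 3.0) for n in N]
s = [31.8, 24.6, 24.4]

S0, p = triple_fit(h, s)
print(f"extrapolated pressure drop {S0:.2f} kPa, rate {p:.2f}")
```

Because the last two pressure drops are nearly identical while the first is far away, the exact three-point fit is forced to an enormous rate and an extrapolant glued to the finest-grid value, with no hint that anything is wrong. Only the constraints and median statistics of the robust approach expose the corruption.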

Our final example is a simple case of validation using the classical phenomenon of vortex shedding over a cylinder at a relatively small Reynolds number. This is part of a reasonable effort to validate a research code before using it on more serious problems. The key experimental value to examine is the Strouhal number, defined as St = f \ell/U, the shedding frequency normalized by the cylinder diameter and the flow velocity, which has the experimental value 0.164\pm 0.005 at a Reynolds number of 100 (the Reynolds number is the non-dimensional ratio of inertial to viscous forces in a flow).

∆t      RMS h         St
0.002   0.054111988   0.110474853
0.002   0.023801688   0.152492294
0.002   0.010786082   0.164777976
0.002   0.005264375   0.165127187

When we apply the robust verification methodology to this data, we find that the code produces a Strouhal number slightly larger than the experimental value, St(h) = 0.1657\pm 0.0013 + C h^{1.8486\pm 0.1476}. Including the error bars recovers the experimental value. This can be regarded as a modest success, supporting consideration of the code for more complex flows.
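A median-statistics flavor of this analysis can be sketched by fitting St(h) = St0 + C h^p exactly through every three-grid subset of the table above and summarizing the extrapolated values with the median and median deviation. This is a simplification of the published method (no multiple norms or constraints), so its numbers only approximate those quoted above.

```python
import statistics
from itertools import combinations

def triple_fit(h, s, p_lo=0.5, p_hi=10.0, iters=200):
    """Exactly fit S(h) = S0 + C*h**p through three (h, S) points,
    coarsest grid first, by bisecting on the rate p."""
    (h1, h2, h3), (s1, s2, s3) = h, s
    R = (s1 - s2) / (s2 - s3)
    f = lambda p: (h1**p - h2**p) / (h2**p - h3**p) - R
    lo, hi = p_lo, p_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    p = 0.5 * (lo + hi)
    C = (s1 - s2) / (h1**p - h2**p)
    return s1 - C * h1**p, p  # (extrapolated St, rate)

# RMS mesh size h and Strouhal number from the table above (∆t fixed).
h = [0.054111988, 0.023801688, 0.010786082, 0.005264375]
St = [0.110474853, 0.152492294, 0.164777976, 0.165127187]

# Fit every three-grid subset, then summarize with median statistics.
fits = [triple_fit(*zip(*t)) for t in combinations(zip(h, St), 3)]
St0s = [f[0] for f in fits]
St0 = statistics.median(St0s)
dev = statistics.median(abs(x - St0) for x in St0s)
print(f"St = {St0:.4f} +/- {dev:.4f}")
```

Even this stripped-down version lands near the quoted extrapolant, with the spread across subsets supplying the uncertainty that a single fit cannot.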

The foundation of data gathering is built on asking questions. Never limit the number of hows, whats, wheres, whens, whys and whos, as you are conducting an investigation. A good researcher knows that there will always be more questions than answers.

― Karl Pippart III

Rider, William, Walt Witkowski, James R. Kamm, and Tim Wildey. “Robust verification analysis.” Journal of Computational Physics 307 (2016): 146-163.