Mission-Focused Research Is Better Research

Great research depends on taking big risks with a large chance of failure, and mission focus is a clear way to get there. Failure is the key to learning, and research is fundamentally learning. We must not fail prematurely by shying away from the harder problems; devotion to a mission provides the drive to stay focused on results that provide value to that mission.

There is only one thing that makes a dream impossible to achieve: the fear of failure.

― Paulo Coelho

For a lot of people working at a National Lab there are two divergent paths for work: the research path, which leads to lots of publishing, deep technical work and strong external connection, or the mission path, which leads to internal focus and technical shallowness. The research path is for the more talented and intellectual people who can compete in this difficult world. For the less talented, creative or intelligent, the mission world offers greater security at the price of intellectual impoverishment. Those who fail at the research focus can fall back onto the mission work and be employed comfortably after such failure. This perspective passes for cynical truth among those who work at the Labs, but it represents a false dichotomy. If properly harnessed, mission focus can empower and energize better research, but it must be approached mindfully.

The measure of greatness in a scientific idea is the extent to which it stimulates thought and opens up new lines of research.

― Paul A.M. Dirac

As I stated, I believe the dichotomy of mission versus research is false. The mission imposes limitations and constraints on research. In a nutshell, the mission imposes a fixed problem to solve, and one must adapt the solution to impact this mission. Conversely, pure research is unconstrained by a mission, which encourages people to change the problem to fit a solution. The fixed-problem, adaptive-solution mindset is much better for engaging innovation and producing breakthrough results. It also means a great amount of risk and lots of failure. Pure research can chase unique results, but the utility of those results is often highly suspect. This sort of research also entails less risk and less failure. If the results necessarily impact the mission, the utility is obvious. The difficulty is noting the broader aspects of research applicability that mission application might hide.

Examples of great mission-focused research abound, and our modern world is testimony to the breakthrough nature of Cold War defense-focused research. The shape of the modern world is a testament to the power of mission-focused work to succeed. Ubiquitous aspects of modernity such as the Internet, cell phones and GPS all owe their existence to Cold War research focused on some completely different mission. All of these technologies were created through steadfast focus on utility that drove innovation as a mode of problem solving. This model for creating value has fallen into disrepair due to its uncertainty and risk. Risk is something we have lost the capacity to withstand; as a result, the failure necessary to learn and succeed with research never happens.

Failure is a greater teacher than success

― Clarissa Pinkola Estés

Mission-focused research falls prey to concerns over risk. In many mission organizations there is a fear of taking on too much risk in adopting research results into the mission delivery. The thought is that the research might not pan out and the mission will suffer as a result. This is both shortsighted and foolhardy. The truth is vastly different from this fear-based reaction, and the only thing that suffers from shying away from research in mission-based work is the quality of the mission-based work. Doing research causes people to work with deep knowledge and understanding of their area of endeavor. Research is basically the process of learning taken to the extreme of discovery. In the process of getting to discovery one becomes an expert in what is known and capable of doing exceptional work. Today too much mission-focused work is technically shallow and risk averse. It is over-managed and under-led in the pursuit of the false belief that risk and failure are bad things.

There is a key tension to maintain in harnessing this engine of knowledge. The successful delivery of value to the mission work must take priority. Those conducting the research should have a deep commitment to the mission and its success. Ultimately success at the mission work must supersede the research objectives. Better yet, the research objectives need to be guided by the mission needs. In this sense the mission acts to constrain the research and shape its direction and focus. This sort of dynamic must be carefully and adroitly managed if it is to be achieved. Unconstrained research without mission focus is quite tempting and much simpler to manage. It is also less successful at producing real value for society. Almost every breakthrough of great significance was the result of results-focused work, although many of the breakthroughs had far greater reach beyond their intended use.

An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.

– Werner Heisenberg

In my own experience the drive to connect mission and research can provide powerful incentives for personal enrichment. For much of my early career the topic of turbulence was utterly terrifying, and I avoided it like the plague. It seemed like a deep, complex and ultimately unsolvable problem that I was afraid of. As I became deeply engaged with a mission organization at Los Alamos it became clear to me that I had to understand it. Turbulence is ubiquitous in highly energetic systems governed by the equations of fluid dynamics. The modeling of turbulence is almost always done using dissipative techniques, which end up destroying most of the fidelity in the numerical methods used to compute the underlying, ostensibly non-turbulent flow. These high-fidelity numerical methods were my focus at the time. Of course these energy-rich flows are naturally turbulent. I came to the conclusion that I had to tackle understanding turbulence.

One winter break my laptop broke, leaving me without the ability to work on my computer codes over the break (those were the days!). So I went back to my office (those were the days!) and grabbed seven books on turbulence that had been languishing on my bookshelves unread due to my overwhelming fear of the topic. I started to read these books cover to cover, one by one, and learn about turbulence. I’ve included some of these references below for your edification. The best and most eye-opening was Uriel Frisch’s “Turbulence: The Legacy of A. N. Kolmogorov”. In the end, the mist began to clear and turbulence began to lose its fearful nature. Like most things one fears, the lack of knowledge of a thing gives it power, and turbulence was no different. Turbulence is actually kind of a sad topic; it’s not well understood and very little progress is being made.

The main point is that the mission focus energized me to attack the topic despite my fear of it. The result was a deeply rewarding and successful research path resulting in many highly cited papers and a book. All of a sudden the topic that had terrified me was understood and I could actually conduct research in it. All of this happened because I took contributing to the mission as an imperative. I did not have the option of turning my back on the topic because of my discomfort with it. I also learned a valuable lesson about fearsome technical topics; most of them are fearsome because we don’t know what we are doing and overelaborate the theory. Today the best things we know about turbulence are simple and old, discovered by Kolmogorov as he evaded the Nazis in 1941.

People who don’t take risks generally make about two big mistakes a year. People who do take risks generally make about two big mistakes a year.

― Peter F. Drucker

In today’s world we have allowed a system to come into power that funds useless research. We have created vast swaths of safe research topics that ultimately produce rafts of papers, but little or no real utility for society. A big driver behind this mentality is the need to fund “sure things” that can’t fail. This pushes research into the realm of squalid incrementalism. Incremental research is safe and almost never prone to the risk of failure. It is also a waste of money that can produce the appearance and guise of success without actual achievement. Our unremittingly fearful society, including its worry over the appearance of scandal, has driven us to this horrible point. Research has become cowardly and uninspired so that it doesn’t ever fail. Being mission-focused is too hard and too risky because the mission is too important to ever fail at. The true attitude should be that the mission is too important not to fail at!

The main reason of fear of failure lies in people’s being unaware of the impact of misfortune on achieving success

― Sunday Adelaja

The current sorry state in high performance computing is a direct result of the current milieu where mission focus is neglected in favor of carefully managed projects with sure things as targets. Project management is not leadership, and without leadership we will continue to steadfastly underachieve. For example, we have utterly eviscerated applied mathematics by pushing a product-oriented approach that demands the delivery of results in software. Producing software in the conduct of applied mathematics used to be a necessary side activity instead of the core of value and work. Today software is the main thing produced and actual mathematics is often virtually absent. Actual mathematical research is difficult, failure prone and hard to measure. Software on the other hand is tangible and managed. It is still hard to do, but ultimately software is only as valuable as what it contains, and increasingly our software is full of someone else’s old ideas. We are collectively stewarding other people’s old intellectual content, not producing our own, nor progressing in our knowledge.

This trend would be bad enough on its own, but it is the tip of a proverbial iceberg of underachievement. The second pillar of underachievement in high performance computing is, ironically, a devotion to computer hardware. Again, computer hardware is tangible and easy to measure. To a naïve person (or congressman) our ability to do things with computers should be a one-to-one match with the raw power of our computers. Nothing could be farther from the truth, as computing is a completely multi-disciplinary field depending on a huge swath of science for success. The computer hardware is actually one of the least important components in our modeling and simulation competence. Instead of producing a program that strives for true success in modeling and simulation based on real mission value, we have constructed programs that are intellectually vacuous because they are easier to fund and explain to unsophisticated people. The hardware program more naturally lends itself to management and simple metrics of success. It can be sold to uninformed people. Its current form is an abdication of leadership and antithetical to the concept of mission focus. Our approach to high performance computing is only likely to achieve supremacy for the Chinese in the field.


Success is stumbling from failure to failure with no loss of enthusiasm.

― Winston S. Churchill

What is gained by this mission focus? The focus on mission-focused research means the problem being solved is fixed and unwavering, and the results and knowledge must contribute to the solution of this problem. This forces the research to adapt itself to the needs of the problem rather than the problem to the research. The result of this model is the tendency to confront difficult, thorny issues rather than shirk them. At the same time this form of research can also lead to risk and failure manifesting themselves. This tendency is the rub, and it leads people to shy away from it. We are societally incapable of supporting failure as a viable outcome. The result is the utter and complete inability to do anything hard. This all stems from a false sense of the connection between risk, failure and achievement.

If a single characteristic is contributing to a societal feeling that we have lost greatness, it is that we cannot accept failure. Without failure, great things cannot be achieved. Failure is the vehicle of achievement and learning whether we are talking about individuals, organizations or nations. The inability to accept failure as a possible outcome is the tacit acceptance of not wanting to do anything that matters, or anything great. The road to greatness is paved with many failures and the unerring drive to learn and grow from these failures. For the complex missions we are charged with, the commitment to mission focus in research means accepting failure as a necessary outcome of endeavor. This is the hard message that our spineless politicians and managers cannot give us. The inability to grasp this core truth is utter societal cowardice. True leadership would provide us the necessary support and encouragement to be courageous and steadfast. Instead we succumb to fear and the false belief that achievement can be managed and had without risk.

Research is about learning at a fundamental, deep level, and learning is powered by failure. Without failure you cannot effectively learn, and without learning you cannot do research. Failure is one of the core attributes of risk. Without the risk of failing there is a certainty of achieving less. This lower achievement has become the socially acceptable norm for work. Acting in a risky way is a sure path to being punished, and we are being conditioned not to risk and not to fail. For this reason mission-focused research is shunned. The sort of conditions that mission-focused research produces are no longer acceptable, and our effective social contract with the rest of society has destroyed it.

If we are to successfully do great things again as people, as organizations, as laboratories and as a nation, the irony is that we need to fail a lot more. One way to assure the sort of failure we need is mission-focused research where providing value to a difficult mission is the primal goal of research. Better research is founded on devotion to meaningful outcomes, taking big risks and tolerating lots of failure.

Only those who dare to fail greatly can ever achieve greatly.

― Robert F. Kennedy

Launder, Brian Edward, and Dudley Brian Spalding. “Lectures in mathematical models of turbulence.” (1972).

Frisch, Uriel, and Russell J. Donnelly. “Turbulence: the legacy of AN Kolmogorov.” (1996): 82-84.

Pope, Stephen B. “Turbulent flows.” (2001): 2020.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit large eddy simulation: computing turbulent fluid dynamics. Cambridge university press, 2007.

Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39, no. 9 (2002): 821-841.

Margolin, L. G., W. J. Rider, and F. F. Grinstein. “Modeling turbulent flow with implicit LES.” Journal of Turbulence 7 (2006): N15.


Communicating is the hardest thing we do

Many times in life I’ve regretted the things I’ve said without thinking. But I’ve never regretted the things I said nearly as much as the words I left unspoken.

― Lisa Kleypas

I’m a scientist and I tackle lots of difficult intellectual topics, higher-level mathematics and deep physical principles, daily. All of these things are very complex and require immense amounts of training, experience and effort. For most people the things I do, think about, or work on are difficult to understand or put into context. Yet none of this is the hardest thing I do every day. The thing that we trip up on, and fail at more than anything, is simple: communication. Scientists fail to communicate effectively with each other in a myriad of ways, leading to huge problems in marshaling our collective efforts. Given that we can barely communicate with each other, the prospect of communicating with the public becomes almost impossible.

Listen with curiosity. Speak with honesty. Act with integrity. The greatest problem with communication is we don’t listen to understand. We listen to reply. When we listen with curiosity, we don’t listen with the intent to reply. We listen for what’s behind the words.

― Roy T. Bennett

It is as much a problem of listening as of talking, and we do neither very well. It is arguable that the art of listening is in deep crisis society-wide. We seem to be quite capable of expressing a wealth of opinions to each other, but incapable of listening and attempting to understand each other. This makes every problem we have worse and stymies our efforts to solve them. In the sciences, these issues are generally compounded by the nature of the people capable of carrying out the deep scientific work. With all that effort and energy put toward intellectual labor, and given their basic nature as people, little is left over to do the heavy lifting of communication. This leaves this essential effort languishing from a combination of lack of effort and outright incompetence.

If you had to identify, in one word, the reason why the human race has not achieved, and never will achieve, its full potential, that word would be ‘meetings.

― Dave Barry

Meetings are at the heart of an effective organization, and each meeting is an opportunity to clarify issues, set new directions, sharpen focus, create alignment, and move objectives forward.

― Paul Axtell

A big conduit for communication is the “meeting,” a proper target of derision and ridicule. We all spend way too much time in horrible meetings that masquerade as communication. In addition to wasting time, the worst thing about them is that they give people the impression that communication has taken place when it hasn’t. The meeting doesn’t provide effective broadcast of information, and it’s even worse as a medium for listening. Our current management culture seems to have gotten the idea that a meeting is sufficient to do the communication job. Meetings seem efficient in the sense that everyone is there, words are spoken, and even time for questions is granted. With the meeting, the managers go through the motions. The problems with this approach are vast and boundless. The first issue is the general sense that the messaging is targeted at a large audience and lacks the texture that individuals require. The message isn’t targeted to people’s acute and individual interests. Conversations don’t happen naturally, and people’s questions are usually equally limited in scope. To make matters worse, the managers think they have done their communication job.

People who enjoy meetings should not be in charge of anything.

― Thomas Sowell

The tendency to do everything through meetings results in the illusion that communication has happened. The same happens with mass e-mails, where management assumes the exchange of information was successful. A lot of the necessary vehicles for communication are overlooked or discounted in the process. Managers avoid the one-on-one conversations needed to establish deep personal connections and understanding. We have filled managers’ schedules with lots of activities involving other managers and paperwork, but not prioritized and valued the task of communication. We have strongly tended to try to make it efficient, and not held it in the esteem it deserves. Many hold office hours where people can talk to them rather than adopting the more effective habit of seeking people out. All of these mechanisms give the advantage to the extroverts among us, and fail to engage the quiet introverted souls or the hardened cynics whose views and efforts have equal value and validity. All of this gets to a core message: communication is pervasive and difficult. We have many means of communicating and all of them should be utilized. We also need to assure and verify that communication has taken place and is two-way.

Employees hate meetings because they reveal that self-promotion, sycophancy, dissimulation and constantly talking nonsense in a loud confident voice are more impressive than merely being good at the job – and it is depressing to lack these skills but even more depressing to discover one’s self using them.

― Michael Foley

We haven’t touched on the other context of the word “meeting” for scientists. The technical talk, seminar or conference version of the word has great importance. The act of formally giving a presentation of technical work is one of the key ways of delivering information in a professional setting. It forms a core opportunity for peer review in a setting that allows for free exchange. Conferences are an orgy of this and should form a backbone of information exchange. Instead conferences have become a bone of contention. People are assumed to have a role there only as speakers, not as part of the audience. Again the role of listening as an important aspect of communication is completely disregarded in the dynamic. The digestion of information, learning, and providing peer feedback provide none of the justification for going to conferences, yet these all provide invaluable conduits for communication in the technical world.

Americans no longer talk to each other, they entertain each other. They do not exchange ideas, they exchange images. They do not argue with propositions; they argue with good looks, celebrities and commercials.

― Neil Postman

Part of the irony of this problem is the role of science in providing technology that makes communication easier than ever. The issue is the content of that communication, which varies depending on people’s innate taste for clarity and focus. We have issues with transparency of communication even with automatic and pervasive use of all the latest technological innovations. These days we have e-mail, instant messaging, blogging, Internet content, various applications (Twitter, Snapchat, …), social media and other vehicles for information transfer between people. The key to making the technology work to enable better performance still comes down to people’s willingness to pass along ideas within the vehicles available. This problem is persistent whether communications are on Twitter or in person. Again, the asymmetry between broadcasting and receiving is amplified by the technology. I am personally guilty of the sin that I’m pointing out; we never prize listening as a key aspect of communicating. If no one listens, it doesn’t matter who is talking.

We are still woeful in our ability to confront difficult issues. Concepts like “ghosting” or “breadcrumbing” arose to describe online relationships, but they actually have great meaning in non-romantic settings. For difficult issues there is a strong tendency to avoid discussion and contention; this is just like ghosting, where people disappear to avoid a real breakup. Breadcrumbing is pretending to have a deep relationship that really has no basis in fact and gets nursed along by a stream of limited communications. Both practices are emblematic of deeply toxic engagements that exhibit a lack of humanity, empathy and understanding. Each of these qualities is deeply helpful in aiding the depth of communication, and they are things scientists and engineers are generally quite ill equipped to deal with.

Scientists and engineers tend to be introverts and are not necessarily good at communicating. Nothing about technology changes these facts; it only gives more mechanisms for communication to be poorly executed. All the various communication vehicles require skill to execute and basic ideas to be presented. The ideas are often complicated by their esoteric nature, meaning that standard communication is already stressed. Adding controversy and tension to the communication simply leads to greater stress and encourages bad behavior. This greater stress serves to bait people into various ghosting or breadcrumbing exercises. All of these elements undermine the depth of communication that would be possible if all the means of achieving it were exercised.

I work on vastly complex projects involving big ideas and massively deep technical issues. The concepts and ideas in the projects are deep and need vast wells of technical expertise to even understand, much less solve. In spite of this, the biggest issue holding us back is communication. We are challenged more by access to information and knowledge of the full nature of the problems we face. The complex issues typically need the expertise of many people to understand, and solutions are equally dependent on wide-ranging expertise. Beyond understanding being fuelled by communication, the solutions need collaborations across many technical disciplines to work. Instead of harnessing the possibility and potential that communication has to unleash productivity and innovation, we embrace secrecy and information hiding that only empowers control and destroys potential. Information hiding has become the standard in reaction to the possibilities of greater transparency that the modern world offers.

When the trust account is high, communication is easy, instant, and effective.

― Stephen R. Covey

One of the key issues contributing to our systematic communication problems is conflict aversion. If we talk about our collective problems, we run the risk of conflict. Problems and issues that hold us back are often the source of conflict. Our problems are usually associated with vested interests, or solving the problems might involve some sort of trade space where some people win or lose. Most of the time we can avoid the conflict for a bit longer by ignoring it. Facing the problems means entering into conflict, and conflict terrifies people (this is where ghosting and breadcrumbing come in quite often). The avoidance is usually illusory; eventually the situation will devolve to the point where the problems can no longer be ignored. By then the situation is usually much worse, and the solution is much more painful. We need to embrace means of facing up to problems sooner rather than later, and seek solutions when problems are small and well confined.

Given that scientists can barely communicate with each other effectively, communicating with the general public is almost impossible to manage. In many ways the modern world acts to amplify the issues that the technical world has with communication to an almost unbearable level. Perhaps excellence in communication is too much to ask, but the inability to talk and listen effectively with the public is hurting science. If science is hurt then society also suffers from the lack of progress and knowledge advancement science provides. When science fails, everyone suffers. Ultimately we need to have understanding and empathy across our societal divides, whether between scientists and lay people or red and blue. Our failure to focus on effective, deep, two-way communication is limiting our ability to succeed at almost everything.

The single biggest problem in communication is the illusion that it has taken place.

― George Bernard Shaw

Validation is much more than uncertainty quantification

Experiment is the sole source of truth. It alone can teach us something new; it alone can give us certainty.

― Henri Poincaré

In looking at the dynamic surrounding verification and validation recently, I’ve noticed a very grim evolution of the narrative. Two things have happened to undermine the maturity of V&V. One I’ve spoken about in the past: the tendency to drop verification and focus solely on validation, which is bad enough. In the absence of verification, validation starts to become rather strained and drifts toward calibration. Assurances that one is properly solving the model one claims to be solving are unsupported by evidence. This is bad enough all by itself. The use of V&V as a vehicle for improving modeling and simulation credibility is threatened by this alone, but something worse looms even larger.

A more common and pervasive trend is the conflation of validation with uncertainty quantification. It has become very common for uncertainty quantification (UQ) to be defined as the whole of validation. To some extent this is fueled by a focus on high performance computing, where UQ provides a huge appetite for computational cycles, thus eliciting lots of love and support in HPC circles. Validation must be about experiments and a broad cross section of uncertainties that may only be examined through a devotion to multi-disciplinary work and collaboration. One must always remember that validation can never be separated from measurements in the real world, whether experimental or observational. The experiment-simulation connection in validation is primal and non-negotiable.

There are three types of lies — lies, damn lies, and statistics.

― Benjamin Disraeli

A second part of the issue is the hot-topic nature of UQ. UQ has become a buzzword and seems to be a hot issue in publishing and research. Saying you’re doing UQ seems to be a means of squeezing money out of funding agents. In addition UQ can be done relatively automatically and mechanically. Tools and techniques exist to enable UQ to be done without much deep thought even though it touches upon many deep technical topics. Actual validation is far harder and more holistic. The core of any work in validation is serious experimental expertise and hard-nosed comparison with simulations. The detailed nature of the experiment and its intrinsic errors and uncertainties is the key to any comparison. Without knowing the experimental uncertainty, any computational uncertainty is context free. My grumpy intellectual side would quip that validation requires thinking, and that leads people to avoid it because thinking is so hard. The deeper issue is that validation is complex and multi-disciplinary in nature, making it collaborative and difficult. Experts in a single discipline can do UQ, so it is an easy out.

Five percent of the people think;

ten percent of the people think they think;

and the other eighty-five percent would rather die than think.

― Thomas A. Edison

Thinking is hard work; that’s why so few do it.

― Albert Einstein

One of the biggest issues is the stunning incompleteness of UQ in general. Most commonly UQ is done via an exploration of the variation of parameters in models. Complex models of reality have a lot of constants that are not known with great precision. Various techniques may be utilized to efficiently examine the variation in computational solutions due to changes in these parameters. Among the methods used are Markov Chain Monte Carlo (MCMC), polynomial chaos, and other sampling methods. The results from this work are useful and sound, but form a rather incomplete view of uncertainty. Even in these cases the sampling itself is often subject to a lack of certainty, with the assessment limited by the difficulty of determining uncertainty in high-dimensional spaces. Modeling and simulation suffers from a host of other uncertainties not covered by these methodologies. For example, most simulations have some degree of numerical error that may be quite large. Numerous techniques exist for exploring its magnitude and nature. Many systems being modeled have some stochastic behavior or variability associated with them. Modeling assumptions are often made in simulating a system or experiment. The solution may change greatly on the basis of these assumptions or modeling approximations. A different computational modeler may make much different assumptions and produce a different solution.
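
To make the parametric piece of UQ concrete, here is a minimal Python sketch. The toy model, the parameters k and c, and their ranges are all hypothetical stand-ins, and plain random sampling stands in for the MCMC or polynomial chaos machinery mentioned above; the point is only that parametric UQ of this kind is mechanical and says nothing about numerical error, model-form error, or the modeler’s own assumptions.

import numpy as np

rng = np.random.default_rng(1)

def model(k, c):
    # hypothetical toy response; stands in for an expensive simulation
    # with uncertain parameters k and c
    return np.exp(-c) * np.sin(k)

# assumed parameter ranges (illustrative, not from any real study)
samples = 2000
k = rng.uniform(4.5, 5.5, samples)
c = rng.uniform(0.1, 0.3, samples)
q = model(k, c)

print("mean =", q.mean(), "std =", q.std(),
      "95% interval =", np.percentile(q, [2.5, 97.5]))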

Judge a man by his questions rather than by his answers.

― Voltaire

If validation is to be done properly a fairly complete accounting of modeling and simulation uncertainty is needed. One also needs to also understand the experimental error and uncertainty with equal completeness. One must be acutely aware of the intrinsic lack of certainty in the estimation of uncertainty. The combination of the solutions and the sizes of each uncertainty provides a modeling and simulation solution into proper context. Without knowledge of the uncertainties in each data source, the distance between solutions cannot be judged. For example if the experimental precision is very good and the uncertainty is quite small, the simulation needs to be equally precise to be judged well. Conversely a large experimental uncertainty would allow model to be much looser, and still be judged well. More critically the experiment wouldn’t provide actionable evidence on research needs, and expert judgment would reign.
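
As a small illustration of why both uncertainties matter, here is a toy comparison in Python. The numbers are invented, and measuring the difference against the combined uncertainties added in quadrature is just one simple convention, a sketch rather than a prescription from this post.

import math

def compare(sim, u_sim, exp, u_exp):
    # distance between simulation and experiment in units of the combined
    # (quadrature) uncertainty; values below one mean the two agree to
    # within the stated uncertainties
    return abs(sim - exp) / math.sqrt(u_sim**2 + u_exp**2)

# the same 5% model-experiment difference judged two ways (invented numbers)
print(compare(1.05, 0.01, 1.00, 0.01))  # tight experiment: ~3.5, clear disagreement
print(compare(1.05, 0.01, 1.00, 0.10))  # loose experiment: ~0.5, "agreement" that constrains nothing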

The whole of the uncertainty provides an important source of scientific tension. If experimental uncertainty is small, it requires modeling and simulation to be equally precise to imply good results. It pushes the modeling to improve to meet the high standard of the experiment. If the modeling and simulation is very good, but the experiments have large uncertainty, it should push the experiments to improve because they fail to constrain and distinguish between models. By having a deep and complete understanding of uncertainty, we can define where we need to put resources to improve. We know what aspects of our current knowledge are most in need of attention and most limiting progress.

One must always be aware of the significant attraction of short-changing uncertainty estimation. Doing a complete job of estimating uncertainty almost always results in an increase in the magnitude of the uncertainty. This is where science as a fundamentally human enterprise comes into play. People would rather think uncertainties are small than large. Uncertainty is uncomfortable and people shy away from discomfort. By under-estimating uncertainty people unconsciously put themselves at ease by doing incomplete work. A more rigorous and complete approach almost always produces a discomforting result. When one combines discomfort with difficulty of accomplishment, the ingredients for lack of effort and completeness become clear. With this temptation in mind, the tendency to take the easy route must be acknowledged.

The bottom line is that understanding uncertainty in a holistic manner can produce useful and defensible context for science. It can allow us to understand where we need to improve our knowledge or practice. Without this accounting the whole issue falls back on expert judgment or politics to make the decisions. We fail to understand where our knowledge is weak and potentially overlook experiments necessary for understanding. We may have the right experiments, but cannot make measurements of sufficient accuracy. We might have models of insufficient complexity, or numerical solutions with too much numerical error. All of these spell out different demands for resource allocation.

Much of the tension is captured in these two quotes, although I hope Eddington was trying to be ironic!

Never trust an experimental result until it has been confirmed by theory

― Arthur Stanley Eddington

It doesn’t matter how beautiful your theory is … If it doesn’t agree with experiment, it’s wrong.

― Richard Feynman

What makes a production code, a production code?

It is not enough for code to work.

― Robert C. Martin

One of the big selling points for high performance computing is real world impact. A focus for this impact revolves around computer codes with the capability to produce answers for people working on real world problems. In organizations such as mine, where modeling and simulation produces analysis used to assist decision-making, these codes are known as production codes. Because these codes provide real value to our applied programs, that value brings generous financial support. This support is necessary for the codes to do their job and creates financial stability. With this support comes acute responsibility that needs to be acknowledged, and serious effort needs to be applied to meeting it.

Being a production code means producing results that are meaningful. The results are important by virtue of their utility in decision-making that impacts the real world. In the past those using production codes as expert users provided the credibility needed to make these codes important. In other words credibility was granted by the weight of the expertise of the users. This element in expertise-based credibility is still alive today, but increasingly being replaced by systematic approaches to bolster the purely human element. The appropriate and proper use of verification, validation, and uncertainty quantification along with software quality assurance provides a system for assessing credibility and constructing evidence. More and more this evidence is demanded to document credibility and reassure decision makers.

One of the main things production codes give is the ability to model important problems in the real world. This is the primary characteristic of production codes: modeling capability for real problems. While this character is primary in defining production codes, everything else important in high performance computing is eclipsed by the modeling imperative. These codes are essential for the utility of high performance computing resources and often become the first codes to make their way onto and use high-end resources. They quite often are the explicit justification for the purchase of such computing hardware. This character usually dominates and demands a certain maturity of software professionalism.

On the flipside there are significant detrimental aspects of such codes. For example, the methods and algorithms in production codes are often crude and antiquated in comparison to the state of the art. The same can be said for the models, the algorithms and often the computer code itself. The whole of the production code’s credibility is deeply impacted by these pedigrees and their impact on real world programs and things. This issue comes from several directions; the codes are often old and used for long periods of time. The experts who traditionally define the credibility drive this to some extent. It often takes a long time to develop the code to the level needed to solve the hard real world problems, as well as the expertise to turn the code’s capability into results that have real world meaning. Older methods are robust, proven and trusted (low order and dissipative is usually how robust happens). Newer methods are more fragile, or simply can’t deal with all the special cases and issues that threaten the solution of real problems. Again, the same issues are present with models, algorithms and the nature or quality of the computer code itself.

Knowledge is something which you can use.
Belief is something which uses you.

― Idries Shah

In the final analysis, production code status must be earned and cannot be granted by fiat. Generally one might intend for a code to become a production code, but it only becomes a production code when it begins to produce. That production must be delivered with credibility and trust. It is an utterly organic process that cannot be forced. While significant support comes with production code status, responsibilities come with it as well. Increasingly in a modern context these responsibilities include software quality, verification and validation executed in a serious, professional manner. Hopefully the “customers” for production code analysis will become more demanding and insistent on a more systematic pedigree. Even today this pull for systematic pedigree is poorly supported by the customers of production code results, creating an environment where counter-productive practices and results continue to be supported.

In far too many instances, the steps that define a systematic pedigree are being skipped in favor of the old system of person-centered credibility. The old person-centered system is simple and straightforward. You trust somebody and develop a relationship that supports credibility. This person’s skills include detailed technical analysis, but also inter-personal relationship building. If such a system is in place there is not a problem, as long as the deeper modern credibility is also present. Too often the modern credibility is absent or shorted and effectively replaced by a cult of personality. If we put our trust in people who do not value the best technical work available in favor of their force of personality or personal relationships, we probably deserve the substandard work that results.

Tell me what you pay attention to and I will tell you who you are.

― José Ortega y Gasset

 

Results using smoothed operators in actual code

Reality is that which, when you stop believing in it, doesn’t go away.

― Philip K. Dick

I applied the smoothed operators to the actual solution of a scalar advection law, and I wanted to show how the methods impact the solution. This should put the discussion of the past couple of weeks into sharper and more relevant focus. We can also explore the impact of the degree of regularization chosen in the smoothing. The good news below is that my analysis seems to coincide completely with the results, and the best case gives the lowest error and highest quality solution. More surprisingly, the best result is obtained with a smoothed function, not the original discontinuous ones!

We will solve the scalar advection equation u_t + u_x = 0 for a square wave on a mesh of 200 cells in one dimension using a Sweby version of Fromm’s scheme. The solution will do one rotation on the grid using 400 time steps. I’ll plot the solution and the error plus display the L1, L2 and L-infinity norms of the error.
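
For readers who want to reproduce the flavor of this experiment, here is a minimal Python sketch under stated assumptions: a periodic unit domain, unit advection speed, and a minmod-limited second-order upwind (MUSCL-type) update standing in for the Sweby-limited Fromm scheme actually used here, so the numbers it prints will not match the errors quoted below exactly. The norm normalization is also my guess.

import numpy as np

def minmod(a, b):
    # classical minmod built from the discontinuous sign/abs intrinsics
    return np.sign(a) * np.maximum(0.0, np.minimum(np.abs(a), np.sign(a) * b))

def advect(u, cfl, nsteps, limiter=minmod):
    # u_t + u_x = 0 on a periodic grid with a second-order limited upwind update
    for _ in range(nsteps):
        dm = u - np.roll(u, 1)        # u(j) - u(j-1)
        dp = np.roll(u, -1) - u       # u(j+1) - u(j)
        slope = limiter(dm, dp)
        flux = u + 0.5 * (1.0 - cfl) * slope   # flux at j+1/2 for unit speed
        u = u - cfl * (flux - np.roll(flux, 1))
    return u

ncell, nsteps = 200, 400
x = (np.arange(ncell) + 0.5) / ncell
u0 = np.where((x > 0.3) & (x < 0.7), 1.0, 0.0)   # square wave
u = advect(u0.copy(), ncell / nsteps, nsteps)    # CFL = 0.5, one full rotation
err = u - u0
print("Error=", np.abs(err).mean(), np.sqrt((err ** 2).mean()), np.abs(err).max())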

First, with the standard unsmoothed functions.


Error= 0.0314099 0.00649721 0.438233

Now with smooth functions all based on using the \mbox{softsign}(x) = \tanh(10 x) and \mbox{softabs}(x) = x\tanh(10 x) .


Error= 0.0313124 0.00648103 0.4384

Now with smooth functions all based on using the \mbox{softsign}(x) = x/(|x| + 0.1) and \mbox{softabs}(x) = x^2 /(|x| + 0.1) .


Error= 0.0350856 0.00694473 0.454124

Now with smooth functions all based on using the \mbox{softsign}(x) = x/(|x| + 0.1) and \mbox{softabs}(x) = |x| + 0.1 .


Error= 0.0257311 0.00578931 0.420996

What we see is that the smoothed operators produce high quality and lower error solutions in most cases. In the one case with the linear version of the smoothed sign function, the larger error is consistent with the analysis I gave last week. Generally, the regularization can be tuned so the smoothed functions come as close as we might like to the original ones.
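
To make the swap concrete, here is a hedged Python sketch of smoothed intrinsics like the ones above and a smoothed minmod built from them. It reuses the hypothetical advect routine from the earlier sketch and only smooths sign and abs (max and min are left alone), so it illustrates the idea rather than reproducing the exact code behind these numbers.

import numpy as np

def softsign_tanh(x, n=10.0):
    return np.tanh(n * x)

def softabs_tanh(x, n=10.0):
    return x * np.tanh(n * x)

def softsign_linear(x, eps=0.1):
    return x / (np.abs(x) + eps)

def softabs_linear(x, eps=0.1):
    return x * x / (np.abs(x) + eps)

def softabs_inverse_linear(x, eps=0.1):
    # the x/softsign form: always at least as large as |x|
    return np.abs(x) + eps

def smoothed_minmod(a, b, softsign=softsign_tanh, softabs=softabs_tanh):
    # minmod(a, b) = sign(a) * max(0, min(|a|, sign(a) * b)) with the
    # discontinuous intrinsics replaced by smoothed versions
    s = softsign(a)
    return s * np.maximum(0.0, np.minimum(softabs(a), s * b))

# drop-in use with the advect sketch above (names assumed from that sketch):
# u = advect(u0.copy(), ncell / nsteps, nsteps, limiter=smoothed_minmod)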

How Useful are Smoothed Operators?

To test a perfect theory with imperfect instruments did not impress the Greek philosophers as a valid way to gain knowledge.

― Isaac Asimov

Note: I got super annoyed with WordPress’s ability to parse LaTeX, so there is a lot of math in here that I gave up on. Apologies!

Last week I introduced a set of alternatives to the discontinuous intrinsic functions providing logical functionality in programming. In the process I outlined some of the issues that arise due to discontinuous aspects of computer code operation. The utility of these alternatives applies to common regression testing in codes and to the convergence of nonlinear solvers. The question remains of just how useful these alternatives are. My intent is to address components of this aspect of the methods this week. Do these advantageous functions provide their benefit without undermining more fundamental properties of numerical methods like stability and convergence? Or do we need to modify the implementation of these smoothed functions in some structured manner to assure proper behavior?

The answer to both questions is an unqualified yes, they are both useful and they need some careful modification to assure correct behavior. The smoothed functions may be used, but the details do matter.

To this end we will introduce several analysis techniques to show these issues concretely. One thing to get out of the way immediately is how this analysis does not change some basic aspects of the functions. For all of the functions we have the property that the original function is recovered in an asymptotic limit; that is, \mbox{softsign}(x) = \tanh(n x) becomes the original sign function as n \rightarrow \infty . Our goal is to understand the behavior of these functions within the context of a numerical method away from this limit, where we have obviously deviated substantially from the classical functions. We fundamentally want to assure that the basic approximation properties of methods are not altered in some fatal manner by their use. A big part of the tool kit will be systematic use of Taylor series approximations to make certain that the consistency of the numerical method and the order of accuracy are retained when switching from the classical functions to their smoothed versions. Consistency simply means that the approximations are valid approximations to the original differential equation (meaning the error is ordered).

There was one important detail that I misplaced during last week’s post. If one takes the definition of the sign function we can see an alternative that wasn’t explored. We have \mbox{sign}(x) = |x|/x = x/|x| . Thus we can easily rearrange this expression to give two very different regularized absolute value expressions, |x| = x \, \mbox{sign}(x) and |x| = x / \mbox{sign}(x) . When we move to the softened sign function the behavior of the absolute value changes in substantive ways. In particular, in the cases where the first softened absolute value is everywhere less than the classical absolute value, the second interpretation is everywhere greater, and vice versa. As a result, functions like \mbox{softsign}(x) = \tanh(n x) can now produce an absolute value, \mbox{softabs}(x) = x/\mbox{softsign}(x) , that doesn’t have issues with entropy conditions, as the dissipation will be more than the minimum, not less. Next we will examine whether these different views have significance in truncation error.
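
A quick numerical check of that bracketing property, as a sketch (the regularization strength n = 10 is just an example value): x \tanh(n x) never exceeds |x| , while x/\tanh(n x) never falls below it.

import numpy as np

n = 10.0
x = np.linspace(-1.0, 1.0, 400)   # 400 points, so x never hits exactly zero
soft_lower = x * np.tanh(n * x)   # softabs = x * softsign: bounded above by |x|
soft_upper = x / np.tanh(n * x)   # softabs = x / softsign: bounded below by |x|
assert np.all(soft_lower <= np.abs(x) + 1e-12)
assert np.all(soft_upper >= np.abs(x) - 1e-12)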

Our starting point will be the replacement of the sign function or absolute value in upwind approximations to differential equations. For entropy satisfaction and generally stable approximation the upwind approximation is quite fundamental as the foundation of robust numerical methods for fluid dynamics. We can start with the basic upwind approximation with the classical function, the absolute value in this case. We will base the analysis on the semi-discrete version of the scheme using a flux difference, u_t = - f(u)_x \approx -\frac{1}{h} \left[ f(j+1/2) - f(j-1/2) \right] . The basic upwind flux is f(j+1/2) = \frac{1}{2} \left[ f(j) + f(j+1) \right] - \frac{1}{2} \left|a\right| \left[ u(j+1) - u(j) \right] , where a is the characteristic velocity for the flux and provides the upwind dissipation. The Taylor series analysis gives the dissipation in this scheme as f(u)_x - \frac{h}{2} \left|a\right| u_{xx} + {\cal O}(h^2) . We find a set of very interesting conclusions can be drawn almost immediately.
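
For completeness, the intermediate step behind that leading-order dissipation term is sketched below; this is my reconstruction of the standard modified-equation expansion, obtained by substituting u(j \pm 1) = u \pm h u_x + \frac{h^2}{2} u_{xx} \pm \frac{h^3}{6} u_{xxx} + \dots into the flux difference:

\begin{aligned}
\frac{1}{h}\left[ f(j+1/2) - f(j-1/2) \right] &= \frac{1}{2h}\left[ f(j+1) - f(j-1) \right] - \frac{|a|}{2h}\left[ u(j+1) - 2 u(j) + u(j-1) \right] \\
&= f(u)_x - \frac{h}{2} |a| u_{xx} + {\cal O}(h^2) .
\end{aligned}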

All of the smoothed functions introduced are workable as alternatives, although some versions seem to be intrinsically better. In other words, all produce a valid, consistent first-order approximation. The functions based on analytical functions like \tanh or \mbox{erf} are valid approximations, but the amount of dissipation is always less than the classical function, leading to potential entropy violation. They approach the classical absolute value as one would expect and the deviations similarly diminish. Functions such as \mbox{softabs}(x) = x^2 /(|x| + n) or \mbox{softabs}(x) = x^2 /\sqrt{x^2+ n} result in no change in the leading-order truncation error, although they similarly always produce less dissipation than classical upwinding. We do find that for both functions we need to modify the form of the regularization to get good behavior: \mbox{softabs}(x) = x^2 /(|x| + n h) and \mbox{softabs}(x) = x^2 /\sqrt{x^2+ n^2 h^2} . The classical softmax function based on logarithms and exponentials behaves similarly, but it always produces more dissipation than upwinding rather than less, \mbox{softabs}(a) = \mbox{softmax}(a,-a) \ge |a| . This may make this functional basis better for replacing the absolute value for the purpose of upwinding. The downside to this form of the absolute value is the regularized sign function’s passage through hard zero, which makes division problematic.

Let’s look at the functions useful for producing a more entropy-satisfactory result for upwinding. We find that these functions work differently than the original ones. For example, the hyperbolic tangent form does not become equivalent to the upwind scheme as quickly as n \gg 0 . There is a lingering departure from linearity with \mbox{softsign}(x) = x/ (|x| + n h) \rightarrow \mbox{softabs}(x) = (|x| + n h) proportional to the mesh spacing and n. As a result the quadratic form of the softened sign is best because of the h^2 regularization. Perhaps this is a more widely applicable conclusion, as we will see as we develop the smoothed functions further with limiters.

Where utility ends and decoration begins is perfection.

― Jack Gardner

Now we can transition to looking at a more complex and subtle subject, limiters. Briefly put, limiters are nonlinear functions applied to differencing schemes to produce non-oscillatory (or monotone) solutions with higher order accuracy. Generally in this context high order is anything above first order. We have theory that confines non-oscillatory linear methods to first-order accuracy, where upwind differencing is canonical. As a result the basic theory applies to second-order methods where a linear basis is added to the piecewise constant basis the upwind method is built upon. The result is the term “slope limiter,” where the linear part, the slope, is modified by a nonlinear function. Peter Sweby produced a diagram to describe what successful limiters look like parametrically. The parameter is the non-dimensional ratio of discrete gradients, r = \frac{u(j+1) - u(j)}{u(j) - u(j-1)} . The smoothed functions described here modify the adherence to this diagram. The classical diagram has a region where second-order accuracy can be expected. It is bounded by the function \mbox{minmod}(1,r) and twice the magnitude of this function.

We can now visualize the impact of the smoothed functions on this diagram. They produce systematic changes in the diagram that lead to deviations from the ideal behavior. Realize that the ideal diagram is always recovered in the limit as the functions recover their classical forms. What we see is that the classical curves are converged upon from above or below, and wiggles appear in the overall functional evaluation. My illustrations all show the functions with the regularization chosen to be unrealistically small to exaggerate the impact of the smooth functions. A bigger and more important question is whether the functions impact the order of approximation.
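
As a rough way to see those deviations without the original figures, here is a sketch of my own construction: the smoothed max and min are built from the smoothed absolute value in the usual way, the regularization strength is deliberately weak for illustration, and the classical and smoothed \mbox{minmod}(1,r) curves are tabulated over Sweby’s parameter r.

import numpy as np

def softsign(x, n=2.0):
    return np.tanh(n * x)

def softabs(x, n=2.0):
    return x * np.tanh(n * x)

def softmax2(a, b, n=2.0):
    # max(a, b) = (a + b + |a - b|) / 2 with the smoothed absolute value
    return 0.5 * (a + b + softabs(a - b, n))

def softmin2(a, b, n=2.0):
    return 0.5 * (a + b - softabs(a - b, n))

r = np.linspace(-1.0, 3.0, 9)
phi_classical = np.maximum(0.0, np.minimum(1.0, r))   # minmod(1, r)
phi_smoothed = softsign(1.0) * softmax2(
    0.0 * r, softmin2(softabs(np.ones_like(r)), softsign(1.0) * r))
for ri, pc, ps in zip(r, phi_classical, phi_smoothed):
    print(f"r={ri:5.2f}  classical={pc:5.2f}  smoothed={ps:5.2f}")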

To finish up this discussion I’m going to look at analyzing the truncation error of the methods. Our starting point is the classical scheme’s error, which provides a viewpoint on the nature of the nonlinearity associated with limiters. What is clear about a successful limiter is its ability to produce a valid approximation to a gradient with an ordered error of at least order h = \Delta x . The minmod limiter produces a truncation error of \frac{1}{h} \mbox{minmod}(u(j)-u(j-1), u(j+1) - u(j)) = u_x - \frac{h}{2} \left| \frac{u_{xx}}{u_x} \right| u_x . The results with different recipes for the smoothed sign function and its extension to softabs, softmin and softmax are surprising, to say the least, and a bit unexpected.

Here is a structured summary of the options as applied to the minmod limiter, \mbox{minmod}(a,b) = \mbox{sign}(a) \max\left[ 0, \min\left( |a|, \mbox{sign}(a) b\right) \right] (a rough numerical check of these recipes is sketched after the list):

 

  1. \mbox{softsign}(x) = \tanh(n x) and \mbox{softabs}(x) = x \tanh(n x). The gradient approximation is u_x \approx \tanh(n)\tanh(2 n) u_x + \mbox{(giant mess)}\, h u_{xx} + {\cal O}(h^2). The constant in front of the gradient approaches one very quickly as n grows.
  2. \mbox{softsign}(x) = \tanh(n x) and \mbox{softabs}(x) = x/ \tanh(n x). The gradient approximation is u_x \approx \frac{1}{2}\left(2 \coth(2 n) - \frac{1}{n} \right) \tanh(n) u_x + \mbox{(giant mess)}\, h u_{xx} + {\cal O}(h^2). The constant in front of the gradient approaches one very slowly as n grows. This smoothing is unworkable for limiters.
  3. \mbox{softsign}(x) = x/(n+|x|) and \mbox{softabs}(x) = x^2/(n+|x|). Without a mesh dependence in the regularization the gradient approximation is inconsistent: u_x \approx \frac{2}{(n+1)(n+2)} u_x + \frac{3n+2n^2}{(1+n)^2(2+n)^2} h u_{xx} + {\cal O}(h^2). The leading constant goes to one only slowly as n \rightarrow 0.
  4. \mbox{softsign}(x) = x/(n+|x|) and \mbox{softabs}(x) = (n h +|x|). The gradient approximation is u_x \approx u_x - n h u_x + {\cal O}(h^2).
  5. \mbox{softsign}(x) = x/\sqrt{x^2 + n^2} and \mbox{softabs}(x) = x^2/\sqrt{x^2 + n^2}. Again, without a mesh dependence in the regularization the gradient approximation is inconsistent: u_x \approx u_x + \mbox{(giant unordered mess)} + {\cal O}(h). This makes the approximation utterly useless in this context.
  6. \mbox{softsign}(x) = x/\sqrt{x^2 + n^2 h^2} and \mbox{softabs}(x) = \sqrt{x^2 + n^2 h^2}. The gradient approximation is u_x \approx u_x - \left[ u_x \sqrt{n^2 + \left( \frac{u_{xx}}{u_x} \right)^2 } \right] h + {\cal O}(h^2).
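
Below is a rough Python check of the consistency claims in this list, a sketch under my own assumptions (a smooth sine test function, recipe 6 for the smoothed functions, and shrinking error with h as the pass criterion). It illustrates the kind of test implied here rather than reproducing the symbolic truncation-error work.

import numpy as np

def classical_minmod(a, b):
    return np.sign(a) * np.maximum(0.0, np.minimum(np.abs(a), np.sign(a) * b))

def smoothed_minmod(a, b, n, h):
    # recipe 6: softsign(x) = x / sqrt(x^2 + n^2 h^2), softabs(x) = sqrt(x^2 + n^2 h^2)
    softsign = lambda x: x / np.sqrt(x * x + (n * h) ** 2)
    softabs = lambda x: np.sqrt(x * x + (n * h) ** 2)
    s = softsign(a)
    return s * np.maximum(0.0, np.minimum(softabs(a), s * b))

x0, n_reg = 0.3, 1.0
for h in [0.1, 0.05, 0.025, 0.0125]:
    a = np.sin(x0) - np.sin(x0 - h)    # u(j) - u(j-1)
    b = np.sin(x0 + h) - np.sin(x0)    # u(j+1) - u(j)
    exact = np.cos(x0)
    e_classical = abs(classical_minmod(a, b) / h - exact)
    e_smoothed = abs(smoothed_minmod(a, b, n_reg, h) / h - exact)
    print(f"h={h:7.4f}  classical error={e_classical:.2e}  smoothed error={e_smoothed:.2e}")
# both errors should shrink roughly linearly with h, consistent with a
# first-order-accurate gradient approximation for this recipe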

Being useful to others is not the same thing as being equal.

― N.K. Jemisin

Sweby, Peter K. “High resolution schemes using flux limiters for hyperbolic conservation laws.” SIAM journal on numerical analysis 21, no. 5 (1984): 995-1011.

Smoothed Operators

Everything that looks too perfect is too perfect to be perfect.

― Dejan Stojanovic

This post is going to delve directly into this blog’s name, how to regularize a singularity, but in this case we are talking about an artificial one. When one is writing a computer program to solve differential equations it is easy to introduce a discontinuity into how the program operates. This is particularly true when you’re implementing an elaborate model or numerical method. In the process of taking care of special cases or making a program robust for general use, logic is employed. Should a circumstance arise that causes the program to fail, it can be detected and avoided by making a logical change in the operation. The most common way to do this uses logic in the program through an “if” statement or some sort of switch. When the “if” triggers on a floating-point value, the impact on the solution can be subtle and creates a host of practical issues.

Ability to find the answers is more important than ability to know the answers.

― Amit Kalantri

As computer program development becomes more rigorous, testing of various sorts becomes important and valuable to the quality of the work. One form of testing is regression testing. Here the program is run through a series of usually simple problems with well-defined answers. If the program’s solution changes in some way that is unexpected, the testing should pick it up and alert the development team. In addition this testing often runs across a bunch of different computers to make sure the answers are the same and the program works properly on all of them. For basic quality assessment and control, regression testing is essential. It is one of the hallmarks of serious, professional code development. Logic and if statements introducing discontinuous behavior into the code based on floating-point numbers can wreak havoc with this testing! We can end up with a situation where the tests produce much different results because of infinitesimal changes in the numbers at some point in the calculation.

You might be asking how this can happen. This all seems rather disturbing, and it is. It is a simple matter: it arises whenever a logical decision is made on the basis of a floating-point number. Consider a simple, but common bit of (pseudo) computer code,

! choose between two states based on the sign of a floating-point value
if (value > 0.0) then
  newvalue = value1
else
  newvalue = value2
endif

which is a very simple test that one might see with upwind finite differences. In some cases a logical switch like this might trigger an elaborate mathematical expression, or even call a very different function or subroutine. Consider the case where “value” is very near zero, but not exactly zero. In this case small differences in the nearly-zero quantity will trigger completely different evaluations of the logic. For special values (especially zero) this happens all the time. If the solution depends on this branch, any regression test exercising it can differ based on inconsequential numerical differences. As programs become more complex these branches and differences multiply. Code development teams relying upon regression testing end up chasing this sort of problem over and over, and it becomes a huge drain on productivity and quality.

These problems can be replicated with a set of standard functions. The logic above can be replaced by a single statement using a “sign” function,

newvalue = 0.5*(1.0 + sign(value)) * value1 + 0.5*(1.0 - sign(value)) * value2

which gives exactly the same result as the if test in the previous paragraph. It is also prone to exactly the same problems in practical testing. These issues are the tip of a proverbial iceberg. It isn’t just regression testing that suffers: if the method of solution involves solving a nonlinear equation that goes through the above logic, the solution can stall and stagnate, causing solution accuracy to suffer. The same switches can produce breaks in symmetry or bifurcation of solutions near critical points. Next, I will describe ways of implementing the sign function to alleviate these problems. It turns out that there is a whole family of functions that can replace the discontinuous behavior with something continuous, and the sign function can be used to construct other functions with the same switching behavior built in. I’ve written about some of these functions before in a different context, where discontinuous logical functions were replaced by differentiable functions for the purpose of conducting modified equation analysis that relies upon valid Taylor series expansions.
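
To see how the two forms relate in practice, here is a small sketch (Python with NumPy is my own choice; none of this code is from the original post). It shows the branch and the single-expression form agreeing away from zero, differing at exactly zero because sign(0) is zero, and flipping between completely different results for values that differ only at the level of machine epsilon.

import numpy as np

def branch(value, value1, value2):
    # the "if" form of the switch
    if value > 0.0:
        return value1
    else:
        return value2

def sign_form(value, value1, value2):
    # the single-expression form built from the sign function;
    # note np.sign(0.0) = 0, which blends the two values at exactly zero
    s = np.sign(value)
    return 0.5 * (1.0 + s) * value1 + 0.5 * (1.0 - s) * value2

eps = np.finfo(float).eps
for value in (1.0e-3, eps, 0.0, -eps, -1.0e-3):
    print(value, branch(value, 2.0, -3.0), sign_form(value, 2.0, -3.0))
# +eps and -eps differ only by roundoff yet select completely different
# results; this is the kind of behavior that makes regression tests noisy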

Here are a few of the times I’ve hit upon this topic before: https://wjrider.wordpress.com/2016/06/07/the-marvelous-magical-median/, https://wjrider.wordpress.com/2015/08/17/evolution-equations-for-developing-improved-high-resolution-schemes-part-2/, https://wjrider.wordpress.com/2016/06/22/a-path-to-better-limiters/. This post might be a good postscript to these because the techniques here can cure some of the practical ills remaining for these rather powerful methods. We see issues with solving nonlinear equations where limiters are used in discretizations, various symmetry-breaking effects, and extreme sensitivity to initial conditions. As I will touch upon at the very end of this post, Riemann solvers (numerical flux functions) can also benefit from this, but some technicalities must be proactively dealt with.

Slow is smooth; smooth is fast.

― Jack Coughlin

Using smoothed versions of the sign function we can systematically remove the switching behavior that plagues regression testing and nonlinear solutions, breaks symmetry, and creates extreme sensitivity to initial conditions.

For me, the starting point was the hyperbolic tangent function. This function acts as a switch between two states based on its argument and the steepness of the transition, \mbox{softsign}(x) = \tanh(a x); as a becomes larger, the function approaches the idealized step function. It turns out that there are a number of smooth functional representations of the sign function, including \mbox{softsign}(x) = \mbox{erf}(a x), \mbox{softsign}(x) = x/\left(a + \|x\| \right), and \mbox{softsign}(x) = x/\sqrt{a + x^2}. There are many others that can be derived, as well as more exotic functions. These functions are used in other fields to remove the discontinuity from a switching function (https://en.wikipedia.org/wiki/Sigmoid_function).
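
As an illustration, here is a sketch of these four softsign forms (my own arrangement, not from the post). The tanh and erf forms sharpen as a grows, while the two rational forms sharpen as their regularization parameter shrinks, so in this sketch that parameter is written in terms of 1/a to put all four on one sharpness knob.

import numpy as np
from scipy.special import erf

a = 50.0  # sharpness; larger values push every form toward the hard sign function

def softsign_tanh(x):
    return np.tanh(a * x)

def softsign_erf(x):
    return erf(a * x)

def softsign_rational(x):
    # x / (eps + |x|) with eps = 1/a
    return x / (1.0 / a + np.abs(x))

def softsign_sqrt(x):
    # x / sqrt(eps + x^2) with eps = 1/a^2
    return x / np.sqrt(1.0 / a**2 + x * x)

x = np.linspace(-0.2, 0.2, 9)
for f in (softsign_tanh, softsign_erf, softsign_rational, softsign_sqrt):
    print(f.__name__, np.round(f(x), 3))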

These functions provide a flexible foundation to build upon. As an initial example, take the definition of the absolute value, \|x\| = \mbox{sign}(x) x (https://en.wikipedia.org/wiki/Sign_function). This can be rearranged into a number of useful forms, \mbox{sign}(x) = x/\|x\| = \|x\|/x. We can see that a simple smoothed version of the absolute value is \mbox{softabs}(x) = \mbox{softsign}(x) x. We can now build an entire family of softened or smoothed functions that can be differentiated (they are C_\infty). The classical versions of these functions cannot be differentiated everywhere and create a host of problems in practical programs. Other common switching functions are “min” and “max”. We can rewrite both as \min(a,b) = \frac{1}{2}(a+b) - \frac{1}{2}\|a-b\| and \max(a,b) = \frac{1}{2}(a+b) + \frac{1}{2}\|a-b\|. The smoothed versions are relatively obvious, \mbox{softmin}(a,b) = \frac{1}{2}(a+b) - \frac{1}{2}\mbox{softabs}(a-b) and \mbox{softmax}(a,b) = \frac{1}{2}(a+b) + \frac{1}{2}\mbox{softabs}(a-b). From this basic set of functions we can build the backbone of the limiters, the minmod function and the marvelous magical median function. What we have removed in the process is the discontinuous switching that can wreak havoc with finite precision arithmetic.
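
Here is a sketch of how the pieces assemble, using one member of the softsign family. The softened minmod at the end is one plausible assembly, softening every piece of the identity minmod(p,q) = ½(sign p + sign q) min(|p|,|q|); it is not necessarily the exact regularization used elsewhere in this blog.

import numpy as np

def softsign(x, eps=1.0e-6):
    # one member of the softsign family; eps is a small regularization parameter
    return x / np.sqrt(eps * eps + x * x)

def softabs(x, eps=1.0e-6):
    return softsign(x, eps) * x              # |x| = sign(x) * x, softened

def softmin(p, q, eps=1.0e-6):
    return 0.5 * (p + q) - 0.5 * softabs(p - q, eps)

def softmax(p, q, eps=1.0e-6):
    return 0.5 * (p + q) + 0.5 * softabs(p - q, eps)

def softminmod(p, q, eps=1.0e-6):
    # zero when the arguments disagree in sign, otherwise near the smaller magnitude
    return 0.5 * (softsign(p, eps) + softsign(q, eps)) * softmin(softabs(p, eps), softabs(q, eps), eps)

print(softminmod(1.0, 2.0))    # close to  1.0
print(softminmod(-1.0, 2.0))   # close to  0.0
print(softminmod(-1.0, -0.5))  # close to -0.5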

We note that there is a separate version of the softmin and softmax functions used in some optimization solutions (https://www.johndcook.com/blog/2010/01/13/soft-maximum/, https://en.wikipedia.org/wiki/Softmax_function). This uses a combination of exponentials and logarithms to provide a continuously differentiable way to take the maximum (or minimum) of a set of arguments. My naming convention “soft” comes from being introduced to the ideas in that blog post. This separates the idea from a “hard” max where the arguments switch based on the precision of the floating-point numbers as opposed to being continuous. For completeness, the softmax uses the following expression, \mbox{softmax}(a,b) = \log\left(\exp(n a) + \exp(n b) \right)/n, which may be expanded to additional arguments without complications. By the same token we can define a “softmin” as \mbox{softmin}(a,b) = -\log\left( \exp(-n a) + \exp(-n b) \right)/n, which can similarly be expanded to more arguments. In both cases the parameter n controls the sharpness of the smoothed version of the standard function; the larger the value, the closer the function is to the standard function.
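
A minimal sketch of the exponential forms (again Python; NumPy’s logaddexp is used to avoid overflow for large n, which is an implementation detail rather than anything from the post):

import numpy as np

def softmax_exp(a, b, n=10.0):
    # log-sum-exp form: always at least max(a, b), approaching it as n grows
    return np.logaddexp(n * a, n * b) / n

def softmin_exp(a, b, n=10.0):
    # the mirrored form: always at most min(a, b), approaching it as n grows
    return -np.logaddexp(-n * a, -n * b) / n

for n in (1.0, 10.0, 100.0):
    print(n, softmax_exp(1.0, 2.0, n), softmin_exp(1.0, 2.0, n))
# the outputs converge toward 2.0 and 1.0 from above and below as n increases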

Using our previous definition of “softmax” we can derive a new version of “softabs”. We rearrange \mbox{softmax}(a,b) = \frac{1}{2}(a+b) + \frac{1}{2} \mbox{softabs}(a-b) to derive \mbox{softabs}(a). We start with the observation that \mbox{softmax}(a,-a) = \frac{1}{2}(a-a) + \frac{1}{2} \mbox{softabs}(a+a), and, since \frac{1}{2}\|2a\| = \|a\|, therefore \mbox{softabs}(a) = \mbox{softmax}(a,-a). We find that this version of the absolute value has quite different properties from the previous softened version in some important ways. The key property of this version is that it is always greater in value than the classical absolute value function. This turns out to be useful for Riemann solvers by not violating the entropy condition. With appropriate wavespeed estimates the entropy condition will be satisfied (wavespeed estimates are out of scope for this post). By the same token, this absolute value is not valuable for limiters because of the very same property!
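
A small check of this property (my own sketch): the exponential softabs(a) = softmax(a, -a) reduces to \log(2\cosh(n a))/n, which is never smaller than |a|, while the earlier tanh-based softened absolute value is never larger than |x|.

import numpy as np

def softabs_exp(x, n=10.0):
    # softmax(x, -x) in the exponential sense = log(2*cosh(n*x))/n >= |x|
    return np.logaddexp(n * x, -n * x) / n

def softabs_tanh(x, n=10.0):
    # the earlier softened absolute value, which lies at or below |x|
    return np.tanh(n * x) * x

x = np.linspace(-0.5, 0.5, 101)
print(np.all(softabs_exp(x) >= np.abs(x)))    # True: extra dissipation, safe for Riemann solvers
print(np.all(softabs_tanh(x) <= np.abs(x)))   # True: slightly permissive, the limiter-friendly choice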

Ultimately we want to understand whether these functions alter the basic accuracy, consistency, or stability properties of the numerical methods relative to the classical functions. The answer to this question is subtle, but it can be answered via analysis and numerical experiment. Not to belabor the details: using series expansions we discover that, with appropriate regularization of the smoothed functions, we can use them to replace the classical functions without undermining the accuracy of the discretization. This has been confirmed for the softened version of the “minmod” limiter. A downside of the softened limiters is small deviations from ideal monotonicity-preserving behavior.

Finally, as alluded to earlier in the post, we can also use these functions to modify Riemann solvers. The first code example can form the logical basis for upwind bias with a finite difference by choosing a one-sided difference based upon the sign of the characteristic velocity. When Riemann solvers are examined we see that either “if” statements are used, or, when full flux functions are used, absolute values appear (the flux is, in a general sense, a characteristic quantity multiplied by the characteristic velocity); the absolute value of the characteristic velocity introduces the sign convention that the “if” statement otherwise provides.

The lingering problem with this approach is the possibility of entropy-violating approximations. This issue can easily be explained by comparing the smooth sign function with the standard form. Since the dissipation in the Riemann solver is proportional to the characteristic velocity, we can see that the smoothed sign function is everywhere less than the standard function, resulting in less dissipation. This is a stability issue analogous to the concerns around limiters, where these smoothed functions are slightly more permissive. Using the exponential version of “softabs”, whose value is always greater than the standard absolute value, can counter this permissive nature.
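
To make the dissipation argument concrete, here is a hedged sketch. Burgers’ flux and a crude average wave-speed estimate are my own choices for illustration (the post explicitly leaves wavespeed estimates out of scope). It uses a simple local Lax-Friedrichs style flux, where the dissipation is proportional to the absolute value of the characteristic speed, and compares the hard absolute value with the exponential softened one at a sonic state where the hard version supplies no dissipation at all.

import numpy as np

def softabs_exp(x, n=20.0):
    # exponential softened absolute value, always >= |x|
    return np.logaddexp(n * x, -n * x) / n

def flux(u):
    return 0.5 * u * u                      # Burgers' flux, chosen for illustration

def numerical_flux(uL, uR, absfun=np.abs):
    # central average minus dissipation proportional to |wave speed|
    speed = 0.5 * (uL + uR)                 # a crude wave-speed estimate
    return 0.5 * (flux(uL) + flux(uR)) - 0.5 * absfun(speed) * (uR - uL)

uL, uR = -1.0, 1.0                          # a transonic expansion state
print(numerical_flux(uL, uR, np.abs))       # hard |.|: zero dissipation at the sonic point
print(numerical_flux(uL, uR, softabs_exp))  # softened |.|: a little extra dissipation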

Let us study things that are no more. It is necessary to understand them, if only to avoid them.

― Victor Hugo

Reality can’t be substituted

 

It doesn’t matter how beautiful your theory is … If it doesn’t agree with experiment, it’s wrong.

― Richard Feynman

Bill’s corollary: It doesn’t matter how massive your calculation is … If it doesn’t agree with experiment, it’s wrong.

The real world is complex, dangerous and expensive. It is also where mystery lives and the source of knowledge. There seems to be some desire to use computers, modeling and simulation to replace our need for dealing with the real world. This is untenable from many different perspectives and misplaces the proper role of everything possible via computing. Worse yet, computing cannot be a replacement for reality, but rather is simply a tool for dealing with it better. In the final analysis the real world still needs to be in the center of the frame. Computing needs to be viewed in the proper context, and this perspective should guide our actions in its proper use.

Experiment is the sole source of truth. It alone can teach us something new; it alone can give us certainty.

― Henri Poincaré

We see the confluence of many things in our attitudes toward computing. It is a new thing, constantly unveiling new power and possible ways of changing our lives. In many ways computing is driving enormous societal change and creating very real stress in the real world. These stresses are stoking fears and an irrational desire to control dangers and risks. All of this control is expensive and drives an economy of fear. Fear is very expensive. Trust, confidence and surety are cheap and fast. One totally irrational way to control fear is to ignore it, allowing reality to be replaced. For people who don’t deal with reality well, the online world can be a boon. Still, the relief from a painful reality ultimately needs to translate to something tangible physically. We see this in an over-reliance on modeling and simulation in technical fields. We falsely believe that experiments and observations can be replaced, or that the human endeavor of communication can be done away with through electronic means. In the end reality must be respected, and people must be engaged in conversation. Computing only augments, but never replaces, the real world, real people, or real experience. This perspective is key to making the best use of technology.

The real world is where the monsters are.

― Rick Riordan

In science we must always remember that understanding reality is the fundamental objective. Theory acts to explain what we see, but observation always rules supreme in defining the validity of knowledge and understanding. We must always remember that computing is a tool that augments theory. It never replaces theory, nor can it replace experiments or observation. A computational simulation can never be better than the model that theory has provided it. If the theory is lacking (and it always is), more computing cannot rescue it. No amount of computing can fill in the gap between what is and isn’t known. It is a new and powerful tool to be wielded with care and skill, but a tool. These perspectives seem to be lost on so many people who see computing as some sort of silver bullet that transcends these simple truths.

It is sometimes an appropriate response to reality to go insane.

― Philip K. Dick

While computing isn’t a silver bullet for making painful elements of reality go away, it is a powerful tool if wielded properly. Modeling and simulation serve as a powerful means of testing our knowledge and general capability to understand the world around us. When simulations are tested against reality and produce good results (that is, they are validated), we feel that our grasp of the hows and whys of the real world is at hand. If we are grounded in this understanding, modeling and simulation can aid our ability to examine the world around us. We can optimize our observations or design experiments to more effectively examine and measure various things. A successful model can serve a wonderful role in focusing our attention toward the most important aspects of reality, or away from what is not essential.

More than simply assisting the design of better experiments and observations of reality, the use of modeling and simulation can provide a significant flywheel effect. All the models of reality we use are flawed at some level. In a similar vein, our observations of reality are always limited and flawed. In very good models these flaws are subtle and hard to expose. Good experiments need to be designed to expose and improve these models. We can achieve some stunning synergies if we utilize the models to design the most stringent tests of them. This is exactly the thing we can do with a well-designed program that collaborates effectively. By examining a model we can find the parts of a physical system most sensitive to particular parts of that model. One way of proactively improving models is to identify where to make measurements, and what to measure, to maximize the ability to prove, disprove or improve a given model. The key, and oft-missed, point is that the models are always imperfect.

Reality is that which, when you stop believing in it, doesn’t go away.

― Philip K. Dick

These imperfections are rarely acknowledged in the current National dialog on high performance computing. Rather than state this rather powerful truth, we see a focus on computer power coupled to an unchanging model as the recipe for progress. Focus and attention on improving modeling is almost completely absent in the modeling and simulation world. This ignores one of the greatest truths in computing: no amount of computer power can rescue an incorrect model. These truths do little to alter the approach, although we can be sure that we will ultimately pay for the lack of attention to these basics. Reality cannot be ignored forever; it will make itself felt in the end. We could make it more important now to our great benefit, but eventually our lack of consideration will demand more attention.

A more proactive strategy would be more profitable for everyone. Without attention, many end up accommodating a model’s imperfections through heavy use of calibration. Ultimately the calibration hammer is lowered on imperfect models to render them useful and capable of influencing reality. In the wake of heavy-handed calibration we can achieve a great focus on localizing the modeling issues. In a deep sense the areas of calibration (often crude and yet very effective) are exactly the places for the greatest modeling improvement. Typically the calibration ends up merging multiple issues together. As a result one needs to carefully deconstruct the whole of the effects being accounted for in calibration. For example, one may find a single calibration knob accounting for the effects of turbulence, inadequate constitutive relations and mesh resolution. To make progress these effects need to be separated and dealt with independently. The proper decomposition of error allows the improvement of modeling in a principled manner.

The key to utilizing simulation effectively is the recognition of what it can and cannot do. While one can experiment with computations, these experiments can only unveil secrets of the models or computations themselves. The capacity of such unveiled secrets to be meaningful in reality always involves direct comparison with observations of the real world. If the secret seen computationally is also seen in reality then a true discovery can be made. In the process the model gains credibility and validity as well. In these cases simulation and modeling can tell us where to look, and if the secret is found, we know the model is valuable and correct. If it is not found, we know the model is deficient and must be improved. The observations may or may not be sufficient for improving the model in such a way that its predictions are validated by reality.

Successful modeling and simulation implies a level of understanding that empowers humanity. The implication of understanding goes to our ability to control reality effectively through human action. If reality can be modeled, its effects can be affected or accommodated through design or mitigation. The definition of success is always through validation of the model’s results against observations of the world (including carefully designed experiments). If the model can be demonstrated via verification to be solving the model we believe we are using, the validation is powerful evidence. One must recognize that the degree of understanding is always relative to the precision of the questions being asked. The more precise the question, the more precise the model needs to be. This useful tension can help to drive science forward. Specifically, the improving precision of observations can spur model improvement, and the improving precision of modeling can drive observation improvements, or at least the necessity of improvement. In this creative tension the accuracy of solution of models and computer power plays but a small role.

Any physical theory is always provisional, in the sense that it is only a hypothesis: you can never prove it. No matter how many times the results of experiments agree with some theory, you can never be sure that the next time the result will not contradict the theory.

― Stephen Hawking

 

The truth hurts, but it is needed

Sometimes, these tribal affiliations push us to become better versions of ourselves. We take a long-term view, check our selfish impulses and work hard to meet the high standards of those around us.

– Seth Godin

Sometimes you read something that hits you hard. Yesterday was one of those moments while reading Seth Godin’s daily blog post (http://sethgodin.typepad.com/seths_blog/2017/03/the-best-of-us-the-worst-of-us.html). I’ve enjoyed Seth’s books and ideas, finding them easy to grasp and connect to. Like a lot of his writing, the point of the post was simple. Our associations impact us. They can bring out the best or worst in us. When I reflected on this point, the quote above came into sharp focus. Looking at my current work, the quote seemed almost cruel. It was completely opposite of everything driving me today. Such a circumstance is ultimately untenable.

Writers like Godin often speak of aspirations for a better world, a better workplace that makes all of us better. My managers read these books all the time (Daniel Pink’s book “Drive” comes to mind). I’ve opined that the distance between the workplace espoused in these books and where I work is vast. The management seems to be actively working to make things worse and worse. On the other hand, they are always reading these books or going to the Harvard Business Review for advice. Do they really think that they are applying anything to their actual work? It would seem to me that they are completely delusional if they think their actions follow from any of this advice.

I once worked somewhere that pushed me to be better. It was a wonderful place where I grew professionally every day. The people there were generous with their knowledge. Collaboration was encouraged. It was also a rough and tough place to work. The culture was aggressive and combative. There was plenty of bad behavior and conflict. Nonetheless it was an incubator for me. It changed me for the better and filled me with desire to improve. It was also a place that had run out of time so we systematically destroyed it. Perhaps it was a place that can’t exist in today’s world, but it would be good to create places like it that can. We should all aspire to create places that make us better, that help us grow into the best versions of ourselves.

I rewrote Godin’s quote to reflect how work is changing me (at the bottom of the post). It really says something needs to give. I worry about how many of us feel the same thing. Right now the workplace is making me a shittier version of myself. I feel that self-improvement is a constant struggle against my baser instincts. I’m thankful for a writer like Seth Godin who can push me into a vital and much needed self-reflective “what the fuck”!

Sometimes, these tribal affiliations push us to become worse versions of ourselves. We take a short-term view, give into our selfish impulses and become lazy to meet the low standards of those around us.

We are the Over-Managed and Under-Led

 

Management is doing things right; leadership is doing the right things.

― Peter F. Drucker

It’s a really incredible time to be alive. The world is going through tremendous changes in many respects. Much of the change is driven by technology and scientific breakthroughs of the past century. One might reasonably argue that the upheavals we are witnessing today are the most important since the Renaissance and the Reformation. We are seeing cultural, economic, and political changes of epic proportions across the human world. With the Internet forming a backbone of immense interconnection, and globalization, the transformations to our society are stressing people, resulting in fearful reactions. These are combining with genuine threats to humanity in the form of weapons of mass destruction, environmental damage, mass extinctions and climate change to form the basis of existential danger. We are not living on the cusp of history; we are living through a tidal wave of change. There are massive opportunities available, but the path is never clear or safe. As the news every day testifies, the present mostly kind of sucks. While I’d like to focus on the possibilities of making things better, the scales are tipped toward the negative backlash to all this change. The forces trying to stop the change in its tracks are strong and appear to be growing stronger.

People in any organization are always attached to the obsolete – the things that should have worked but did not, the things that once were productive and no longer are.

― Peter F. Drucker

Many of our institutions are under continual assault by the realities of today. The changes we are experiencing are incompatible with many of our institutional structures, such as the places I work. Increasingly this assault is met with fear. The evidence of the overwhelming fear is all around us. It finds its clearest articulation within the political world where fear-based policies abound with the rise of Nationalist anti-Globalization candidates everywhere. We see the rise of racism, religious tensions and protectionist attitudes all over the World. The religious tensions arise from an increased tendency to embrace traditional values as a hedge against change and the avalanche of social change accompanying technology, globalization and openness. Many embrace restrictions and prejudice as a solution to changes that make them fundamentally uncomfortable. This produces a backlash of racist, sexist, homophobic hatred that counters everything about modernity. In the workplace this mostly translates to a genuinely awful situation of virtual paralysis and creeping bureaucratic over-reach, resulting in a workplace that is basically going nowhere fast. For someone like me who prizes true progress above all else, the workplace has become a continually disappointing experience.

All organizations are perfectly designed to get the results they are now getting. If we want different results, we must change the way we do things.

― Tom Northup

One of the most prevalent aspects of today’s world is the focus on appearances as opposed to substance. As we embrace online life and social media, we have become supremely fixated on superficial appearances and lost the ability to focus on substance. The way things look has become far more important than the actuality of anything. Having a reality show celebrity as the President seems like a rather emphatic exemplar of this trend. Someone who looks like a leader, but lacks most of the basic qualifications, is acceptable to many people. People with actual qualifications are viewed as suspicious. The elite are rejected because they don’t relate to the common man. While this is obvious on a global scale through political upheaval, the same trends are impacting work. The superficial has become a dominant element in managing because the system demands lots of superficial input while losing any taste for anything of enduring depth. Basically, the system as a whole is mirroring society at large.

Management cares about only one thing. Paperwork. They will forgive almost anything else – cost overruns, gross incompetence, criminal indictments – as long as the paperwork’s filled out properly. And in on time.

― Connie Willis

There is nothing so useless as doing efficiently that which should not be done at all.

― Peter F. Drucker

Working within one of our “prized” National institutions has been an interesting, magical and initially empowering experience. Over the past decade or two, these institutions have been dragged down by the broader societal trends into the muck. It is no exaggeration to say that we are being slowly and surely strangled by overwhelming management oversight. The basic recipe for management of the Labs I’ve worked at is making lots and lots of rules to keep people from “fucking up”. The bottom line is that it’s fine if we really don’t accomplish anything as long as people just don’t ever fuck up. The maxim at work is don’t ever fuck up, which is the result of fear being the core motivation for everything. All of our most important institutions are withering under society-wide loss of faith and mistrust. This creates an environment where any scandal can be a direct threat to the future of the institution. This direct threat means that achievement and the very reason for the institution’s existence are lost.

The goal of management is to remove obstacles.

― Paul Orfalea

The prime institutional directive is survival, and survival means no fuck ups, ever. We don’t have to do anything as long as no fuck ups happen. We are ruled completely by fear. There is no balance at all between fear-based motivations and the needs for innovation and progress. As a result our core operational principle is compliance above all else. Productivity, innovation, progress and quality all fall by the wayside to empower compliance. Time and time again decisions are made to prize compliance over productivity, innovation, progress, quality, or efficiency. Basically the fear of fuck ups will engender a management action to remove that possibility. No risk is ever allowed. Without risk there can be no reward. Today no reward is sufficient to blunt the destructive power of fear.

Our management has become all about no fuck ups, and appearances. The actual, good, productive management work that should be done is virtually entirely absent. We don’t see managers trying to figure out how to clear barriers or enable people to get work done. We see enforced compliance. We hear lots of things about formality of operations and assurance of results. This all comes down to pervasive lack of trust and fear of failure. Increasingly we can fake progress and results. Increasingly bullshit has taken the place of actual results. Even better, bullshit results are safe and entail far less risk of fuck ups. They are mostly upside without the downside, plus bullshit is in vogue! It has the benefit of sounding better than anything we are likely to achieve, and doesn’t carry the risks of real work. The end result is deep-seated corrosive forces unleashed within our institutions that are eating away at them from the inside.

The over-management is joined at the hip with a lack of leadership and direction. It is the twin force for professional drift and institutional destruction. Working at an under-led institution is like sleepwalking. Every day you go to work basically making great progress at accomplishing absolutely nothing of substance. Everything is make-work and nothing is really substantive, yet you have lots to do because of management oversight and the no-fuck-up rules. You make up results and produce lots of spin to market the illusion of success, but there is damn little actual success or progress. The utter and complete lack of leadership and vision is understandable if you recognize the prime motivation of fear. To show leadership and vision requires risk, and risk cannot take place without failure, and failure courts scandal. Risk requires trust, and trust is one of the things in shortest supply today. Without the trust that allows a fuck up without dire consequences, risks are not taken. Management is now set up to completely control and remove the possibility of failure from the system.

Leadership and learning are indispensable to each other.

― John F. Kennedy

The capacity to achieve rewards and accomplishment without risk is incompatible with experience. Every day I go to work with the very explicit mandate to do what I’m told. The clear message every day is never ever fuck up. Any fuck ups are punished. The real key is don’t fuck up, don’t point out fuck ups, and help produce lots of “alternative results” or “fake breakthroughs” to help sell our success. We all have lots of training to do so that we make sure that everyone thinks we are serious about all this shit. The one thing that is absolutely crystal clear is that getting our management stuff correct is far more important than ever doing any real work. As long as this climate of fear and oversight is in place, the achievements and breakthroughs that made our institutions famous (or great) will be a thing of the past. Our institutions are all about survival and not about achievement. This trend is replicated across society as a whole; progress is something to be feared because it unleashes unknown forces, potentially scaring everyone. The resulting fear undermines trust, and without trust the whole cycle reinforces itself.

Leaders must be close enough to relate to others, but far enough ahead to motivate them.

― John C. Maxwell

Along with progress, leadership is also sacrificed at the altar of fear. Anything out of the ordinary is completely suppressed in the current environment. The ordinary can be managed and controlled; it is a known quantity. Progress and innovation produce unusual things that might have unintended consequences, making their management difficult. Something unusual is more likely to produce a fuck up, and therefore it must be stopped to assure the survival imperative. Of course, innovation, progress, and the unusual can also be wonderful and produce the breakthroughs all of us celebrate. The problem is that this cannot take place without risk and the potential for things to get fucked up. This also holds for people, who must also be ordinary; the unusual people who might lead us in new directions are to be feared and controlled. The unusual is dangerous and feared. Leaders are unusual, so they too are reviled.

Start with the end in mind.

― Stephen R. Covey

A big piece of the puzzle is the role of money in perceived success. Instead of other measures of success, quality and achievement, money has become the one-size-fits-all measure of the goodness of everything. Money serves to provide the driving tool for management to execute its control and achieve broad-based compliance. You only work on exactly what you are supposed to be working on. There is no time to think or act on ideas, learn, or produce anything outside the contract you’ve made with your customers. Money acts like a straitjacket for everyone and serves to constrict any freedom of action. The money serves to control and constrain all efforts. A core truth of the modern environment is that all other principles are ruled by money. Duty to money subjugates all other responsibilities. No amount of commitment to professional duties, excellence, learning, and your fellow man can withstand the pull of money. If push comes to shove, money wins. The peer review issues I’ve written about are testimony to this problem; excellence is always trumped by money.

One of the things most acutely impacted by all of this is the capacity for strategic thought, work or action. In the wake of the lack of trust and the degree of control, the ability to do big things is almost completely lost. All work becomes unremittingly tactical in nature. Big ideas are lost and people can only envision committing to small things. Big ideas require a level of trust that cannot be summoned or supported. An element in this lack of trust is an obsession with reporting and careful attention to progress by the management. We see rather extensive draws of information from the depths of organizations to check on whether money is being spent properly. The entire management apparatus is engaged in getting information, but nothing is done with it. It is only used to check up on things; the whole of the management is devoted to attending to the trustworthiness of those working. The good that management might do is sacrificed, and leadership is completely absent. Without some symmetry of trust, the whole idea of leadership is vacant.

What the hell is to be done about all of this? How do we recapture progress and reject fear? How do we embrace leadership and harness management as a force for good rather than decline and decay?

I really don’t know the answer to any of these questions, but I can propose a few things that might resist these issues. Hopefully a large number of people will join together in prizing progress enough to reject fear as a prime societal motivator. The desire to live and grow will overthrow the fear of change. The forces of fear have the potential to undo so much of the good of the modern World. Those who prize modernity and the benefits of freedom and progress will reject fear as a motivator. Realizing that fear emboldens hatred and reduces the potential for good is a first step. We must recognize and reject our so-called leaders who utilize fear as a prime motivation. Every time a leader uses fear to further their agenda, we take a step backward. One of the biggest elements in this backwards march is thinking that fear and danger can be managed. Danger can only be pushed back, but never defeated. By controlling it in the explicit manner we attempt today, we only create a darker, more fearsome danger in the future that will eventually overwhelm us. Instead we should face our normal fears as a requirement of the risk progress brings. If we want the benefits of modern life, we must accept risk and reject fear. We need actual leaders who encourage us to be bold and brave instead of using fear to control the masses. We need to quit falling for fear-based pitches and hold to our principles. Ultimately our principles need to act as a barrier to fear becoming the prevalent force in our decision-making.

People who don’t take risks generally make about two big mistakes a year. People who do take risks generally make about two big mistakes a year.

― Peter F. Drucker