Thinking about Flux Splitting for General Riemann Solvers

The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

— Sir Arthur Stanley Eddington


Rather than continue to talk about global issues I'll get back in the weeds this week and get into a technical conversation. I don't really know where this is going, so this is a bit of stream of consciousness in thinking about a topic and developing a new idea. The inspiration for this came from my talk at the Multimat 2017 meeting, and considering how to fix problems I've seen with rarefactions. As a reminder, I had seen that some solvers produce solutions with small but consistent violations of the second law of thermodynamics in their solution of expansions (i.e., rarefactions). Nothing catastrophic is observed, but it is a troubling failure from nominally robust solvers. This study was itself motivated by the observation of a systematic failure of these solvers to produce convergent solutions to very strong rarefactions, and by a desire to examine what sort of character the solutions have under more ideal circumstances.

A few points are worth making about the solvers used and how they have been tested in the past. Mostly I've worked with finite volume codes, sort of the gold standard of production codes for fluid dynamics. These codes are very reliable and well understood. For the most part, the focus of test problems has been shock waves, where bad methods can result in catastrophic instability for the codes. Rarefactions are far less studied and tested because they are generally benign and don't threaten the stability of the code. As a result, rarefaction-focused test problems are mostly missing. We do know that expansions can produce unphysical solutions for Eulerian codes at critical points (where the characteristic speeds go to zero, and numerical dissipation may vanish). Bad solutions can arise with strong rarefactions, but no one had previously pointed out that these solutions actually violate the second law of thermodynamics. The end result is a relative ignorance about the shortcomings of the codes, and a potentially important opening for improvement of the methods.

Von Neumann told Shannon to call his measure entropy, since "no one knows what entropy is, so in a debate you will always have the advantage."

― Jeremy Campbell

One of my suggestions about examining this problem is to look at the solution to these problems with a wider variety of codes. This would include codes that do not use pure finite volume methods. One example is methods based on flux differencing, where the code can go to formally high-order accuracy for nonlinear problems. Control volume codes are limited to second-order accuracy, and the leading nonlinear truncation error can produce the entropy-condition-violating energy transfer in expansions, C f_{uu} u_x u_{xx}. For almost every control volume code these terms are dissipative in shock waves, thus providing additional stability to the codes in this dangerous configuration. The opposing reaction in expansions can go unnoticed because any imperfections in the solution are modulated by the physics of the problem. For this reason, the failing has gone completely unnoticed for decades. A reasonable question to explore is whether codes based on different design principles exhibit the same problems, or produce solutions that satisfy the second law of thermodynamics more uniformly.

An important technique in defining flux difference schemes of high-order accuracy (more than second-order) is flux splitting. The core idea is that approximating the fluxes to high order can produce higher formal accuracy than approximating the variables. The question is whether this produces solutions of a fundamentally different character with respect to entropy. Simply put, a flux splitting is a decomposition of the fluxes being differenced into negative- and positive-moving contributions. These fluxes are then differenced and then recomposed into the total flux. The splitting techniques add the directionality to the approximation needed for numerical stability, associated with upwinding the approximation. The flux splitting techniques are closely related to Riemann solvers, but heretofore only include a small number of simple linearized Riemann solutions. I'd like to explore a greater generalization of this concept, including flux splittings based on exact Riemann solvers.

The Riemann problem is the exact solution to the interaction of two discontinuous states described by hyperbolic equations. This analytic information can be used to develop numerical methods that encode this physically relevant information into the solution. In terms of numerical methods, the Riemann solution is a generalization of the principle of upwinding, where the physical direction of propagation is taken into account. The first person to describe this approach to numerical methods was S. K. Godunov in 1959. Godunov's method was first-order accurate and used the exact solution to the Riemann problem. It was soon realized that one only needed to approximate the Riemann solution. This became a key development in the methods over time and allowed great progress. Over time it was realized that it also allowed great flexibility.

In science if you know what you are doing you should not be doing it. In engineering if you do not know what you are doing you should not be doing it. Of course, you seldom, if ever, see the pure state.

– Richard Hamming

A simple Riemann solver can be defined by linearizing the problem, f(u_l,u_r) = \frac{1}{2} \left[ f_l + f_r \right] - \frac{1}{2} R | \lambda | L \left( u_r - u_l \right). The quantity f is the nonlinear flux, and u_l, u_r are the states to the left and right of the interface. The dissipation is defined by the eigen-decomposition of the flux Jacobian, \partial_u f = A = R \lambda L. This decomposition is composed of the right and left eigenvectors, R and L, and the eigenvalues, \lambda. The eigenvalues are the characteristic velocities, which for gas dynamics are u-c, u, u+c, with u being the velocity and c the sound speed. This basic decomposition is the basis of flux splitting techniques.
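To make the linearized solver concrete, here is a minimal sketch in Python (my own illustration, not code from any production solver). The flux and jacobian callables are assumptions supplied by the user, and the Jacobian is frozen at a crude arithmetic-average state rather than the Roe average a careful implementation would use.

```python
import numpy as np

def linearized_flux(u_l, u_r, flux, jacobian):
    """Linearized (Roe-type) interface flux:
    f(u_l, u_r) = 0.5*(f_l + f_r) - 0.5 * R |Lambda| L (u_r - u_l),
    with the Jacobian frozen at a simple arithmetic average state."""
    u_bar = 0.5 * (u_l + u_r)                 # crude average state (not Roe's recipe)
    lam, R = np.linalg.eig(jacobian(u_bar))   # eigenvalues and right eigenvectors
    L = np.linalg.inv(R)                      # left eigenvectors
    diss = R @ np.diag(np.abs(lam)) @ L       # the dissipation matrix R |Lambda| L
    return 0.5 * (flux(u_l) + flux(u_r)) - 0.5 * diss @ (u_r - u_l)

# Burgers' equation, f(u) = u^2/2, treated as a one-component "system".
f_burgers = lambda u: 0.5 * u**2
A_burgers = lambda u: np.array([[u[0]]])
print(linearized_flux(np.array([2.0]), np.array([0.5]), f_burgers, A_burgers))  # -> [2.0], pure upwinding
```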

The basic flux splitting takes the flux and decomposes it into right- and left-moving pieces, f(u) = f(u)^- + f(u)^+ . One way to do this is to choose a velocity, \alpha > 0 , and create contributions where f(u)^+ = \frac{1}{2} \left[ f(u) + \alpha u\right] and f(u)^- = \frac{1}{2}\left[ f(u) - \alpha u\right]. A simple choice of \alpha = \Delta x/\Delta t creates the Lax-Friedrichs flux, the simplest (and most dissipative) Riemann solver. For the general linearized Riemann solver the flux splitting is f(u)^+ = \frac{1}{2}\left[ f(u) + R |\lambda | L u\right] and f(u)^- = \frac{1}{2}\left[ f(u) - R | \lambda | L u\right]. The choice of the left and right states used to evaluate the flux Jacobian defines the flux splitting. For example, if the states are evaluated using Roe's recipe, we get the Roe flux splitting. If we evaluate the eigenvalues in a bounding fashion we get the local Lax-Friedrichs method.
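A minimal sketch of this splitting (again my own Python illustration): the split function returns the two pieces, and the scalar Burgers example below shows how a first-order scheme recombines them at an interface, f_{i+1/2} = f^+(u_i) + f^-(u_{i+1}); higher-order schemes difference each split flux one-sidedly before recombining.

```python
def lf_split(u, flux, alpha):
    """Lax-Friedrichs style splitting: f = f^+ + f^- with
    f^{+/-}(u) = 0.5*(f(u) +/- alpha*u).
    alpha = dx/dt gives the classic Lax-Friedrichs flux; a local bound on
    the characteristic speeds gives the local Lax-Friedrichs variant."""
    f = flux(u)
    return 0.5 * (f + alpha * u), 0.5 * (f - alpha * u)

# Scalar Burgers example: alpha must bound |f'(u)| = |u| for stability.
burgers = lambda u: 0.5 * u * u
u_i, u_ip1, alpha = 2.0, 0.5, 2.0
f_plus, _ = lf_split(u_i, burgers, alpha)     # right-moving piece from the left cell
_, f_minus = lf_split(u_ip1, burgers, alpha)  # left-moving piece from the right cell
print(f_plus + f_minus)                       # the numerical interface flux
```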

Another approach to generating a flux splitting does not use the variables in the expression of the splitting, and only uses the fluxes. In this case the expressions are developed in terms of the sign of the eigenvalues/characteristic velocities. The splitting then works only as a scaling by the eigenvector decomposition of the flux Jacobian. The expressions are somewhat simplified, as f^+ = \frac{1}{2} \left[ f + R\, \mbox{sign}(\lambda) L f \right] and f^- = \frac{1}{2} \left[ f - R\, \mbox{sign}(\lambda) L f \right]. We note in passing that a smooth or soft version of the sign function might be extremely useful in this type of splitting, introducing a continuously differentiable function (https://wjrider.wordpress.com/2017/03/24/smoothed-operators/). By the same token, the absolute value used in the usual flux splitting approach could also be smoothed to similar effect. We need to take care in our choices to assure that the accuracy of the resulting numerical method is not negatively impacted. We run into some very big problems when we want to generalize to other Riemann solvers. Examples of these solvers are the HLL family of solvers, and the most classical Riemann solver, the exact solver or close approximations to it (e.g., a single iteration of the Newton's method used in the exact solver). How can these important methods be utilized in flux splitting methods? For very strong wave interactions these classes of methods are extremely valuable, and they cannot presently be used effectively in flux splitting.
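As a sketch of what this flux-only splitting might look like in code (my own construction, with the smoothing scale eps as a tunable assumption), using a smooth surrogate for the sign function so that the splitting stays continuously differentiable:

```python
import numpy as np

def smooth_sign(lam, eps=1e-6):
    """One possible smooth surrogate for sign(lam); eps sets the smoothing scale."""
    return lam / np.sqrt(lam * lam + eps * eps)

def sign_split(u, flux, jacobian, eps=1e-6):
    """Flux-only splitting, f^{+/-} = 0.5*(f +/- R sign(Lambda) L f),
    with the hard sign replaced by its smoothed version."""
    lam, R = np.linalg.eig(jacobian(u))
    L = np.linalg.inv(R)
    S = R @ np.diag(smooth_sign(lam, eps)) @ L   # R sign(Lambda) L
    f = flux(u)
    return 0.5 * (f + S @ f), 0.5 * (f - S @ f)
```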

 Nature not only suggests to us problems, she suggests their solution.

— Henri Poincaré

We can start with the simpler case of the HLL type of flux, which has an algebraic description. The HLL flux is defined using the space-time diagram by integrating the equations to derive a flux. The simplest form of the flux uses bounds for the wave speeds and neglects all of the structure inside the Riemann fan, resulting in a simple closed-form expression for the flux, f_{lr} = \left[a_r f_l - a_l f_r + a_l a_r \left( u_r - u_l \right)\right]/(a_r - a_l). The flux is quite simple, but dependent on the estimates for the smallest and largest wave speeds in the system. The left wave speed, a_l, is the smallest wave speed and is bounded above by zero (i.e., it is taken to be non-positive). The right-most wave speed, a_r, is bounded below by zero. The HLL flux has the benefit of reducing to the simple upwind flux for the system if all the wave speeds are either negative or positive. For a flux splitting we need to take this apart into negative- and positive-moving pieces for the purposes of splitting nearby fluxes, as we did with the Roe or Lax-Friedrichs flavors.
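A sketch of the HLL flux itself (Python, function and argument names my own); the wave-speed estimates a_l and a_r are assumed to come from whatever bounding procedure the code uses, and are clamped to the required signs:

```python
def hll_flux(u_l, u_r, flux, a_l, a_r):
    """HLL interface flux with wave-speed bounds a_l <= 0 <= a_r:
    f_lr = [a_r f_l - a_l f_r + a_l a_r (u_r - u_l)] / (a_r - a_l).
    When all waves move one way this reduces to pure upwinding."""
    a_l = min(a_l, 0.0)   # left estimate must be non-positive
    a_r = max(a_r, 0.0)   # right estimate must be non-negative
    f_l, f_r = flux(u_l), flux(u_r)
    return (a_r * f_l - a_l * f_r + a_l * a_r * (u_r - u_l)) / (a_r - a_l)
```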

The flux splitting can be defined almost by inspection. The positive flux is f^+ = (a_r f - a_l a_r u) / (a_r - a_l). The negative flux is f^- = (- a_l f + a_l a_r u) / (a_r - a_l). This is a wonderfully simple result, and meets all the basic requirements for a flux splitting. Unfortunately, the HLL flux is extremely dissipative, thus lacking some degree of practical utility. Still we expect this flux splitting to be quite robust, especially for strong waves, with the proviso that the wave speed estimates bound the physical wave speeds. This is a much more delicate estimate than usually recognized. The case of a reflected wave can nonlinearly produce wave speeds that exceed those in the initial data.
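In code the splitting is equally direct (again my own sketch, assuming a_l <= 0 <= a_r); it is easy to check that f^+ + f^- = f, and that evaluating f^+ from the left state and f^- from the right state recovers the HLL interface flux above.

```python
def hll_split(u, flux, a_l, a_r):
    """HLL-based flux splitting:
    f^+ = ( a_r f - a_l a_r u) / (a_r - a_l)
    f^- = (-a_l f + a_l a_r u) / (a_r - a_l)"""
    f = flux(u)
    denom = a_r - a_l
    return (a_r * f - a_l * a_r * u) / denom, (-a_l * f + a_l * a_r * u) / denom
```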

The harder case is the class of exact Riemann solvers that are defined algorithmically and do not have a closed form. After using an exact Riemann solver we do have a set of initial left and right states, and the resolved state at the centering point x/t=0. If we desire a flux splitting, it needs to be defined in terms of these variables. The trick in this endeavor is choosing an algebraic structure to help produce a workable flux splitting technique. We build upon the experience of the HLL flux partially because we can incorporate the knowledge arising from the exact solution into the algebraic structure to good effect. In particular, the nature of the one-sided differencing can be reproduced effectively. This requires the wave speed bounds to use the interior states of the solution.

The exact flux is different from the HLL flux, and this difference will be captured by changing the dissipation vector in the flux. Our chosen structure is a flux defined by f_{lr} = \left[ a_r f_l - a_l f_r - D \left( u_r - u_l \right) \right] / (a_r - a_l). If we can derive the form for D our work will be done. The positive flux is f^+ = (a_r f + D u) / (a_r - a_l). The negative flux is f^- = (- a_l f - D u) / (a_r - a_l). Now we just have a little bit of algebra to arrive at our final expression. The math is nice and straightforward, D = \left[ a_r f_l - a_l f_r - (a_r - a_l) f_{lr} \right] / (u_r - u_l). A couple of comments are needed at this point. When the states become equal, u_l = u_r, the expression for D becomes ill defined. Fortunately, this is exactly where the linearized flux splitting approaches or HLL would be ideal.
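A sketch of how this might look in practice (my own illustration): recover D from the Riemann-solver flux, falling back to the HLL value -a_l a_r where the states coincide, consistent with the remark above that HLL or the linearized splitting is the natural choice there, and then split any nearby flux with it.

```python
import numpy as np

def dissipation_from_flux(u_l, u_r, f_l, f_r, f_lr, a_l, a_r, tol=1e-12):
    """Recover the dissipation implied by an interface flux f_lr from an
    exact (or iterated approximate) Riemann solver:
    D = [a_r f_l - a_l f_r - (a_r - a_l) f_lr] / (u_r - u_l),
    falling back to the HLL value -a_l*a_r where u_r - u_l is tiny."""
    du = np.asarray(u_r - u_l, dtype=float)
    safe = np.where(np.abs(du) > tol, du, 1.0)   # guard the division
    D = (a_r * f_l - a_l * f_r - (a_r - a_l) * f_lr) / safe
    return np.where(np.abs(du) > tol, D, -a_l * a_r)

def split_with_D(u, f, D, a_l, a_r):
    """Split a pointwise flux using the recovered dissipation:
    f^+ = (a_r f + D u)/(a_r - a_l),  f^- = (-a_l f - D u)/(a_r - a_l)."""
    denom = a_r - a_l
    return (a_r * f + D * u) / denom, (-a_l * f - D * u) / denom
```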

The secret to being wrong isn’t to avoid being wrong! The secret is being willing to be wrong. The secret is realizing that wrong isn’t fatal.

― Seth Godin

Godunov, S. K. “A finite difference method for the computation of discontinuous solutions of the equations of fluid dynamics.” Sbornik: Mathematics 47, no. 8-9 (1959): 357-393.

Van Leer, Bram. “Flux-vector splitting for the Euler equations.” In Eighth international conference on numerical methods in fluid dynamics, pp. 507-512. Springer Berlin/Heidelberg, 1982.

Harten, Amiram, Peter D. Lax, and Bram Van Leer. "On upstream differencing and Godunov-type schemes for hyperbolic conservation laws." In Upwind and High-Resolution Schemes, pp. 53-79. Springer Berlin Heidelberg, 1997.

Shu, Chi-Wang, and Stanley Osher. "Efficient implementation of essentially non-oscillatory shock-capturing schemes, II." Journal of Computational Physics 83, no. 1 (1989): 32-78.

Jiang, Guang-Shan, and Chi-Wang Shu. "Efficient implementation of weighted ENO schemes." Journal of Computational Physics 126, no. 1 (1996): 202-228.

Quirk, James J. “A contribution to the great Riemann solver debate.” International Journal for Numerical Methods in Fluids 18, no. 6 (1994): 555-574.


Multimat 2017: Where did all the New Ideas go?

Science is what we have learned about how to keep from fooling ourselves.

― Richard Feynman

Last week was another trip to the Multimat conference, a biannual meeting of scientists who solve the equations of multiple material flows under highly energetic and compressible conditions. I've always greeted the meeting with great enthusiasm and returned to work bristling with new ideas and inspiration. In many ways this community is the tip of the intellectual spear for modeling and simulation capability. It also marks an anniversary of sorts, four years of blogging. My very first post here was about the San Francisco edition of the meeting that coincided with my 50th birthday (https://wjrider.wordpress.com/2013/09/13/thoughts-about-multimat2013/). Two years ago we had a wonderful meeting in Würzburg, Germany (https://wjrider.wordpress.com/2015/09/11/multimat2015-a-biannual-festival-on-computing-compressible-multiple-materials/). Every meeting was wonderful and this was no exception, except in one very important and perhaps essential regard: the meeting seemed devoid of the usual exciting intellectual inspiration. What happened to the big ideas that flowed so easily in every previous meeting? Is it my imagination, or have the exciting new ideas dried up?

Do stuff. be clenched, curious. Not waiting for inspiration’s shove or society’s kiss on your forehead. Pay attention. It’s all about paying attention. attention is vitality. It connects you with others. It makes you eager. stay eager.

― Susan Sontag

This assessment might seem rather harsh, but upon reflecting on the previous meetings, it holds up under considerable scrutiny. Each previous meeting was full of moments where you are watching someone else's talk and thinking, "I wish I'd thought of that, this is brilliant." This is exactly what makes conferences so wonderful and important to attend; you get to cherry-pick people's best ideas, accumulated at great effort, all at once. In the moment these ideas seem like Athena springing fully formed from Zeus' brow! Your colleagues get to look like the geniuses they are and present their most creative and ingenious thoughts in an intellectual banquet (https://wjrider.wordpress.com/2014/06/13/why-do-scientists-need-to-attend-conferences/, https://wjrider.wordpress.com/2014/10/27/a-holistic-view-of-attending-conferences/). The reason for attending conferences isn't to give talks; it is to learn new things taught by the smartest people you know. It is to meet and let ideas breed openly over laughter, food and drinks. You give talks as an act of repayment for the knowledge you are granted by being in the audience. Giving talks is pretty low on the list of reasons, but not in the mind of our overlords, which starts to get at the problems I'll discuss below. Given the track record of this meeting my expectations were sky-high, and the lack of inspiring ideas left me slightly despondent.

A few more thoughts about the meeting are worth pointing out before getting to the dialog about fresh ideas, their importance and postulates for their recent absence. The meeting is attended by a collection of computational scientists (mathematics, physics, engineering, ...) dominated by the nuclear "club". This means Americans, French and British with a smattering of Russians and Chinese – who couldn't come this year for undisclosed reasons. These scientists for the most part work at their respective nations' nuclear weapons labs. Occasional others attend, like Israelis (an unacknowledged member of the club) along with a handful of Czechs, Italians, and Germans. As such the meeting serves as a proverbial checkup on the intellectual health of this important area of science at the West's nuclear weapons labs. This year's checkup should give everyone pause: the state of health is declining. There is a real lack of creative energy surrounding the heart of our most important codes. Many important codes are built around a powerful hydro-solver that produces accurate, physically relevant solutions to the multi-material "hydrodynamic" equations. Previous meetings have seen a healthy resurgence of new ideas, but that upswing seems to have come to a staggering halt. These labs have also provided a deep well of inspired research that has benefited the broader scientific community including weather, climate, astrophysics and a broad swath of engineering use of computation.

In my opinion the reasons for this halt in creative energy are simple and straightforward. The foolhardy worldwide push for exascale computers is sucking the air out of the room. It is gobbling up all the resources and attention, leaving nothing for new ideas. This completely obsessive and unwise focus on the hardware is attempting to continue – the already dead – Moore's law. This push is draining the community of vitality, resources and focus. The reasons for the push are worth considering because they help define the increasingly hostile nature of the modern world toward science. The computers being built for the future are abysmal to use, and the efforts to move our codes to them are sucking all the energy from the Labs. Nothing is left for creative work; nothing is left for new ideas. Simply put, the continued use of old ideas is hard enough when you add these generally unusable computers to the mix. The reason is simple; the new computers completely suck. They are true monstrosities (in the classic definition of the word) and complete pieces of shit as scientific hardware. They are exactly the computers we don't want to use. The price of forcing them down our throats is the destruction of research that isn't associated with simply making these awful computers work. Worse yet, the return on the massive investment of effort will be vanishingly small in terms of added modeling and simulation capability.


As noted, this whole direction is a foolish attempt to breathe life into the already rigid corpse of Moore's law, now dead at every scale of computing and already a decade deceased at the level of computer chips – note that the death of Moore's law and the ascendency of cell phones are strongly correlated, and that probably is not a coincidence. The truth of our real performance on computers is far more dire and damning of this entire effort. We have been getting an ever-lower proportion of the potential performance of our computers for 25 years. Each computer has a peak performance measured on silly hardware-friendly benchmarks that no one gives a flying fuck about (the dense linear algebra LU decomposition, Linpack). This silly and useless benchmark is how we crown the fastest computer! Our actual code performance on these machines is truly atrocious and gets worse every year. The dirty little secret is that it's been getting ever worse every year. It was god-awful 20 years ago, and it has just gotten worse. Zero is a pretty good approximation to the proportion of the performance we get – generally much less than one percent. We mindfully ignore the situation just like one would ignore a cancer threatening to devour our lives. The attitude is generally, "look away, nothing to see here". The exascale program is that cancer metastasized.

Part of the discussion about exascale needs to acknowledge the nature of choices and priorities in research. In isolation, the exascale program is an unambiguous good; it would be genuinely awesome to have – usable – exascale computers (https://wjrider.wordpress.com/2014/09/19/what-would-we-actually-do-with-an-exascale-computer/). This good needs to be weighed in terms of its cost and the impact of alternatives. It needs to be viewed through the lens of reality too. If one looks at the raw cost, the opportunity cost and the collateral damage, we can see that the exascale program is a massively negative force in science (https://wjrider.wordpress.com/2016/06/27/we-have-already-lost-to-the-chinese-in-supercomputing-good-thing-it-doesnt-matter/, https://wjrider.wordpress.com/2016/05/04/hpc-is-just-a-tool-modeling-simulation-is-what-is-important/, https://wjrider.wordpress.com/2016/10/19/why-china-is-kicking-our-ass-in-hpc/). In isolation, without considering anything else, it is a clear positive. In the context of lost opportunities and effective use of available resources, the program is an unmitigated disaster. We will all be poorer for it as it lays waste to potential breakthroughs we will be denied in its wake. In today's world we talk about things in isolation, free of the nuance and trade spaces that would make for a more robust and responsible conversation. Our leaders are irresponsible in the extreme for taking us down this path with no real discussion or any debate taking place. The message in the trenches is "do what you're paid to do and quit asking questions".

The really dirty secret is that chasing exascale as a route to scientific discovery is simply bullshit of the highest and most expensive order. We would be far better served by simply figuring out how to use the hardware we already have. Figuring out how to efficiently use hardware we have had for decades would be a difficult and worthy endeavor. The punch line is that we could get orders of magnitude in improved performance out of the hardware we've been using for decades. Simply figuring out how to get our codes working more efficiently on the computers we already have would meet most scientific goals without eviscerating the rest of computational science in the process. Instead we chase goals that are utterly meaningless. In the process we are destroying the research that has true and lasting value. The areas being ignored in the push for exascale have the capacity to provide far more scientific capability than even the most successful exascale program could possibly deliver. This brings me back to the meeting in Santa Fe and the lack of energy and exciting ideas. In the past the meeting has been a great survey of the active work from a creative and immensely talented group of people. As such this meeting is the proverbial canary in the coal mine. The ideas are dying right in front of our eyes.

This outcome is conflated with the general lack of intellectual vigor in any public discourse. The same lack of intellectual vigor has put this foolish exascale program in place. Ideas are viewed as counter-productive today in virtually every public square. Alarmingly, science is now suffering from the same ill. Experts and the intellectual elite are viewed unfavorably and their views held in suspicion. Their work is not supported, nor are projects and programs dependent on deep thinking, ideas or intellectual labor. The fingerprints of this systematic dumbing down of our work have reached computational science, and are reaping a harvest of poisoned fruit. Another sign of the problem is the lack of engagement of our top scientists in driving new directions in research. Today, managers who do not have any active research define new directions. Every year our managers' work gets further from any technical content. We have the blind leading the sighted, telling them to trust that they can see where we are going. This problem highlights the core of the issue; the only thing that matters today is money. What we spend the money on, and the value of that work to advance science, is essentially meaningless.

Effectively we are seeing the crisis that has infested our broader public sphere moving into science. The lack of intellectual thought and vitality pushing our public discourse to the lowest common denominator is now attacking science. Rather than integrate the best in scientific judgment into our decisions on research direction, it is ignored. The experts are simply told to get in line with the right answer or be silent. In addition, the programs defined through this process then feed back to the scientific community, savaging the expertise further. The fact that this science is intimately connected to national and international security should put a sharper point on the topic. We are caught in a vicious cycle, and we are seeing the evidence in the hollowing out of good work at this conference. If one is looking for a poster child for bad research directions, the exascale programs are a good place to look. I'm sure other areas of science are suffering through similar ills. This global effort is genuinely poorly thought through and lacks any sort of intellectual curiosity.

Moving our focus back to exascale provides a useful case study of what is going wrong. We see that programs are defined by "getting funding" rather than what needs to be done or what should be done. Arguments for funding need to be as simple as possible, and faster computers are naïve enough for unrefined people to buy into. It sounds good, and technically unsophisticated people buy it hook, line and sinker. Computers are big, loud and have lots of flashing lights to impress managers, politicians and business people who know no better. Our scientists have been cowed into compliance and simply act happy to get money for doing something. A paycheck beats the alternative, and we should feel happy that we have that. The level of inspiration in the overall approach has basically fallen off a cliff, and new ideas are shunned because they just make things complicated. We are left with the least common denominator as the driving force. We have no stomach for nuance or subtlety.

Priority is placed on getting our existing codes working on the new super-expensive computers. The up-front cost of these computers is the tip of the proverbial cost iceberg. The explicit cost of the computers is their purchase price, their massive electrical bill and the cost of using these monstrosities. The computers are not the computers we want to use; they are the ones we are forced to use. As such the cost of developing codes on these computers is extreme. These new computers are immensely unproductive environments. They are a huge tax on everyone's efforts. This sucks the creative air from the room and leads to a reduction in the ability to do anything else. Since all the things being suffocated by exascale are more useful for modeling and simulation, the ability to actually improve our computational modeling is hurt. The only things that benefit from the exascale program are trivial and already exist as well-defined modeling efforts.

Increasingly everything is run through disconnected projects that are myopic by construction. The ability to do truly unique and groundbreaking science is completely savaged by this approach to management. Breakthroughs are rarely "eureka" moments where someone simply invents something completely new. Instead, most good research is made through connections to other good research. Conferences are great incubators for these connections. Well-defined and proven ideas are imported and redefined to make contributions to a new area. This requires people to work across discipline boundaries, and learn about new things in depth. People need to engage deeply with one another, which is similarly undermined today by project management and the information security focus. The key thing is exposure to new and related areas of endeavor and basic learning. The breakthroughs come episodically and do not lend themselves to the sort of project management in vogue today.

It isn't like I came back with nothing. There were a couple of new things that really fall into the category of following up. In one case there was a continuation of a discussion of verification of shock tube problems with someone from Los Alamos. The discussion started in Las Vegas at the ASME VVUQ meeting, and continued in Santa Fe. In a nutshell, we were trying to get cleaner verification results by dividing the problem into specific regions associated with a particular solution feature and the expectation of different rates of convergence for each. We found something unexpected in the process that doesn't seem to follow theoretical expectations. It's worth some significant follow-up. A mysterious result is always something worth getting to the bottom of. The second bit of new intellectual blood came in direct response to my talk. I will also freely admit that my contribution wasn't the best. I haven't had the free energy at work to energize my work; the same exascale demon is sucking my intellectual lifeblood out. I simply reported on a heretofore unreported structural failing of solvers. In summary, we find systematic, but small, violations of the second law of thermodynamics in rarefactions for modern and classical methods. This shouldn't happen, and violations of the second law lead to unphysical solutions. All of this stems from identifying a brutal problem (https://wjrider.wordpress.com/2017/06/09/brutal-problems-make-for-swift-progress/) that every general-purpose code fails at – what I call "Satan's shock tube" with 12-order-of-magnitude jumps in density and pressure approximating propagation of waves into a vacuum.

We cannot live only for ourselves. A thousand fibers connect us with our fellow men; and among those fibers, as sympathetic threads, our actions run as causes, and they come back to us as effects.

― Herman Melville

Before closing I can say a thing or two about the meeting. None of the issues dulled the brilliance of the venue in Santa Fe, "the City Different". While I was disappointed about not enjoying the meeting in some exotic European venue, Santa Fe is a fabulous place for a meeting. It is both old (by American standards), yet wonderfully cosmopolitan. There is genuine beauty in the area, and our hotel was nearly perfect. Santa Fe boasts exceptional weather in the fall, and the week didn't disappoint. It has a vibrant art community, including the impressive and psychedelic Meow Wolf. Our hotel was the Drury Plaza, placed in a remodeled (and supposedly haunted) old hospital. Two short blocks from the plaza, the hotel is enchanting and comfortable. We all shared two meals each day catered by the hotel's exceptional restaurant. Having meals at the conference together with the participants is optimal and makes for a much-improved meeting compared to going out to restaurants.

We had a marvelous reception on the hotel's rooftop bar, enjoying a typical and gorgeous New Mexico early autumn sunset with flowing drinks, old friends and incredibly stimulating conversation. American laws virtually prohibit government funds paying for alcohol, thus the drinks were courtesy of the British and French governments. One more idiotic prohibition on productivity and common sense that only undermines our collective efforts, especially the creative and collaborative ones. These laws have only gotten more prescriptive and limiting. We no longer can pay for meals for interview lunches and dinners, much less business meetings. None of this is reflective of best practice for any business. The power of breaking bread and enjoying a drink to lubricate human interactions is well known. We only hurt our productivity and capacity to produce valuable work by these restrictions. We are utterly delusional about the wisdom of these policies. All of this only serves to highlight the shortfall in creative energy evident in the lack of exciting new ideas at the meeting.

Never underestimate the power of human stupidity.

– Robert A. Heinlein

 

Testing the limits of our knowledge

The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.

― Daniel J. Boorstin

All knowledge hurts.

― Cassandra Clare

Science is an important source and process for generating knowledge for humanity. Much of science is a core of well-known and well-tested knowledge about the universe. Most of the activity for working scientists is at the boundaries of our knowledge, working to push back our current limits on what is known. The scientific method is there to provide structure and order to the expansion of knowledge. We have well-chosen and understood ways to test proposed knowledge. A method of using and testing our theoretical knowledge in science is computational simulation. Within computational work the use of verification and validation with uncertainty quantification is basically the scientific method in action (https://wjrider.wordpress.com/2016/12/22/verification-and-validation-with-uncertainty-quantification-is-the-scientific-method/). One of the key activities in the accomplishment of our understanding of theory is the determination of uncertainties.

Unfortunately what we call "uncertainty quantification" is only a small piece of the uncertainty that needs to be evaluated in testing knowledge. Too often people only do this narrow part of uncertainty quantification and falsely believe it is sufficient for science.

The basic premise is that we can test our knowledge via the systematic understanding of uncertainties. If we examine the uncertainty in an honest and complete manner, the limits of knowledge can be explored. Some uncertainty can be reduced via greater computational effort, some uncertainty can be reduced with improved modeling, and some uncertainty is irreducible. It is absolutely essential to understand the nature of what we do and don't know systematically. This takes careful and honest study, and the strong tendency of people is to view all uncertainty as bad. This is not necessarily true. Uncertainty is completely unavoidable, and understanding the degree to which this is true can be a great unburdening. If an uncertainty is unavoidable, one can dispense with attempting to reduce it and simply figure out how to live with it.

If the uncertainty is irreducible and unavoidable, the problem with not assessing uncertainty and taking an implied value of ZERO for uncertainty becomes truly dangerous (https://wjrider.wordpress.com/2016/04/22/the-default-uncertainty-is-always-zero/). In this case there is an uncertainty that should be there, and instead of building this knowledge into our work, we mindlessly ignore it. Sometimes it is actually mindfully ignored, which is utterly contemptible. This situation is so common as to be laughable, but actually provides the source for tragedy. Looking at weather and climate provides innumerable situations where ignorance of uncertainty may prove deadly in rather commonly encountered situations. As systems become more complex and energetic, chaotic character becomes more acute and common. This chaotic character leads to solutions that have natural variability. Understanding this natural variability is essential to understanding the system. Building this knowledge is the first step in moving to a capability to control and engineer it, and perhaps, if wise, reduce it. If one does not possess the understanding of what the variability is, such variability cannot be addressed via systematic engineering or accommodation.

Courage doesn’t happen when you have all the answers. It happens when you are ready to face the questions you have been avoiding your whole life.

― Shannon L. Alder

This entire issue is engaged frequently. In the case of experiments for complex systems, the problem arises because of a lack of repeated experiments. Often such systems are complex and expensive, thus tests are carried out once. Even though any thoughtful examination of the situation would conclude that the results of the experiment are likely – almost certainly – highly variable, the experiment is treated as a unique event. Computational simulations of the experiment are viewed the same way; the calculation should try to reproduce this single experiment. This is potentially a recipe for disaster. A healthier point of view would be looking at this experiment as a single instance drawn from a probability distribution. If the experiment were repeated there would be different results. A computational simulation, if truly predictive, would do exactly the same thing. Of course the simulations themselves are designed to compute the average response of such an experiment (based on mean field equations). All of this conspires to create big problems should the experiment actually draw from a low probability outcome (i.e., the tails of the distribution).

To address this systemic issue we need to reframe both the experimental and theoretical practice. If an experiment fails to give repeatable results, we need to work to unveil the statistical nature of the results, especially for complex, high consequence systems. Theoretical models need to have the same properties more mindfully, rather than producing the variability by happenstance (when mean field models produce variability almost serendipitously). Computational simulations follow the theory by construction, and great benefits to our knowledge and understanding would come from more structurally and mindfully building the variability into the theory. Failure to address this issue systematically is an ever-growing limit for science. We have a major scientific gap open in front of us and we are failing to acknowledge and attack it with our scientific tools. It is simply ignored almost by fiat. Changing our perspective would make a huge difference in experimental and theoretical science, and remove our collective heads from the sand about this matter.

Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less.

― Marie Curie

At a deeper and more fundamental level the whole exploration of the true uncertainty is the search for understanding. We seek to define our level of precision for the modeling of something by systematically examining the levels of certainty (by proxy of studying uncertainty). We need to understand the quality of our knowledge, and a complete survey of uncertainty is a path to this end. For computational modeling there are three broad categories for the uncertainty: the model itself, the model's numerical solution, and the experimental fidelity used to grant confidence to the model. Each of these uncertainties can in turn be broken down into more detailed pieces. For example, we must compare to the model itself rather than the error in the solution of the model. It is important to choose some structure for the uncertainty and commit to an estimate of all portions of the structure. One should never take a zero magnitude for the uncertainty of any structural element by ignoring it.

Any fool can know. The point is to understand.

― Albert Einstein

One of the clearest ways to undermine this quest for understanding the boundaries of our knowledge is ignoring a key uncertainty. There are several usual suspects for the willful uncertainty ignorance. Probably the most common uncertainty to be willfully ignorant of is numerical error. The key numerical error is discretization error, which arises from the need to make a continuous problem discrete and computable. The basic premise of computing is that more discrete degrees of freedom should produce a more accurate answer. Through examining the rate at which this happens, the magnitude of the error can be estimated. Other estimates can be had through making some assumptions about the solution and relating the error to the nature of the solution (like the magnitude of estimated derivatives). Other, generally smaller, numerical errors arise from solving systems of equations to a specified tolerance, parallel consistency error and round-off error. In most circumstances these are much smaller than discretization error, but are still non-zero.
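The bookkeeping for the discretization-error piece is simple enough to write down. Here is a minimal sketch (mine, not a full verification workflow) of estimating an observed convergence rate and a Richardson-style error from a scalar quantity computed on three systematically refined grids:

```python
import math

def observed_order(q_coarse, q_medium, q_fine, r):
    """Observed convergence rate p and a Richardson-style error estimate for
    the fine-grid value, given a constant grid refinement ratio r (e.g., 2)."""
    p = math.log(abs(q_coarse - q_medium) / abs(q_medium - q_fine)) / math.log(r)
    err_fine = abs(q_fine - q_medium) / (r**p - 1.0)   # estimated fine-grid error
    return p, err_fine
```

A real verification study would add more grids, regression for the rate, and an uncertainty on the rate itself, but even this crude estimate beats the implicit value of zero.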

Experimental observations are only experience carefully planned in advance, and designed to form a secure basis of new knowledge.

― Sir Ronald Fisher

A second category of uncertainty that is highly prone to being ignored is the experimental variability. Often this is the direct consequence of only doing a single experiment. Rather than knowing the physics of the problem well enough to conclude that the experiment will be highly variable, this is ignored and we endeavor to model the single experiment as a unique, well-determined event. The result of this set of unjustified assumptions is wholesale ignorance of a systematic and irreducible uncertainty. This is truly scientific malpractice. Addressing this shortcoming should be the focus of significant effort experimentally, as well as in modeling and its numerical solution. It is a very large and largely unaddressed issue in science.

In addition to ignoring the intrinsic variability in the experiments, the more classical and even less excusable uncertainty often ignored is measurement error. This error is always present even in cases where the experiment is well posed and utterly reproducible. Measurements are always finite precision and have some degree of error and uncertainty. This finite value should always be reported as part of the experimental comparison even when the uncertainty is small. In a deep sense the small uncertainty is more important because it lends credence to the sense that the data is high quality. Not reporting the uncertainty simply leaves this vague and undefined.

The last area of uncertainty is the modeling uncertainty. In the vast majority of cases this will be the largest source of uncertainty, but of course there will be exceptions. It has three major components: the choice of the overall discrete model, the choice of models or equations themselves, and the coefficients defining the specific model. The first two areas are usually the largest part of the uncertainty, and unfortunately the most commonly ignored in assessments. The last area is the most commonly addressed because it is amenable to automatic evaluation. Even in this case the work is generally incomplete and lacks full disclosure of the uncertainty.

Today in examining modeling uncertainty we most often attack the least important piece systematically, and the more important modeling uncertainties are ignored. The "easy" uncertainty to attack is the coefficients in the model. This can be achieved using well-developed methods such as MCMC (Markov chain Monte Carlo). One can define a set of parameters to be varied and ranges for the variation. The calculations can be repeated using values drawn to efficiently sample the probability space of the calculation and produce the uncertainty. This sampling is done over a very high-dimensional space, and carries significant errors. More often than not the degree of error associated with the undersampling is not included in the results. It most certainly should be.
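Full MCMC calibration is more machinery than fits here, but the forward-propagation half of the exercise is easy to illustrate. This is a plain Monte Carlo sketch (uniform sampling over assumed parameter ranges, with a hypothetical model callable), which also reports the sampling error of the estimate, the piece that too often goes unreported:

```python
import numpy as np

def propagate_parameters(model, ranges, n_samples=1000, seed=0):
    """Plain Monte Carlo propagation of parametric uncertainty: draw parameters
    uniformly over the given ranges, run the model on each draw, and report
    the mean, the spread, and the Monte Carlo standard error of the mean."""
    rng = np.random.default_rng(seed)
    lo = np.array([a for a, _ in ranges])
    hi = np.array([b for _, b in ranges])
    thetas = rng.uniform(lo, hi, size=(n_samples, len(ranges)))
    outputs = np.array([model(theta) for theta in thetas])
    spread = outputs.std(ddof=1)
    return outputs.mean(), spread, spread / np.sqrt(n_samples)
```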

The other two uncertainties are generally larger and thus more important to characterize. Unfortunately neither is amenable to the sort of turnkey, black-box approach the parametric uncertainty allows. As a result these uncertainties are usually completely ignored. These two areas of uncertainty are closely related. Any complex problem can be modeled in a variety of ways. An analyst ends up making innumerable assumptions and choices in the course of modeling a complex problem. One choice is the code used for the analysis and the specific settings within the code. Beyond this there are choices on how the problem is meshed, boundary conditions, initial conditions, submodels to close the equations, and the analysis of results. Each and every one of these choices can produce changes in the results. It is very uncommon to see a problem solved in different or remotely independent ways. As a result the uncertainty from the modeling and analysis is usually completely hidden.

Science, my boy, is made up of mistakes, but they are mistakes which it is useful to make, because they lead little by little to the truth.

― Jules Verne

To truly test our knowledge in a topic we must be open to a full and honest evaluation of the uncertainties. Knowledge and uncertainty are two sides of the same coin. If you don’t know the uncertainty, you don’t know the extent of your knowledge. Too often we only do the assessment that is easy. In this assessment strategy we also implicitly choose uncertainty estimates of ZERO for information we know is uncertain, but difficult to assess. This is a pernicious and dangerous practice. Due diligence and responsibility should dictate that some sort of uncertainty be estimated for all sources. If we cannot estimate these uncertainties in a credible and knowledgeable manner, we have no business in modeling and our experiments are not yielding their full value. The only responsible act in this case would be to produce work that would make such a bounding assessment possible.

Negative results are just what I want. They’re just as valuable to me as positive results. I can never find the thing that does the job best until I find the ones that don’t.

― Thomas A. Edison

The Inspirational Promise of TeD Talks

Your number-one mission as a speaker is to take something that matters deeply to you and to rebuild it inside the minds of your listeners. We’ll call that something an idea.
― Chris J. Anderson

Every September my wife and I attend the local TeDx event here in Albuquerque. It is a marvelous way to spend the day, and leaves a lasting impression on us. We immerse ourselves in inspiring, fresh ideas surrounded by like-minded people. It is empowering and wonderful to see the local community of progressive people together at once listening, interacting and absorbing a selection of some of the best ideas in our community. This year's event was great and as always several talks stood out, particularly Jannell MacAulay (Lt. Col., USAF) talking about applying mindfulness to work and life, and Olivia Gatwood's inspiring poetry about the seemingly mundane aspects of life that speaks to far deeper issues in society. The smallest details are illustrative of the biggest concerns. Both of these talks made me want to think deeply about applying these lessons in some fashion to myself and improving my life consequentially.


That's part of the point of TeD: the talks are part gospel of progress, part marketing of great ideas and part performance art. All of these things have a great use to society in lifting up and celebrating a drive to be better and progress toward a better future. Humanity has immense power to change the world around us for the better. We can look across the globe and witness the collective power of humanity to change its environment. A great deal of this change is harmful or thoughtless, but much of it is a source of wonder. Our understanding of the world around us and the worlds within us has changed our biological destiny.

We have transitioned from an animal fighting for survival during brief violent lives, to beings capable of higher thought and aspiration during unnaturally long and productive lives. We can think and invent new things instead of simply fighting to feed ourselves and reproduce a new generation of humans to struggle in an identical manner. We also can produce work whose only value is beauty and wonder. TeD provides a beacon for humanity's best characteristics along with a hopeful, forward-looking community committed to positive common values. It is a powerful message that I'd like to take with me every day. I'd like to live out this promise with my actions, but the reality of work and life comes up short.

There was a speaker from my employer this year, and there always is. There wasn’t anyone from my former employer, the other major scientific Lab in our state (what was once one of the premier scientific institutions in the World, but that’s a thing of the past). Also noticeable is the lack of support for the local TeD organization by either employer. I’ll grant you that Los Alamos has supported it in the past, but no longer. There’s probably some petty and idiotic reason for the withdrawal of support. My employer, Sandia, doesn’t support it, and hasn’t ever. It looks like our local University doesn’t support it either. I know that Los Alamos did their own local TeD conference and perhaps they thought that was enough TeD for them. That’s the sad best-case scenario, and I don’t know what the full story is.

For Sandia it's not particularly surprising, as it's not exactly a progressive, idea-centered place, and these days no place is anyway. The University should be, but the lack of financial support from the state could explain it (it's a common characteristic of GOP governance to eviscerate universities). It is quite hard for me to express my level of disappointment in these institutions' lack of civic support for progressive thought. It is stark testimony on the current state of affairs where two National Laboratories and a University cannot be supportive of a major source of progressive thought in the community they are embedded within. An active progressive and intellectual community in the areas where these institutions are located should be beneficial for recruiting and retention of progressive and intellectual staff. It is one sign that this sort of long view isn't at work. It is a sign of the times.

TeD talks are often the focus of criticism for their approach and general marketing nature, strongly associated with their performance art character. These critiques are valid and worth considering, including the often superficial nature of how difficult topics are covered. In many ways, where research papers can increasingly be criticized as merely being the marketing of the actual work, TeD talks are simply the 30-second mass market advertisement of big ideas for big problems. Still the talks provide a deeply inspiring pitch for big ideas that one can follow up on and provide the entry to something much better. I find the talks are a perfect opening to learning or thinking more about a topic, or merely being exposed to something new.

Control leads to compliance; autonomy leads to engagement.

– Daniel H. Pink

One prime example is one of my favorite talks of all time by Daniel Pink (https://www.ted.com/talks/dan_pink_on_motivation). This talk is basically a pitch for the book "Drive" and touches only superficially on the topic. The book itself is a distillation of very complex topics. All of this is true, but none of this undermines the value in the ideas. TeD provides a platform to inspire people to do more and get closer to the actual application of the ideas to their lives (not just buy Pink's book, the true cynic's take on the purpose). Interestingly, the managers at work were also reading Pink's book and discussing the ideas therein. The rub was the observation that I could not identify a single thing recommended in Pink's book that made it to the workplace. It seemed to me that the book simply inspired the management to a set of ideals that could not be realized. The managers aren't really in charge; they are simply managing corporate compliance instead of managing in a way that maximizes the performance of their people. The Lab isn't about progress any more; it is about everything but progress. Compliance and subservience have become the raison d'être.

For artists, scientists, inventors, schoolchildren, and the rest of us, intrinsic motivation (the drive to do something because it is interesting, challenging, and absorbing) is essential for high levels of creativity.

– Daniel H. Pink


Intrinsic motivation is conducive to creativity; controlling extrinsic motivation is detrimental to creativity.

–Daniel H. Pink

This deep frustration isn't limited to TeD talks; it applies to almost every source of great advice or inspiration available. Almost every manager I know reads the Harvard Business Review. I read it too. It is full of wonderful ideas and approaches to improving the way we work. It is impossible to see anything ever done with all the great advice or inspiration. My workplace looks like all the "before" case studies in HBR, and more like them every day, not less. Nothing ever recommended happens at work, nothing is tried, nothing changes in the positive direction; it's like we are committed to moving backwards. HBR is progressive in terms of the business world. The problem is that the status quo and central organizing principle today is anti-progressive. Progress is something everyone is afraid of, and the future appears to be terrifying and worth putting off for as long as possible. We see a genuinely horrible lurch toward an embrace of the past along with all its anger, bigotry, violence and fear. Fear is the driving force for avoiding anything that looks progressive.

Management isn’t about walking around and seeing if people are in their offices, he told me. It’s about creating conditions for people to do their best work.

– Daniel H. Pink

Now that I've firmly established the lack of relevance of TeD and progressive thought in my workplace, I can at least appreciate and apply it at a personal level. I'd love for work to reflect a place for genuine progress, but this seems a bridge too far today. Work is a big part of life, and these observations are rather dismaying. Ideally, I'd like a workplace that reflects my own values. The truth of the matter is that this is nearly impossible for a progressive-minded person in America today. Even the bastions of progressive thought like universities are not working well. Society at large seems to be at war with elites, and progressive thought is far more under siege than whites or Christians are. I can ask the serious question, how many atheists are in Congress? How much well-proven and accepted science does our government reject already? Don't get me started on our judicial system, or the war on drugs, both of which focus far more on oppressing minorities than on crime or drug abuse. The bottom line is the sense that we are in a societal backlash against change, so more progress seems to be impossible. We will be fighting to hold onto the progress we've already made.

Still I can offer a set of TeD talks that have both inspired me and impacted my life for the better. They have either encouraged me to learn more, or make a change, or simply change perspective. I'll start with a recent one where David Baron gave us an incredibly inspiring call to see the total eclipse in its totality (https://www.ted.com/talks/david_baron_you_owe_it_to_yourself_to_experience_a_total_solar_eclipse). I saw the talk concluding that I simply had to go, and then I showed it to my wife to convince her. It did! We hopped into the car at midnight the day of the eclipse and drove eight hours to get from Northern Idaho to Eastern Oregon. We got off I-82 at Durkee, finding a wonderful community center with a lawn, and watched it with 50 people from all over the local area plus a couple from Berlin! The totality of the eclipse lasted only two minutes. It was part of a 22-hour day of driving over 800 miles, and it was totally and completely worth every second! Seeing the totality was one of the greatest experiences I can remember. My life was better for it, and my life was better for watching that TeD talk.

Another recent talk really provoked me to think about my priorities. It is a deep consideration of what your priorities are in terms of your health. Are you better off going to the gym, or going to a party or the bar? Conventional wisdom says the gym will extend your life the most, but perhaps not. Susan Pinker provides a compelling case that social connection is the key to a longer life (https://www.ted.com/talks/susan_pinker_the_secret_to_living_longer_may_be_your_social_life). This gets at the disparity between men and women, since women tend to form long, life-affirming friendships with greater ease than men. The talk is backed up by data, and by visits to places where people live long lives. These people live in communities where they are entangled in each other’s lives almost by design. It gets to the priorities associated with health care and self-care, along with the benefit of action. Focusing on your social life is a genuinely beneficial act for prolonging your life.

Our modern computing world is a marvel, but it also has some rather pronounced downsides. In many ways our cell phones are making us far unhappier people. The phones and their apps are designed to grab and demand our attention. They can become sources of deep and pervasive anxiety. This is exactly what they are designed to do. As Adam Alter explains, an entire industry is set up to get as much of our attention as possible because our attention equals money, big money (https://www.ted.com/talks/adam_alter_why_our_screens_make_us_less_happy). He also explains that it doesn’t have to be like this. The same social engineering that has gone into making the phones so demanding could be harnessed to help us be better. If we balanced the naked profit motive with some measure of social responsibility, we might turn this problem into a benefit. This is a wonderfully inspiring idea; it is also terribly progressive and dangerous to the unfettered capitalism fueling this growing societal crisis.


Love rests on two pillars: surrender and autonomy. Our need for togetherness exists alongside our need for separateness.

– Esther Perel

The power of TeD extends to far deeper personal matters as well. A couple of talks by Esther Perel speak to reframing our love lives (https://www.ted.com/talks/esther_perel_the_secret_to_desire_in_a_long_term_relationship, https://www.ted.com/talks/esther_perel_rethinking_infidelity_a_talk_for_anyone_who_has_ever_loved). Perel defies conventional thought on love, marriage and infidelity, providing a counter-theory to all of these matters. Her first talk is a companion to her first book and tackles the thorny issue of keeping a long-term relationship hot and steamy. It is a challenge many of us have taken on, and no doubt struggled with. This struggle exists for good reasons, and knowing the reasons provides insight into solutions. Perel powerfully explains the problem and speaks to working toward solutions.

The thornier issue of infidelity is the subject of the second talk (and her brand-new book). As before, she tackles the topic from a totally different perspective. Her approach is unconventional and utterly refreshing. The new perspectives provide an alternative narrative for handling this all-too-common human failing. Explaining and understanding the complex roots of this relationship problem can improve our lives. It is an alternative to the moral perspective that has failed to provide any solutions. Among the threads to concentrate on is the relatively new character of modern marriage in the history of humanity, and the consequences of the deep changes in the institution. One of the beauties of TeD is the exposure to fresh perspectives on old ideas alongside completely new ones.

The very ingredients that nurture love (mutuality, reciprocity, protection, worry, and responsibility for the other) are sometimes the very ingredients that stifle desire.

– Esther Perel

Truth and courage aren’t always comfortable, but they’re never weakness.

– Brene Brown

The last talk I’ll highlight today is truly challenging to most of us. Brene Brown is a gifted and utterly approachable speaker presenting a topic that genuinely terrifies most of us: vulnerability (https://www.ted.com/talks/brene_brown_on_vulnerability). Being vulnerable is an immensely valuable characteristic that almost everyone struggles with. Vulnerability is often equated with weakness, but it also means being open and honest. That openness and honesty is the key to being a better person and developing better relationships. In many cases the weakness and honesty are shared only with yourself. In either case vulnerability provides an avenue to connection and an embrace of humanity that both frees you and allows deeper relationships to flourish. The freedom you give yourself allows you to grow, learn and overcome bad experiences.

What would you be glad you did–even if you failed?

– Brene Brown

I always wish that I could act on most of what I hear at a local TeD event, but one must make choices; time and effort are limited. While I do plan to more mindfully apply mindfulness to my life, right now I’ll hedge toward the artistic side of things, if for no other reason than that I usually don’t. I will close by honoring the inspirational gift of Olivia Gatwood’s talk on poetry about seeking beauty and meaning in the mundane. I’ll write a narrative of a moment in my life that touched me deeply.

The Best Gift

A night of enchanting companionship was drawing to a close,

and I was longing for one last kiss before parting

Those early autumn nights are so welcoming,

 

the crisp nights promised, but not yet arrived,

summer still alive, but fading

I hadn’t even bothered to fully dress for the goodbye,

 

Conventions and neighbors be damned

It was a warm evening and my skin wanted to drink it in,

drink her in too, one last time

 

We slowly made our way out to my driveway

talking, still flirting, our banter unabated

The moon full, bright, and peeking between the gaps in the single cloud

 

adorning the sky as it illuminates the night

It will light her way home as a warm beacon

“Good,” I think, “you’ll be safe” on your long drive home

 

We draw close to each other, pressing hard while

savoring the time spent together fun and friendship

with a depth that was unexpected, but welcome

 

You ask, “What would you like for your birthday?”

My mind goes to my elaborate tattoo to adorn me soon,

“I’m already getting what I want for myself”

 

“I always ask for more time,” she said longingly

Her words cut me to the core,

of course, what else would she want?

 

My head spins with the truth revealed by her breathtaking honesty,

with words failing me for a breath or two, … or three

My mind opens with the realization of her precious offering

 

“I just want good memories”

Realization washes over me, she just gave me the best gift I could have hoped for

We kiss deeply and parted until we next renew making good memories

 

 

You are not special; you are replaceable

You are not special. You’re not a beautiful and unique snowflake. You’re the same decaying organic matter as everything else. We’re all part of the same compost heap. We’re all singing, all dancing crap of the world.

– Chuck Palahniuk

This post was inspired by twin events: a comment from a dear friend, and watching the movie “Fight Club” again. This is my 300th blog post here. It’s been an amazing experience; thanks for reading.

If you consider the prospect of retirement and you feel that your place of work does not need you and would not suffer from your departure, you aren’t alone. This is an increasing trend for work today. You are an eminently replaceable cog in the machine, which can be interchanged with another person without any loss to the workplace. Your personal imprint on the products of work is not essential, and someone else could do exactly what you do. If you work in one of the many service-industry jobs, or provide the basic execution of tasks, the work is highly prescribed and you versus someone else doesn’t matter much. If you are reliable, show up and work hard, you are a good worker, but someone else with all the same characteristics is just as good.

What’s measured improves

–Peter F. Drucker

I didn’t always feel this way, but times have changed. I felt this way when I worked at McDonalds for my first job. I was a hard worker, and a kick-ass grill man, opener, closer, and whatever else I needed to be. I became a manager and ultimately the #2 man at a store. Still, I was 100% replaceable and in no way essential; the store worked just fine without me. I was interchangeable with any other hard-working person. It isn’t really the best feeling; you’d like to be a person whose imprint on the World means something. This is an aspiration worth having, and when your work is truly creative, you add value in a way that no one else can replicate.

When I started working almost 30 years ago at Los Alamos, this dynamic felt a lot different. People mattered a lot, and an individual was important. Every individual was important, unique and worth the effort. As a person you felt the warm embrace of an incubator for aspiring scientists. You were encouraged to think of the big picture, and the long term, while learning and growing. The Lab was a warm and welcoming place where people were generous with knowledge, expertise and time. It was still hard work and incredibly demanding, but all in the spirit of service and work with value. I repaid the generosity through learning and growing as a professional. It was an amazing place to work, an incredible place to be, an environment to be treasured, and it made me who I am today.

Never attribute to malevolence what is merely due to incompetence

–Arthur C. Clarke

It was also a place that was out of time. It was a relic. The modern World came to Los Alamos and destroyed it, creating a shadow of its former greatness. The sort of values that made it such a National treasure and one of the greatest institutions could not coexist with today’s culture. The individuals so treasured and empowered by the scientific culture there were relabeled as “butthead cowboys,” troublemakers, and failures. The culture that was generous, long-term in thought, big-picture in view and focused on National service was haphazardly dismantled. Empowerment was ripped away from the scientists and replaced with control. Caution replaced boldness, management removed generosity, all in the name of a formality of operations that removes anything unforeseen from outcomes. The modern world wants assured performance. Today Los Alamos is a mere shadow of itself, stumbling toward the abyss of mediocrity. Witnessing this happen was one of the greatest tragedies of my life.

People who don’t take risks generally make about two big mistakes a year. People who do take risks generally make about two big mistakes a year.

–Peter F. Drucker

Along with assured performance we lose serendipity and discovery. We lose learning and surprises, good and bad. We lose the value of the individual, and the ability of one person to make a positive difference. All of this is to keep one person from making a negative difference, or to avoid mistakes and failures. The removal of mistakes and failures removes the engine of learning and real scientific discovery from the table as well. Each and every one of these steps is directly related to the fear of bad things happening. Every good is the flip side of a bad thing, and when you can’t accept the bad, you can’t have the good either. In the process the individual has been stripped of importance. Everything is process today, and anything bad can supposedly be managed out of existence. No one looks at the downside to this, and the downside is sinister to the quality of the workplace.

Let’s be clear about what I’m talking about. This isn’t about being cavalier and careless. It isn’t an invitation to be dangerous or thoughtless. This is about making an earnest best effort at something, and still failing. This is about doing difficult things that may not succeed, putting your best effort forward even if it falls short. In many ways we have lost the ability to distinguish between good and bad failure, with all failure viewed as bad and punished. We have made the workplace an obsessively cautious and risk-averse place that lacks the soul it once embraced. We have lost the wonder and power of the supremely talented person in the prime of their creative powers to create game-changing things or knowledge.

The core problem is the unwillingness to accept the inevitable risks and failures that come with empowering people. Instead of seeing the risks and failures as a necessary element in enabling success, we have fallen victim to the fiction that we can manage risk and failure out of existence, all while assuring productivity. This is utterly foolish and antithetical to reality. The risks are necessary to strive to achieve difficult and potentially great things. If one is working at the limit of one’s capability the result is frequently failure, and the ensemble of failures paves the way for success. It tells us clearly what does not work, and provides the hard lessons that educate us. Somehow we have allowed the delusion that achievement can be had without risk and failure to creep into our collective consciousness.

Instead of encouraging and empowering our people to take risks while tolerating and learning from failure, we do the opposite. We steer people away from risky work, punish failure and discourage learning the lessons. It is as if we had suddenly become believers in the “free lunch.” True achievement is extremely difficult, and it is powered by the ability to attempt risky, almost impossible things. If failure is not used as an opportunity to learn, people will become disempowered and avoid the risks. This in turn will kill achievement before it can even be conceived. The entire system would seem to be designed to disempower people and lower their potential for achievement.

The other aspect of this truly vicious cycle is the dismantling of expertise. Expertise is built on the back of years and years of failure. Of course this happens only if the failures are actively engaged as educational opportunities that empower the expert to take more thoughtful risks. These thoughtfully taken risks still need to fail, and perhaps fail most of the time. Gradually the failures of today begin to look like the achievements of yesterday. What we see as a failure today would have been a monumental achievement a decade ago. This is completely built on the back of seeing the failures of yesterday in the right light, and learning the lessons available from the experience.

When we empower people to take risks and grow them into experts, they also provide the knowledge necessary to mentor others. This was a key aspect of my early career experience at Los Alamos. At that time the Lab was teeming with experts who were generous with their time and knowledge. All you had to do was reach out and ask, and people helped you. The experts were eager to share their experience and knowledge with others in a spirit of collective generosity. Today we are managed to completely avoid this with managed time and managed focus. We are trained not to be generous because that generosity would rob our “customers” of our effort and time. The flywheel of the experts of today helping to create the experts of tomorrow is being undone. People are trained neither to ask, nor to provide expertise freely.

What we are moving toward is a system that is less than the sum of its parts. What I started with was a system that added great value to every person, and was effectively far greater than the sum of its parts. The generosity that characterized my early career added immense value to every hour spent at work. Today this entire way of working is being torn apart by how we are managed. People can’t be generous if they have to account for all their time and charge it to a specific customer. The room for serendipity, discovery and the addition of personal value to activities is being removed to satisfy bean counters and small-minded people. We have allowed an irrational fear of one misspent dollar to waste billions of dollars and the productive potential of people’s lives. Worse yet, the whole apparatus erected to produce formal operations is ripping the creative force from the workplace and replacing it with soulless conformity. It matters less and less who we are each day; we are simply replaceable parts in a mindless machine.

I might be tempted to simply end the discussion here, but this conclusion is rather dismal. It is where we find ourselves today. We also know that the state of affairs can be significantly better. How can we get there from here? The first step would be some sort of collective decision that the current system isn’t working. From my perspective, the malaise and lack of effectiveness of our current system is so pervasive and evident that action to correct it is overdue. On the other hand, the current system serves the purposes of those in control quite well, and they are not predisposed to be agents of change. As such, the impetus for change is almost invariably external. It is usually extremely painful because the status quo does not want to be rooted out unless it is forced to be. The circumstances need to demand performance that the current system cannot produce, and as systems degrade this becomes ever more likely.

At the time, my life just seemed too complete, and maybe we have to break everything to make something better out of ourselves.

–Chuck Palahniuk

The current system is thoroughly disempowering and oriented toward explicit control of people’s actions. Keeping order and keeping people in line while avoiding risk and failure are the core principles. The key to any change is enabling trust so that the individual moves to the center of the system. The upside is the efficiency and effectiveness born from trust; the downside is the possibility of failure, poor performance and various human failings. The system needs to be resilient to these inevitable problems with people. Trying to control and manage these failings away destroys most of the positive things individuals can provide. Empowerment needs to trump control and allow people’s natural inclination toward success to be central to organizational design.

In most cases being a good boss means hiring talented people and then getting out of their way.

–Tina Fey

We need to completely let go of the belief that we can manage all the bad things away and not lose all the good things in the process. Bad things, bad outcomes and bad behavior happen, and perhaps need to happen for us to have all the good (in other words, “shit happens”). Today we are gripped with a belief that negative outcomes can be managed away. In the process of managing away bad outcomes, we destroy the foundation of everything good. To put it differently, we need to value the good and accept the bad as a necessary condition for enabling good outcomes. If one looks at failure as the engine of learning, we begin to realize that the bad is the foundation of the good. If we do not allow the bad things to happen, let people fuck things up, we can’t have really good things either. One requires the other, and our attempts to control bad outcomes remove a lot of good or even great outcomes at the same time.

An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.

– Werner Heisenberg

So to sum up, let’s trust people again. Let’s let them fail, fuck up and do bad things. Let’s let people learn from these failures, fuck-ups and painful experiences. These people will learn a lot, including very painful lessons, and get hurt deeply in the process. They will become wise, strong, and truly expert at things. People who are trusted are empowered and love their work. They are efficient, productive and effective. They have passion for what they do, and give their work great loyalty. They will take risks in a fearless manner. They will be allowed to fail spectacularly because spectacular success and breakthroughs only come from fearlessly taken risks.

May I never be complete. May I never be content. May I never be perfect.

–Chuck Palahniuk

 

 

If you don’t know uncertainty, bounding is the first step to estimating it

Sometimes the hardest thing any of us can hope for is finding the courage to be honest with ourselves.

― Kira Saito

Today I’m writing about the unfortunate practice of failing to address uncertainty, thereby implicitly setting its value to zero, the smallest possible value. This approach is pernicious, and ubiquitous in computational science (and a lot of other science). It is a direct threat to progress and far too acceptable in practice. I wrote about this at length, decrying this standard practice, but it remains the most common practice in uncertainty quantification (https://wjrider.wordpress.com/2016/04/22/the-default-uncertainty-is-always-zero/). In a nutshell, when someone doesn’t know what the uncertainty is, they simply assign it a value of zero. We can do something better, but first this needs to be recognized for what it is: systematic and accepted ignorance.

The reasons for not estimating uncertainties are legion. Sometimes it is just too hard (or people are lazy). Sometimes the way of examining a problem is constructed to ignore the uncertainty by design (a common route to ignoring experimental variability and numerical error). In other cases the uncertainty is large and it is far more comfortable to be delusional about its size. Smaller uncertainty is comforting and implies a level of mastery that exudes confidence. Large uncertainty is worrying and implies a lack of control. For this reason getting away with choosing a zero uncertainty is a source of false confidence and unfounded comfort, but a deeply common human trait.

If we can manage to overcome the multitude of human failings underpinning the choice of the default zero uncertainty, we are still left with the task of doing something better. To be clear, the major impediment is recognizing that the zero estimate of uncertainty is not acceptable (most “customers” like the zero estimate because it seems better, even though it assuredly is not!). Most of the time we have a complete absence of information on which to base uncertainty estimates. In some cases we can avoid zero uncertainty estimates by being more disciplined and industrious; in other cases we can think about the estimation from the beginning of the study and build the estimation into the work. In many cases we have only expert judgment to rely upon for estimation. In that case we need to employ a very simple and well-defined technique for providing an estimate.

Learning is not the accumulation of knowledge, but rather, one thing only: understanding

― Donna Jo Napoli

The best way to approach such estimates is the time-honored one of bounding the uncertainty. One should be able to provide clear evidence that the uncertainty is both larger than some known value and smaller than another. This provides bounds on the magnitude of the uncertainty. Depending on the purpose of the study, these magnitudes can be used to apply the results more appropriately. This can then provide a reasonable, evidence-based uncertainty to energize progress and underpin credibility. If the estimate of the smallest possible uncertainty is that ubiquitous zero, the estimate should be rejected out of hand. The uncertainty is never ZERO, not ever. Nothing is known with absolute certainty. If the uncertainty is very small there should be very strong evidence to support that bold assertion. We do know some things extremely well, like Planck’s constant, but it still has an uncertainty of finite size.
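Put symbolically (this notation is mine, just to pin the idea down; u stands for whatever uncertainty measure is being estimated), the bounding statement is simply 0 < u_{\mathrm{lower}} \le u \le u_{\mathrm{upper}} , with each bound tied to specific, defensible evidence rather than to hope or habit.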

The flip side to the lower bound is the upper bound for the uncertainty. Generally speaking, there will be a worst case to consider, or something more severe than the scenario at hand. Such large uncertainties are likely to be quite uncomfortable to those engaging in the work. They should be uncomfortable if we are doing things right. The goal of this exercise is not to minimize uncertainties, but to get things right. If such bounding uncertainties are unavailable, one does not have the right to do high-consequence decision-making with the results. This is the unpleasant aspect of the process: it requires delivering the worst case. To be more concrete about the need for this part of the bounding exercise, if you don’t know how bad the uncertainty is, you have no business using the results for anything serious. As stated before, the bounding process needs to be evidence-based; the assignment of lower and upper bounds for uncertainty should have a specific and defensible basis.

Belief can be manipulated. Only knowledge is dangerous.

― Frank Herbert

Once the bounds for the uncertainty are established along with the associated evidence, some choices need to be made about how to use the information. To a large extent the most conservative choice is the easiest to defend, meaning that the upper bound for uncertainty should be used. If the work is being engaged in honestly, this will feel pessimistic, perhaps in the extreme. If one thinks about things in a probabilistic sense, the bounds establish an interval for the potential uncertainty. This interval is most defensibly treated with a uniform distribution. For most cases, using the midpoint of the lower and upper bounds is a reasonable choice. If the application associated with the decision-making is extremely important, the upper bound, or something skewed in that direction, is probably advisable.
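As a minimal sketch of what this looks like in practice (the numbers and variable names here are purely illustrative, not taken from any real study), the bounded uncertainty can be carried as a uniform distribution between the evidence-based bounds, with the midpoint used for routine work and the upper bound reserved for high-consequence decisions:

    import numpy as np

    # Hypothetical, evidence-based bounds on an uncertainty (illustrative values only).
    u_lower = 0.02   # smallest uncertainty the evidence supports; never zero
    u_upper = 0.15   # worst case the evidence supports

    assert u_lower > 0.0, "a zero lower bound should be rejected out of hand"

    midpoint = 0.5 * (u_lower + u_upper)   # reasonable default choice
    conservative = u_upper                 # preferred for high-consequence decisions

    # A probabilistic treatment: sample the interval as a uniform distribution.
    rng = np.random.default_rng(1)
    samples = rng.uniform(u_lower, u_upper, size=10000)
    print(midpoint, conservative, samples.mean())

The arithmetic is trivial by design; the hard, valuable part is that every number in it has to be backed by specific evidence.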

To some extent this is a rather easy lift intellectually. The cultural difficulty is another thing altogether. The indefensible optimism associated with the default zero uncertainty is extremely appealing. It provides the user with a feeling that the results are good. People tend to feel that there is a single correct answer. The smaller the uncertainty, the better they feel about the answer. Large uncertainty is associated with a lack of knowledge and with low achievement. The precision usually communicated with the default, standard approach is highly seductive. It takes a great deal of courage to take on the full depth of uncertainty along with the honest admission of how much is not known. It is far easier to simply do nothing and assert far greater knowledge while providing no evidence for the assertion.

Uncertainty is a discomforting concept for people. Certainty is easy and comfortable, while uncertainty is difficult and carries doubt. It is problematic to consider the role of chance in events, and the fickle nature of reality. A great many important events occur largely by chance and could easily have turned out quite differently. Consider how often you encounter a near miss in life, something where danger seemed far too close and just missed you. When these events turn out disastrously they can be tragedies. How often have similar tragedies been barely averted? This same dynamic plays out in experiments that are repeated. An attempt is made to make the experiment reproducible. Occasionally something completely different unfolds. The repeated results are never exactly the same; there is a small variation. These variations are the uncertainty, and depending on the experiment, they have a magnitude.

What happens when you do the experiment exactly once? The simplest thing to do is to consider this experiment to be a completely determined event with no uncertainty at all. This is the knee-jerk response: treat the single event as utterly and completely deterministic, with no variation at all. If the experiment were repeated with every attempt to make it as perfect as possible, it would turn out slightly differently. This comes from the myriad of details associated with the experiment that determine the outcome. Generally, the more complex and energetic the phenomenon being examined, the greater the variation (unless there are powerful forces attracting a very specific solution). There is always a variation; the only question is how large it is. It is never, ever identically zero. The choice to view the experiment as perfectly repeatable is usually an unconscious one that has no credible basis. It is an incorrect and unjustified assumption that is usually made without a second thought. As such the choice is unquestionably bad for science or engineering. In many cases this unconscious choice is dangerous, and represents nothing more than wishful thinking.
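A toy example (with entirely made-up numbers) makes the point: repeated measurements of the same nominal experiment expose a spread that a single measurement simply cannot reveal, and treating that single number as exact is a choice, not a fact.

    import numpy as np

    # Hypothetical repeated measurements of the same nominal experiment.
    repeats = np.array([9.87, 10.12, 9.95, 10.04, 9.91])
    single = repeats[:1]        # the same experiment done exactly once

    print(repeats.std(ddof=1))  # finite sample spread; the variation is never zero
    print(single.std())         # 0.0 -- the spread is hidden, not absent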

to hope was to expect

― Jane Austen

 

 

 

Doing Work that Is Worthwhile

The purpose of life is not to be happy. It is to be useful, to be honorable, to be compassionate, to have it make some difference that you have lived and lived well.

― Ralph Waldo Emerson

As one gets older and enters into the heart of mid-life, it is natural to contemplate one’s place in the World. I’m deep into such contemplation. I’ve been blessed with meaningful work for much of my adult life, but that meaning seems to have leaked away recently. Part of my thinking is deciding whether this is a local or a global condition. Are things worse where I am, or better than the average? For most of my adult life, I’ve had far better conditions than average, and been able to find great meaning in my work. Is the steady erosion of the quality of the work environment a consequence of issues local to my institution or organization? Or is it part of the massive systemic dysfunction our society is experiencing?

If the problem is local, I could leave for another organization, or another institution that is functioning better. If it’s a global issue, then it’s not something I can likely influence (much), and it’s time to ride the storm out the best I can. Right now my money is on the issue being global, and we ought to all be ready for the shit to hit the fan. My guess is it has already happened; the shit storm is in effect and we are headed into deep trouble as a Nation and a World. We have a repugnantly dysfunctional National government, led by an incompetent, narcissistic conman without a perceptible moral compass. Racial tensions are high, and a variety of white supremacist/right-wing ultra-Nationalists are walking the streets. Left-wing and anarchist groups are waking up as well. Open warfare may soon be upon us, making us long for the days when sporadic terrorist attacks were our biggest worry. A shit storm is actually a severe understatement; this is a fucking waking nightmare. I hope this is wrong, and that I could simply find a better place to work and feel value in my labors. I wish the problem were simple and local, with a simple job change fixing things.

Work is an important part of life for a variety of reasons. It is how we spend a substantial portion of our time, and much of our effort goes into it. In work we contribute to society and assist in the collective efforts of mankind. As I noted earlier, I’ve been fortunate for most of my life, but things have changed. Part of the issue is a relative change in the degree of self-determination in work. The degree of self-determination has decreased over time. An aspect of this is the natural growth in the scope of work as a person matures. As a person grows in work and is promoted, the scope of the work increases, and the degree of freedom in work decreases. Again, this is only a part of the problem, as the system is working to strangle the self-determination out of people. This is control, fear of failure and a generic lack of trust in people. In this environment work isn’t satisfying because the system is falling apart, and the easiest way to resist this is by controlling the little guy. My work becomes more of a job and a route to a paycheck every day. Earning a living and supporting your family is a noble achievement these days, and aspiring to more is simply a waking dream receding in the rear-view mirror of life.

Creative autonomy is essential for the work I do to be satisfying. It is essential for the work to be effective. I can’t be an effective problem solver if most of my best options are off the table. We exist in a system where the solutions are dictated in advance. No one is trusted to solve any real problems, just to work toward the solutions that have been pre-ordained. Autonomy is threatening to the system because the trust in people is so intrinsically low. The result is the leaking away of meaning in work. The control that exists only calms the deep fears of a system that is failing. In the model where we are teetering on the edge of a societal shit storm, the attempt to control makes sense. The system is desperately trying to hold onto whatever control it has, fearing the unraveling about to unfold. Fear makes us do stupid things, and here the fear simply throws fuel onto the fire by making everyone hate life.

The purpose of life is a life of purpose.

― Robert Bryne

I’m trying to grapple with what is happening in my own experience through the lens of the bigger picture. We see a contraction of the trust and autonomy necessary for me to enjoy work. This is in direct reaction to the fears unleashed by the changes in society, and the terror these changes have induced in much of the population. The old world is coming to an end, but not without a fight. People are genuinely frightened by change, and for most people the most comfortable place is the past. They are holding onto the past with a fervent passion, but the future is unstoppable. In between the two is conflict and pain. For someone like myself who demands work that makes progress, I might have to take a break and simply resign myself to defending the progress that has already been made. No new progress can happen without trust in an environment dominated by fear. We are simply trying to maintain the progress that has already been won.

Imagine what our story would look like if, rather than succumbing to the insistent voices of family or culture, we determined that our vocation was to be a better human.

― James Hollis, Ph.D.

The domination of fear has an extremely large impact on the appetite for risk; there isn’t any. Part of the fearful environment is the inability to accept anything that looks, smells or even hints at failure. Without failure you don’t have learning or achievement. Research depends on failure because research is basically learning in its rawest form. Let me be clear that I’m talking about good failure, where you try your best, make an earnest effort and come up short. Most of the time a failure leads to learning something new. You tweak your approach or knowledge on the basis of the experience and grow. Without failure you short-circuit expertise. We need to embrace failure in many small things to enable success in big things. All of this requires the sort of deep trust that our current World is almost devoid of. A combination of courage and trust can unleash people’s full potential by allowing them to fail spectacularly and then fully supporting the next step forward. Today, cowardice and mistrust dominate, and even marginal failure results in punishment. It is corroding the foundation of achievement. It makes work simply a job and life more survival than living.

Since our current World is so deeply arrayed against personal success and growth, it might be wise to seek other avenues of fulfillment. Perhaps work is most healthily viewed as simply a task of mere survival. The current environment is so rife with fear, and so patently incompetent, that no one can really reach their potential. This isn’t a conclusion I like reaching, but the evidence seems overwhelming. Fear and mistrust have led to overarching control issues that remove any degree of personal control over achievement, or at least control while staying inside the rules. If one is willing to completely ignore the rules, success can be had. If one plays by the rules, success is absolutely impossible. The rules of the game are written to avoid all of the acts necessary for success because these involve risk and danger. Fundamental to success is trust, and trusting someone is beyond our collective ken.

The purpose of life is to contribute in some way to making things better.

― Robert F. Kennedy

 

The Culture of Computation

We seldom realize, for example that our most private thoughts and emotions are not actually our own. For we think in terms of languages and images which we did not invent, but which were given to us by our society.

― Alan W. Watts

Culture pervades our lives as people and shapes how we connect to our World, Nation, Community, Jobs, and personal lives. Much of what we do is deeply influenced by the web of cultures our lives are embedded within. All of this highlights the importance of understanding how culture influences computation, since culture often defines what is comfortable and automatic. In many cases culture is the set of permanent habits of our social constructs, and it often defines practices that impede progress. Accepted cultural practices are usually followed without thinking and applied almost mindlessly. If these practices are wrong, they are difficult to dislodge or improve upon.

The imagination is the goal of history. I see culture as an effort to literally realize our collective dreams.

― Terence McKenna

Culture is a powerful thing. It defines so much about the collective activity of groups of people. Culture defines a set of beliefs, practices and habits that are naturally accepted and reinforced by the collective action of the people. Some cultures are driven by biology or fundamental human needs, but most are constructs to help regulate the structures that our collective actions are organized around. The fundamental values, moral code and behaviors of people are heavily defined by culture. If the culture is positive, the effect is resonant and amplifies the actions of people toward much greater achievements. If the culture is negative, the effect can undo and overwhelm much of the best that people are capable of. Invariably cultures are a mixture of positive and negative. Cultures persist for extremely long times and outlive those who set the cultural tone for groups. Cultures are set, and can change, only slowly unless the group is subjected to an existential crisis. When a crisis is successfully navigated, the culture that arose in its resolution is enshrined, and tends to persist without change until a new crisis is engaged.

Every culture has its southerners — people who work as little as they can, preferring to dance, drink, sing, brawl, kill their unfaithful spouses; who have livelier gestures, more lustrous eyes, more colorful garments, more fancifully decorated vehicles, a wonderful sense of rhythm, and charm, charm, charm; unambitious, no, lazy, ignorant, superstitious, uninhibited people, never on time, conspicuously poorer (how could it be otherwise, say the northerners); who for all their poverty and squalor lead enviable lives — envied, that is, by work-driven, sensually inhibited, less corruptly governed northerners. We are superior to them, say the northerners, clearly superior. We do not shirk our duties or tell lies as a matter of course, we work hard, we are punctual, we keep reliable accounts. But they have more fun than we do … They caution[ed] themselves as people do who know they are part of a superior culture: we mustn’t let ourselves go, mustn’t descend to the level of the … jungle, street, bush, bog, hills, outback (take your pick). For if you start dancing on tables, fanning yourself, feeling sleepy when you pick up a book, developing a sense of rhythm, making love whenever you feel like it — then you know. The south has got you.

― Susan Sontag

We see all sorts of examples of the persistence of culture. The United States is still defined by the North-South divide that fractured during the Civil War. The same friction and hate that defined that war 150 years ago dominate our politics today. The culture of slavery persists in systematic racism and oppression. The white-black divide remains unhealed even though none of the people who enslaved, or who were enslaved, are still alive, with many generations having passed. The United States is still defined by the Anglo-Saxon Protestant beliefs of the founding fathers. Their culture is dominant even after being overwhelmed in numbers of people and centuries of history. The dominant culture was formed in the crucible of history by the originating crisis for the Nation, the Revolutionary War. Companies and Laboratories are likewise shaped by their original cultures, and these habits and practices persist long after their originators have left, retired or died.

There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’

― Isaac Asimov

We all exist within a broad array of cultures, all the way from our family to the entirety of humanity. Our culture is set by our biology, history and arc through life. This web of cultures connects together and runs much of our lives. We all have free will, but the decision to go against the culture tends to carry high personal costs. There are a number of things that influence culture, including events, technology and new modes of engagement. Some events are part of the natural world, such as disasters (earthquakes, floods, hurricanes, famines, …). These events can stress people and existing cultures, providing the sorts of crises that shape the future to be more resilient to future disasters. Human events such as wars, trade, and general political events both show the impact of culture in causing or navigating events, and produce crises that shape cultural responses and evolution. We can continue down this line of thinking to ever-smaller cultures: organizations and businesses are influenced by crises induced by the larger systems (natural or political). This web of culture continues to smaller and smaller scales, all the way to communities (towns, regions, schools, families), each having a culture shaped heavily by other cultures or events. In every case a crisis is almost invariably necessary to induce change; cultures are resistant to change unless something painful provides direct evidence of the incapacity of the existing culture to succeed.

Men build too many walls and not enough bridges.

― Joseph Fort Newton

The culture emerging in the World today is deeply stressing many subcultures. A combination of demographic changes, ethnic conflict, technology and economic systems is spiraling toward crisis. People across the World sense the depth of the impending changes to the structure of society. In many cases the combination of demographic and economic changes is stressing large populations of people to the extent that they mount a wholesale revolt against existing cultures and systems. When this population is large enough it becomes a movement, and starts driving other populations toward crisis. These movements ultimately create an environment where other events are triggered, such as wars. These in turn are a crisis that ultimately must be resolved, and induce enough pain that people willingly overthrow existing cultures and embrace new cultures that enable a successful resolution. We may be spiraling toward the cascade of crises that is almost necessary for our cultures to adapt to the reality of today.

One of the most effective ways to learn about oneself is by taking seriously the cultures of others. It forces you to pay attention to those details of life which differentiate them from you.

― Edward T. Hall

Before plunging into the specifics of the culture of computation, we should discuss the culture of the broader scientific community. This culture exists within the broader network of cultures in society, with give-and-take between them. In the past science has provided deep challenges to the prevailing culture, and induced changes in societal culture. Today the changes in mainstream societal culture are challenging science. One key aspect of today’s culture wars is a lack of support for expertise. One of the key rifts in society is mistrust of the elite and educated. The broader society is attacking and undermining educational institutions across the board. Scientific laboratories are similar in makeup and similarly under assault. Much of this broader assault is related to a general lack of trust. Some of it is a reaction to the surplus of trust granted science in the wake of its massive contributions to the resolution of World War 2 and the Cold War. These successes are fading from memory, and science’s claim to a distinguished societal role is now contracting.

I work at a National Laboratory, and I have worked at a National Laboratory for my entire career. These Labs have strong cultures shaped by their history and work. Both Los Alamos and Sandia were born in the crucible of World War 2 and the Manhattan Project’s pursuit of an atomic bomb. The genesis of the third weapons Lab, Lawrence Livermore, was also present, albeit in an unexpressed form. During that war Los Alamos contained the origins of all three Labs. Los Alamos of course was the core of this, revolving around Oppenheimer’s scientists pursuing the nuclear explosive part of the bomb. Sandia was contained within the engineering portion of Los Alamos that remained under military control. These identities are still evident in the Labs’ cultures today. At Los Alamos there is a scientific identity and habit that colors all engagements. Conversely, the engineering character of Sandia is evident, as is the shadow of General Groves’ priorities and approach on how the institution works today. Lawrence Livermore’s genesis was contained within a deep controversy associated with the follow-on to the atomic bomb, the hydrogen bomb. Many at Los Alamos opposed the hydrogen bomb, but Edward Teller was committed to it and ultimately created a Laboratory to pursue his vision. This adversarial, political and controversial nature still defines that Laboratory today.

The first step – especially for young people with energy and drive and talent, but not money – the first step to controlling your world is to control your culture. To model and demonstrate the kind of world you demand to live in. To write the books. Make the music. Shoot the films. Paint the art.

― Chuck Palahniuk

Each of these identities firmly expresses itself in the scientific-technical cultures of the Labs. Los Alamos has a core identity as an experimental physics Laboratory. Engineering and computational approaches are also taken at Los Alamos, but the experimental approach is the most natural and favored by the culture. Livermore is more favorable toward a theoretical-computational approach within its basic culture. Experimental approaches are also seriously engaged, but computation in particular is more naturally supported by its culture. Sandia is an engineering culture and, borrowing from Los Alamos, a predominantly test-based culture, testing being the engineering complement to experiments. As such, theory, design and computation are a hard sell and culturally discouraged. None of these approaches is the “right” approach, and each culture moves its Lab toward favoring a certain approach over others.

These characters graft themselves onto how computation is accomplished at each Lab. The computational side of things is favored at Lawrence Livermore, yielding better support from the institution. This comes in the form of support for research and prestige for those doing computation. At the same time the validation of computations suffers a bit relative to the other Labs, as does the rigor of computed results. Los Alamos was the birthplace of all three Labs and of computational work, but it always puts computation in a role subservient to experiments. This leads to a mighty struggle between validation and calibration. Often calibration wins out so that computed results are sufficiently close to experiment. Sandia excels at process and rigor in the conduct of calculations, but struggles at other aspects (at least in a relative sense). The whole verification and validation approach to simulation quality comes from Sandia, reflecting that rigor. At the same time institutional support and emphasis are weaker, leading to long-term effects.

All this texture is useful to think about because it manifests itself in every place computational science is done today. The scientific culture of any institution is reflected in its emphasis and approach to the conduct of science. The culture produces a natural set of priorities that define investments and acceptable quality. We can speak volumes about how computational work should be done, but the receptivity to the message is related to preconceived notions about these aspects. For example, some places are more prone to focus on computing hardware as an investment. In terms of the competition for resources, the purchase of hardware is a priority, and a typical route for enhancement. This becomes important when trying to move into new “hot” areas. If the opportunity falls in line with the culture, investments flow; if it is out of line, the institution will miss it.

Computational science is a relatively new area of endeavor. It is at most 70 years old as practiced at Los Alamos; it is a new area of focus in most places. Sometimes it is brought into an institution and added to the repertoire as a new, innovative way of doing work. In all these cases the computational work adopts the basic culture of the institution it exists within. It then differentiates based on local conditions, usually dominated by whatever the first acknowledged success is. One of the key aspects of a culture is its origin stories or mythological achievements. Origins are almost invariably fraught situations with elements of crisis. These stories pervade the culture and define what success looks like and how investments in the future are focused.

Where I work at Sandia, the origin story is dominated by early success with massively parallel computers. The greatest success was the delivery of a computer, Red Storm. As a result the culture is obsessed with computer hardware. The path to glory and success runs through hardware; a focus on hardware is culturally accepted and natural for the organization. It is a strong predisposition. At Lawrence Livermore the early stages of the Laboratory were full of danger and uncertainty. Early in the history of the Lab there was a huge breakthrough in weapons design. It used computational modeling, and the lead person in the work went on to huge professional success (Lab Director). This early success became a blueprint for others and an expected myth to be repeated. A computational study and focus was always expected and accepted by the Lab. At Los Alamos all roads culturally lead to the Manhattan Project. The success in that endeavor has defined the Laboratory ever since. The manner of operation and approach to science adopted then is the blueprint for success at that Laboratory. The multitude of crises starting with the end of the Cold War, spying, fires, and scandal have all weakened the prevailing culture, and undermined the future.

In each case the myths and legends of past success provide the basis for the culture and the means of understanding why a place is what it is. Old myths and legends have to be replaced to change the culture, and this can only happen in a crisis of sufficient magnitude to challenge the existing culture. We can’t usually manage to think about what culture will arise from the resolution of a crisis; we are too busy surviving to make the best use of the opportunity.

Without culture, and the relative freedom it implies, society, even when perfect, is but a jungle. This is why any authentic creation is a gift to the future.

― Albert Camus

 

Credibility and Confidence

Leadership is fundamentally about credibility.

― Rick Crossland

Under the best of circumstances we would like to confidently project credibility for the modeling and simulation we do. Under the worst of circumstances we would have confidence in modeling and simulation without credibility. This is common. Quite often the confidence is the product of arrogance or ignorance instead of humility and knowledge. This always manifests itself in a lack of questioning in the execution of the work. Both of these issues are profoundly difficult to deal with and potentially fatal to the meaningful impact of modeling and simulation. These issues are seen quite frequently. Environments with weak peer review allow confidence without credibility to persist. The biggest part of the problem is a lack of pragmatic acceptance of modeling and simulation’s intrinsic limitations. Instead we have inflated promises and expectations delivered by overconfidence and personality rather than hard-nosed technical work.

When confidence and credibility are both in evidence, modeling and simulation is empowered to be impactful. It will be used appropriately, with deference to what is and is not possible and known. When modeling and simulation is executed with excellence and professionalism, along with a hard-nosed assessment of uncertainties using comprehensive verification and validation, the confidence is well grounded in evidence. If someone questions a simulation’s result, answers can be provided with well-vetted evidence. This produces confidence in the results because questions are engaged actively. In addition, the limits of the credibility are well established and can be confidently explained. Ultimately, credibility is a deeply evidence-based exercise. Properly executed and delivered, the degree of credibility depends on honest assessment and complete articulation of the basis and limits of the modeling.

When you distort the truth, you weaken your credibility.

― Frank Sonnenberg

One of the dangers of hard-nosed assessment is the tendency for those engaged in it to lose confidence in the work. Those who aggressively pursue credibility assessment tend to be cynics and doubters. They are prone to pessimism. They usually project doubt and focus on limitations of the modeling instead of confidence where it may be used. One of the hardest tricks of credibility assessment is pairing excellence in the execution of the work with an appropriate projection of confidence. The result is a mixed message where confidence is projected without credibility, and credibility is projected without confidence. Neither serves the purpose of progress in the impact of modeling and simulation.

One of the major sins of overconfidence is flawed or unexamined assumptions. This can be articulated as the “unknown knowns” missing from the famously incomplete taxonomy forwarded by Donald Rumsfeld in his infamous quote. He didn’t state this part of the issue even though it was the fatal flaw in the logic of the Iraq war in the aftermath of 9/11. There were basic assumptions about Hussein’s regime in Iraq that were utterly false, and these skewed the intelligence assessment leading to war. They only looked at information that supported the conclusions they had already drawn or wanted to be true. The same sort of faulty assumptions are always present in modeling. Far too many simulation professionals ignore the foundational and unfounded assumptions in their work. In many cases assumptions are employed without thought or question. They are assumptions that the community has made for as long as anyone can remember and simply cannot be questioned. This can include anything from the equations solved to the various modeling paradigms applied as a matter of course. Usually these are unquestioned and completely unexamined for validity in most credibility assessments.

This is an immensely tricky thing to execute. The standard assumptions are essential to managing complexity and making progress. That said, it is a remarkably difficult and important task to detect when the assumptions become limiting. More succinctly put, the limitations of the standard assumptions need to be thought through and tested. Usually these assumptions can only be tested by removing everything else from the field and doing very hard work. It is so much easier to simply stay the course and make the standard assumptions. In many cases the models have been significantly calibrated to match existing data, and new experiments or significantly more accurate measurements are needed to overturn or expose modeling limitations. Moreover, the standard assumptions usually go unquestioned by peers. Questions are often met with ridicule. A deeply questioning assessment requires bravery and fortitude usually completely lacking from working scientists and utterly unsupported by our institutions.

Another way for all of this to unfold is unwarranted confidence. Often this is couched in the form of arrogant perspectives where the proof of credibility is driven by personality. This proof by authority is incredibly common and troublingly difficult to dislodge. In many cases personal relationships with the consumers of simulations are used to provide confidence. People are entrusted with the credibility and learn how to give their customers what they want. Credibility by personality is cheap and requires so much less work, plus it doesn’t raise any pesky doubts. This circumstance creates an equilibrium that is often immune to scientific examination. It is easier to bullshit the consumers of modeling and simulation results than to level with them about the true quality of the work.

The credibility of the teller is the ultimate test of the truth of a proposition.

― Neil Postman

More often than not, honest and technically deep peer review is avoided like the plague. If it is imposed on those practicing this form of credibility, the defense of simulations takes the personal form of attacking the peer reviewers themselves. This sort of confidence is a cancer on quality and undermines any progress. It is a systematic threat to excellence in simulation, and must be controlled. It is dangerous because it is effective in providing support for modeling and simulation along with the appearance of real-world impact.

One of the biggest threats to confidence is the doubt that honest credibility assessment generates. Engaging deeply and honestly in assessment of credibility is excellent at undermining confidence. Almost invariably the accumulation of evidence regarding credibility endows the recipients of this knowledge with doubt. These doubts are healthy; often the most confident people are utterly ignorant of the shortcomings. The accumulation of evidence regarding credibility should benefit the confidence with which simulation is used. This is a problem when those selling simulation oversell what it can do. The promise of simulation has been touted widely as transformative. The problem with modeling and simulation is that it is only tangent to reality. The credibility of simulations is grounded in reality, but the uncertainty comes not only from the modeling but also from the measured and sensed uncertainty in our knowledge of reality.

The dynamic tension between confidence and credibility should be deeply examined. When confidence is present without evidence, people should be deeply suspicious. A strong culture of (independent) peer review is an antidote to this. Too often these days the peer review is heavily polluted by implicit conflicts of interest. The honesty of peer review is hampered by an unwillingness to deal with problems, particularly with respect to modification of expectations. Invariably modeling and simulation has been oversold and any assessment will provide bad news. In today’s world we see a lot of bad news rejected, or repackaged (spun) to sound like good news. We are in the midst of a broader crisis of credibility with respect to information (i.e., fake news), so the issues with modeling and simulation shouldn’t be too surprising. We would all be well served by a different perspective and approach to this. The starting point is a re-centering of expectations, but so much money has already been spent on the basis of grossly inflated claims.

Belief gives knowledge credibility.

― Steven Redhead

So what should we expect from modeling and simulation?

Modeling and simulation is a part of the scientific process and subject to its limits and rules. There is nothing magic about simulation that unleashes modeling from its normal limitations. The difference simulation makes is the ability to remove the limitations of analytical model solution. Far more elaborate and accurate modeling choices become available, but they carry other difficulties due to the approximate nature of numerical solutions. The intellectual tug-of-war is the balance of modeling flexibility, nonlinearity and generality against the effects of numerical solution. The bottom line is the necessity of assessing the uncertainties that arise from these realities. Nothing releases the modeling from its fundamental connection to validity grounded in real-world observations. One of the key things to recognize is that models are limited and approximate in and of themselves. Models are wrong, and under a sufficiently resolved examination will be invalid. For this reason an infinitely powerful computer would ultimately be useless because the model becomes invalid at some resolution. Ultimately progress in modeling and simulation is based on improving the model. This fact is ignored by computational science today and will result in wasting valuable time, effort and money chasing quality that is impossible to achieve.

Bullshit is a greater enemy of the truth than lies are.

—Harry Frankfurt

In principle the issue of credibility and confidence in modeling and simulation should be based on evidence. Ideally this evidence should be quantitative, with key indicators of its quality included. Ideally, the presence of the evidence should bolster credibility. Instead, paradoxically, evidence associated with the credibility of modeling and simulation seems to undermine credibility. This is a strong indicator that claims about the predictive power of modeling and simulation have been overstated. This is a nice way of saying it is usually a sign that the quality is actually complete bullshit! We can move a long way toward better practice by simply recalibrating our expectations about what we can and can’t predict. We should be in a state where greater knowledge about the quality, errors and uncertainty in modeling and simulation work improves our confidence.

If you can’t dazzle them with brilliance, baffle them with bullshit!

― W.C. Fields

Part of the issue is the tendency of the consumers of modeling and simulation work not to demand evidence to support confidence. This evidence should always be present and available for scrutiny. If claims of predictive power are made without evidence, the default condition should be suspicion. The various sources of error and uncertainty should be drawn out and quantified. There should be estimates based on concrete evidence for the value of uncertainty from all sources. Any uncertainty that is declared to be zero or negligible must have very specific evidence to support this assertion. Even more important, any claims of this nature should receive focused and heavy scrutiny because they are likely to be based on wishful thinking, and often lack any evidentiary basis.

One of the issues of increasing gravity in this entire enterprise is the consumption of modeling and simulation results by people unqualified to judge the quality of the work. The whole enterprise is judged to be extremely technical and complex. This inhibits those using the results from asking key questions regarding the quality of the work. With the people producing modeling and simulation results largely driven by money rather than technical excellence, we have a recipe for disaster. Increasingly, false confidence accompanies results and snows the naïve consumers into accepting the work. Often the consumers of computational results don’t know what questions to ask. We are left with quality being determined more by flashy graphics and claims about massive computer use than by any evidence of prediction. This whole cycle perpetuates an attitude that treats simulation more like a video game and less like a valid scientific enterprise. Over-inflated claims of capability are met with money that buys more flashy graphics and quality without evidence. We are left with a field that has vastly over-promised and set itself up for failure.

We now live in a world where counter-intuitive bullshitting is valorized, where the pose of argument is more important than the actual pursuit of truth, where clever answers take precedence over profound questions.

― Ta-Nehisi Coates

A Good Question Makes Everything Better

Judge a man by his questions rather than by his answers.

― Voltaire

In thinking about what makes work good for me, I explored an element of my creative process that revolves around answering questions. If one doesn’t have the right question, the work isn’t framed correctly and progress will stall. A thing to consider in this frame of reference is what makes a good question? This itself is an excellent question! The quality of the question makes a great difference in framing the whole scientific enterprise, and can lead either to knowledge cul-de-sacs or to stunning vistas of understanding. Where you end up depends on the quality of the question you answer. Success depends far more on asking the right question than on answering the question originally put to you (or that you put to yourself).

Truth, like gold, is to be obtained not by its growth, but by washing away from it all that is not gold.

― Leo Tolstoy

A great question is an achievement in itself, although rarely viewed as such. More often than not, little of the work goes into asking the right question. Often the questions we ask depend heavily upon foundational assumptions that are never questioned. While assumptions about existing knowledge are essential, finding the weak or invalid assumptions is often the key to progress. These assumptions are wonderful for simplifying work, but also inhibit progress. Challenging assumptions is one of the most valuable things to do. Heretical ideas are fundamental to progress; all orthodoxy began as heresy. If the existing assumptions hold up under the fire of intense scrutiny, they gain greater credibility and value. If they fall, new horizons are opened to active exploration.

If we have no heretics we must invent them, for heresy is essential to health and growth.

― Yevgeny Zamyatin

It goes without saying that important questions are good ones. Defining importance is tricky business. There are plenty of important questions that lead nowhere (“what’s the meaning of life?”) or that we simply can’t answer using existing knowledge (“is faster-than-light travel possible?”). On the other hand, we might do well to break these questions down into something more manageable that might be attacked: “is the second law of thermodynamics responsible for life?” or “what do subatomic particles tell us about the speed of light?” Part of the key to good scientific progress is threading the proverbial needle of important, worthy and possible to answer. When we manage to ask an important, but manageable, question, we serve progress well. Easy questions are not valuable, but they are attractive due to their lack of risk and their susceptibility to management and planning. Sometimes the hardest part of the process is asking the question, and a well-defined and well-chosen problem can be amenable to almost trivial resolution. Getting to that point turns out to be an immensely difficult task requiring lots of hard work.

I have benefited mightily from asking some really great questions in the past. These questions have led to the best research and most satisfying professional work I’ve done. I would love to recapture this spirit of work again; good questioning work feels almost quaint in today’s highly over-managed climate. One simple question arose in my study of efficient methods for solving the equations of incompressible flow. I was using a pressure projection scheme, which involves solving a Poisson equation at least once, if not more than once, per time step. The most efficient way to do this involved the multigrid method because its algorithmic cost scales linearly with the number of unknowns. The Poisson equation leads to a large sparse system of linear equations, and the cost of solving linear systems generally scales with powers of the number of equations. Multigrid methods have the best scaling thought to be possible (I’d love to see this assumption challenged and sublinear methods discovered; I think they might well be possible).

As problems with incompressible flows become more challenging, such as involving large density jumps, the multigrid method begins to become fragile. Sometimes the optimal scaling breaks down, or the method fails altogether. I encountered these problems, but found that other methods like conjugate gradient could still solve them. The issue is that the conjugate gradient method has worse scaling than multigrid. As a result, as problems become larger, the proportion of the solution time spent solving linear equations grows ever larger (the same thing is happening now to multigrid because of the cost of communication on modern computers). I posed the question of whether I could get the best of both methods, the efficiency along with the robustness. Others were working on the same class of problems, and all of us found the solution: combine the two methods, effectively using a multigrid method to precondition the conjugate gradient method. It worked like a charm; it was both simple and stunningly effective. This approach has become so standard now that people don’t even think about it; it’s just the status quo.

At this point it is useful to back up and discuss a key aspect of the question-making process essential to refining a question into something productive. My original question was much different; “how can I fix multigrid?” was the starting point. I was working from the premise that multigrid was optimal and fast for easier problems, and conjugate gradient was robust, but slower. A key part of the process was reframing the question. The question I ended up attacking was “can I get the positive attributes of both algorithms?” This changed the entire approach to solving the problem. At first, I tried switching between the two methods depending on the nature of the linear problem. This was difficult to achieve because the issues with the linear system are not apparent upon inspection.

The key was moving from considering the algorithms as wholly separate options to combining them. The solution involved putting one algorithm inside the other. As it turns out, the most reasonable and powerful way to do this is to make multigrid a preconditioner for conjugate gradient. The success of the method depends fully on the characteristics of both algorithms. When multigrid is effective by itself, the conjugate gradient wrapper is essentially innocuous. When multigrid breaks down, the conjugate gradient method picks up the pieces and delivers robustness along with the linear scaling of multigrid. A key aspect of the whole development was an assault on a philosophical constraint in solving linear systems. At the outset of this work the two methods were viewed as competitors. One worked on one or the other, and the two communities did not collaborate, or even talk to each other. They didn’t like each other. They had different meetings, or different sessions at the same meeting. Changing the question allowed progress, and was predicated on changing assumptions. Ultimately, the results won and the former feud faded into memory. In the process I helped create something wonderful and useful, plus I learned a huge amount of numerical (and analytical) linear algebra.
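To make the flavor of this concrete, here is a minimal sketch in Python of multigrid-preconditioned conjugate gradient, assuming the pyamg and scipy packages are available. The Poisson matrix from pyamg’s gallery stands in for the pressure-projection system, and the particular choices here (smoothed aggregation, one V-cycle per iteration) are illustrative assumptions, not a reproduction of the original work.

```python
import numpy as np
import pyamg                                   # algebraic multigrid (assumed available)
from scipy.sparse.linalg import cg

# A 2D Poisson matrix stands in for the pressure-projection linear system.
A = pyamg.gallery.poisson((64, 64), format='csr')
b = np.random.default_rng(0).standard_normal(A.shape[0])

# Build a multigrid hierarchy, but use it as a preconditioner rather than a solver.
ml = pyamg.smoothed_aggregation_solver(A)
M = ml.aspreconditioner(cycle='V')             # one V-cycle applied per CG iteration

# Conjugate gradient supplies the robustness; multigrid supplies the near-linear scaling.
x, info = cg(A, b, M=M)
print("converged" if info == 0 else "cg returned info = %d" % info)
```

The design point is exactly the one described above: when the V-cycle is effective, the outer Krylov iteration converges in a handful of steps; when the multigrid hierarchy degrades, the conjugate gradient iteration still guarantees convergence for the symmetric positive definite system.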

The second great question I’ll point to involved the study of modeling turbulent flows with what has become known as implicit large eddy simulation. Starting in the early 1990s there was a stunning proposition that certain numerical methods seem to automatically (auto-magically) model aspects of turbulent flows. While working at Los Alamos and learning all about a broad class of nonlinearly stable methods, the claim that they could model turbulence caught my eye (I digested it, but fled in terror from turbulence!). Fast forward a few years, combine this observation with a newfound interest in modeling turbulence, and a question begins to form. In learning about turbulence I digested a huge amount of theory regarding the physics and our approaches to modeling it. I found large eddy simulation to be extremely interesting, although aspects of the modeling were distressing. The models that worked well performed poorly on the structural details of turbulence, and the models that matched the structure of turbulence were generally unstable. Numerical methods for solving large eddy simulation were generally based on principles vastly different from those I worked on, which were useful for solving Los Alamos’ problems.

Having the methods I worked on, in codes that do solve our problems, also model turbulence is tremendously attractive. The problem is the seemingly magical nature of this modeling. Being magical does not endow the modeling with confidence. The question we constructed a research program around was “can we explain the magical capability of numerical methods with nonlinear stability to model turbulence?” We started from the observation that a broad class of methods seemed to provide effective turbulence modeling (at least of the universal inertial-range physics). Basically, the aspects of turbulence associated with the large-scale hyperbolic parts of the physics were captured. We found it useful to think of this as physics-capturing, an extension of shock-capturing. The explanation is technical, but astoundingly simple.

Upon studying the origins of large eddy simulation we discovered that they were the same as those of shock-capturing methods. Once the method was developed it evolved into its own subfield with its own distinct philosophy and underlying assumptions. These assumptions had become limiting and predicated on a certain point of view. Shock capturing had also evolved in a different direction. Each field focused on different foundational principles and philosophy, becoming significantly differentiated. For the most part they spoke different scientific languages. It was important to realize that their origins were identical, with the first shock-capturing method being precisely the first subgrid model for large eddy simulation. A big part of our research was bridging the divides that had developed over almost five decades and learning to translate from one language to the other.

We performed basic numerical analysis of nonlinearly stable schemes using a technique that produces the nonlinear truncation error, known as modified equation analysis; a nonlinear analysis is vital here. The core property of the methods empirically known to be successful in capturing the physics of turbulence is conservation (control volume schemes). It turns out that the nonlinear truncation error of a control volume method for a quadratic nonlinearity produces the fundamental scaling seen in turbulent flows (and shocks for that matter). This truncation error can be destabilizing for certain flow configurations, effectively being anti-dissipative. The nonlinear stability method keeps the anti-dissipative terms under control, producing physically relevant solutions (e.g., entropy solutions).
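As a small, hedged illustration of what such an analysis looks like, the sympy script below Taylor-expands a simple control-volume-style discretization of Burgers’ equation (a face flux built from the average of neighboring cell values, chosen here for brevity rather than taken from the papers cited below) and prints the leading truncation error. The quadratic flux produces an O(h^2) term containing the product of the first and second derivatives of the solution, which is exactly the kind of nonlinear term being discussed.

```python
import sympy as sp

x, h = sp.symbols('x h', positive=True)    # position and cell width
u = sp.Function('u')

def f(v):                                   # quadratic (Burgers) flux
    return v**2 / 2

# Face fluxes evaluated at the average of neighboring cell values.
F_plus  = f((u(x) + u(x + h)) / 2)
F_minus = f((u(x - h) + u(x)) / 2)

semi_discrete = -(F_plus - F_minus) / h     # discrete approximation to -f(u)_x
exact         = -sp.diff(f(u(x)), x)

# Leading truncation error: an O(h^2) term with u_x * u_xx from the quadratic
# nonlinearity, alongside a linear dispersive u * u_xxx contribution.
error = sp.simplify(sp.series(semi_discrete - exact, h, 0, 4).removeO())
print(error)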

A key observation makes this process more reasoned and connected to the traditional large eddy simulation community. The control volume term matches the large eddy simulation models that produce good structural simulations of turbulence (the so-called scale similarity model). The scale similarity model is unstable with classical numerical methods. Nonlinear stability fixes this problem with aplomb. We use as much scale similarity as possible without producing unphysical or unstable results. This helps explain why a disparate set of principles used to produce nonlinear stability provides effective turbulence modeling. Our analysis also shows why some methods are ineffective for turbulence modeling. If the dissipative stabilizing effects are too large and competitive with the scale similarity term, the nonlinear stability is ineffective as a turbulence model.
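For readers who have not seen it, the scale-similarity closure referred to here (the Bardina-type model) is usually written, with an overbar denoting the filter and C_B a model coefficient, roughly as

\tau_{ij} \approx C_B \left( \overline{\bar{u}_i \bar{u}_j} - \bar{\bar{u}}_i \, \bar{\bar{u}}_j \right),

which is my paraphrase of the standard form rather than a formula quoted from the references; its structural resemblance to the quadratic control-volume truncation term is the connection being exploited.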

It is dangerous to be right in matters on which the established authorities are wrong.

― Voltaire

I should spend some time on some bad questions as examples of what shouldn’t be pursued. One prime example is offered as a seemingly wonderful question: the existence of smooth solutions to the incompressible Navier-Stokes equations. The impetus for this question is the bigger question of whether we can explain, predict or understand fluid turbulence. This problem is touted as a fundamental building block in that noble endeavor. The problem is the almost axiomatic belief that turbulence is contained within this model. The key term is incompressible, which renders the equations unphysical on several key accounts: it gives the system an infinite speed of propagation, and it divorces the equations from thermodynamics. Both features sever the ties of the equations to the physical universe. The arguing point is whether these two aspects disqualify them from addressing turbulence. I believe the answer is yes.

In my opinion this question should have been rejected long ago based on the available evidence. Given that our turbulence theory is predicated on the existence of singularities in ideal flows, and the apparent absence of such singularities in the incompressible Navier-Stokes equations, we should reject the notion that turbulence is contained in them. Despite this evidence, the notion that turbulence is contained whole cloth in these unphysical equations persists unabated. It is treated as axiomatic. This is an example of an assumption that has outlived its usefulness. It will eventually be tossed out, and progress will bloom along the path of its departure. One of the key things missing from turbulence theory is a connection to thermodynamics. Thermodynamics is such a powerful scientific concept that for it to be so absent from turbulence is a huge gap. Turbulence is a fundamentally dissipative process and the second law is grounded in dissipation. The two should be joined into a coherent whole, allowing unity and understanding to reign where confusion is supreme today.

Another poorly crafted question revolves around the current push for exascale-class computers for scientific computing. There is little doubt that an exascale computer would be useful for scientific computing. A better question is: what is the most beneficial way to push scientific computing forward? How can we make scientific computing more impactful in the real world? Can the revolution of mobile computing be brought to science? How can we make computing (really modeling and simulation) more effective in driving scientific progress? Our current direction is an example of crafting an obvious question, with an obvious answer, while failing to ask a more cutting and discerning question. The consequence of our unquestioning approach to science will be wasted money and stunted progress.

Trust is equal parts character and competence… You can look at any leadership failure, and it’s always a failure of one or the other.

― Stephen M.R. Covey

This gets at a core issue with how science is managed today. Science has never been managed more tightly, nor been more structurally mismanaged. The tight management of science, exemplified by the exascale computing efforts, is driven by an overwhelming lack of trust in those doing science. Rather than ask people open-ended questions subject to refinement through learning, we ask scientists to work on narrowly defined programs with preconceived outcomes. The reality is that any breakthrough, or real progress for that matter, will take a form not envisioned at the outset of the work; any work that pushes mankind forward takes a form that is not foreseeable. By managing so tightly and constraining the work, we are predestining the outcomes to be stunted and generally unworthy of the effort put into them.

Whether you’re on a sports team, in an office or a member of a family, if you can’t trust one another there’s going to be trouble.

― Stephen M.R. Covey

This is seeded by an overwhelming lack of trust in people and science. Trust is a powerful concept and its departure from science has been disruptive and expensive. Today’s scientists are every bit as talented and capable as those of past generations, but society has withdrawn its faith in science. Science was once seen as a noble endeavor that embodied the best in humanity, but generally not so today. Progress in the state of human knowledge produced vast benefits for everyone and created the foundation for a better future. There was a sense of an endless frontier constantly pushing outward and providing wonder and potential for everyone. This view was a bit naïve and overlooked the maxim that human endeavors in science are neither good nor bad, producing outcomes dependent upon the manner of their use. For a variety of reasons, some embedded within the scientific community, society’s view changed and the empowering trust was withdrawn. It has been replaced with suspicion and stultifying oversight.

When I look at the emphasis in currently funded work, I see narrow vistas. There is a generally myopic and tactical view of everything. Long-term prospects, career development and broad objectives are obscured by management discipline and formality. Any sense of investment in the long term is counter to the current climate. Nothing speaks more clearly to the overwhelming myopia than the attitude toward learning and personal development. Only upon realizing that learning and research are essentially the same thing does it become clear how deeply we are hurting the scientific community. We have embraced a culture that is largely unquestioning, with a well-scripted orthodoxy. Questions are seen as heresy against the established powers and punished. For most, learning is the acquisition of existing knowledge and skills. Research is learning new knowledge and skills. Generally speaking, those who have achieved mastery of their fields execute research. Since learning and deep career development are so hamstrung by our lack of trust, fewer people actually achieve the sort of mastery needed for research. The consequences for society are profound because we can expect progress to be thwarted.

Curiosity is more important than knowledge.

― Albert Einstein

One clear way to energize learning and research is to encourage questioning. After encouraging a questioning attitude and approach to conducting work, we need to teach people to ask good questions, and to go back and refine those questions as better understanding becomes available. We need to identify assumptions and subject them to unyielding scrutiny. A learning, research and development environment is equivalent to a questioning environment. By creating an unquestioning environment we short-circuit everything leading to progress, and ultimately cause much of the creative engine of humanity to stall. We would be well served by embracing the fundamental character of humanity as a creative, progressive and questioning species. These characteristics are among the best that people have to offer and allow each of us to contribute productively to the arc of history.

Curiosity is the engine of achievement.

― Ken Robinson

Brandt, Achi. “Multi-level adaptive solutions to boundary-value problems.” Mathematics of computation 31, no. 138 (1977): 333-390.

Briggs, William L., Van Emden Henson, and Steve F. McCormick. A multigrid tutorial. Society for Industrial and Applied Mathematics, 2000.

Kershaw, David S. “The incomplete Cholesky—conjugate gradient method for the iterative solution of systems of linear equations.” Journal of Computational Physics 26, no. 1 (1978): 43-65.

Melson, N. Duane, T. A. Manteuffel, and S. F. McCormick. “The Sixth Copper Mountain Conference on Multigrid Methods, Part 1.” (1993).

Puckett, Elbridge Gerry, Ann S. Almgren, John B. Bell, Daniel L. Marcus, and William J. Rider. “A high-order projection method for tracking fluid interfaces in variable density incompressible flows.” Journal of Computational Physics 130, no. 2 (1997): 269-282.

Boris, J. P., F. F. Grinstein, E. S. Oran, and R. L. Kolbe. “New insights into large eddy simulation.” Fluid dynamics research 10, no. 4-6 (1992): 199-228.

Porter, David H., Paul R. Woodward, and Annick Pouquet. “Inertial range structures in decaying compressible turbulent flows.” Physics of Fluids 10, no. 1 (1998): 237-245.

Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39, no. 9 (2002): 821-841.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit Large Eddy Simulation: Computing Turbulent Fluid Dynamics. Cambridge University Press, 2007.

Fefferman, Charles L. “Existence and smoothness of the Navier-Stokes equation.” The millennium prize problems (2006): 57-67.