The Piecewise Parabolic Method (PPM)

A method which can solve this problem well should be able to handle just about anything which can arise in one-dimensional pure hydrodynamic flow. PPM is such a scheme.

– P.R. Woodward

Colella, Phillip, and Paul R. Woodward. "The piecewise parabolic method (PPM) for gas-dynamical simulations." Journal of Computational Physics 54, no. 1 (1984): 174-201.

This is one of the most important methods in the early history of the revolutionary developments for solving hyperbolic PDEs in the 1980s. For a long time this was one of the best methods available to solve the Euler equations, and it still outperforms most of the methods in common use today. For astrophysics, it is the method of choice, and it also made major inroads into the weather and climate modeling communities. In spite of having over 4000 citations, I can't help but think that this paper wasn't as influential as it could have been. This is saying a lot, but I think it is completely true. This is partly due to its style and its relative difficulty as a read. In other words, the paper is not as pedagogically effective as it could have been. The most complex and difficult to understand version of the method is presented in the paper. The paper could have been far more effective by first providing a simplified version to introduce the reader and then delivering the more complex approach as a specific instance. Nonetheless, the paper was a massive milestone in the field.

It was certainly clear that high-order schemes were not necessarily bringing greater accuracy so physics would have to step in to shore up the failing numerics.

– Jay Boris

Part of the problem with the paper is the concise and compact introduction to the two methods used in the accompanying review article, PPMLR and PPMDE. The LR stands for Lagrange-Remap, where the solution is advanced on a Lagrangian grid and then remapped back to the original grid for an entirely Eulerian solution. Both the Lagrangian and Eulerian grids are unevenly spaced, and this results in far more elaborate formulas. As a result, it is hard to recognize the simpler core method lurking inside the pages of the paper. The DE stands for direct Eulerian, which can be very simple for the basic discretization. Unfortunately, the complication for the DE flavor of PPM comes with the Riemann solver, which is far more complex in the Eulerian frame. The Lagrangian-frame Riemann solver is very simple and easy to evaluate numerically. Not so for the Eulerian version, which has many special cases and requires some exceedingly complex evaluations of the analytical structure of the Riemann solution. Advances that occurred later greatly simplified and clarified this presentation. This is a specific difficulty of being an early adopter of methods: the clarity of presentation and understanding is dimmed by purely narrative effects. Many of these shortcomings have been addressed in the recent literature discussed below.

The development of the PPM gas dynamics scheme grew out of earlier work in the mid 1970s with Bram van Leer on the MUSCL scheme. The work of Godunov inspired essential aspects of MUSCL.

– Paul R. Woodward

The paper had a host of interesting and important sub-techniques for solving hyperbolic PDEs. Many of these "bells" and "whistles" are not part of the repertoire for most methods today. The field actually suffers to some extent by not adopting most of these strategies for attacking difficult problems. It is useful to list the special approaches along with a description and context that might make them easier to adopt more broadly (https://wjrider.wordpress.com/2016/06/14/an-essential-foundation-for-progress/, https://wjrider.wordpress.com/2017/06/30/tricks-of-the-trade-making-a-method-robust/, https://wjrider.wordpress.com/2016/08/08/the-benefits-of-using-primitive-variables/). The paper is written in such a way that these algorithms seem specifically tailored to PPM, but they are far broader in utility. Generalizing their use would serve the quality of numerical solutions immensely. To a large extent Phil Colella extended many of these techniques to the piecewise linear methods that form the standard approach in production codes today.

  • Shock flattening – Shocks are known to be horrifically nonlinear and difficult, both forgiving and brutal. This technique acknowledges the issue by blending a bit of a safe first-order method with the nonlinearly adaptive high-order method when a strong shock is encountered. The principle is to lean a bit more on the first-order method where the shock is strong, because oscillations can otherwise escape; for weak shocks this is unnecessary. Rather than penalize the solution everywhere, the method is made locally more dissipative where the danger is the greatest.
  • Contact steepening – Contact discontinuities will smear out without limit if dissipation is applied to them. In other words, errors made in their solution are with you forever. To keep this from happening, the amount of dissipation applied at these waves is minimized. This sort of technique must be applied with great caution because at a shock wave it is exceedingly dangerous. Additionally, the method used to limit the dissipation can produce a very good interface tracking method that is far simpler than elaborate methodologies based on interface geometry. It is a useful, pragmatic way to move interfaces with little dissipation and relative simplicity. This basic approach is the actual interface tracking method in many production codes today, although few use methods as elaborate or as high quality as that in the original PPM.
  • Extra dissipation – Monotonicity preservation and Riemann solvers are two elaborate ways of producing dissipation while achieving high quality. For very nonlinear problems this is not enough. The paper describes several ways of adding a little bit more; one of these is the shock flattening, and another is an artificial viscosity. Rather than use the classical von Neumann-Richtmyer approach (which really is more like the Riemann solver), they add a small amount of viscosity using a technique developed by Lapidus that is appropriate for conservation-form solvers. There are other techniques, such as grid jiggling, that only really work with PPMLR and may not have any broader utility. Nonetheless, there may be aspects of the thought process that remain useful.
  • High-order edges – One of PPM's greatest virtues is the use of formally higher order principles in the method. Classic PPM uses fourth-order approximations for its edge values (a sketch of this reconstruction follows this list). As a result, as the Courant number goes to zero, the method becomes formally fourth-order accurate. This is a really powerful aspect of the method. It is also one of the clear points where the method can be generalized: we can use whatever high-order edge value we like for PPM. One of the maxims to take from this approach is the power of including very high-order discretizations even within otherwise lower order approximation methods. The impact of the high order is profoundly positive.
  • Steepened edge values – For horribly nonlinear problems, the simple use of high-order differencing is not advisable. The nature of the high-order approximation can be decomposed into several pieces, and the approximation can be built more carefully and appropriately for complex problems. In this way, the high-order edge values are a bit hierarchical. This is partially elaboration, but it also reflects a commitment to quality that is eminently laudable.
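To make the high-order edge values concrete, here is a minimal sketch in Python of the classic fourth-order interface value on a uniform mesh, with no limiting applied; the function name is mine, and any sufficiently accurate edge reconstruction could be substituted for it.

```python
import numpy as np

def ppm_edge_values(a):
    """Classic fourth-order edge value on a uniform mesh (no limiting).

    Given cell averages a[i], returns the interface values
        a_{i+1/2} = 7/12 (a_i + a_{i+1}) - 1/12 (a_{i-1} + a_{i+2})
    for the interior interfaces i = 1 .. len(a)-3.
    """
    a = np.asarray(a, dtype=float)
    return (7.0 / 12.0) * (a[1:-2] + a[2:-1]) - (1.0 / 12.0) * (a[0:-3] + a[3:])
```

Swapping a higher (or lower) order edge formula in here is exactly the kind of generalization discussed above; the rest of the PPM machinery is untouched by that choice.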

Generalized Monotonicity – PPM uses a parabola, and as a result the well-known limiters for linear profiles don't provide monotone results. The limiter for PPM takes two steps instead of the single step needed for a linear profile. I don't like the original presentation in the paper and recast the limiter into an equivalent algorithm that uses two applications of the median function per edge. The first step makes sure the edge value being used is bounded by the cell averages adjacent to it. The second step asks whether the parabola is monotone in the cell and limits it to one that is monotone by construction should it not be (https://wjrider.wordpress.com/2016/06/07/the-marvelous-magical-median/, https://wjrider.wordpress.com/2016/06/22/a-path-to-better-limiters/, https://wjrider.wordpress.com/2015/08/06/a-simple-general-purpose-limiter/, https://wjrider.wordpress.com/2014/01/11/practical-nonlinear-stability-considerations/, https://wjrider.wordpress.com/2015/08/07/edge-or-face-values-are-the-path-to-method-variety-and-performance/).
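Here is a sketch of that two-median limiting as I would write it in Python; the function names are mine, and the particular bound in the second step (pulling each edge value inside 3*abar - 2*(opposite edge)) is one standard way of enforcing a monotone parabola, not a verbatim transcription of the 1984 paper.

```python
import numpy as np

def median(a, b, c):
    """Middle value of the three arguments (works elementwise on arrays)."""
    return np.maximum(np.minimum(a, b), np.minimum(np.maximum(a, b), c))

def limit_parabola(abar_im1, abar_i, abar_ip1, a_left, a_right):
    """Limit the edge values of the parabola in cell i in two median steps."""
    # Step 1: bound each edge value by the two adjacent cell averages.
    a_left = median(a_left, abar_im1, abar_i)
    a_right = median(a_right, abar_i, abar_ip1)
    # Step 2: force the parabola to be monotone inside the cell by pulling
    # each edge value inside the bound 3*abar_i - 2*(opposite edge value).
    a_left = median(abar_i, a_left, 3.0 * abar_i - 2.0 * a_right)
    a_right = median(abar_i, a_right, 3.0 * abar_i - 2.0 * a_left)
    return a_left, a_right
```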

Before launching into a systematic description of the PPM algorithm, it is worthwhile to first explain the goals and constraints that have influenced its design. These are:

  1. Directional operator splitting.
  2. Robustness for problems involving very strong shocks.
  3. Contact discontinuity steepening.
  4. Fundamental data in the form of cell averages only.
  5. Minimal dissipation.
  6. Numerical errors nevertheless dominated by dissipation, as opposed to dispersion.
  7. Preservation of signals, if possible, even if their shapes are modified, so long as they travel at roughly the right speeds.
  8. Minimal degradation of accuracy as the Courant number decreases toward 0.

– Paul R. Woodward

Over time PPM has mostly been interpreted monolithically rather than as a set of basic principles. PPM is really a wonderful foundation, with the paper providing only a single instantiation of a panoply of powerful methods. This aspect has come to the fore more recently, but would have served the community better far earlier. Some of these comments are the gift of 20/20 hindsight. A great deal of the pedagogical clarity with regard to Godunov-type methods is the result of its success, and only came into common use in the late 1980s, if not the 1990s. For example, the language to describe Riemann solvers with clarity and refinement hadn't been developed by 1984. Nevertheless, the monolithic implementation of PPM has been a workhorse method for computational science. Through Paul Woodward's efforts it is often the first real method to be applied to brand new supercomputers, and it generates the first scientific results of note on them.

The paper served as a companion to the adjacent paper that reviewed the performance of numerical methods for strong shocks. The review was as needed as it was controversial. The field of numerical methods for shock waves was set to explode into importance and creative energy. The authors, Phil Colella and Paul Woodward, would both play key roles in the coming revolution in methods. Woodward had already made a huge difference by spending time in Europe with Bram van Leer. Paul helped Bram with implementing advanced numerical methods using methodologies Paul learned at the Livermore Labs. Bram exposed Paul to his revolutionary ideas about numerical methods chronicled in Bram's famous series of papers (https://wjrider.wordpress.com/2014/01/11/designing-new-schemes-based-on-van-leers-ideas/, https://wjrider.wordpress.com/2014/01/06/van-leers-1977-paper-paper-iv-in-the-quest-for-the-ultimate/, https://wjrider.wordpress.com/2014/01/05/review-of-the-analysis-of-van-leers-six-schemes/). The ideas therein were immensely influential in changing how hyperbolic equations are solved.

One of the great successes in numerical methods for hyperbolic conservation laws has been the use of nonlinear hybridization techniques, known as limiters, to maintain positivity and monotonicity in the presence of discontinuities and underresolved gradients.

– Michael Sekora and Phil Colella

Bram's ideas created a genuine successor to Godunov's method. The methods he created were novel in producing a nonlinearly adaptive numerical method, one that adapts locally to the nature of the solution. This overcame the limitations of Godunov's theorem regarding the accuracy of numerical methods for hyperbolic equations. Bram's ideas were geometric in nature and reflected the approach of the physicist. Paul, being a physicist, gravitated to the same view and added a genuine dose of pragmatism. Bram also wasn't the first person to overcome Godunov's theorem; he may have actually been the third (or fourth). The first is most likely Jay Boris, who invented the flux-corrected transport (FCT) method in 1971. In addition, Kolgan in the Soviet Union and Ami Harten might lay claims to overcoming Godunov's barrier theorem. Some of these different methods played a role in the comparison in the review article by Woodward and Colella. In the light of history, many of the differences in the results were due more to the approaches to systems of equations and related difficulties than to the nonlinearly adaptive principles in the methods.

The strong, fluid-dynamic shock problem had become the number one computational roadblock by the fall of 1970 so I was urged to concentrate on the problem full time, finally developing the FCT convection algorithm in the winter.

– Jay Boris

In totality, the methods developed by these three or four men in the early 1970s set the stage for revolutionary gains in method performance. At the time, the differences in the methods were fiercely debated and hotly contested. The reviews of the papers were contentious and resulted in bitter feelings. Looking back with the virtues of time and perspective, several things stand out. All the methods represented a quantum leap in performance and behavior over the methods available before. The competition and the ideas so hotly contested probably helped to spur developments, but ultimately became counter-productive as the field matured. It seems clear that the time was ripe for the breakthrough. There was a combination of computers, mathematics and applications that seeded the developments. For the same basic idea to arise independently in a short period of time means the ideas were dangling just out of reach. The foundations for the breakthrough were common and well-known.

Paul Woodward is an astrophysicist, and PPM found its most common and greatest use in his field. For a long time the nature of PPM's description meant that the exact version of the method described in the canonical 1984 paper was the version used in other codes. Part of this results from PPM being a highly tuned, high-performance method with a delicate balance between high-resolution methodology and the various safety measures needed for difficult, highly nonlinear problems. In a manner of speaking, it is a recipe that produces really great results. Imagine PPM as something akin to the Toll House chocolate chip cookie recipe. The cookies you get by following the package exactly are really, really good. At the same time, you can modify the recipe to produce something even better while staying true to the basic framework. The basic cookies will get you far, but with some modification you might just win contests or simply impress your friends. PPM is just like that.

At this point I've said quite little about the method itself. The core of the method is a parabolic representation of the solution locally in each cell, and the method is entirely one-dimensional in nature. The parabola is determined by the integral average in a cell and the point values of the quantity at the edges of the cell. What is not so widely appreciated is the connection of PPM to the fifth scheme in Van Leer's 1977 paper. That method is interesting because it evolves both cell averages, like any finite volume code, and the point values at the cell boundaries. It is compact and supremely accurate compared with other third-order methods. PPM is a way of getting some of the nice properties of this method from a finite volume scheme: rather than evolve the point values on the edges, they are recovered from the finite volumes.
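For reference, the parabola in a cell can be written (up to notation) in the form used in the 1984 paper, with the cell average, the left and right edge values, and a local coordinate across the cell:

```latex
a(\xi) = a_L + \xi \left[ \Delta a + a_6 (1 - \xi) \right], \qquad
\Delta a = a_R - a_L, \qquad
a_6 = 6 \left( \bar{a}_i - \tfrac{1}{2}(a_L + a_R) \right), \qquad \xi \in [0,1].
```

By construction this profile integrates back to the cell average, so everything hinges on how the edge values are computed and limited.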

Rather than belabor the technical details of PPM, I'll point to the recent trends that have extended the method beyond its classical form. One of the original authors has used the parabola to represent valid extrema in the solution rather than damping them by forcing monotonicity. I did the same thing in my own work, largely paralleling Phil's work. In addition, changing the high-order edge reconstruction has been recognized and implemented to good effect by Phil, Paul, myself and others. The connection to Riemann solvers has also been generalized. All of this reflects the true power of the method when projected onto the vast body of work that arose after the publication of this paper. Even today PPM remains one of the very best methods in existence, especially with the modifications recently introduced.

I've come to know both Phil and Paul personally and professionally. In the numerical solution of hyperbolic PDEs both men have played a significant personal role and witnessed history being made; they helped make CFD what it is today. It's always an interesting experience to read someone's work and then come to know the person. A big part of a deeper appreciation is finding out the underlying truths of the paper. You start to realize that the written, published record is a poor reflection of the real story. Some of this comes through the hard work of reading and re-reading a paper, then deriving everything in it for yourself. A deeper appreciation came from expressing the same method in my own language and mathematics. Finally, taking each of these expressions into conversations with the authors clarified most of the remaining questions. The academic literature is a scrubbed and largely white-washed reflection of reality. What we are allowed to read and see is not the truth, but an agreed upon distortion.

When the numerics fails, substitute the physics.

– Steve Zalesak

the scientists who use such algorithms must have both input to and knowledge of their design. There may come a day when we no longer hold to this view, when the design of such algorithms can be left to expert numerical analysts alone, but that day has not yet arrived.

– Steve Zalesak

Woodward, Paul, and Phillip Colella. "The numerical simulation of two-dimensional fluid flow with strong shocks." Journal of Computational Physics 54, no. 1 (1984): 115-173.

Carpenter Jr, Richard L., Kelvin K. Droegemeier, Paul R. Woodward, and Carl E. Hane. “Application of the piecewise parabolic method (PPM) to meteorological modeling.” Monthly Weather Review 118, no. 3 (1990): 586-612.

Woodward, Paul R. “Piecewise-parabolic methods for astrophysical fluid dynamics.” In Astrophysical Radiation Hydrodynamics, pp. 245-326. Springer Netherlands, 1986.

Godunov, S. K. “A finite difference method for the computation of discontinuous solutions of the equations of fluid dynamics.” Sbornik: Mathematics 47, no. 8-9 (1959): 357-393.

Plewa, Tomasz, and Ewald Mueller. “The consistent multi-fluid advection method.” arXiv preprint astro-ph/9807241 (1998).

Van Leer, Bram. "Towards the ultimate conservative difference scheme. V. A second-order sequel to Godunov's method." Journal of Computational Physics 32, no. 1 (1979): 101-136.

Van Leer, Bram. "Towards the ultimate conservative difference scheme. IV. A new approach to numerical convection." Journal of Computational Physics 23, no. 3 (1977): 276-299.

Bell, John B., Phillip Colella, and John A. Trangenstein. “Higher order Godunov methods for general systems of hyperbolic conservation laws.” Journal of Computational Physics 82, no. 2 (1989): 362-397.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit Large Eddy Simulation: Computing Turbulent Fluid Dynamics. Cambridge University Press, 2007.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. "Accurate monotonicity- and extrema-preserving methods through adaptive nonlinear hybridizations." Journal of Computational Physics 225, no. 2 (2007): 1827-1848.

Rider, William J. “Reconsidering remap methods.” International Journal for Numerical Methods in Fluids 76, no. 9 (2014): 587-610.

Kolgan, V. P. “Application of the principle of minimum values of the derivative to the construction of finite-difference schemes for calculating discontinuous gasdynamics solutions.” TsAGI, Uchenye Zapiski 3, no. 6 (1972): 68-77.

Boris, J. P. "A Fluid Transport Algorithm That Works." Proceedings of the Seminar Course on Computing as a Language of Physics, 2-20 August 1971, International Centre for Theoretical Physics, Trieste, Italy.


We are all responsible for this mess; It is everyone’s fault

Never attribute to malevolence what is merely due to incompetence

― Arthur C. Clarke

A year ago, I sat in one of my manager's offices seething in anger. After Trump's election victory, my emotions shifted seamlessly from despair to anger. At that particular moment, it was anger that I felt. How could the United States possibly have elected this awful man President? Was the United States so completely broken that Donald Trump was a remotely plausible candidate, much less victor?

Is ours a government of the people, by the people, for the people, or a kakistocracy rather, for the benefit of knaves at the cost of fools?

― Thomas Love Peacock

Apparently, the answer is yes, the United States is that broken. I said something to the effect that we too are to blame for this horrible moment in history. I knew that both of us voted for Clinton, but felt that we played our own role in the election of our reigning moron-in-chief. Today, a year into this national nightmare, the nature of our actions leading to this unfolding national and global tragedy is taking shape. We have grown to accept outright incompetence in many things, and now we have a genuinely incompetent manager as President. Lots of incompetence is accepted daily without even blinking; I see it every single day. We have a system that increasingly renders the competent incompetent through brutish compliance with directives born of broad-based societal dysfunction.

In a hierarchy, every employee tends to rise to his level of incompetence.

― Laurence J. Peter

What does the "Peter Principle" say about the United States? The President is incompetent. Not just a little bit; he is utterly and completely unfit for the job he has. He is the living caricature of a leader, not actually one. His whole shtick is loudly and brashly sounding like what a large segment of the population thinks a leader should be. Under his leadership, our government has descended into the theatre of the absurd. He doesn't remotely understand our system of government, economics, foreign policy, science, or really anything other than marketing himself. He is an utterly self-absorbed anti-intellectual completely lacking empathy and the basic knowledge we should expect him to have. The societal destruction wrought by this buffoon-in-chief is profound. Our most important institutions are being savaged. Divisions in society are being magnified, and we stand on the brink of disaster. The worst thing is that this disaster is virtually everyone's fault; whether you stand on the right or the left, you are to blame. The United States was in a weakened state and the Trump virus was poised to infect us. Our immune system was seriously compromised and failed to reject this harmful organism.

I love the poorly educated.

– Donald Trump

Sorry losers and haters, but my I.Q. is one of the highest -and you all know it! Please don’t feel so stupid or insecure, it’s not your fault.

– Donald Trump

Trump is making everything worse. One of the keys to understanding the damage being done to the United States is seeing the poor condition of democracy prior to the election. A country doesn't just lurch toward such a catastrophic decision overnight; we were already damaged. In a sense, the body politic was already weakened and ripe for infection. We have gone through a period of more than 20 years of massive dysfunction led by the dismantling of government as a force for good in society. The Republican party is committed to small government, and part of their approach is to attack it. Government is viewed as an absolute evil. Part of the impact of this is the loss of competence in governing. Any governmental incompetence supports their argument about the need to diminish it. The result has been a steady march toward dysfunction and poor performance along with deep-seated mistrust, if not outright disdain.

All of this stems from deeper wounds left in our history. The deepest wound is the Civil War and the original national sin of slavery. The perpetuation of institutional racism is one of the clearest forces driving our politics. We failed to heal the wounds of this war, and continue to wage a war against blacks. First through the scourge of Jim Crow laws, and now with the war on drugs with its mass incarceration. Our massive prison population is driven by our absurd and ineffective efforts to combat drug abuse. We actively avoid taking actions that would be effective in battling drug addiction. While it is a complete failure as a public health effort, it is a massively effective tool of racial oppression. More recent wounds were left by the combination of the Vietnam war and the Civil rights movement in the 1960s, along with Watergate and Nixon's corruption. The Reagan revolution and the GOP attacks on the Clintons were their revenge for progress. In a very real way the country has been simmering in action and reaction for the last 50 years. Trump's election was the culmination of this legacy and our inability to keep the past as history.

Government exists to protect us from each other. Where government has gone beyond its limits is in deciding to protect us from ourselves.

― Ronald Reagan

Part of the difficulty of accepting what is going on comes in understanding how Trump's opposition led to his victory. The entire body politic is ailing. The Republican party is completely inept at leading, unable to govern. This shouldn't come as any surprise; the entire philosophy of the right is that government is bad. When your a priori assumption is that government is inherently bad, the nature of your governance is half-hearted. A natural outgrowth of this philosophy is rampant incompetence in governance. Couple this to a natural tendency toward greed as a core value, and you have the seeds of corruption. Corruption and incompetence are an apt description of the Republican party. The second part of this toxic stew is hate and fear. The party has spent decades stoking racial and religious hatred, and using fear of crime and terrorism to build their base. The result is a governing coalition that cannot govern at all. They are utterly incompetent, and no one more embodies their incompetence than the current President.

There is a cult of ignorance in the United States, and there has always been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.

― Isaac Asimov

The Democrats are no better, other than some basic human capacity for empathy. For example, the Clintons were quite competent, but competence is something we as a nation don't need any more, or even believe in. Americans chose the incompetent candidate for President over the competent one. At the same time, the Democrats feed into the greedy and corrupt nature of modern governance with a fervor only exceeded by the Republicans. They are what my dad called "limousine liberals" and really cater to the rich and powerful first and foremost while appealing to some elements of compassion (it is still better than the "limousine douchebags" on the right). As a result, the Democratic party ends up being only slightly less corrupt than the Republican while offering none of the cultural red meat that drives the conservative culture warriors to the polls.

In individuals, insanity is rare; but in groups, parties, nations and epochs, it is the rule.

― Friedrich Nietzsche

The thing that sets the Democratic party back is a complete lack of unity or discipline. They are a fractious union of special interests that can barely tolerate one another. They cannot unify to help each other, and each faction is a single-issue group that can't be bothered to form an effective coalition. The result is a party that is losing despite holding a majority of the votes. Many of the Democratic voters can't be bothered to even vote. This losing coalition has let GOP-driven fear and hate win, along with a systematic attack on our core values as a democratic republic (vast sums of money in politics, voter rights, voter suppression, and gerrymandering). They are countered by a Republican party that is unified and supportive of its factions. The different factions work together to form a winning coalition, in large part through accepting each other's extreme views as part of their rubric of beliefs.

While both parties cater to the greedy needs of the rich and powerful, the difference in their approaches is most clearly seen in social issues. The Republicans appeal to traditional values along with enough fear and hate to bring the voters out. They stand in the way of scary progress and the future as the guardians of the past. They are the force that defends American values, which means white people and Christian values. With the Republicans, you can be sure that the Nation will treat those we fear and hate with violence and righteous anger without regard to effectiveness. We will have a criminal justice system that exacts vengeance on the guilty, but does nothing to reform or treat criminals. The same forces provide just enough racially biased policy to make the racists in the Republican ranks happy.

The Democrats stand for a progressive and empathic future represented by many different groups, each with their own specific grievances. One of the biggest problems on both sides is intolerance. This might be expected on the right; after all, white supremacy is hardly a tolerant world view. The left helps the right out by being even less tolerant. The left's factions cannot tolerate any dissent, on any topic. We hear endless whining about micro-aggressions and cultural appropriation along with demands for political correctness. They are indeed "snowflakes" who are incapable of debate and standing up for their beliefs. When they don't like what someone has to say, they attack them and completely oppose the right to speak. The lack of tolerance on the left is one of the forces that powered Trump to the White House. It did this through a loss of any moral high ground, and the production of a divided and ineffective liberal movement. The left has science, progress, empathy and basic human decency on their side yet continues to lose. A big part of their losing strategy is the failure to support each other, and to engage in an active dialog on the issues they care so much about.

A dying culture invariably exhibits personal rudeness. Bad manners. Lack of consideration for others in minor matters. A loss of politeness, of gentle manners, is more significant than is a riot.

― Robert A. Heinlein

The biggest element in Trump’s ascension to the Presidency is our acceptance of incompetence in our leaders. We accept incompetence too easily; incompetence is promoted across society. We have lost the ability to value and reward expertise and competence. Part of this can be blamed on the current culture where marketing is more important than substance. Trump is pure marketing. His entire brand is himself, sold to people who have lost the ability to smell the con. A big part of the appeal of Trump was the incompetence of governing that permeates the Republican view.

This is where the incompetence and blame come to work. Success at work depends little on technical success because technical success can be faked. What has become essential at work is compliance with rules and control of our actions. Work is not managed; our compliance with rules is managed. Increasingly, the incompetence of the government is breeding incompetence at my work. The government agency that primarily runs my Lab is a complete disaster. We have no leadership, either in management or science. Both are wrought by the destructive tendency of the Republican party that makes governing impossible. They are a party of destruction, not creation. When Republicans are put in power they can't do anything; their entire being is devoted to taking things apart. The Democrats are no better because of their devotion to compliance, regulation and compulsive rule following without thought. This tendency is paired with the liberals' inability to tolerate any discussion or debate over a litany of politically correct talking points.

The management incompetence has been brewing for years. Our entire management construct is based on a lack of trust. The Lab itself is not to be trusted. The employees are not to be trusted. We are not trusted by the left or the right, albeit for different reasons. The net result of all of this lack of trust is competence being subservient to lack-of-trust-based compliance with oversight. We are made to comply and heel to the will of the government. This is the will of a government that is increasingly completely incompetent and unfit to run anything, much less a nuclear weapons enterprise! The management of the Lab is mostly there to launder money and drive the workforce into a state of compliance with all directives. The actual accomplishment of high quality technical work is the least important thing we do. Compliance is the main thing. We want to be managed to never ever fuck up, ever. If you are doing anything of real substance and performing at a high level, fuck-ups are inevitable. The real key to the operation is the ability of technical competence to be faked. Our false confidence in the competent execution of our work is a localized harbinger of "fake news".

Fox treats me well, it’s that Fox is the most accurate.

– Donald Trump

We have non-existent peer review and this leads to slack standards. Our agency tells us that we cannot fail (really, we effectively have to succeed 100% of the time). The way to not fail is lower our standards, which we have done in response. We aid our lower standards by castrating the peer review we ought to depend on. We now have Labs that cannot stand to have an honest critical peer review because of the consequences. In addition, we have adopted foolish financial incentives for executive management to compound problems. Since the executive bonuses are predicated on successful review, reviews have become laughable. Reviewers don’t dare raise difficult issues unless they never want to be invited back. We are now graded on a scale where everyone gets an “A” without regard to actual performance. Our excellence has become a local version of “fake news”.

At the very time that we need to raise our standards, we are allowing them to plummet lower and lower. Our reviews have become focused on spin and marketing of the work. Rather than show good work, present challenges, and receive honest feedback, we form a message focused on "everything is great, and there is nothing to worry about". Let's be clear, the task of caring for nuclear weapons without testing them is incredibly challenging. To do this task correctly we need to be focused on raising our level of excellence across the board in science and engineering. Our technical standards should be higher than ever because of the difficulty and importance of this enterprise. Requiring 100% success might seem to be a way to do this, but it isn't.

If you are succeeding 100% of the time, you are not applying yourself. When you are working in a place where you are mostly succeeding, but occasionally failing (and learning/growing), the outcomes are optimal. This is true in sports, business, science and engineering. Organizations are no different: to do the best work possible, you need to fail and be working on the edge of failure. Ideally, we should be doing our work in a mode where we succeed 70-80% of the time. Our incompetent governance and leadership do not understand how badly they are undermining the performance of this vital enterprise. So, the opposite has happened, and the people leading us in the government are too fucking stupid to realize it. Our national leadership has become more obsessed with appearances than substance. All they see is the 100% scores, and they conclude everything is awesome while our technical superiority is crumbling. Greatness in America today is defined by declaring greatness and refusing to accept evidence to the contrary.

Look at the F-35 as an example of our current ability to execute a big program. This aircraft is a completely corrupt, massive shit storm. It is a giant, hyper-expensive fuckup. Rather than a working aircraft, the F-35 was a delivery vehicle for pork barrel spending. God knows how much bullshitting went into the greenlighting of the program over the years. The bottom line is that the F-35 costs a huge amount of money while being a complete failure as a weapons system. My concern is that the F-35 is an excellent representative of our current technical capability. If it is, we are in deep trouble. We are expensive, corrupt and incompetent (sounds like a description of the President!). I'm very glad that we never ask our weapons labs to fly. Given our actual ability, we can guess the result.

This is the place where we get to the core of the ascent of Trump. When we lower our standards on leadership, we get someone like Trump. The lowering of standards has taken place across the breadth of society. This is not simply national leadership, but corporate and social leadership. Greedy, corrupt and incompetent leaders are increasingly tolerated at all levels of society. At the Labs where I work, the leadership has to say yes to the government, no matter how moronic the direction is. If you don't say yes, you are removed and punished. We now have leadership that is incapable of engaging in active discussion about how to succeed in our enterprise. The result is labs that simply take the money and execute whatever work they are given without regard for the wisdom of the direction. We now have the blind leading the spineless, and the blind are walking us right over the cliff. Our dysfunctional political system has finally shit the bed and put a moron in the White House. Everyone knows it, and yet a large portion of the population is completely fooled (or simply too foolish or naïve to understand how bad the situation is).

We are a paper tiger; a real opponent may simply destroy us. Our national superiority, military and technical, may already be gone. We are vastly confident of our outright superiority, but this superiority requires our nation to continually bring its best to the table. We have almost systematically undermined our ability to apply our best to anything. We've already been attacked and defeated in the cyber-realm by Russia. Our society and democracy were assaulted by the Russians, and we were routed. Our incompetent governance has done virtually nothing. The seeds of our defeat have been sown for years all across our society. We are too incompetent to even realize how vulnerable we are.

I will admit that this whole line of thought might be wrong. The Labs where I work might be local hotbeds of incompetent management, and what we see locally might not be indicative of broader national trends. This seems very unlikely. What is more terrifying is the prospect that the places where I work are comparatively well managed. If this is true, then it is completely plausible for us to have an incompetent President. So, the reality we have is stark incompetence across society that has set the stage for national tragedy. Our institutions and broad societal norms are under siege. Every single day of the Trump administration lessens the United States' prestige. The world had counted on the United States for decades, but cannot any longer. We have made a decision as a nation that disqualifies us from a position of leadership. The Republican party has the greatest responsibility for this, but the Democrats are not blameless. Our institutional leadership shares the blame too. Places like the Labs where I work are being destroyed one incompetent step at a time. All of us need to fix this.

We have a walking, talking, tweeting example of our incompetence leading us, and it is everyone’s fault. We all let this happen. We are all responsible. We own this.

Ask not what your country can do for you; ask what you can do for your country.

― John F. Kennedy


How to properly use direct numerical simulations (DNS)

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

― Abraham H. Maslow

Nothing stokes the imagination for the power of computing to shape scientific discovery like direct numerical simulation (DNS). Imagine using the magic of the computer to unveil the secrets of the universe. We simply solve the mathematical equations that describe nature at immense precision, and magically truth comes out the other end. DNS also stokes the demand for computing power: the bigger the computer, the better the science and discovery. As an added bonus, the visualizations of the results are stunning, almost Hollywood-quality in their special-effects appeal. It provides the perfect sales pitch for the acquisition of the new supercomputer and everything that goes with it. With a faster computer, we can just turn it loose and let the understanding flow like water bursting through a dam. With the power of DNS, the secrets of the universe will simply submit to our mastery!

If only science were that easy. It is not, and this sort of thing is a marketing illusion for the naïve and foolish.

The saddest thing about DNS is the tendency for scientists' brains to almost audibly click into the off position when it is invoked. All one has to say is that their calculation is a DNS, and almost any question or doubt leaves the room. No need to look deeper, or think about the results; we are solving the fundamental laws of physics with stunning accuracy! It must be right! They will assert, "this is a first principles calculation," and predictive at that. Simply marvel at the truths waiting to be unveiled in the sea of bits. Add a bit of machine learning or artificial intelligence to navigate the massive dataset produced by DNS (the datasets are so fucking massive, they must have something good! Right?), and you have the recipe for the perfect bullshit sandwich. How dare some infidel cast doubt, or uncertainty on the results! Current DNS practice is a religion within the scientific community, and brings an intellectual rot into the core of computational science. DNS reflects some of the worst wishful thinking in the field, where the desire for truth and understanding overwhelms good sense. A more damning assessment would be a tendency to submit to intellectual laziness when pressed by expediency, or difficulty in progress.

Let's unpack this issue a bit and get to the core of the problems. First, I will submit that DNS is an unambiguously valuable scientific tool. A large body of work valuable to a broad swath of science can benefit from DNS. We can study our understanding of the universe in myriad ways at phenomenal detail. On the other hand, DNS is never a substitute for observations. We do not know the fundamental laws of the universe with such certainty that the solutions provide an absolute truth. The laws we know are models, plain and simple. They will always be models. As models, they are approximate and incomplete by their basic nature. This is how science works: we have a theory that explains the universe, and we test that theory (i.e., model) against what we observe. If the model produces the observations with high precision, the model is confirmed. This model confirmation is always tentative and subject to being tested with new or more accurate observations. Solving a model does not replace observations, ever, and some uses of DNS are masking laziness or limitations in observational (experimental) science.

To acquire knowledge, one must study;

but to acquire wisdom, one must observe.

― Marilyn Vos Savant

One place where the issue of DNS comes to a head is validation. In validation, a code (i.e., model) is compared with experimental data for the purpose of assessing the model's ability to describe nature. In DNS, we assume that nature is so well understood that our model can describe it in detail; the leap too far is saying that the model can replace observing nature. This presumes that the model is completely and totally validated. I find this to be an utterly ludicrous prospect. All models are tentative descriptions of reality, and intrinsically limited in some regard. The George Box maxim immediately comes to mind: "all models are wrong". This is axiomatically true, and being wrong, models cannot be used to validate. With DNS, this is suggested as a course of action violating the core principles of the scientific method for the sake of convenience. We should not allow this practice for the sake of scientific progress. It is anathema to the scientific method.

This does not say that DNS is not useful. DNS can produce scientific results that may be used in a variety of ways where experimental or observational results are not available. This is a way of overcoming a limitation of what we can tease out of nature. Realizing this limitation should always come with the proviso that this is expedient, and used in the absence of observational data. Observational evidence should always be sought, and the models should always be subjected to tests of validity. The results come from assuming the model is very good and provides value, but they cannot be used to validate the model. DNS is always second best to observation. Turbulence is a core example of this principle: we do not understand turbulence; it is an unsolved problem. DNS as a model has not yielded understanding sufficient to unveil the secrets of the universe. They are still shrouded. Part of the issue is the limitations of the model itself. In turbulence, DNS almost always utilizes an unphysical model of fluid dynamics, one lacking thermodynamics and having infinitely fast acoustic waves. Being unphysical in its fundamental character, how can we possibly consider it a replacement for reality? Yet in a violation of common sense driven by frustration at the lack of progress, we do this all the time.

One of the worst aspects of the entire DNS enterprise is the tendency to do no assessment of uncertainty with its results. Quite often the results of DNS are delivered without any uncertainty from the approximation or the model. Most often no uncertainty at all is included, estimated or even alluded to. The results of DNS are still numerical approximations with approximation error. The models, while detailed and accurate, are always approximations and idealizations of reality. This aspect of the modeling must be included for the results to be used in high consequence work. If one is going to use DNS as a stand-in for experiment, this is the very least that must be done. The uncertainty assessment should also include the warning that the validation is artificial and not based on reality. If there isn't an actual observation available to augment the DNS in the validation, the reader should be suspicious, and the smell of bullshit should alert one to deception.

Some of our models are extremely reliable and have withstood immense scrutiny. These models are typically the subject of DNS. A couple of equations are worth discussing in depth: Schrödinger's equation for quantum physics and molecular & atomic dynamics, and the Navier-Stokes equations for turbulence. These models are frequent topics of DNS investigations, and none of them are reality. The equations are mathematics, a logical and constructive language of science, but not actual reality. These equations are unequal in terms of their closeness to fundamentality, but our judgment should be the same. The closeness to "first principles" should be reflected in the assessment of uncertainty, which also reflects the problem being solved by the DNS. None of these equations will yield truths so fundamental as to not be questioned or to be free of uncertainty.

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

― Arthur C. Clarke

Another massive problem with DNS is the general lack of uncertainty assessment. It is extremely uncommon to see any sort of uncertainty assessment accompanying DNS. If we accept the faulty premise that DNS can replace experimental data, the uncertainty associated with these "measurements" must be included. This almost universally shitty practice further undermines the case for using DNS as a replacement for experiment. Of course, we are accepting far too many experimental results without their own error bars these days. Even if we make the false premise that the model being solved by DNS is true to the actual fundamental laws, the solution is still approximate. The approximate solution is never free of numerical error. In DNS, the estimate of the magnitude of approximation error is almost universally lacking from results.

Let's be clear, even when used properly, DNS results must come with an uncertainty assessment. Even when DNS is used simply as a high-fidelity solution of a model, the uncertainty of the solution is needed to assess the utility of the results. This utility is ultimately determined by some comparison with observations of phenomena seen in reality. We may use DNS to measure the power of a simpler model to provide consistency with the more fundamental model included in DNS. This sort of use is widespread in turbulence, material science, and constitutive modeling, but the credibility of the use must always be determined with experimental data. The observational data always has primacy, and DNS should always be subservient to reality's results.

Unfortunately, we also need to address an even more deplorable DNS practice. Sometimes people simply declare that their calculation is a DNS without any evidence to support this assertion. Usually this means the calculation is really, really, really, super fucking huge and produces some spectacular graphics with movies and color (rendered in super groovy ways). Sometimes the models being solved are themselves extremely crude or approximate. For example, the Euler equations are solved with or without turbulence models instead of Navier-Stokes in cases where turbulence is certainly present. This practice is so abominable as to be almost a cartoon of credibility. This is the use of proof by overwhelming force. Claims of DNS should always be taken with a grain of salt. When the claims take the form of marketing, they should be met with extreme doubt since it is a form of bullshitting that tarnishes those working to practice scientific integrity.

The world is full of magic things, patiently waiting for our senses to grow sharper.

― W.B. Yeats

Part of doing science correctly is honesty about challenges. Progress can be made with careful consideration of the limitations of our current knowledge. Some of these limits are utterly intrinsic. We can observe reality, but various challenges limit the fidelity and certainty of what we can sense. We can model reality, but these models are always approximate. The models encode simplifications and assumptions. Progress is made by putting these two forms of understanding into tension. Do our models predict or reproduce the observations to within their certainty? If so, we need to work on improving the observations until they challenge the models. If not, the models need to be improved so that the observations are reproduced. The current use of DNS short-circuits this tension and acts to undermine progress. It wrongly puts modeling in the place of reality, which only works to derail necessary work on improving models, or work to improve observations. As such, poor DNS practices are actually stalling scientific progress.

I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers. I’ll believe anything, no matter how wild and ridiculous, if there is evidence for it. The wilder and more ridiculous something is, however, the firmer and more solid the evidence will have to be.

― Isaac Asimov

Verification and Numerical Analysis are Inseparable

Tiny details imperceptible to us decide everything!

― W.G. Sebald

The standards of practice in verification of computer codes and applied calculations are generally appalling. Most of the time when I encounter work, I'm just happy to see anything at all done to verify a code. Put differently, most of the published literature accepts slipshod practice in terms of verification. In some areas like shock physics, the viewgraph norm still reigns supreme. It actually rules supreme in a far broader swath of science, but you talk about what you know. The missing element in most of the literature is quantitative analysis of results. Even when the work is better and includes detailed quantitative analysis, it usually lacks a deep connection with numerical analysis results. The typical best practice in verification only includes the comparison of the observed rate of convergence with the theoretical rate of convergence. Worse yet, that result is asymptotic, and codes are rarely practically used with asymptotic meshes. Thus, standard practice is largely superficial, and only scratches the surface of the connections with numerical analysis.

The Devil is in the details, but so is salvation.

― Hyman G. Rickover

The generic problem is that verification rarely occurs at all, much less is practiced well, let alone with genuine excellence. Thus, the first step to take is regular, pedestrian application of standard analysis. What masquerades as excellence today is quite threadbare. We verify order of convergence in code verification under circumstances that usually don't meet the conditions where it formally applies. The theoretical order of convergence only applies in the limit where the mesh is asymptotically fine. Today, the finite size of the discretization is not taken directly into account. This can be done, and I'll show you how below. Beyond this rather great leap of faith, verification does not usually focus on the magnitude of error, numerical stability, or the nature of the problem being solved. All of these are available results through competent numerical analysis, in many cases via utterly classical techniques.

A maxim of verification that is important to emphasize is that the results are a combination of theoretical expectations, the finite resolution and the nature of the problem being solved. All of these factors should be considered in interpreting results.

Before I highlight all of the ways we might make verification a deeper and more valuable investigation, a few other points are worth making about the standards of practice. The first thing to note is the texture within verification, and its two flavors. Code verification is used to investigate the correctness of a code’s implementation. This is accomplished by solving problems with an analytical (exact or nearly-exact) solution. The key is to connect the properties of the method defined by analysis with the observed behavior in the code. The “gold standard” is verifying that the order of convergence observed matches that expected from analysis.
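As a minimal sketch of that "gold standard" check (the function name and the error data below are hypothetical placeholders, not from any particular code):

```python
import numpy as np

def observed_order(h, err):
    """Observed convergence rate between successive mesh resolutions."""
    h, err = np.asarray(h, dtype=float), np.asarray(err, dtype=float)
    return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])

# Hypothetical mesh spacings and error norms from an exact-solution test problem.
h = [0.1, 0.05, 0.025, 0.0125]
err = [2.3e-3, 6.1e-4, 1.6e-4, 4.0e-5]
print(observed_order(h, err))  # should trend toward the theoretical order (~2 here)
```

Comparing these observed rates against the theoretical order, with some judgment about how far from asymptotic the meshes are, is the heart of code verification.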

Truth is only relative to those that ignore hard evidence.

― A.E. Samaan

The second flavor of verification is solution (calculation) verification. In solution verification, the objective is to estimate the error in the numerical solution of an applied problem. The error estimate is for the numerical component in the overall error separated from modeling errors. It is an important component in the overall uncertainty estimate for a calculation. The numerical uncertainty is usually derived from the numerical error estimate. The rate or order of convergence is usually available as an auxiliary output of the process. Properly practiced the rate of convergence provides context for the overall exercise.
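A minimal sketch of one standard way to do this with three systematically refined grids, in the spirit of Richardson extrapolation (the numbers are hypothetical placeholders, and real practice adds safety factors along the lines of Roache's grid convergence index):

```python
import numpy as np

def solution_verification(f_fine, f_med, f_coarse, r):
    """Observed order, fine-grid error estimate, and extrapolated value.

    f_fine, f_med, f_coarse are a scalar result computed on three grids
    with a constant refinement ratio r between them.
    """
    p = np.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / np.log(r)  # observed order
    err_fine = abs(f_med - f_fine) / (r**p - 1.0)                        # fine-grid error estimate
    f_extrap = f_fine + (f_fine - f_med) / (r**p - 1.0)                  # extrapolated value
    return p, err_fine, f_extrap

p, err, f_star = solution_verification(f_fine=0.9712, f_med=0.9682, f_coarse=0.9570, r=2.0)
print(f"observed order ~ {p:.2f}, fine-grid error estimate ~ {err:.1e}, extrapolated ~ {f_star:.4f}")
```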

One of the things to understand is that code verification also contains a complete accounting of the numerical error. This error can be used to compare methods with “identical” orders of accuracy on their levels of numerical error, which can be useful in making decisions about code options. By the same token, solution verification provides information about the observed order of accuracy. Because the applied problems are not analytical or smooth enough, they generally can’t be expected to produce the theoretical order of convergence. The rate of convergence is then an auxiliary result of the solution verification exercise, just as the error is an auxiliary result for code verification. It contains useful information about the solution, but it is subservient to the error estimate. Conversely, the error provided in code verification is subservient to the order of accuracy. Nonetheless, the current practice simply scratches the surface of what could be done via verification and its unambiguous ties to numerical analysis.

Little details have special talents in creating big problems!

― Mehmet Murat ildan

If one looks at the fundamental (or equivalence) theorem of numerical analysis, the two aspects of the theorem are stability and consistency implying convergence (https://wjrider.wordpress.com/2016/05/20/the-lax-equivalence-theorem-its-importance-and-limitations/). Verification usually uses a combination of error estimation and convergence testing to imply consistency. Stability is merely assumed. This all highlights the relatively superficial nature of the current practice. The result being tested is completely asymptotic, and the stability is merely assumed and never strictly tested. Some methods are unconditionally stable, which might also be tested. In all cases the lack of stress testing the results of numerical analysis is short-sighted.

One of the most important results in numerical analysis is the stability of the approximation. Failures of stability are one of the most horrific things to encounter in practice. Stability results should be easy and revealing to explore via verification. Doing so also offers the ability to explore what failure of a method looks like, and the sharpness of the estimates of stability. Tests could be devised to examine the stability of a method and confirm this rather fundamental aspect of a numerical method. In addition, the character of instability will be made clear should it arise. Generally, one would expect calculations to diverge under mesh refinement, with the instability manifesting itself earlier and earlier as the mesh is refined. I might suggest that stability could be examined via mesh refinement, observing the conditions where the convergence character changes.
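As the simplest possible analogue of such a stress test, consider forward Euler applied to the model ODE u' = \lambda u; the decay rate, final time, and step sizes below are my own illustrative choices, picked to straddle the stability limit h \le 2/|\lambda|.

```python
import numpy as np

lam, T = -3.0, 2.0   # illustrative decay rate and final time for u' = lam*u, u(0) = 1

def forward_euler_error(h):
    """Error at time T of forward Euler with step h applied to u' = lam*u."""
    u = 1.0
    for _ in range(int(round(T / h))):
        u += h * lam * u
    return abs(u - np.exp(lam * T))

# Forward Euler is stable for |1 + h*lam| <= 1, i.e., h <= 2/|lam| = 2/3 here.
for h in [1.0, 0.5, 0.05, 0.025, 0.0125]:
    print(f"h = {h:6.4f}  error at T = {forward_euler_error(h):.3e}")
# The h = 1 step sits above the stability limit and its error is dominated by the
# growing oscillation; the stable steps converge with a rate approaching one.
```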

One of the most unpleasant issues with verification is the deviation of the observed rate of convergence from what is expected theoretically. No one seems to have a good answer to how close is close enough. Sometimes we can observe that we systematically get closer and closer as the mesh is refined. This is quite typical, but systematic deviations are common. As I will show, the deviations are expected and may be predicted by detailed numerical analysis. The key is to realize that the effects of finite resolution can be included in the analysis. As such, for simple problems we can predict the observed rate of convergence and its deviations from the asymptotic rate. Beyond the ability to predict the rate of convergence, this analysis provides a systematic explanation for an oft-seen result.

[Figure: The expected convergence rate for a single time step using forward Euler for a linear ODE]

This can be done very easily using classical methods for numerical analysis (see the previous blog post https://wjrider.wordpress.com/2014/07/15/conducting-von-neumann-stability-analysis/). We can start with the knowledge that detailed numerical analysis uses an analytical solution to the equations as its basis. We can then analyze the deviations from the analytical solution and their precise character, including the effects of finite resolution. As noted in that previous post, the order of accuracy is examined via a series expansion in the limit where the step size or mesh is vanishingly small. We also know that this limit is only approached and never actually reached in any practical calculation.

For the simple problems amenable to these classical analyses, we can derive the exact rate of convergence for a given step size (this result is limited to the ideal problem central to the analysis). The key part of this approach is using the exact solution to the model equation and the numerical symbol to provide an error estimate. Consider the forward Euler method for ODEs, u^{n+1} = u^n + h \lambda u^n; the error is E(h) = |1 + h \lambda - \exp(h \lambda)|. We can now estimate the error for any step size and analytically estimate the convergence rate we would observe in practice. If we employ the relatively standard practice of mesh halving for verification, we get the estimate of the rate of convergence, n(h) = \log\left[E(h)/E(h/2)\right]/\log(2). A key point to remember is that the solution with the halved time step takes twice the number of steps. Using this methodology, we can easily see the impact of finite resolution. For the forward Euler method, we can see that steps larger than zero raise the rate of convergence above the theoretical value of one. This is exactly what we see in practice.
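The calculation is simple enough to script. In the sketch below, the error over N steps and the doubling of the step count under halving follow the remark above; the decay rate and the particular step sizes are my own illustrative choices, and the multi-step form is my reading of how the ten- and hundred-step figures were generated.

```python
import numpy as np

lam = -1.0   # illustrative decay rate for the model ODE u' = lam*u

def error(h, nsteps=1):
    """Exact error of nsteps forward Euler steps against the analytical solution."""
    return abs((1.0 + h * lam)**nsteps - np.exp(nsteps * h * lam))

def observed_rate(h, nsteps=1):
    """n(h) = log[E(h)/E(h/2)]/log(2), with the halved step taking twice the steps."""
    return np.log(error(h, nsteps) / error(h / 2.0, 2 * nsteps)) / np.log(2.0)

for h in [2.1, 0.2, 0.1, 0.05, 0.01]:
    print(f"h = {h:5.2f}  rate (1 step) = {observed_rate(h, 1):6.3f}"
          f"   rate (10 steps) = {observed_rate(h, 10):6.3f}")
```

With these choices the observed rates sit a bit above one at finite step sizes and collapse toward one as the step shrinks, while the coarse step beyond the stability limit (h > 2 here) inflates the apparent rate, dramatically so over many steps, which anticipates the point below.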

[Figure: The expected convergence rate for ten time steps using forward Euler for a linear ODE]

When one starts to examine what we expect through analysis, a number of interesting things can be observed. If the coarsest step size is slightly unstable, the method will exhibit very large rates of convergence. Remarkably, we see this all the time. Sometimes the results of verification produce seemingly absurdly high rates of convergence. Rather than being indicative of everything being great, it is an indication that the calculation is highly suspect. The naïve practitioner will often celebrate the absurd result as a triumph when it is actually a symptom of problems requiring greater attention. With the addition of a refined analysis, this sort of result can be seen as pathological.

[Figure: The expected convergence rate for one hundred time steps using forward Euler for a linear ODE]

Immediately we should recognize that we have obtained a significant result from the analysis of perhaps the simplest numerical method in existence. Think of the untapped capacity for explaining the behavior observed in computational practice. Moreover, this significant result explains a serious and pernicious problem in verification, the misreading of results. Even where the verification practice is quite good, the issue of deviation of convergence rates from the theoretical rates is pervasive. We can now see that this is completely expected behavior that falls directly in line with the analysis. This ought to “bait the hook” for conducting more analysis and connecting it to verification results.

There is a lot more that could be done here, I’ve merely scratched the surface.

The truth of the story lies in the details.

― Paul Auster

 

Our Silence is Their Real Power

When truth is replaced by silence, the silence is a lie.

― Yevgeny Yevtushenko

We all live in a golden age for the abuse of power. Examples abound and touch everyone’s life. We see profound abuses in politics daily. Recently the decades of abuse committed by Harvey Weinstein have come to light. Apparently, it existed barely hidden in the shadows for years and years as one of Hollywood’s worst-kept secrets. Weinstein preyed on women with virtual impunity, his power and prestige acting to keep his actions in the dark. The promise and threat of his power in that industry gave him virtual license to act. The silence of the myriad of insiders who knew about the pattern of abuse allowed the crimes to continue unabated. Only after the abuse came to light broadly, outside the movie industry, did it become unacceptable. When the abuse stayed in the shadows, and its knowledge was limited to industry insiders, it continued.

Cowardice rightly understood begins with selfishness and ends with shame.

― José Rizal

The power of the online world and our free press should be a boon to exposing and bringing these monsters down. People can be alerted to unacceptable behavior and demand action to remove these abominations from power. It is not working out this way. Instead the power of information has been turned on its head, and the modern world of information has empowered them to new heights. We only need to look at the occupant of the Oval Office for proof. People in power have access to resources and talent not available to others. This power can be turned to marketing and shaping the message to allow them to retain power. Power has its privileges, and among these are access to wealth, information and (sexual) favors most of us can’t even dare to dream of. The abusers turn all of this into a machine that retains and even grows their power. The modern world of interconnection is the latest tool in their arsenal of power. The powerful have largely controlled the media forever, but this control has taken on a new character with Facebook and Twitter.

People follow leaders by choice. Without trust, at best you get compliance.

― Jesse Lyn Stoner

I see it at work in small ways. Sometimes it’s the desire of those in power to keep their poor leadership or stewardship from being open to criticism, i.e., an honest peer review. More recently we have seen ourselves subjected to training on information security that was merely an excuse to be threatened by the United States Attorney General, Jeff Sessions. It was a gross waste of resources to provide a platform for abuse of power (at the cost of many millions of dollars, to threaten people and help crush morale). Ostensibly the training was to highlight the importance of protecting sensitive and classified information. This is a topic that we are already trained heavily on, and we are acutely aware of it in our daily work. Given our ongoing knowledge of the topic, the whole training was provided to silence the critics of the administration, who will now misuse information control to hide their crimes.

“Compliance” is just a subset of “governance” and not the other way around.

― Pearl Zhu

The United States has gone through a virtual orgy of classification since 9/11. This is an extremely serious issue and its tendrils permeate this entire topic. I’ve written in the past about our problems in this regard. Our government, related organizations, and contractors are running wild classifying everything in sight. Increasingly the classification is used to hide and bury information. Quite often things are labeled with the “official use only” category because it is basically unregulated. There is no standard and the tendency is to simply hide as much as possible. It is primed for wide scale abuse. I’m fairly certain that the abuse is already happening on an enormous scale. It is quite often used to keep embarrassing, criminal or simply slipshod work out of sight and away from scrutiny. It is exactly the sort of thing the current (and past) administrations would use to hide stuff from view. Of course, higher levels of classification have been used similarly. The prime example is the cover-up of the slaughter of innocents by the military in Iraq central to the whole Chelsea Manning case. The material wasn’t controlled because it was sensitive; it was criminal and embarrassing, yet classification was used to attempt to bury the evidence.

Those who lack the courage will always find a philosophy to justify it.

― Albert Camus

Our current President is a serial abuser of power, whether of the legal system, women, business associates, or the American people; his entire life is constructed around abuse of power and the privileges of wealth. Many people are his enablers, and nothing enables it more than silence. Like Weinstein, his acts of sexual misconduct are many and well known, yet routinely go unpunished. Others either remain silent or ignore and excuse the abuse as being completely normal.

I better use some Tic Tacs just in case I start kissing her. You know, I’m automatically attracted to beautiful — I just start kissing them. It’s like a magnet. Just kiss. I don’t even wait. And when you’re a star, they let you do it. You can do anything…. Grab ’em by the pussy. You can do anything.

– President Donald Trump

We are all enablers. We knew what kind of monster we were electing to the highest office, and people stood by silent. Worse yet, they gave him their tacit approval. Weinstein is no different. Sessions is no different either; no sexual misconduct is known there, but ignorance and racism are a clear part of his repertoire. In addition, he has direct conflicts of interest, with vast prison stocks in his portfolio and the power to improve his stock holdings through his office and his direct actions. Since his boss is a walking monument to conflict of interest, nothing will happen. He will abuse the power of his office with impunity. At this point the entire Nation has forgiven their crimes through willful ignorance. The mandated threat masquerading as training is simply the latest, and by no means the biggest, violation of standards of conduct. The threat is designed to silence any documentation of violations and assist in their continuation through our continued silence and tacit acceptance of their power.

The standards of conduct under the Trump Administration are headed straight to hell. The only thing they are opposed to is threats to their power and their ability to abuse it. They are an entire collection of champion power abusers. Like all abusers, they maintain their power through the cowering masses below them. When we are silent their power is maintained. They are moving to squash all resistance. My training was pointed at the inside of the institutions and instruments of government where they can use “legal” threats to shut us up. They have waged an all-out assault against the news media. Anything they don’t like is labeled as “fake news” and attacked. The legitimacy of facts has been destroyed, providing the foundation for their power. We are now being threatened to cut off the supply of facts to base resistance upon. This training was the act of people wanting to rule like dictators in an authoritarian manner.

I am personally quite concerned about how easily we accept this authoritarian approach to leadership. We seem all too willing and able to simply salute and accept the commands of corrupt overlords. We are threatened with extreme consequences, and those in power can do as they please with virtual impunity. For those abusing power, the set-up is perfect. They are the wolves and we, the sheep, are primed for slaughter. Recent years have witnessed an explosion in the amount of information deemed classified or sensitive. Much of this information is controlled because it is embarrassing or uncomfortable for those in power. Increasingly, information is simply hidden based on non-existent standards. This is a situation that is primed for abuse of power. People in positions of power can hide anything they don’t like. For example, something bad or embarrassing can be deemed to be proprietary or business-sensitive, and buried from view. Here the threats come in handy to make sure that everyone keeps their mouths shut. Various abuses of power can now run free within the system without risk of exposure. Add a weakened free press and you’ve created the perfect storm.

The mantle of National security and patriotism works to complement the systematic abuse of power. One of the primary forms of abuse is financial gain. The decision making behind the flow of money is typically hidden. No one benefiting from the flow of money is too keen on the details of who got the money, and why, getting out. All one has to do is look at the AG’s finances. He benefits greatly through other people’s misery. More and longer prison sentences raise the stock prices he holds and enrich him. No one even asks the question, and the abuse of power goes unchecked. Worse yet, it becomes the “way things are done”. This takes us full circle to the whole Harvey Weinstein scandal. It is a textbook example of unchecked power, and the “way we do things”.

The real division is not between conservatives and revolutionaries but between authoritarians and libertarians.

― George Orwell

The only way to rid a system of these abuses is exposure to clear view. When people see the way that those in power abuse the system, the abusers need to change their ways or lose their power. This is the exact reason why the abusers are so keen to squash any disclosures (i.e., leaks). They like being able to run their shadow empires with impunity. Without the risk of exposure, the abusers can simply take their abuses to new levels. For systems that need genuine information control and security, the threat of abuse of power is extreme. If the system is not run with the highest level of ethics and integrity, the abuse of power and the genuine need for secrecy are on a collision course. In taking my training, I found the threats from someone completely lacking any visible ethics or integrity chilling. Simply put, the Attorney General is a proven racist, a white supremacist apologist, and a serial abuser of power. He has no ethical basis to issue his threats, only unbridled power. He has been given free rein purely on ideological grounds.


Democracy can exist only in the countries where people are brave! Coward nations always live under the authoritarian regimes!

― Mehmet Murat ildan

This value system is in complete and total collision with the values that the United States is supposedly based on. This value system is unfortunately consistent with the actual values in the United States today. We are struggling between who we should be and who we are. At work this runs headlong into the fundamental character of the institutions and the people employed. We have a generally docile workforce easily cowed by power. We have installed authoritarian monsters in positions of power who are more than willing to abuse this power. So the sheep bow their heads to the wolves and simply welcome the slaughter. Our institutions are similarly cowed by money. The federal government pays the bills, and with that buys what is moral and ethical. A good and logical question is where this ends. What is the point where we rise up and say “NO!”? What is the point where the actions by those in power are too much to tolerate? We aren’t there yet, and I shudder thinking of where that line is. Worse yet, I’m not entirely sure there is a line that our employees, management or the institutions themselves would enforce. This is truly a terrifying prospect.

The strategic adversary is fascism… the fascism in us all, in our heads and in our everyday behavior, the fascism that causes us to love power, to desire the very thing that dominates and exploits us.

― Michel Foucault

No matter the reasons for the training I took, the tone and approach set by those governing my workplace should be chilling to contemplate. We are creating an environment where unethical conduct and rampant abuse of power go unchecked. Abusers can thrive and run free. We can be sure that their power will grow and extend to other areas. Without the checks and balances that exposure brings, the abuse of power is free to run wild and grow. We have installed an utterly repugnant serial abuser as President. He has a history of financial, personal and sexual ethics violations. He now has more power than ever, and is actively destroying the vehicles for oversight. He has surrounded himself with people of similar morality and ethics, or parasitic enablers who trade their own morality and ethics for power (like VP Pence, who, like the religious right, makes his bed with this devil freely).

A fascist is one whose lust for money or power is combined with such an intensity of intolerance toward those of other races, parties, classes, religions, cultures, regions or nations as to make him ruthless in his use of deceit or violence to attain his ends.

― Henry A. Wallace

This sort of deal making isn’t simply for the rampant misogyny of the right wing to answer for. Those on the left have their own crimes to answer for, as the Weinstein scandal makes clear. All those people standing up for women’s rights in the world, while tolerating his private abuse of women, are no better. All of their public stands for liberal causes are made shallow through acts of private cowardice. They are equally guilty and no better than the so-called Christians embracing Trump. Some things are not acceptable, no matter who does them or their belief system. If you are a liberal Hollywood elite and you stood by while Weinstein abused and assaulted women, your morality is for sale. If you’re an evangelical Christian who voted for Trump, you are no better. Both men are monstrous abusers and morally unacceptable. Too often we make the case that their misdeeds are acceptable because of the power they grant to our causes through their position. This is exactly the bargain Trump makes with the right wing, and Weinstein made with the left.

Of course, I ask myself, am I really any better? I take my paycheck with the assumption that the terms of my employment mean they own me. What is the price of my silence? I have a duty to my loved ones and their support. This keeps me as compliant as I am. I need to ask myself what too far looks like. I’ve asked what the limits for my employers are, and I fear there is no limit; I fear they will comply with almost anything. We are rapidly approaching a moral chasm, if we haven’t already gone over the edge. Will we simply fall in, and let our nation become a kleptocracy with a wink and a nod toward our standards, ethics and morality, while standing by and letting the abusers run wild? For the greater part, I think that we are already there. It is terrifying to think about how much worse it’s going to get.

Terror is a powerful means of policy and one would have to be a hypocrite not to understand this.

― Leon Trotsky


I am 9 of 13. Resistance is Futile.

Those who deny freedom to others, deserve it not for themselves

― Abraham Lincoln

I’d like to be independent, empowered, and passionate about work, and I definitely used to be. Instead I find that I’m generally disempowered, compliant, and despondent these days. The actions that manage us have this effect, sending the clear message that we are not in control; we are to be controlled, and our destiny is determined by our subservience. With the National environment headed in this direction, institutions like our National Labs cannot serve their important purpose. The situation is getting steadily worse, but as I’ve seen there is always somewhere worse. By the standards of most people I still have a good job with lots of perks and benefits. Most might tell me that I’ve got it good, and I do, but I’ve never been satisfied with such mediocrity. The standard of “it could be worse” is simply an appalling way to live. The truth is that I’m in a velvet cage. This is said with the stark realization that the same forces are dragging all of us down. Just because I’m relatively fortunate doesn’t mean that the situation is tolerable. The quip that things could be worse is simply a way of accepting the intolerable.

What is going on, and how did we get here?

When you read management theory, and I do, you would think that good management would do the opposite. I certainly like the feeling of being empowered and valued; it makes me feel like coming to work and doing my best. It is good to feel a real sense of purpose and value in work. I have had this experience and it is incredible. When I am creative, my voice is heard, and my actions lead to positive outcomes, work is a real joy. I have definitely experienced this, but not recently. If working well, a management system would strive to engage people in this manner. The current management approach is pretty much the complete opposite. The entire system seems to be geared to putting us into a confined and controlled role. Our actions are limited because of fear, and the risk that something “bad” might happen. We are simply members of a collective, and our individuality is more of a threat than a resource.

Today as always, men fall into two groups: slaves and free men. Whoever does not have two-thirds of his day for himself, is a slave, whatever he may be: a statesman, a businessman, an official, or a scholar.

― Friedrich Nietzsche

Why are we managed in such an inherently destructive, short-sighted, and inhumane manner?

Star Trek introduced us to the Borg, a race of semi-robotic aliens who simply absorb beings (people) into a hive where their basic humanity and individuality is lost. Everything is controlled and managed for the good of the collective. Science fiction is an allegory for society, and the forces of depersonalized control embodied by the Borg have only intensified in our world. Even people working in my chosen profession are in the thrall of a mindless collective. Most of the time it is my maturity and experience as an adult that is called upon. My expertise and knowledge should be my most valuable commodity as a professional, yet they go unused and languish. They come into play in an almost haphazard, catch-as-catch-can manner. Most of the time it happens when I engage with someone external. It is never planned or systematic. My management is much more concerned about me being up on my compliance training than productively employing my talents. The end result is the loss of identity and sense of purpose, so that now I am simply the ninth member of the bottom unit of the collective, 9 of 13.

Disobedience is the true foundation of liberty. The obedient must be slaves.

― Henry David Thoreau

Increasingly, the forces that have demoralized the blue-collar working-class world and propelled White Nationalism to the forefront of American politics have worked their way to me. The forces doing this are relentless and bipartisan; both the right and left are doing this, but in different ways. Conservatives prize control and order with a horrid authoritarian streak naturally leading to Trump. Fear is a political tool wielded like a scalpel, enslaving the population to the security state. Generally speaking, the rise of the management class at work comes from this side of the spectrum, along with their devaluation of the rank and file. We see a tendency toward command-driven management, and being told what to do. Workers are simply meaningless details, interchangeable and disposable. The management class is the heart of importance and value. The rest of us aren’t really worth much effort, simply being necessary cogs to get the work done.

The left has their own crimes to answer for. Much of the rightward movement is a reaction to the systematic over-reach of the bureaucratic state. Political correctness and the thought police also serve to undermine societal confidence and tolerance of the “elite”. Management is part of this elite so derided today, and each subgroup within the elite has its own axe to grind. The big crime of the left is that they seem to think that every ill and danger can be regulated out of existence. Little or no thought is put into the cost of the regulation or the opportunity lost in the process. This is similar to the behavior of the right with respect to the National Security state. In the process, the individual is lost; the individual is not valued; the individual is not trusted. The value of work and the dignity of labor in support of the family and the good of society are not honored. Work becomes a complete waste of time. Productivity and meaning in work cease to be prioritized. Life is too precious to waste doing this.

A big part of the overall problem is the value of my time. For every single thing I do, I trade it against doing something else. Increasingly, my time is spent doing unproductive and useless things. Every useless thing I have to do displaces something else. Time is a valuable resource, and today my management and my institutions treat my time with flagrant disregard and outright disrespect. This is the rotten core of the problem, the disregard for the cost of making me do stupid, useless things. We engage in pointless, wasteful box-checking exercises rather than rejecting pointless activities. It is not the stupid things as much as the valuable things they displace. Almost all the time at work I could spend doing something more valuable, or more gratifying, or more satisfying, or simply something that brings me happiness and joy. I could create, solve problems, learn and find meaning and value. Instead I am enslaved to someone’s idea of what I should do. I am saddled with numerous “terms of employment” based tasks that have no value or meaning. Those saddling me always have the excuse of “it is a good idea to do this”. This sentiment is valid except it completely and utterly discounts what that time could be spent doing that is better.

The difference between technology and slavery is that slaves are fully aware that they are not free

― Nassim Nicholas Taleb

I spend complete days doing nothing but bullshit. It is mostly what other people think is a good idea, or worse yet some sort of ass covering exercise. I can spend an entire day doing nothing productive at all, and yet I’m doing exactly what I’m supposed to do. This is a huge problem! Managers do this almost every day. They rarely do what they need to do, which is actually manage the work going on and the people doing the work. They are managing our compliance and control, not the work; the work we do is a mere afterthought that increasingly does not need me, because any competent person could do it. At one time work felt good and important with a deep sense of personal value and accomplishment. Slowly and surely this sense is being undermined. We have gone on a long slow march away from being empowered and valued as contributing individuals. Today we are simply ever-replicable cogs in a machine that cannot tolerate a hint of individuality or personality.

All over the place, from the popular culture to the propaganda system, there is constant pressure to make people feel that they are helpless, that the only role they can have is to ratify decisions and to consume.

― Noam Chomsky

Workplace education exemplifies all of these trends. My training is almost 100% compliance oriented. Nothing I am ever trained on is job related; it is all telling me what I shouldn’t do. This training is a good avatar for priorities, and my actual work is not a priority at all. All the training that develops a person is management related. For the rank and file, personal development is completely optional and hardly prioritized. We are there to get our stuff done, and the stuff we do is increasingly shit. Managers have lots and lots of training and, from what I see, use almost none of it. It is full of the best theory that cannot be applied to the workplace. Their training would tell them to harness my passion and empower me, yet none of that ever happens. The entire system is completely oriented toward the opposite. The training system signals very clear values to all of us: the rank and file need to comply and submit, and managers are the only employees worth developing, even if the development is all delusional.

The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum…

― Noam Chomsky

The management literature is full of the gospel of the value of human talent. It sounds great, and I believe in it. Management should be the art of enabling and working to get the most out of employees. If the system were working properly this would happen. For some reason society has removed its trust in people. Our systems are driven and motivated by fear. The systems are strongly motivated to make sure that people don’t fuck up. A large part of the overhead and lack of empowerment is designed to keep people from making mistakes. A big part of the issue is the punishment meted out for any fuck ups. Our institutions are mercilessly punished for any mistakes. Honest mistakes and failures are met with negative outcomes and a lack of tolerance. The result is a system that tries to defend itself through caution, training and control of people. Our innate potential is insufficient justification for risking the reaction a fuck up might generate. The result is an increasingly meek and subdued workforce unwilling to take risks because failure is such a grim prospect.

People get used to anything. The less you think about your oppression, the more your tolerance for it grows. After a while, people just think oppression is the normal state of things. But to become free, you have to be acutely aware of being a slave.

― Assata Shakur

One of the key things that drives the system is a complete lack of cost-benefit analysis. The cost of a given measure to control a risk is rarely accounted for. The TSA is a prime example. One asshole tries to blow up a shoe, and forever we can’t take toothpaste on a plane. It is patently absurd on the face of it. We give up freedom, we give up time and we expend enormous effort to control minuscule risks. In the process, we have made a wonder of technology and the modern world something to be hated. So much of the wonder of the modern world is being sacrificed to fear designed to control risks so small as to be ridiculous. In the process, the vast benefits of modernity are lost. The vast benefits of easy and convenient air travel are overrun by a system designed to control irrational fears. Our fears are things that are completely out of control, and the vast opportunity cost is never considered. The result is a system that destroys our time and productivity in a disproportionate manner.

[Photo: A TSA officer pats down Elliott Erwitt as he works his way through security at San Francisco International Airport, Nov. 24, 2010. (AP Photo/Jeff Chiu)]

If one is forever cautious, can one remain a human being?

― Aleksandr Solzhenitsyn

The same thing is happening to our work. Fear and risk are dominating our decision-making. Human potential, talent, productivity, and lives of value are sacrificed at the altar of fear. Caution has replaced boldness. Compliance has replaced value. Control has replaced empowerment. In the process work has lost meaning and the ability for an individual to make a difference has disappeared. Resistance is futile, you will be assimilated.

To be given dominion over another is a hard thing; to wrest dominion over another is a wrong thing; to give dominion of yourself to another is a wicked thing.

― Toni Morrison

 

Thinking about Flux Splitting for General Riemann Solvers

The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

— Sir Arthur Stanley Eddington

AS USUAL WORD PRESS’ LaTeX is annoying.  I need to post and go to bed.

Rather than continue to talk about global issues I’ll get back in the weeds this week and get into a technical conversation. I don’t really know where this is going, so this is a bit of stream of consciousness in thinking about a topic and developing a new idea. The inspiration for this came from my talk at the Multimat 2017 meeting, and considering how to fix problems I’ve seen with rarefactions. As a reminder, I had seen that some solvers produce solutions with small, but consistent, violations of the second law of thermodynamics in their solution of expansions (e.g., rarefactions). Nothing catastrophic is observed, but it is a troubling failure from nominally robust solvers. This study was itself motivated by the observation of a systematic failure of these solvers to produce convergent solutions to very strong rarefactions, and by the desire to examine what sort of character the solutions have under more ideal circumstances.

A few points are worth making about the solvers used and how they have been tested in the past. Mostly I’ve worked with finite volume codes, sort of the gold standard of production codes for fluid dynamics. These codes are very reliable and well understood. For the most part, the focus of test problems has been shock waves, where bad methods can result in catastrophic instability for the codes. Rarefactions are far less studied and tested because they are generally benign and don’t threaten the stability of the code. As a result, rarefaction-focused test problems are mostly missing. We do know that expansions can produce unphysical solutions for Eulerian codes at critical points (where the characteristic speeds go to zero, and numerical dissipation may vanish). Bad solutions can arise with strong rarefactions, but no one has pointed out before that these solutions actually violate the second law of thermodynamics. The end result is a relative ignorance about the shortcomings of the codes, and a potentially important outlet for improvement of the methods.

Von Neumann told Shannon to call his measure entropy, since “no one knows what entropy is, so in a debate you will always have the advantage.

― Jeremy Campbell

One of my suggestions is to examine the solution to these problems with a wider variety of codes. This would include codes that do not use pure finite volume methods. One example is methods based on flux differencing, where the code can go to formally high-order accuracy for nonlinear problems. Control volume codes are limited to second-order accuracy, and the leading nonlinear truncation error can produce the entropy-condition-violating energy transfer in expansions, C f_{uu} u_x u_{xx}. For almost every control volume code these terms are dissipative in shock waves, thus providing additional stability to the codes in this dangerous configuration. The opposing reaction in expansions can go unnoticed because any imperfections in the solution are modulated by the physics of the problem. For this reason, the failing has gone completely unnoticed for decades. A reasonable question to explore is whether codes based on different design principles exhibit the same problems, or produce solutions that satisfy the second law of thermodynamics more uniformly.

An important technique in defining flux difference schemes of high-order accuracy (more than second-order) is flux splitting. The core idea is that approximating the fluxes to high order can produce higher formal accuracy than approximating the variables. The question is whether this produces solutions of a fundamentally different character with respect to entropy. Simply put, a flux splitting is a decomposition of the fluxes being differenced into negative and positive moving contributions. These fluxes are then differenced and recomposed into the total flux. The splitting techniques add the directionality to the approximation needed for numerical stability, associated with upwinding. The flux splitting techniques are closely related to Riemann solvers, but heretofore only include a small number of simple linearized Riemann solutions. I’d like to explore some generalization of this concept, including flux splittings based on exact Riemann solvers.

The Riemann problem is the exact solution to the interaction of two discontinuous states described by hyperbolic equations. This analytic information can be used to develop numerical methods that encode this physically relevant information into the solution. In terms of numerical methods, the Riemann solution is a generalization of the principle of upwinding, where the physical direction of propagation is taken into account. The first person to describe this approach to numerical methods was S. K. Godunov in 1959. Godunov’s method was first-order accurate and used the exact solution to the Riemann problem. It was soon realized that one only needed to approximate the Riemann solution. This became a key development in the methods over time and allowed great progress. Over time it was realized that it also allowed great flexibility.

In science if you know what you are doing you should not be doing it. In engineering if you do not know what you are doing you should not be doing it. Of course, you seldom, if ever, see the pure state.

– Richard Hamming

A simple Riemann solver can be defined by linearizing the problem, f(u_l,u_r) = \frac{1}{2} \left[ f_l + f_r \right] - \frac{1}{2} R | \lambda | L \left( u_r - u_l \right). The quantity f is the nonlinear flux, and u_l, u_r are the states to the left and right of the interface. The dissipation is defined by the eigen-decomposition of the flux Jacobian, A = \partial_u f = R \lambda L. This decomposition is composed of the right and left eigenvectors, R and L, and the eigenvalues, \lambda. These eigenvalues are the characteristic velocities, which for gas dynamics are u-c, u, and u+c, where u is the velocity and c the sound speed. This basic decomposition is the basis of flux splitting techniques.

The basic flux splitting takes the flux and decomposes it into right and left moving pieces, f(u) = f(u)^- + f(u)^+. One way to do this is to choose a velocity, \alpha > 0, and create contributions where f(u)^+ = \frac{1}{2} \left[ f(u) + \alpha u\right] and f(u)^- = \frac{1}{2}\left[ f(u) - \alpha u\right]. A simple choice of \alpha = \Delta x/\Delta t creates the Lax-Friedrichs flux, the simplest (and most dissipative) Riemann solver. For the general linearized Riemann solver the flux splitting is f(u)^+ = \frac{1}{2}\left[ f(u) + R |\lambda | L u\right] and f(u)^- = \frac{1}{2}\left[ f(u) - R  | \lambda | L u\right]. The choice of the left and right states used to evaluate the flux Jacobian defines the flux splitting. For example, if the states are evaluated using Roe’s recipe, we get the Roe flux splitting. If we evaluate the eigenvalues in a bounding fashion we get the local Lax-Friedrichs method.
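A scalar example makes the bookkeeping plain. The sketch below applies the splitting to Burgers’ equation with the local (bounding) choice of \alpha; the flux and the states are illustrative stand-ins for whatever system and reconstruction a real code would use.

```python
def f(u):
    """Burgers' flux, a stand-in for the nonlinear flux of a real system."""
    return 0.5 * u * u

def split_flux(u, alpha):
    """Lax-Friedrichs-type splitting: f = f^+ + f^-,
    with f^+ = (f(u) + alpha*u)/2 and f^- = (f(u) - alpha*u)/2."""
    return 0.5 * (f(u) + alpha * u), 0.5 * (f(u) - alpha * u)

def interface_flux(u_left, u_right):
    """Upwind the split pieces: the right-moving part from the left state, the
    left-moving part from the right state, with a local bounding choice of alpha."""
    alpha = max(abs(u_left), abs(u_right))   # bounds |f'(u)| = |u| for Burgers
    f_plus, _ = split_flux(u_left, alpha)
    _, f_minus = split_flux(u_right, alpha)
    return f_plus + f_minus

print(interface_flux(2.0, -1.0))   # simple sanity check on two discontinuous states
```

Recombining the right-moving piece from the left state with the left-moving piece from the right state reproduces the local Lax-Friedrichs (Rusanov) interface flux, which is the sense in which the splitting and the Riemann solver are two views of the same approximation.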

Another approach to generating a flux splitting does not use the variables in the expression of the splitting, and only uses the fluxes. In this case the expressions are developed in terms of the sign of the eigenvalues/characteristic velocities. The splitting then works as a scaling through the eigenvector decomposition of the flux Jacobian. The expressions are somewhat simplified, f^+ = \frac{1}{2} \left[ f + R\, \mbox{sign}(\lambda)\, L\, f \right] and f^- = \frac{1}{2} \left[ f - R\, \mbox{sign}(\lambda)\, L\, f \right]. We note in passing that a smooth or soft version of the sign function might be extremely useful in this type of splitting, introducing a continuously differentiable function (https://wjrider.wordpress.com/2017/03/24/smoothed-operators/). By the same token, the absolute value used in the usual flux splitting approach could also be smoothed to similar effect. We need to take care in our choices to assure that the accuracy of the resulting numerical method is not negatively impacted. We get into some very big problems when we want to generalize to other Riemann solvers. Examples of these solvers are the HLL family of solvers, and the most classical Riemann solver, the exact solver or close approximations to that approach (e.g., a single iteration of the nonlinear Newton’s method used in the exact solver). How can these important methods be utilized in flux splitting methods? For very strong wave interactions these classes of methods are extremely valuable, yet they cannot presently be used effectively in flux splitting.
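For the smoothed sign mentioned above, one possible form (my choice, in the spirit of the smoothed-operators post) is sketched below; eps is a small parameter setting how quickly the function transitions through zero.

```python
import numpy as np

def smooth_sign(lam, eps=1.0e-2):
    """A continuously differentiable stand-in for sign(lam); eps sets the width
    of the transition through zero (this particular form is just one choice)."""
    return lam / np.sqrt(lam * lam + eps * eps)

def smooth_abs(lam, eps=1.0e-2):
    """The matching smoothed absolute value, lam * smooth_sign(lam)."""
    return lam * smooth_sign(lam, eps)

# Both agree with the exact sign/abs away from lam = 0 and remove the kink there.
lams = np.array([-1.0, -1e-3, 0.0, 1e-3, 1.0])
print(smooth_sign(lams), smooth_abs(lams))
```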

 Nature not only suggests to us problems, she suggests their solution.

—Henri Poincare´

We can start with the simpler case of the HLL type of flux, which has an algebraic description. The HLL flux is defined using the space-time diagram by integrating the equations to derive a flux. The simplest form of the flux uses bounds for the wave speeds and neglects all of the structure inside the Riemann fan, resulting in a simple closed form expression for the flux, f_{lr} = \left[a_r f_l - a_l f_r + a_l a_r \left( u_r - u_l \right)\right]/(a_r - a_l). The flux is quite simple, but dependent on the estimates for the smallest and largest wave speeds in the system. The left wave speed, a_l, is the smallest wave speed and needs to be bounded above by zero (i.e., it is non-positive). The rightmost wave speed is a_r and is bounded below by zero. The HLL flux has the benefit of reducing to the simple upwind flux for the system if all the wave speeds are either negative or positive. For a flux splitting we need to take this apart into negative and positive moving pieces for the purposes of splitting nearby fluxes, as we did with the Roe or Lax-Friedrichs flavors.

The flux splitting can be defined almost by inspection. The positive flux is f^+ = (a_r f - a_l a_r u)/(a_r - a_l). The negative flux is f^- = (-a_l f + a_l a_r u)/(a_r - a_l). This is a wonderfully simple result, and it meets all the basic requirements for a flux splitting. Unfortunately, the HLL flux is extremely dissipative, thus lacking some degree of practical utility. Still, we expect this flux splitting to be quite robust, especially for strong waves, with the proviso that the wave speed estimates bound the physical wave speeds. This is a much more delicate estimate than usually recognized. The case of a reflected wave can nonlinearly produce wave speeds that exceed those in the initial data.
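Written out for a scalar law, the splitting and its consistency with the HLL flux are easy to check; the Burgers flux and the crude wave-speed bounds below are illustrative assumptions, not a recommendation for a real system.

```python
def hll_flux(f_l, f_r, u_l, u_r, a_l, a_r):
    """HLL interface flux with wave-speed bounds a_l <= 0 <= a_r (scalar sketch)."""
    return (a_r * f_l - a_l * f_r + a_l * a_r * (u_r - u_l)) / (a_r - a_l)

def hll_split(f, u, a_l, a_r):
    """The splitting read off from the HLL flux: f = f^+ + f^-."""
    return (a_r * f - a_l * a_r * u) / (a_r - a_l), (-a_l * f + a_l * a_r * u) / (a_r - a_l)

# Sanity checks on illustrative states, using Burgers' flux f(u) = u^2/2
u_l, u_r = 2.0, -1.0
f_l, f_r = 0.5 * u_l**2, 0.5 * u_r**2
a_l, a_r = min(u_l, u_r, 0.0), max(u_l, u_r, 0.0)   # crude wave-speed bounds

f_plus_l, _ = hll_split(f_l, u_l, a_l, a_r)
_, f_minus_r = hll_split(f_r, u_r, a_l, a_r)
assert abs((f_plus_l + f_minus_r) - hll_flux(f_l, f_r, u_l, u_r, a_l, a_r)) < 1e-12
assert abs(sum(hll_split(f_l, u_l, a_l, a_r)) - f_l) < 1e-12   # consistency: f^+ + f^- = f
```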

The harder case is the class of exact Riemann solvers that are defined algorithmically and do not have a closed form. After using an exact Riemann solver we do have a set of initial left and right states, and the resolved state at the centering point x/t=0. If we desire a flux splitting, it needs to be defined in terms of these variables. The trick in this endeavor is choosing an algebraic structure to help produce a workable flux splitting technique. We build upon the experience of the HLL flux partially because we can incorporate the knowledge arising from the exact solution into the algebraic structure to good effect. In particular, the nature of the one-sided differencing can be reproduced effectively. This requires the wave speed bounds to use the interior states of the solution.

The exact flux is different than the HLL flux, and this difference will be captured by changing the dissipation term in the flux. Our chosen structure is a flux defined by f_{lr} = \left[ a_r f_l - a_l f_r - D \left( u_r - u_l \right) \right]/(a_r - a_l). If we can derive the form for D our work will be done. The positive flux is f^+ = (a_r f + D u)/(a_r - a_l). The negative flux is f^- = (-a_l f - D u)/(a_r - a_l). Now we just have a little bit of algebra to arrive at our final expression. The math is nice and straightforward, D = \left[ a_r f_l - a_l f_r - (a_r - a_l) f_{lr} \right]/(u_r - u_l). A couple of comments are needed at this point. When the states become equal, u_l = u_r, the splitting becomes ill-defined. Fortunately, this is exactly where the linearized flux splitting approaches or HLL would be ideal.
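A sketch of how this might look in code follows; the fallback to the HLL value of D when u_l \approx u_r is my own choice, consistent with the remark above that the linearized or HLL splittings are ideal there.

```python
def dissipation_from_flux(f_l, f_r, u_l, u_r, f_lr, a_l, a_r, tol=1.0e-12):
    """Back out the dissipation coefficient D so that the chosen algebraic form
    f_lr = [a_r*f_l - a_l*f_r - D*(u_r - u_l)] / (a_r - a_l)
    reproduces a given interface flux f_lr (say, from an exact Riemann solver)."""
    if abs(u_r - u_l) < tol:
        return -a_l * a_r   # fall back to the HLL value where the form degenerates
    return (a_r * f_l - a_l * f_r - (a_r - a_l) * f_lr) / (u_r - u_l)

def split_with_dissipation(f, u, D, a_l, a_r):
    """The corresponding splitting of a nearby flux; f^+ + f^- = f by construction."""
    return (a_r * f + D * u) / (a_r - a_l), (-a_l * f - D * u) / (a_r - a_l)

# Feeding in the HLL flux from the previous sketch (f_lr = 3.5 for those states)
# recovers D = -a_l*a_r = 2, as it should.
print(dissipation_from_flux(2.0, 0.5, 2.0, -1.0, 3.5, -1.0, 2.0))
```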

The secret to being wrong isn’t to avoid being wrong! The secret is being willing to be wrong. The secret is realizing that wrong isn’t fatal.

― Seth Godin

Godunov, S. K. “A finite difference method for the computation of discontinuous solutions of the equations of fluid dynamics.” Sbornik: Mathematics 47, no. 8-9 (1959): 357-393.

Van Leer, Bram. “Flux-vector splitting for the Euler equations.” In Eighth international conference on numerical methods in fluid dynamics, pp. 507-512. Springer Berlin/Heidelberg, 1982.

Harten, Amiram, Peter D. Lax, and Bram Van Leer. “On upstream differencing and Godunov-type schemes for hyperbolic conservation laws.” In Upwind and High-Resolution Schemes, pp. 53-79. Springer Berlin Heidelberg, 1997.

Shu, Chi-Wang, and Stanley Osher. “Efficient implementation of essentially non-oscillatory shock-capturing schemes, II.” Journal of Computational Physics 83, no. 1 (1989): 32-78.

Jiang, Guang-Shan, and Chi-Wang Shu. “Efficient implementation of weighted ENO schemes.” Journal of Computational Physics 126, no. 1 (1996): 202-228.

Quirk, James J. “A contribution to the great Riemann solver debate.” International Journal for Numerical Methods in Fluids 18, no. 6 (1994): 555-574.

Multimat 2017: Where did all the New Ideas go?

Science is what we have learned about how to keep from fooling ourselves.

― Richard Feynman

Last week was another trip to the Multimat conference, a biennial meeting of scientists who solve the equations of multiple material flows under highly energetic and compressible conditions. I’ve always greeted the meeting with great enthusiasm and returned to work bristling with new ideas and inspiration. In many ways this community is the tip of the intellectual spear for modeling and simulation capability. It also marks an anniversary of sorts, four years of blogging. My very first post here was about the San Francisco edition of the meeting that coincided with my 50th birthday (https://wjrider.wordpress.com/2013/09/13/thoughts-about-multimat2013/). Two years ago we had a wonderful meeting in Würzburg, Germany (https://wjrider.wordpress.com/2015/09/11/multimat2015-a-biannual-festival-on-computing-compressible-multiple-materials/). Every meeting was wonderful and this was no exception, except in one very important and perhaps essential regard: the meeting seemed devoid of the usual exciting intellectual inspiration. What happened to the big ideas that flowed so easily in every previous meeting? Is it my imagination, or have the exciting new ideas dried up?

Do stuff. be clenched, curious. Not waiting for inspiration’s shove or society’s kiss on your forehead. Pay attention. It’s all about paying attention. attention is vitality. It connects you with others. It makes you eager. stay eager.

― Susan Sontag

This assessment might seem rather harsh, but upon reflecting on the previous meetings, it holds up under considerable scrutiny. Each previous meeting was full of moments where you are watching someone else’s talk and thinking, “I wish I’d thought of that, this is brilliant”. This is exactly what makes conferences so wonderful and important to attend; you get to cherry-pick people’s best ideas, accumulated at great effort, all at once. In the moment these ideas seem like Athena springing fully formed from Zeus’ brow! Your colleagues get to look like the geniuses they are and present their most creative and ingenious thoughts in an intellectual banquet (https://wjrider.wordpress.com/2014/06/13/why-do-scientists-need-to-attend-conferences/, https://wjrider.wordpress.com/2014/10/27/a-holistic-view-of-attending-conferences/). The reason for attending conferences isn’t to give talks; it is to learn new things taught by the smartest people you know. It is to meet and let ideas breed openly over laughter, food and drinks. You give talks as an act of repayment for the knowledge you are granted by being in the audience. Giving talks is pretty low on the list of reasons, but not in the minds of our overlords, which starts to get at the problems I’ll discuss below. Given the track record of this meeting my expectations were sky-high, and the lack of inspiring ideas left me slightly despondent.

A few more thoughts about the meeting are worth pointing out before getting to the dialog about fresh ideas, their importance, and postulates for their recent absence. The meeting is attended by a collection of computational scientists (mathematics, physics, engineering, ...) dominated by the nuclear “club”. This means American, French and British, with a smattering of Russians and Chinese, who couldn’t come this year for undisclosed reasons. These scientists for the most part work at their nations’ respective nuclear weapons labs. Occasionally others attend, like Israelis (an unacknowledged member of the club), along with a handful of Czechs, Italians, and Germans. As such the meeting serves as a proverbial checkup on the intellectual health of this important area of science at the West’s nuclear weapons labs. This year’s checkup should give everyone pause; the state of health is declining. There is a real lack of creative energy surrounding the heart of our most important codes. Many important codes are built around a powerful hydro-solver that produces accurate, physically relevant solutions to the multi-material “hydrodynamic” equations. Previous meetings have seen a healthy resurgence of new ideas, but that upswing seems to have come to a staggering halt. These labs have also provided a deep well of inspired research that has benefited the broader scientific community including weather, climate, astrophysics and a broad swath of engineering use of computation.

In my opinion the reasons for this halt in creative energy are simple and straightforward. The foolhardy worldwide push for exascale computers is sucking the air out of the room. It is gobbling up all the resources and attention, leaving nothing for new ideas. This completely obsessive and unwise focus on the hardware is attempting to continue the already dead Moore’s law. This push is draining the community of vitality, resources and focus. The reasons for the push are worth considering because they help define the increasingly hostile nature of the modern world toward science. The computers being built for the future are abysmal to use, and the efforts to move our codes to them are sucking all the energy from the Labs. Nothing is left for creative work; nothing is left for new ideas. Simply put, the continued use of old ideas is hard enough once you add these generally unusable computers to the mix. The reason is simple; the new computers completely suck. They are true monstrosities (in the classic definition of the word) and complete pieces of shit as scientific hardware. They are exactly the computers we don’t want to use. The price of forcing them down our throats is the destruction of research that isn’t associated with simply making these awful computers work. Worse yet, the return on the massive investment of effort will be vanishingly small in terms of added modeling and simulation capability.


As noted, this whole direction is a foolish attempt to breathe life into the already rigid corpse of Moore’s law, now dead at every scale of computing and already a decade deceased at the level of computer chips (note that the death of Moore’s law and the ascendancy of cell phones are strongly correlated, and that is probably not a coincidence). The truth of our real performance on computers is far more dire and damning of this entire effort. We have been getting an ever-lower proportion of the potential performance of our computers for 25 years. Each computer has a peak performance measured on silly hardware-friendly benchmarks that no one gives a flying fuck about (the dense linear algebra LU decomposition, Linpack). This silly and useless benchmark is how we crown the fastest computer! Our actual code performance on these machines is truly atrocious and gets worse every year. The dirty little secret is that it’s been getting ever worse for a long time; it was god-awful 20 years ago, and it has just gotten worse. Zero is a pretty good approximation to the proportion of the performance we get, generally much less than one percent. We willfully ignore the situation just like one would ignore a cancer threatening to devour our lives. The attitude is generally, “look away, nothing to see here”. The exascale program is that cancer metastasized.

Part of the discussion about exascale needs to acknowledge the nature of choices and priorities in research. In isolation, the exascale program is an unambiguous good; it would be genuinely awesome to have usable exascale computers (https://wjrider.wordpress.com/2014/09/19/what-would-we-actually-do-with-an-exascale-computer/). This good needs to be weighed in terms of its cost and the impact of alternatives. It needs to be viewed through the lens of reality too. If one looks at the raw cost, the opportunity cost, and the collateral damage, the exascale program is a massively negative force in science (https://wjrider.wordpress.com/2016/06/27/we-have-already-lost-to-the-chinese-in-supercomputing-good-thing-it-doesnt-matter/, https://wjrider.wordpress.com/2016/05/04/hpc-is-just-a-tool-modeling-simulation-is-what-is-important/, https://wjrider.wordpress.com/2016/10/19/why-china-is-kicking-our-ass-in-hpc/). In isolation, without considering anything else, it is a clear positive. In the context of lost opportunities and the effective use of available resources, the program is an unmitigated disaster. We will all be poorer for it as it lays waste to potential breakthroughs we will be denied in its wake. In today’s world we talk about things in isolation, free of the nuance and trade spaces that would make for a more robust and responsible conversation. Our leaders are irresponsible in the extreme for taking us down this path with no real discussion or debate taking place. The message in the trenches is “do what you’re paid to do and quit asking questions”.

The really dirty secret is that chasing exascale as a route to scientific discovery is simply bullshit of the highest and most expensive order. We would be far better served by simply figuring out how to use the hardware we already have. Figuring out how to efficiently use hardware we have had for decades would be a difficult and worthy endeavor. The punch line is that we could get orders of magnitude in improved performance out of the hardware we've been using for decades. Simply getting our codes to work more efficiently on the computers that already exist would meet most scientific goals without eviscerating the rest of computational science in the process. Instead we chase goals that are utterly meaningless. In the process we are destroying the research that has true and lasting value. The areas being ignored in the push for exascale have the capacity to provide far more scientific capability than even the most successful exascale program could possibly deliver. This brings me back to the meeting in Santa Fe and the lack of energy and exciting ideas. In the past the meeting has been a great survey of the active work from a creative and immensely talented group of people. As such, this meeting is the proverbial canary in the coal mine. The ideas are dying right in front of our eyes.

This outcome is conflated with the general lack of intellectual vigor in any public discourse. The same lack of intellectual vigor has put this foolish exascale program in place. Ideas are viewed as counter-productive today in virtually every public square. Alarmingly, science is now suffering from the same ill. Experts and the intellectual elite are viewed unfavorably and their views held in suspicion. Their work is not supported, nor are the projects and programs that depend on deep thinking, ideas or intellectual labor. The fingerprints of this systematic dumbing down of our work have reached computational science and are reaping a harvest of poisoned fruit. Another sign of the problem is the lack of engagement of our top scientists in driving new directions in research. Today, managers who do not have any active research define new directions. Every year our managers' work gets further from any technical content. We have the blind leading the sighted, telling them to trust that they can see where we are going. This problem highlights the core of the issue: the only thing that matters today is money. What we spend the money on, and the value of that work to advance science, is essentially meaningless.

Effectively we are seeing the crisis that has infested our broader public sphere moving into science. The lack of intellectual thought and vitality pushing our public discourse to the lowest common denominator is now attacking science. Rather than integrating the best in scientific judgment into our decisions on research direction, it is ignored. The experts are simply told to get in line with the right answer or be silent. In addition, the programs defined through this process then feed back to the scientific community, savaging the expertise further. The fact that this science is intimately connected to national and international security should put a sharper point on the topic. We are caught in a vicious cycle, and we are seeing the evidence in the hollowing out of good work at this conference. If one is looking for a poster child for bad research directions, the exascale programs are a good place to look. I'm sure other areas of science are suffering through similar ills. This global effort is genuinely poorly thought through and lacks any sort of intellectual curiosity.

Moving our focus back to exascale provides a useful case study of what is going wrong. We see that programs are defined by "getting funding" rather than by what needs to be done or what should be done. Arguments for funding need to be as simple as possible, and "faster computers" is a pitch simple enough that technically unsophisticated people buy it hook, line and sinker. Computers are big and loud and have lots of flashing lights to impress managers, politicians and business people who know no better. Our scientists have been cowed into compliance and simply act happy to get money for doing something. A paycheck beats the alternative, and we should feel happy that we have that. The level of inspiration in the overall approach has basically fallen off a cliff, and new ideas are shunned because they just make things complicated. We are left with the least common denominator as the driving force. We have no stomach for nuance or subtlety.

Priority is placed on our existing codes working on the new, super-expensive computers. The up-front cost of these computers is the tip of the proverbial cost iceberg. The explicit cost of the computers is their purchase price, their massive electrical bill and the cost of using these monstrosities. The computers are not the computers we want to use; they are the ones we are forced to use. As such, the cost of developing codes on these computers is extreme. These new computers are immensely unproductive environments. They are a huge tax on everyone's efforts. This sucks the creative air from the room and leads to a reduction in the ability to do anything else. Since all the things being suffocated by exascale are more useful for modeling and simulation, the ability to actually improve our computational modeling is hurt. The only things that benefit from the exascale program are trivial and already exist as well-defined modeling efforts.

Increasingly everything is run through disconnected projects that are myopic by construction. The ability to do truly unique and groundbreaking science is completely savaged by this approach to management. Breakthroughs are rarely "eureka" moments where someone simply invents something completely new. Instead, most good research is made through connections to other good research. Conferences are great incubators for these connections. Well-defined and proven ideas are imported and redefined to make contributions to a new area. This requires people to work across discipline boundaries and learn about new things in depth. People need to engage deeply with one another, which is similarly undermined today by project management and the focus on information security. The key thing is exposure to new and related areas of endeavor and basic learning. The breakthroughs come episodically and do not lend themselves to the sort of project management in vogue today.

It isn't like I came back with nothing. There were a couple of new things that really fall into the category of following up. In one case there was a continuation of a discussion of verification of shock tube problems with someone from Los Alamos. The discussion started in Las Vegas at the ASME VVUQ meeting and continued in Santa Fe. In a nutshell, we were trying to get cleaner verification results by dividing the problem into specific regions associated with a particular solution feature and the expectation of a different rate of convergence for each. We found something unexpected in the process that doesn't seem to follow theoretical expectations. It's worth some significant follow-up. A mysterious result is always something worth getting to the bottom of. The second bit of new intellectual blood came in direct response to my talk. I will also freely admit that my contribution wasn't the best; I haven't had much free energy at work to energize my own work. The same exascale demon is sucking my intellectual lifeblood out. I simply reported on a heretofore unreported structural failing of solvers. In summary, we find systematic, but small, violations of the second law of thermodynamics in rarefactions for modern and classical methods. This shouldn't happen, and violations of the second law lead to unphysical solutions. All of this stems from identifying a brutal problem (https://wjrider.wordpress.com/2017/06/09/brutal-problems-make-for-swift-progress/) that every general-purpose code fails at – what I call "Satan's shock tube", with jumps of 12 orders of magnitude in density and pressure approximating the propagation of waves into a vacuum.
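
To give a concrete flavor of that brutal problem, here is a minimal sketch, entirely my own illustration rather than the setup from the talk, of an initial condition with 12-order-of-magnitude jumps together with a simple ideal-gas entropy check of the kind that can flag second-law violations in a computed rarefaction. The state values, the gas constant, and the helper names are all assumptions made for the sketch.

```python
import numpy as np

GAMMA = 1.4  # ideal gas assumed purely for illustration

def satans_shock_tube(x, x0=0.5):
    """Hypothetical 'Satan's shock tube'-style initial condition:
    12 orders of magnitude in density and pressure, approximating
    expansion toward a near-vacuum state."""
    rho = np.where(x < x0, 1.0, 1.0e-12)
    p = np.where(x < x0, 1.0, 1.0e-12)
    u = np.zeros_like(x)
    return rho, u, p

def specific_entropy(rho, p):
    """Ideal-gas entropy function, s ~ log(p / rho**GAMMA)."""
    return np.log(p / rho**GAMMA)

def violates_second_law(rho_old, p_old, rho_new, p_new, tol=1.0e-12):
    """Flag states whose entropy *decreases* between two instants for the
    same fluid element (a Lagrangian view). The exact rarefaction is
    isentropic, so a decrease is the unphysical signature discussed above."""
    ds = specific_entropy(rho_new, p_new) - specific_entropy(rho_old, p_old)
    return ds < -tol

x = np.linspace(0.0, 1.0, 1001)
rho, u, p = satans_shock_tube(x)
print(f"density jump: {rho.max() / rho.min():.1e}, "
      f"pressure jump: {p.max() / p.min():.1e}")
```

The point of the extreme jump is simply that any weakness in how a scheme handles near-vacuum rarefactions is magnified until it cannot be ignored.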

We cannot live only for ourselves. A thousand fibers connect us with our fellow men; and among those fibers, as sympathetic threads, our actions run as causes, and they come back to us as effects.

― Herman Melville

Before closing I can say a thing or two about the meeting. None of the issues dulled the brilliance of the venue in Santa Fe, "the City Different". While I was disappointed about not enjoying the meeting in some exotic European venue, Santa Fe is a fabulous place for a meeting. It is both old (by American standards) and wonderfully cosmopolitan. There is genuine beauty in the area, and our hotel was nearly perfect. Santa Fe boasts exceptional weather in the fall, and the week didn't disappoint. It has a vibrant art community, including the impressive and psychedelic Meow Wolf. Our hotel was the Drury Plaza, placed in a remodeled (and supposedly haunted) old hospital. Two short blocks from the plaza, the hotel is enchanting and comfortable. We all shared two meals each day catered by the hotel's exceptional restaurant. Having meals at the conference, together with the participants, is optimal and makes for a much-improved meeting compared to going out to restaurants.

We had a marvelous reception on the hotel's rooftop bar, enjoying a typical and gorgeous New Mexico early autumn sunset with flowing drinks, old friends and incredibly stimulating conversation. American laws virtually prohibit government funds paying for alcohol, thus the drinks were courtesy of the British and French governments. This is one more idiotic prohibition on productivity and common sense that only undermines our collective efforts, especially creatively and collaboratively. These laws have only gotten more prescriptive and limiting. We can no longer pay for meals for interview lunches and dinners, much less business meetings. None of this reflects best practice for any business. The power of breaking bread and enjoying a drink to lubricate human interactions is well known. We only hurt our productivity and capacity to produce valuable work with these restrictions. We are utterly delusional about the wisdom of these policies. All of this only serves to highlight the shortcomings in creative energy evident in the rather low level of vibrancy and the lack of exciting new ideas.

Never underestimate the power of human stupidity.

– Robert A. Heinlein

 

Testing the limits of our knowledge

The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.

― Daniel J. Boorstin

All knowledge hurts.

― Cassandra Clare

Science is an important source and process for generating knowledge for humanity. Much of science is a core of well-known and well-tested knowledge about the universe. Most of the activity of working scientists is at the boundaries of our knowledge, working to push back the current limits on what is known. The scientific method is there to provide structure and order to the expansion of knowledge. We have well-chosen and well-understood ways to test proposed knowledge. One method of using and testing our theoretical knowledge in science is computational simulation. Within computational work, the use of verification and validation with uncertainty quantification is basically the scientific method in action (https://wjrider.wordpress.com/2016/12/22/verification-and-validation-with-uncertainty-quantification-is-the-scientific-method/). One of the key activities in building our understanding of theory is the determination of uncertainties.

Unfortunately what we call "uncertainty quantification" is only a small piece of the uncertainty that needs to be evaluated in testing knowledge. Too often people only do this narrow part of uncertainty quantification and falsely believe it is sufficient for science.

The basic premise is that we can test our knowledge via the systematic understanding of uncertainties. If we examine the uncertainty in an honest and complete manner, the limits of knowledge can be explored. Some uncertainty can be reduced via greater computational effort, some uncertainty can be reduced with improved modeling, and some uncertainty is irreducible. It is absolutely essential to understand the nature of what we do and don't know systematically. Careful and honest study makes this clear, yet the strong tendency of people is to view all uncertainty as bad. This is not necessarily true. Uncertainty is completely unavoidable, and understanding the degree to which this is true can be a great unburdening. If an uncertainty is unavoidable, one can dispense with attempting to reduce it and simply figure out how to live with it.

If the uncertainty is irreducible and unavoidable, the problem with not assessing uncertainty and taking an implied value of ZERO for uncertainty becomes truly dangerous (https://wjrider.wordpress.com/2016/04/22/the-default-uncertainty-is-always-zero/). In this case there is an uncertainty that should be there, and instead of building this knowledge into our work, we mindlessly ignore it. Sometimes it is actually mindfully ignored, which is utterly contemptible. This situation is so common as to be laughable, but it is actually a source of tragedy. Looking at weather and climate provides innumerable cases where ignorance of uncertainty may prove deadly in rather commonly encountered situations. As systems become more complex and energetic, chaotic character becomes more acute and common. This chaotic character leads to solutions that have natural variability. Understanding this natural variability is essential to understanding the system. Building this knowledge is the first step in moving to a capability to control and engineer it, and perhaps, if wise, reduce it. If one does not possess an understanding of what the variability is, it cannot be addressed via systematic engineering or accommodation.
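
As a toy illustration of that natural variability, and nothing more, here is a sketch using the standard chaotic logistic map as an assumption-laden stand-in for any real weather or climate model: an ensemble of nearly identical initial conditions ends up spread across a distribution rather than at a single repeatable value.

```python
import numpy as np

rng = np.random.default_rng(1)

def logistic_final_state(x0, r=3.9, n=200):
    """Iterate the chaotic logistic map x -> r*x*(1-x) and return the end state."""
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

# An ensemble of initial conditions that differ only at the 1e-10 level...
x0 = 0.5 + 1.0e-10 * rng.standard_normal(1000)
finals = np.array([logistic_final_state(v) for v in x0])

# ...ends up spread across the attractor: the variability is intrinsic, so a
# single run (or a single experiment) is just one draw from a distribution.
print(f"mean {finals.mean():.3f}, std {finals.std():.3f}, "
      f"range [{finals.min():.3f}, {finals.max():.3f}]")
```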

Courage doesn’t happen when you have all the answers. It happens when you are ready to face the questions you have been avoiding your whole life.

― Shannon L. Alder

This entire issue comes up frequently. In the case of experiments for complex systems, the problem arises because of a lack of repeated experiments. Often such systems are complex and expensive, thus tests are carried out once. Even though any thoughtful examination of the situation would conclude that the results of the experiment are likely – almost certainly – highly variable, the experiment is treated as a unique event. Computational simulations of the experiment are viewed the same way; the calculation should try to reproduce this single experiment. This is potentially a recipe for disaster. A healthier point of view would look at this experiment as a single instance drawn from a probability distribution. If the experiment were repeated there would be different results. A computational simulation, if truly predictive, would do exactly the same thing. Of course the simulations themselves are designed to compute the average response of such an experiment (being based on mean field equations). All of this conspires to create big problems should the experiment actually draw from a low-probability outcome (i.e., the tails of the distribution).

To address this systemic issue we need to reframe both the experimental and theoretical practice. If an experiment fails to give repeatable results, we need to work to unveil the statistical nature of the results, especially for complex, high-consequence systems. Theoretical models need to have the same properties mindfully built in rather than producing variability by happenstance (as mean field models do, almost serendipitously). Computational simulations follow the theory by construction, and great benefits to our knowledge and understanding would come from more structurally and mindfully building the variability into the theory. Failure to address this issue systematically is an ever-growing limit for science. We have a major scientific gap open in front of us and we are failing to acknowledge and attack it with our scientific tools. It is simply ignored almost by fiat. Changing our perspective would make a huge difference in experimental and theoretical science, and remove our collective heads from the sand about this matter.

Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less.

― Marie Curie

At a deeper and more fundamental level, the whole exploration of the true uncertainty is a search for understanding. We seek to define our level of precision for the modeling of something by systematically examining the levels of certainty (by proxy of studying uncertainty). We need to understand the quality of our knowledge, and a complete survey of uncertainty is a path to this end. For computational modeling there are three broad categories of uncertainty: the model itself, the model's numerical solution, and the experimental fidelity used to grant confidence to the model. Each of these uncertainties can in turn be broken down into more detailed pieces. For example, we must compare to the model itself rather than to the error in the solution of the model. It is important to choose some structure for the uncertainty and commit to an estimate of all portions of the structure. One should never take a zero magnitude for the uncertainty of any structural element by ignoring it.
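
As a concrete, if simple-minded, illustration of committing to a number for every element of such a structure, here is a sketch using my own placeholder component names and values, combined by a root-sum-square convention that assumes the components can be treated as independent; none of this comes from any particular assessment.

```python
import math

# Hypothetical uncertainty budget: every structural element gets an explicit,
# non-zero estimate rather than an implied value of zero.
budget = {
    "discretization_error":     0.04,  # from a mesh refinement study
    "iterative_tolerance":      0.001, # solver convergence tolerance
    "parametric_model":         0.08,  # from sampling model coefficients
    "model_form":               0.10,  # from comparing alternative models/meshes
    "experimental_variability": 0.06,  # repeat-to-repeat scatter
    "measurement_error":        0.02,  # instrument precision
}

# Root-sum-square combination of the components.
total = math.sqrt(sum(u**2 for u in budget.values()))
for name, u in budget.items():
    print(f"{name:>25s}: {u:.3f}")
print(f"{'combined (RSS)':>25s}: {total:.3f}")
```

The particular combination rule matters less than the discipline of writing something down for every line of the budget.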

Any fool can know. The point is to understand.

― Albert Einstein

One of the clearest ways to undermine this quest for understanding the boundaries of our knowledge is to ignore a key uncertainty. There are several usual suspects for willful uncertainty ignorance. Probably the most common uncertainty to be willfully ignorant of is numerical error. The key numerical error is discretization error, which arises from the need to make a continuous problem discrete and computable. The basic premise of computing is that more discrete degrees of freedom should produce a more accurate answer. Through examining the rate at which this happens, the magnitude of the error can be estimated. Other estimates can be had through making some assumptions about the solution and relating the error to the nature of the solution (like the magnitude of estimated derivatives). Other, generally smaller, numerical errors arise from solving systems of equations to a specified tolerance, parallel consistency error and round-off error. In most circumstances these are much smaller than discretization error, but they are still non-zero.
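
For the discretization error specifically, a minimal sketch of the standard rate-based estimate (a Richardson-style calculation written generically, with made-up numbers rather than results from any particular code) looks like this:

```python
import math

def observed_order_and_error(f_coarse, f_medium, f_fine, r=2.0):
    """Estimate the observed convergence rate p and a discretization-error
    estimate for the fine-grid solution, assuming a constant refinement
    ratio r between three successively refined grids."""
    p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
    # Richardson-style error estimate for the fine-grid value.
    error_fine = abs(f_medium - f_fine) / (r**p - 1.0)
    return p, error_fine

# Illustrative values only: some integrated quantity from three grids.
p, err = observed_order_and_error(0.9120, 0.9285, 0.9327, r=2.0)
print(f"observed order ~ {p:.2f}, estimated fine-grid error ~ {err:.2e}")
```

Comparing the observed order against the theoretical order of the scheme is itself a useful sanity check on whether the estimate can be trusted.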

Experimental observations are only experience carefully planned in advance, and designed to form a secure basis of new knowledge.

― Sir Ronald Fisher

A second category of uncertainty that is highly prone to being ignored is experimental variability. Often this is the direct consequence of only doing a single experiment. Rather than knowing the physics of the problem well enough to conclude that the experiment will be highly variable, this is ignored and we endeavor to model the single experiment as a unique, well-determined event. The result of this set of unjustified assumptions is wholesale ignorance of systematic and irreducible uncertainty. This is truly scientific malpractice. Addressing this shortcoming should be the focus of significant effort experimentally, as well as in modeling and its numerical solution. It is a very large and largely unaddressed issue in science.

In addition to ignoring the intrinsic variability in the experiments, the more classical and even less excusable uncertainty often ignored is measurement error. This error is always present even in cases where the experiment is well posed and utterly reproducible. Measurements are always finite precision and have some degree of error and uncertainty. This finite value should always be reported as part of the experimental comparison even when the uncertainty is small. In a deep sense the small uncertainty is more important because it lends credence to the sense that the data is high quality. Not reporting the uncertainty simply leaves this vague and undefined.

The last area of uncertainty is the modeling uncertainty. In the vast majority of cases this will be the largest source of uncertainty, but of course there will be exceptions. It has three major components: the choice of the overall discrete model, the choice of the models or equations themselves, and the coefficients defining the specific model. The first two areas are usually the largest part of the uncertainty, and unfortunately the most commonly ignored in assessments. The last area is the most commonly addressed because it is amenable to automatic evaluation. Even in this case the work is generally incomplete and lacks full disclosure of the uncertainty.

Today, in examining modeling uncertainty, we most often attack the least important piece systematically, and the more important modeling uncertainties are ignored. The "easy" uncertainty to attack is the coefficients in the model. This can be achieved using well-developed methods such as MCMC (Markov chain Monte Carlo). One can define a set of parameters to be varied and ranges for the variation. The calculations can be repeated using values drawn to efficiently sample the probability space of the calculation and produce the uncertainty. This sampling is done over a very high-dimensional space and carries significant errors. More often than not the degree of error associated with the undersampling is not included in the results. It most certainly should be.
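
A minimal sketch of this parametric piece, using a hand-rolled random-walk Metropolis sampler around a stand-in one-parameter model and synthetic data: the model, data, ranges and step size are all illustrative assumptions, not anyone's production workflow.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(theta, x):
    # Stand-in "simulation": a single uncertain coefficient theta.
    return theta * x**2

# Synthetic observations with known noise, purely for illustration.
x_obs = np.linspace(0.0, 1.0, 20)
sigma = 0.05
y_obs = model(2.5, x_obs) + rng.normal(0.0, sigma, x_obs.size)

def log_posterior(theta):
    # Flat prior on a plausible range, Gaussian likelihood.
    if not (0.0 < theta < 10.0):
        return -np.inf
    resid = y_obs - model(theta, x_obs)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Plain random-walk Metropolis sampler.
n_steps, step = 20000, 0.1
theta = 1.0
logp = log_posterior(theta)
chain = np.empty(n_steps)
for i in range(n_steps):
    proposal = theta + step * rng.normal()
    logp_prop = log_posterior(proposal)
    if np.log(rng.random()) < logp_prop - logp:
        theta, logp = proposal, logp_prop
    chain[i] = theta

samples = chain[5000:]  # discard burn-in
print(f"theta ~ {samples.mean():.3f} +/- {samples.std():.3f}")
# A finite chain undersamples the posterior; that sampling error should be
# reported alongside the parametric uncertainty itself.
```

A real study would have many parameters and would likely use an established sampler, but the structure, and the undersampling caveat, is the same.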

The other two uncertainties are generally larger and thus more important to characterize. Unfortunately neither is amenable to the sort of turnkey, black-box approach the parametric uncertainty allows. As a result these uncertainties are usually completely ignored. These two areas of uncertainty are closely related. Any complex problem can be modeled in a variety of ways. An analyst ends up making innumerable assumptions and choices in the course of modeling a complex problem. One choice is the code used for the analysis and the specific settings within the code. Beyond this there are choices of how the problem is meshed, boundary conditions, initial conditions, submodels to close the equations, and the analysis of results. Each and every one of these choices can produce changes in the results. It is very uncommon to see a problem solved in different or remotely independent ways. As a result the uncertainty from the modeling and analysis is usually completely hidden.

Science, my boy, is made up of mistakes, but they are mistakes which it is useful to make, because they lead little by little to the truth.

― Jules Verne

To truly test our knowledge in a topic we must be open to a full and honest evaluation of the uncertainties. Knowledge and uncertainty are two sides of the same coin. If you don’t know the uncertainty, you don’t know the extent of your knowledge. Too often we only do the assessment that is easy. In this assessment strategy we also implicitly choose uncertainty estimates of ZERO for information we know is uncertain, but difficult to assess. This is a pernicious and dangerous practice. Due diligence and responsibility should dictate that some sort of uncertainty be estimated for all sources. If we cannot estimate these uncertainties in a credible and knowledgeable manner, we have no business in modeling and our experiments are not yielding their full value. The only responsible act in this case would be to produce work that would make such a bounding assessment possible.

Negative results are just what I want. They’re just as valuable to me as positive results. I can never find the thing that does the job best until I find the ones that don’t.

― Thomas A. Edison

The Inspirational Promise of TeD Talks

Your number-one mission as a speaker is to take something that matters deeply to you and to rebuild it inside the minds of your listeners. We’ll call that something an idea.
― Chris J. Anderson

Every September my wife and I attend the local TeDx event here in Albuquerque. It is a marvelous way to spend the day, and leaves a lasting impression on us. We immerse ourselves in inspiring, fresh ideas surrounded by like-minded people. It is empowering and wonderful to see the local community of progressive people together at once, listening, interacting and absorbing a selection of some of the best ideas in our community. This year's event was great and, as always, several talks stood out, particularly Jannell MacAulay (Lt. Col, USAF) talking about applying mindfulness to work and life, and Olivia Gatwood's inspiring poetry about the seemingly mundane aspects of life that speak to far deeper issues in society. The smallest details are illustrative of the biggest concerns. Both of these talks made me want to think deeply about applying these lessons in some fashion to myself and improving my life consequentially.


That's part of the point of TeD: the talks are part of the gospel of progress, part marketing of great ideas and part performance art. All of these things have great use to society in lifting up and celebrating a drive to be better and to progress toward a better future. Humanity has immense power to change the world around it for the better. We can look across the globe and witness the collective power of humanity to change its environment. A great deal of this change is harmful or thoughtless, but much of it is a source of wonder. Our understanding of the World around us and the worlds within us has changed our biological destiny.

We have transitioned from animals fighting for survival during brief, violent lives to beings capable of higher thought and aspiration during unnaturally long and productive lives. We can think and invent new things instead of simply fighting to feed ourselves and reproduce a new generation of humans to struggle in an identical manner. We also can produce work whose only value is beauty and wonder. TeD provides a beacon for humanity's best characteristics along with a hopeful, forward-looking community committed to positive common values. It is a powerful message that I'd like to take with me every day. I'd like to live out this promise with my actions, but the reality of work and life comes up short.

There was a speaker from my employer this year, and there always is. There wasn’t anyone from my former employer, the other major scientific Lab in our state (what was once one of the premier scientific institutions in the World, but that’s a thing of the past). Also noticeable is the lack of support for the local TeD organization by either employer. I’ll grant you that Los Alamos has supported it in the past, but no longer. There’s probably some petty and idiotic reason for the withdrawal of support. My employer, Sandia, doesn’t support it, and hasn’t ever. It looks like our local University doesn’t support it either. I know that Los Alamos did their own local TeD conference and perhaps they thought that was enough TeD for them. That’s the sad best-case scenario, and I don’t know what the full story is.

For Sandia it's not particularly surprising, as it's not exactly a progressive, idea-centered place, and these days no place is anyway. The University should be, but the lack of financial support from the state could explain it (it's a common characteristic of GOP governance to eviscerate universities). It is quite hard for me to express my level of disappointment in these institutions' lack of civic support for progressive thought. It is stark testimony on the current state of affairs that two National Laboratories and a University cannot be supportive of a major source of progressive thought in the community they are embedded within. An active progressive and intellectual community in the areas where these institutions are located should be beneficial for recruiting and retention of progressive and intellectual staff. It is one sign that this sort of long view isn't at work. It is a sign of the times.

TeD talks are often the focus of criticism for their approach and general marketing character, strongly associated with their performance art nature. These critiques are valid and worth considering, including the often-superficial way difficult topics are covered. In many ways, where research papers can increasingly be criticized as merely the marketing of the actual work, TeD talks are simply the 30-second mass market advertisement of big ideas for big problems. Still, the talks provide a deeply inspiring pitch for big ideas that one can follow up on and provide an entry to something much better. I find a talk is a perfect opening to learning or thinking more about a topic, or merely being exposed to something new.

Control leads to compliance; autonomy leads to engagement.

– Daniel H. Pink

One prime example is one of my favorite talks of all time by Daniel Pink (https://www.ted.com/talks/dan_pink_on_motivation). This talk is basically a pitch for the book "Drive" and touches only superficially on the topic. The book itself is a distillation of very complex topics. All of this is true, but none of it undermines the value in the ideas. TeD provides a platform to inspire people to do more and get closer to the actual application of the ideas to their lives (not just buy Pink's book, the true cynic's take on the purpose). Interestingly, the managers at work were also reading Pink's book and discussing the ideas therein. The rub was the observation that I could not identify a single thing recommended in Pink's book that made it to the workplace. It seemed to me that the book simply inspired the management to a set of ideals that could not be realized. The managers aren't really in charge; they are simply managing corporate compliance instead of managing in a way that maximizes the performance of their people. The Lab isn't about progress any more; it is about everything but progress. Compliance and subservience have become the raison d'être.

For artists, scientists, inventors, schoolchildren, and the rest of us, intrinsic motivation, the drive to do something because it is interesting, challenging, and absorbing, is essential for high levels of creativity.

– Daniel H. Pink


Intrinsic motivation is conducive to creativity; controlling extrinsic motivation is detrimental to creativity.

–Daniel H. Pink

This deep frustration isn't limited to TeD talks; it applies to almost every source of great advice or inspiration available. Almost every manager I know reads the Harvard Business Review. I read it too. It is full of wonderful ideas and approaches to improving the way we work. It is impossible to see anything ever done with all the great advice or inspiration. My workplace looks like all the "before" case studies in HBR, and more like them every day, not less. Nothing ever recommended happens at work, nothing is tried, nothing changes in the positive direction; it's like we are committed to moving backwards. HBR is progressive in terms of the business world. The problem is that the status quo and central organizing principle today is anti-progressive. Progress is something everyone is afraid of, and the future appears to be terrifying and worth putting off for as long as possible. We see a genuinely horrible lurch toward an embrace of the past along with all its anger, bigotry, violence and fear. Fear is the driving force for avoiding anything that looks progressive.

Management isn’t about walking around and seeing if people are in their offices, he told me. It’s about creating conditions for people to do their best work.

– Daniel H. Pink

Now that I've firmly established the lack of relevance of TeD and progressive thought in my workplace, I can at least appreciate and apply it at a personal level. I'd love for work to reflect a place for genuine progress, but this seems a bridge too far today. Work is a big part of life, and these observations are rather dismaying. Ideally, I'd like a workplace that reflects my own values. The truth of the matter is that this is nearly impossible for a progressive-minded person in America today. Even the bastions of progressive thought like universities are not working well. Society at large seems to be at war with elites, and progressive thought is far more under siege than whites or Christians. I can ask the serious question: how many atheists are in Congress? How much well-proven and accepted science does our government reject already? Don't get me started on our judicial system, or the war on drugs, both of which focus far more on oppressing minorities than on crime or drug abuse. The bottom line is the sense that we are in a societal backlash against change, so more progress seems to be impossible. We will be fighting to hold onto the progress we've already made.

Still, I can offer a set of TeD talks that have both inspired me and impacted my life for the better. They have either encouraged me to learn more, or to make a change, or simply to change perspective. I'll start with a recent one where David Baron gave us an incredibly inspiring call to see the total eclipse in its totality (https://www.ted.com/talks/david_baron_you_owe_it_to_yourself_to_experience_a_total_solar_eclipse). I saw the talk and concluded that I simply had to go, and then I showed it to my wife to convince her. It did! We hopped into the car at midnight the day of the eclipse and drove eight hours to get from Northern Idaho to Eastern Oregon. We got off I-82 at Durkee, finding a wonderful community center with a lawn, and watched it with 50 people from all over the local area plus a couple from Berlin! The totality of the eclipse lasted only two minutes. It was part of a 22-hour day of driving over 800 miles, and it was totally and completely worth every second! Seeing the totality was one of the greatest experiences I can remember. My life was better for it, and my life was better for watching that TeD talk.

Another recent talk really provoked me to think about my priorities. It is a deep consideration of what your priorities are in terms of your health. Are you better off going to the gym or going to a party, or the bar? Conventional wisdom says the gym will extend your life the most, but perhaps not. Susan Pinker provides a compelling case that social connection is the key to a longer life (https://www.ted.com/talks/susan_pinker_the_secret_to_living_longer_may_be_your_social_life). This gets at the disparity between men and women, since women tend to form long, life-affirming friendships with greater ease than men. The talk is backed up by data, and by visits to places where people live long lives. These people live in communities where they are entangled in each other's lives almost by design. It gets at the priorities associated with health care and self-care, along with the benefit of action. Focusing on your social life is a genuinely beneficial act for prolonging your life.

Our modern computing world is a marvel, but it also has some rather pronounced downsides. In many ways our cell phones are making us far unhappier people. The phones and their apps are designed to grab and demand our attention. They can become sources of deep and pervasive anxiety. This is exactly what they are designed to do. As Adam Alter explains, an entire industry is set up to get as much of our attention as possible because our attention equals money, big money (https://www.ted.com/talks/adam_alter_why_our_screens_make_us_less_happy). He also explains that it doesn't have to be like this. The same social engineering that has gone into making the phones so demanding could be harnessed to help us be better. If we balanced the naked profit motive with some measure of social responsibility, we might turn this problem into a benefit. This is a wonderfully inspiring idea; it is also terribly progressive and dangerous to the unfettered capitalism fueling this growing societal crisis.


Love rests on two pillars: surrender and autonomy. Our need for togetherness exists alongside our need for separateness.

– Esther Perel

The power of TeD extends to far deeper personal matters as well. A couple of talks by Esther Perel speak to reframing our love lives (https://www.ted.com/talks/esther_perel_the_secret_to_desire_in_a_long_term_relationship, https://www.ted.com/talks/esther_perel_rethinking_infidelity_a_talk_for_anyone_who_has_ever_loved). Perel defies conventional thought on love, marriage and infidelity, providing a counter-theory on all these matters. Her first talk is an accompaniment to her first book and tackles the thorny issue of keeping your long-term relationship hot and steamy. It is a challenge many of us have tackled, and no doubt struggled with. This struggle exists for good reasons, and knowing the reasons provides insight into solutions. Perel powerfully explains the problem and speaks to working toward solutions.

The thornier issue of infidelity is the subject of the second talk (and her brand-new book). As before, she tackles the topic from a totally different perspective. Her approach is unconventional and utterly refreshing. The new perspectives provide an alternative narrative for handling this all-too-common human failing. Explaining and understanding the complex roots of this all-too-common relationship problem can improve our lives. It is an alternative to the moral perspective that has failed to provide any solutions. Among the threads to concentrate on are the relatively new character of modern marriage in the history of humanity, and the consequences of the deep changes in the institution. One of the beauties of TeD is the exposure to fresh perspectives on old ideas alongside completely new ideas.

The very ingredients that nurture love (mutuality, reciprocity, protection, worry, and responsibility for the other) are sometimes the very ingredients that stifle desire.

– Esther Perel

Truth and courage aren’t always comfortable, but they’re never weakness.

– Brene Brown

The last talk I'll highlight today is truly challenging to most of us. Brene Brown is a gifted and utterly approachable speaker presenting a topic that genuinely terrifies most of us: vulnerability (https://www.ted.com/talks/brene_brown_on_vulnerability). Being vulnerable is an immensely valuable characteristic that almost everyone struggles with. Vulnerability often equates with being weak, but also with being open and honest. That openness and honesty is the key to being a better person and developing better relationships. In many cases the weakness and honesty are shared only with yourself. In either case vulnerability provides an avenue to connection and an embrace of humanity that both frees you and allows deeper relationships to flourish. The freedom you give yourself allows you to grow, learn and overcome bad experiences.

What would you be glad you did–even if you failed?

– Brene Brown

I always wish that I could focus on most of what I hear at a local TeD event, but one must make choices; time and effort are limited. While I do plan to more mindfully apply mindfulness to my life, right now I'll hedge toward the artistic side of things, if for no other reason than that I usually don't. I will close by honoring the inspirational gift of Olivia Gatwood's talk on poetry about seeking beauty and meaning in the mundane. I'll write a narrative of a moment in my life that touched me deeply.

The Best Gift

A night of enchanting companionship was drawing to a close,

and I was longing for one last kiss before parting

Those early autumn nights are so welcoming,

 

the crisp nights promised, but not yet arrived,

summer still alive, but fading

I hadn’t even bothered to fully dress for the goodbye,

 

Conventions and neighbors be damned

It was a warm evening and my skin wanted to drink it in,

drink her in too, one last time

 

We slowly made our way out to my driveway

talking, still flirting, our banter unabated

The moon full, bright, and peeking between the gaps in the single cloud

 

adorning the sky as it illuminates the night

It will light her way home as a warm beacon

“Good,” I think, “you’ll be safe” on your long drive home

 

We draw close to each other, pressing hard while

savoring the time spent together fun and friendship

with a depth that was unexpected, but welcome

 

You ask, “What would you like for your birthday?”

My mind goes to my elaborate tattoo to adorn me soon,

“I’m already getting what I want for myself”

 

“I always ask for more time,” she said longingly

Her words cut me to the core,

of course, what else would she want?

 

My head spins with the truth revealed by her breathtaking honesty,

with words failing me for a breath or two, … or three

My mind opens with the realization of her precious offering

 

“I just want good memories”

Realization washes over me, she just gave me the best gift I could have hoped for

We kiss deeply and parted until we next renew making good memories