The conventional view serves to protect us from the painful job of thinking.

― John Kenneth Galbraith

I chose the name the “Regularized Singularity” because it’s so important to the conduct of computational simulations of significance. For real-world computations, the nonlinearity of the models dictates that the formation of a singularity is almost a foregone conclusion. **To remain well behaved and physical, the singularity must be regularized, which means the singular behavior is moderated into something computable**. This is almost always accomplished with the application of a dissipative mechanism, which effectively imposes the second law of thermodynamics on the solution.

A useful, if not vital, tool is something called “hyperviscosity”. Taken broadly, hyperviscosity covers a wide spectrum of mathematical forms arising in numerical calculations, and I’ll elaborate a number of the useful forms and options. **Basically, a hyperviscosity is a viscous operator that has a higher differential order than regular viscosity.** As most people know, but I’ll remind them, regular viscosity is a second-order differential operator, and it is directly proportional to a physical value of viscosity. Such viscosities are usually a weakly nonlinear function of the solution, depending on the intensive variables (like temperature and pressure) rather than the structure of the solution. Hyperviscosity falls into two broad categories, the linear form and the nonlinear form.

Unlike most people I view numerical dissipation as a good thing and an absolute necessity. This doesn’t mean that it should be wielded cavalierly or brutally, because it can and does give computations a bad name. **Generally conventional wisdom dictates that dissipation should always be minimized, but this is wrong-headed.** One of the key aspects of important physical systems is the finite amount of dissipation produced dynamically. The asymptotically correct solution with a small viscosity is not one with zero dissipation; it is one with a non-zero amount of dissipation arising from the proper large-scale dynamics. This knowledge is useful in guiding the construction of good numerical viscosities that enable us to efficiently compute solutions to important physical systems.

One of the really big ideas to grapple with is the utter futility of using computers to simply crush problems into submission. For most problems of any practical significance this will not be happening, ever. In terms of the physics of the problems, this is often the coward’s way out of the issue. **In my view, if nature were going to submit to our mastery via computational power, it would have already happened. The next generation of computing won’t do the trick either. Progress depends on actually thinking about modeling.** A more likely outcome will be the diversion of resources away from the sort of thinking that will allow progress to be made. Most systems do not depend on the intricate details of the problem anyway. The small-scale dynamics are universal and driven by the large scales. The trick to modeling these systems is to unveil the essence and core of the large-scale dynamics leading to what we observe.

Given that we aren’t going to be crushing our problems out of existence with raw computing power, hyperviscosity ends up being a handy tool for getting more out of the computing we have. **Viscosity depends upon having enough computational resolution to effectively allow it to dissipate energy from the computed system**. If the computational mesh isn’t fine enough, the viscosity can’t stably remove the energy and the calculation blows up. This places a very stringent limit on the flows that can be computed at an achievable resolution.

The first form of viscosity to consider is the standard linear form, in its simplest guise a second-order differential operator, $\nu u_{xx}$. If we apply a Fourier transform to the operator we can see how simple viscosity works (just substitute the Fourier mode $u = \hat{u} e^{ikx}$ into the operator): $\nu \partial_{xx} \hat{u} e^{ikx} = -\nu k^2 u$. **The viscosity grows in magnitude with the square of the wavenumber**, $\nu k^2$. Only when the product of the viscosity and the wavenumber squared becomes large will the operator remove energy from the system effectively.
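As a quick sanity check of the Fourier argument (the grid size and wavenumbers here are illustrative choices), a standard discrete Laplacian on a periodic grid applied to a pure mode acts as multiplication by a number close to $-k^2$:

```python
import numpy as np

# The Fourier symbol of the viscous operator nu * u_xx: substituting the
# mode u = exp(i*k*x) gives nu * (i*k)^2 * u = -nu * k^2 * u, so the
# damping rate grows with the square of the wavenumber. Here we verify
# that a second-order finite-difference Laplacian reproduces roughly
# -k^2 when applied to sin(k*x) on a periodic grid.
N = 64
dx = 2.0 * np.pi / N
x = np.arange(N) * dx

ratios = {}
for k in (1, 4, 8):
    u = np.sin(k * x)
    u_xx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    # For a pure mode the discrete operator acts pointwise as
    # multiplication by a constant near -k^2 (small error at larger k).
    ratios[k] = u_xx[1] / u[1]
    print(k, ratios[k])
```

The small departures from $-k^2$ at the larger wavenumbers are just the truncation error of the three-point stencil.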

Linear dissipative operators only come from even orders of differentiation. Moving to a fourth-order bi-Laplacian operator it is easy to see how the hyperviscosity works, $-\nu_4 \nabla^4 u \rightarrow -\nu_4 k^4 u$. The dissipation now kicks in faster with the wavenumber ($\propto k^4$), allowing the simulation to be stabilized at comparatively coarser resolution than the corresponding simulation stabilized only by a second-order viscous operator. **As a result the simulation can attack more dynamic and energetic flows with the hyperviscosity**. One detail is that the sign needed for dissipation changes with each step up the ladder; where the fourth-order operator enters with a negative sign, a sixth-order operator flips back to a positive sign, $\nu_6 \nabla^6 u \rightarrow -\nu_6 k^6 u$, and attacks the spectrum of the solution even faster, $\propto k^6$, and so on.
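To see how the orders compare in practice, here is a small sketch of the exact Fourier damping factors; the coefficients are illustrative, tuned so that both operators remove the same energy from the highest retained mode:

```python
import numpy as np

# Exact damping factors after time t for a mode exp(i*k*x): second-order
# viscosity damps at rate nu2*k^2, fourth-order hyperviscosity at rate
# nu4*k^4. The coefficients are illustrative, chosen so both damp the
# highest retained mode k_max equally; the comparison then isolates how
# each operator treats the well-resolved low wavenumbers.
t = 1.0
k_max = 32.0
nu2 = 1.0 / k_max**2          # nu2 * k_max^2 = 1 at the grid scale
nu4 = 1.0 / k_max**4          # nu4 * k_max^4 = 1 at the grid scale

damping = {}
for k in (2.0, 16.0, 32.0):
    d2 = np.exp(-nu2 * k**2 * t)
    d4 = np.exp(-nu4 * k**4 * t)
    damping[k] = (d2, d4)
    print(k, d2, d4)
# The hyperviscosity barely touches k = 2 while matching the plain
# viscosity at the grid scale: the dissipation is pushed to small scales.
```

This is the whole appeal of going up the ladder: the same grid-scale stabilization at a fraction of the damage to the resolved scales.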

Taking the linear approach to hyperviscosity is simple, but it has a number of drawbacks from a practical point of view. First, the linear hyperviscosity operator becomes quite broad in its extent as the order of the operator increases. **The method is also still predicated on a relatively well-resolved numerical solution and does not react well to discontinuous solutions. As such the linear hyperviscosity is not entirely robust for general flows.** It is better as an additional dissipation mechanism alongside more industrial-strength methods, and for studies of a distinctly research flavor. Fortunately there is a class of methods that removes most of these difficulties: nonlinear hyperviscosity. Nonlinear is almost always better, or so it seems; not easier, but better.

Linearity breeds contempt

– Peter Lax

The first nonlinear viscosity came about from Prandtl’s mixing length theory, which still forms the foundation of most practical turbulence modeling today. For numerical work the original shock viscosity derived by Richtmyer is the simplest hyperviscosity possible, $\nu = C \ell^2 \left| \nabla u \right|$. Here $\ell$ is a relevant length scale for the viscosity. In purely numerical work, $\ell = \Delta x$. **It provides what linear hyperviscosity cannot, stability and robustness, taking flows that would otherwise be computed with pervasive instability and making them stable and practically useful**. It provides the fundamental foundation for shock capturing and the ability to compute discontinuous flows on finite grids. In many respects the entire CFD field is grounded upon this method. The notable aspect of the method is the dependence of the dissipation on the product of the coefficient and the absolute value of the gradient of the solution.
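A minimal sketch of this idea in action, applying a Richtmyer-style coefficient $\nu = C \Delta x^2 |u_x|$ to the inviscid Burgers equation; the constant, the grid, and the first-order upwind base scheme are all illustrative choices:

```python
import numpy as np

# Inviscid Burgers equation u_t + (u^2/2)_x = 0 on a periodic grid with a
# Richtmyer-style artificial viscosity nu = C * dx^2 * |u_x|. The
# coefficient switches itself on only where gradients steepen, which is
# exactly where the forming shock needs dissipation.
N = 200
dx = 2.0 * np.pi / N
x = np.arange(N) * dx
u = np.sin(x) + 1.5            # positive profile that steepens into a shock
C = 0.5                        # O(1) artificial-viscosity constant
dt = 0.15 * dx                 # explicit step respecting the CFL limit

for _ in range(400):
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)  # centered gradient
    nu = C * dx**2 * np.abs(ux)                         # nonlinear coefficient
    # Upwind advective flux (u > 0 everywhere) plus the conservative
    # artificial-viscous flux -nu * u_x.
    flux = 0.5 * u**2 - nu * ux
    u = u - dt / dx * (flux - np.roll(flux, 1))

# The run stays bounded past shock formation, and the flux form conserves
# the cell average exactly.
print(u.min(), u.max())
```

The key structural feature is visible in the loop: the dissipation is proportional to $|u_x|$, so it concentrates at the shock and fades away in smooth regions.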

Looking at the functional form of the artificial viscosity, one sees that it is very much like the Prandtl mixing length model of turbulence. The simplest model used for large eddy simulation (LES) is the Smagorinsky model, developed first by Joseph Smagorinsky and used in the first three-dimensional model for global circulation. This model is significant as the first LES, and it is a precursor of the modern codes used to predict climate change. **The LES subgrid model is really nothing more than Richtmyer (and Von Neumann’s) artificial viscosity and is used to stabilize the calculation against instability that invariably creeps in with enough simulation time. The suggestion to do this was made by Jule Charney upon seeing early weather simulations.** The significance of the first useful numerical methods for capturing shock waves and for computing turbulence being one and the same is rarely commented upon. I believe this connection is important and profound. Equally valid arguments can be made that the form of the nonlinear dissipation is fated by the dimensional form of the governing equations and the resulting dimensional analysis.

Before I derive a general form for the nonlinear hyperviscosity, I should discuss a little bit about another shortcoming of the linear hyperviscosity. **In its simplest form the classical linear viscosity produces a positive-definite operator. Its application in a numerical solution will keep positive quantities positive**. This is actually a form of strong nonlinear stability. The solutions will satisfy discrete forms of the second law of thermodynamics and provide so-called “entropy solutions”. In other words the solutions are guaranteed to be physically relevant.

This isn’t generally considered important for viscosity, but in the context of more complex systems of equations it may have importance. **The key reason to bring this up is that, generally speaking, linear hyperviscosity will not have this property, but we can build nonlinear hyperviscosities that do. At some level this probably explains the utility of nonlinear hyperviscosity for shock capturing**. In designing a nonlinear hyperviscosity we have immense freedom as long as we keep the coefficient positive. We then have a positive viscosity multiplying a positive-definite operator, and this provides the deep form of stability we want, along with a connection that guarantees physically relevant solutions.
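A small demonstration of the positivity point, with illustrative coefficients: one explicit step of plain viscosity keeps a positive spike positive, while one step of a linear fourth-order hyperviscosity undershoots below zero:

```python
import numpy as np

# Positivity sketch: the explicit update u + c2 * lap(u) is a convex
# average of a point and its neighbors for c2 <= 1/2, so it cannot create
# new negative values. The bi-Laplacian update u - c4 * lap(lap(u)) has a
# stencil with mixed signs and undershoots. Coefficients are illustrative
# and within the explicit stability limits.
N = 64
u0 = np.zeros(N)
u0[N // 2] = 1.0                 # positive spike

def lap(v):
    # Undivided three-point Laplacian on a periodic grid.
    return np.roll(v, -1) - 2.0 * v + np.roll(v, 1)

u2 = u0 + 0.25 * lap(u0)         # second-order viscosity step
u4 = u0 - 0.05 * lap(lap(u0))    # fourth-order hyperviscosity step

print(u2.min())                  # stays >= 0
print(u4.min())                  # dips below zero next to the pulse
```

The negative lobes of the bi-Laplacian stencil are exactly the loss of positivity (and of a discrete entropy guarantee) discussed above.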

With the basic principles in hand we can go wild and derive forms for the hyperviscosity that are well-suited to whatever we are doing. **If we have a method with high-order accuracy, we can derive a hyperviscosity to stabilize the method that will not intrude on the accuracy of the method**. For example, let’s say we have a fourth-order accurate method, so we want a viscosity with at least a fifth-order operator, $\nabla \cdot \left( \nu \nabla u \right)$ with $\nu = C \Delta x^5 \left| \nabla^4 u \right|$. If one wanted better high-frequency damping, a different form would work, like $-\nabla^2 \left( \nu \nabla^2 u \right)$ with $\nu = C \Delta x^5 \left| \nabla^2 u \right|$. To finish the generalization of the idea, consider that you have an eighth-order method; now a ninth- or tenth-order viscosity would work, for example $\nu = C \Delta x^9 \left| \nabla^8 u \right|$. The point is that one can exercise immense flexibility in deriving a useful method.
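As a sketch of the accuracy argument, take an illustrative fifth-order form, a coefficient $\nu = C \Delta x^5 |u_{xxxx}|$ multiplying the ordinary viscous flux; on smooth data the whole term shrinks like $\Delta x^5$ under refinement, which the little check below verifies numerically (the constant and grid sizes are illustrative assumptions):

```python
import numpy as np

# An illustrative fifth-order nonlinear hyperviscosity: the coefficient
# nu = C * dx^5 * |u_xxxx| multiplies the usual viscous flux nu * u_x.
# On smooth data every factor is O(1) except dx^5, so the whole term is a
# fifth-order perturbation and cannot pollute a fourth-order scheme.
C = 1.0

def hyper_term_size(N):
    dx = 2.0 * np.pi / N
    x = np.arange(N) * dx
    u = np.sin(x)                                   # smooth test function
    # Standard centered stencils on a periodic grid.
    d4 = (np.roll(u, -2) - 4 * np.roll(u, -1) + 6 * u
          - 4 * np.roll(u, 1) + np.roll(u, 2)) / dx**4
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    nu = C * dx**5 * np.abs(d4)                     # nonlinear coefficient
    flux = nu * ux
    term = (np.roll(flux, -1) - np.roll(flux, 1)) / (2.0 * dx)
    return np.max(np.abs(term))

r = hyper_term_size(32) / hyper_term_size(64)
print(r)   # close to 2^5 = 32, confirming a fifth-order term
```

Halving the mesh shrinks the dissipative term by roughly a factor of thirty-two, so the stabilization rides below the truncation error of the fourth-order method.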

I’ll finish by making a brief observation about how to apply these ideas to systems of conservation laws, $u_t + \nabla \cdot f(u) = 0$. This system of equations will have characteristic speeds, $\lambda$, determined by the eigen-analysis of the flux Jacobian, $A = \partial f / \partial u$. A reasonable way to think about hyperviscosity would be to write the nonlinear version as $\nu = C \Delta x^{m+1} \left| \nabla^m \lambda \right|$, where $m$ is the number of derivatives to take. A second approach, which would work with Godunov-type methods, would compute the absolute value of the jump in the characteristic speeds at the cell interfaces where the Riemann problem is solved, and use it to set the magnitude of the viscous coefficient. This jump is the order of the approximation, and it would multiply the cell-centered jump in the variables, $C \left| \lambda_R - \lambda_L \right| \left( u_R - u_L \right)$. This would guarantee proper entropy production through the hyperviscous flux, which would augment the flux computed via the Riemann solver. The hyperviscosity would not impact the formal accuracy of the method.
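A sketch of the interface-jump approach, illustrated on the scalar Burgers flux $f(u) = u^2/2$, whose characteristic speed is simply $\lambda(u) = f'(u) = u$; the constant and the specific flux form $C\,|\Delta\lambda|\,\Delta u$ are illustrative assumptions:

```python
import numpy as np

# Interface form of a hyperviscous flux for a conservation law, sketched
# on Burgers' flux f(u) = u^2/2, where lambda(u) = f'(u) = u. The added
# flux C * |jump in lambda| * (jump in u) is dissipative (the coefficient
# is manifestly positive), and on smooth data both jumps are O(dx), so
# the term is O(dx^2) and vanishes rapidly under refinement.
C = 0.5

def max_hyper_flux(N):
    dx = 2.0 * np.pi / N
    x = np.arange(N) * dx
    u = np.sin(x)
    lam = u                              # characteristic speed for Burgers
    jump_u = np.roll(u, -1) - u          # jump across each cell interface
    jump_lam = np.roll(lam, -1) - lam
    f_visc = C * np.abs(jump_lam) * jump_u
    return np.max(np.abs(f_visc))

ratio = max_hyper_flux(32) / max_hyper_flux(64)
print(ratio)   # close to 4: the dissipative flux shrinks like dx^2
```

With a high-order reconstruction feeding the Riemann solver, the interface jumps shrink at the order of the method, so the same construction stays below the scheme's formal accuracy, as noted above.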

We can not solve our problems with the same level of thinking that created them

― Albert Einstein

I spent the last two posts railing against the way science works today and its rather dismal reflection in my professional life. I’m taking a week off from that. It isn’t that last week was any better; it was actually worse. **The rot in the world of science is deep, but the rot is simply part of the larger world of which science is a part.** Events last week were even more appalling and pregnant with concern. Maybe if I can turn away and focus on something positive, it might be better, or simply more tolerable. Soon I have a trip to Washington and into the proverbial belly of the beast; it should be entertaining at the very least.

Till next Friday, keep all your singularities regularized.

Think before you speak. Read before you think.

― Fran Lebowitz

Von Neumann, John, and Robert D. Richtmyer. “A method for the numerical calculation of hydrodynamic shocks.” *Journal of Applied Physics* 21.3 (1950): 232-237.

Borue, Vadim, and Steven A. Orszag. “Local energy flux and subgrid-scale statistics in three-dimensional turbulence.” *Journal of Fluid Mechanics* 366 (1998): 1-31.

Cook, Andrew W., and William H. Cabot. “Hyperviscosity for shock-turbulence interactions.” *Journal of Computational Physics* 203.2 (2005): 379-385.

Smagorinsky, Joseph. “General circulation experiments with the primitive equations: I. The basic experiment.” *Monthly Weather Review* 91.3 (1963): 99-164.