To test a perfect theory with imperfect instruments did not impress the Greek philosophers as a valid way to gain knowledge.

― Isaac Asimov

Note: I got super annoyed with WordPress's ability to parse LaTeX, so there is a lot of math in here that I gave up on. Apologies!

Last week I introduced a set of alternatives to the discontinuous intrinsic functions that provide logical capability in programming. In the process I outlined some of the issues that arise from the discontinuous aspects of computer code. The utility of these alternatives applies to common regression testing of codes and to the convergence of nonlinear solvers. The question remains of just how useful these alternatives are, and my intent is to address components of that question this week. Do these advantageous functions provide their benefit without undermining more fundamental properties of numerical methods, like stability and convergence? Or do we need to modify the implementation of these smoothed functions in some structured manner to assure proper behavior?

The answer to both questions is an unqualified yes: they are useful, and they need some careful modification to assure correct behavior. The smoothed functions may be used, but the details do matter.

To this end we will introduce several analysis techniques to show these issues concretely. One thing to get out of the way immediately is that this analysis does not change some basic aspects of the functions. All of the functions recover the original function in an asymptotic limit; for example, \mbox{softsign}(x) = \tanh(n x) becomes the original sign function as n \rightarrow \infty. Our goal is to understand the behavior of these functions within the context of a numerical method away from this limit, where we have obviously deviated substantially from the classical functions. We fundamentally want to assure that the basic approximation properties of methods are not altered in some fatal manner by their use. A big part of the tool kit will be the systematic use of Taylor series approximations to make certain that the consistency and order of accuracy of the numerical method are retained when switching from the classical functions to their smoothed versions. Consistency simply means that the approximations are valid approximations of the original differential equation (meaning the error is ordered).
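
To make the asymptotic-limit claim concrete, here is a minimal sketch in Python using sympy (my own check, not part of the original analysis) that verifies the recovery of the sign function and shows how the smoothed absolute value behaves near zero:

```python
# Minimal sketch (sympy): verify softsign(x) = tanh(n*x) recovers sign(x)
# in the limit, and Taylor-expand the smoothed absolute value about x = 0.
import sympy as sp

x, n = sp.symbols('x n', positive=True)

softsign = sp.tanh(n * x)
softabs = x * sp.tanh(n * x)

# For x > 0 the smoothed sign saturates to 1 as n -> infinity.
print(sp.limit(softsign, n, sp.oo))      # prints: 1

# Near x = 0 the smoothed absolute value behaves like n*x**2, not |x|,
# which is the deviation the truncation-error analysis has to account for.
print(sp.series(softabs, x, 0, 5))       # n*x**2 - n**3*x**4/3 + O(x**5)
```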

There was one important detail that I misplaced during last week's post. If one takes the definition of the sign function we can see an alternative that wasn't explored. We have \mbox{sign}(x) = |x|/x = x/|x|. Thus we can easily rearrange this expression to give two very different regularized absolute values, |x| = x\, \mbox{sign}(x) and |x| = x/\mbox{sign}(x). When we move to the softened sign function the behavior of the absolute value changes in substantive ways. In particular, where the first softened absolute value is everywhere less than the classical absolute value, the second interpretation is everywhere greater, and vice-versa. As a result a function like \mbox{softsign}(x) = \tanh(n x) can now produce an absolute value, \mbox{softabs}(x) = x/\mbox{softsign}(x), that doesn't have issues with entropy conditions because the dissipation will be more than the minimum, not less. Next we will examine whether these different views have significance in the truncation error.
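
A quick numerical sketch (numpy; the function names here are mine) makes the two interpretations concrete: the product form always undershoots the classical absolute value while the quotient form always overshoots it:

```python
# Sketch: the two regularized absolute values built from softsign = tanh(n*x).
import numpy as np

def softsign(x, n=10.0):
    return np.tanh(n * x)

def softabs_product(x, n=10.0):
    # |x| = x * sign(x): everywhere <= |x|
    return x * softsign(x, n)

def softabs_quotient(x, n=10.0):
    # |x| = x / sign(x): everywhere >= |x|; note the 0/0 at x = 0
    return x / softsign(x, n)

x = np.linspace(-1.0, 1.0, 101)
x = x[x != 0.0]  # dodge the hard zero in the quotient form
assert np.all(softabs_product(x) <= np.abs(x))
assert np.all(softabs_quotient(x) >= np.abs(x))
```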

Our starting point will be the replacement of the sign function or absolute value in upwind approximations to differential equations. For entropy satisfaction, and for generally stable approximation, the upwind approximation is fundamental as the foundation of robust numerical methods for fluid dynamics. We can start with the basic upwind approximation using the classical function, the absolute value in this case. We will base the analysis on the semi-discrete version of the scheme using a flux difference, u_t = -f(u)_x \approx -\frac{1}{h} \left[ f(j+1/2) - f(j-1/2) \right]. The basic upwind flux is f(j+1/2) = \frac{1}{2} \left[ f(j) + f(j+1) \right] - \frac{1}{2} \left|a\right| \left[ u(j+1) - u(j) \right], where a is the characteristic velocity for the flux and the second term provides the upwind dissipation. The Taylor series analysis gives the leading behavior of this scheme as f(u)_x - \frac{h}{2} \left|a\right| u_{xx} + {\cal O}(h^2). We find that a set of very interesting conclusions can be drawn almost immediately.
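
As a concrete reference for the analysis, here is a sketch of this semi-discrete upwind scheme for linear advection, f(u) = a u (numpy, periodic boundaries; the test setup is my own), with the absolute value left pluggable so the smoothed versions can be swapped in:

```python
# Semi-discrete upwind: u_t = -(f(j+1/2) - f(j-1/2))/h with
# f(j+1/2) = (f(j) + f(j+1))/2 - |a|/2 * (u(j+1) - u(j)).
import numpy as np

def upwind_rhs(u, a, h, absfun=abs):
    up1 = np.roll(u, -1)                       # u(j+1), periodic wrap
    f_half = 0.5 * a * (u + up1) - 0.5 * absfun(a) * (up1 - u)
    return -(f_half - np.roll(f_half, 1)) / h  # -(f(j+1/2) - f(j-1/2))/h

n = 10.0                                        # smoothing strength
smooth_abs = lambda a: a * np.tanh(n * a)       # product form: <= |a|

h = 1.0 / 64.0
xj = np.arange(0.0, 1.0, h)
u0 = np.sin(2.0 * np.pi * xj)

# The smoothed scheme carries slightly less dissipation than classical upwind.
print(np.max(np.abs(upwind_rhs(u0, 1.0, h) - upwind_rhs(u0, 1.0, h, smooth_abs))))
```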

All of the smoothed functions introduced are workable alternatives, although some versions are intrinsically better; in other words, all produce a valid, consistent first-order approximation. The functions based on analytical functions like \tanh or \mbox{erf} are valid approximations, but the amount of dissipation is always less than the classical function, leading to potential entropy violation. They approach the classical absolute value as one would expect, and the deviations diminish accordingly. Functions such as \mbox{softabs}(x) = x^2/(\left|x\right| + n) or \mbox{softabs}(x) = x^2/\sqrt{x^2 + n^2} produce no change in the leading-order truncation error, although their deviations similarly always produce less dissipation than classical upwind. We do find that for both functions we need to modify the form of the regularization to get good behavior: \mbox{softabs}(x) = x^2/(\left|x\right| + n h) and \mbox{softabs}(x) = x^2/\sqrt{x^2 + n^2 h^2}. The classical softmax function based on logarithms and exponentials behaves the same way, except that it always produces more dissipation than upwinding rather than less, \mbox{softabs}(a) = \mbox{softmax}(a,-a) \ge |a|. This may make this functional basis better for replacing the absolute value for the purpose of upwinding. The downside to this form of the absolute value is that the corresponding regularized sign function passes through a hard zero, which makes division problematic.
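
Here is a sketch (numpy; again, the naming is my own) of these regularized absolute values with the mesh-scaled regularization, including the log-sum-exp softmax form, which is the only one that bounds the classical absolute value from above:

```python
# The regularized absolute values discussed above, with mesh-scaled forms.
import numpy as np

def softabs_linear(x, n, h):
    return x * x / (np.abs(x) + n * h)              # <= |x|

def softabs_quadratic(x, n, h):
    return x * x / np.sqrt(x * x + n**2 * h**2)     # <= |x|

def softabs_logexp(x, n):
    # softmax(x, -x) = log(exp(n*x) + exp(-n*x))/n  >= |x|
    return np.logaddexp(n * x, -n * x) / n

x = np.linspace(-2.0, 2.0, 401)
h = 1.0 / 100.0
assert np.all(softabs_linear(x, 1.0, h) <= np.abs(x))
assert np.all(softabs_quadratic(x, 1.0, h) <= np.abs(x))
assert np.all(softabs_logexp(x, 10.0) >= np.abs(x))
```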

Let's look at the functions useful for producing a more entropy-satisfying result for upwinding. We find that these functions work differently than the original ones. For example, the hyperbolic tangent does not become equivalent to the upwind scheme as quickly as n \gg 0. There is a lingering departure from linearity with \mbox{softsign}(x) = x/(|x| + n h) \rightarrow \mbox{softabs}(x) = |x| + n h that is proportional to the mesh spacing and n. As a result the quadratic form of the softened sign is best because of its h^2 regularization. Perhaps this is a more widely applicable conclusion, as we will see when we develop the smoothed functions further with limiters.
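
The lingering departure is easy to see numerically. This small comparison (my own test) shows the quotient-form deviation from |x| is a constant n h for the linear regularization, but only {\cal O}(h^2/|x|) for the quadratic one:

```python
# Deviation of the entropy-safe (over-dissipative) absolute values from |x|.
import numpy as np

n, h = 1.0, 1.0 / 100.0
x = np.linspace(0.1, 2.0, 50)

dev_linear = (np.abs(x) + n * h) - np.abs(x)             # exactly n*h everywhere
dev_quadratic = np.sqrt(x**2 + n**2 * h**2) - np.abs(x)  # ~ (n*h)**2 / (2|x|)
print(dev_linear.max(), dev_quadratic.max())             # 0.01 vs ~5e-4
```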

Where utility ends and decoration begins is perfection.

― Jack Gardner

Now we can transition to a more complex and subtle subject: limiters. Briefly put, limiters are nonlinear functions applied to differencing schemes to produce non-oscillatory (or monotone) solutions with higher-order accuracy. Generally in this context high-order is anything above first order. We have theory that confines linear non-oscillatory methods to first-order accuracy, where upwind differencing is canonical. As a result the basic theory applies to second-order methods, where a linear basis is added to the piecewise-constant basis the upwind method is built on. The result is the term "slope limiter," where the linear part, the slope, is modified by a nonlinear function. Peter Sweby produced a diagram to describe what successful limiters look like parametrically. The parameter is the non-dimensional ratio of discrete gradients, r = \frac{u(j+1) - u(j)}{u(j) - u(j-1)}. The smoothed functions described here modify the adherence to this diagram. The classical diagram has a region where second-order accuracy can be expected; it is bounded below by the function \mbox{minmod}(1,r) and above by twice that function.
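
To fix notation, here is a small sketch (numpy; my own code, not from the post) of the classical pieces: the minmod limiter, the gradient ratio r, and the second-order band of the Sweby diagram as described above:

```python
# Classical limiter machinery for the Sweby diagram.
import numpy as np

def minmod(a, b):
    # zero when the arguments disagree in sign, else the smaller magnitude
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def gradient_ratio(u):
    # r(j) = (u(j+1) - u(j)) / (u(j) - u(j-1)); guard zero denominators in practice
    du = np.diff(u)
    return du[1:] / du[:-1]

r = np.linspace(-1.0, 4.0, 501)
lower = minmod(np.ones_like(r), r)   # lower bound of the second-order region
upper = 2.0 * lower                  # upper bound: twice the minmod curve
assert np.all(upper >= lower)
```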

We can now visualize the impact of the smoothed functions on this diagram. The smoothing produces systematic changes in the diagram that lead to deviations from the ideal behavior. Realize that the ideal diagram is always recovered in the limit as the functions recover their classical forms. What we see is that the classical curves are converged upon from above or below, which produces wiggles in the overall functional evaluation. My illustrations all show the functions with the regularization chosen to be unrealistically small to exaggerate the impact of the smooth functions. A bigger and more important question is whether the functions impact the order of approximation.
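
A sketch of the visualization (matplotlib; the smoothing recipe here is one possible assembly from the pieces above, not necessarily the one behind the post's figures) overlays a smoothed minmod on the classical curve in the Sweby diagram:

```python
# Overlay a smoothed minmod(1, r) on the classical curve in the Sweby diagram.
import numpy as np
import matplotlib.pyplot as plt

n = 4.0
softsign = lambda x: np.tanh(n * x)
softmax = lambda a, b: np.logaddexp(n * a, n * b) / n   # smooth max
softmin = lambda a, b: -softmax(-a, -b)                 # smooth min

def soft_minmod(a, b):
    # minmod(a, b) = sign(a) * max(0, min(|a|, sign(a)*b)), all pieces smoothed
    return softsign(a) * softmax(0.0, softmin(a * softsign(a), softsign(a) * b))

r = np.linspace(-1.0, 4.0, 501)
classical = np.maximum(0.0, np.minimum(1.0, r))         # minmod(1, r)
plt.plot(r, classical, label='minmod(1, r)')
plt.plot(r, soft_minmod(np.ones_like(r), r), label='smoothed minmod(1, r)')
plt.xlabel('r'); plt.ylabel('limiter'); plt.legend(); plt.show()
```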

To finish up this discussion I'm going to look at the truncation error of the methods. Our starting point is the classical scheme's error, which provides a viewpoint on the nature of the nonlinearity associated with limiters. What is clear about a successful limiter is its ability to produce a valid approximation to a gradient with an ordered error of at least order h = \Delta x. The minmod limiter produces the gradient estimate \frac{1}{h}\mbox{minmod}(u(j)-u(j-1),\, u(j+1)-u(j)) = u_x - \frac{h}{2} \left| \frac{u_{xx}}{u_x} \right| u_x + {\cal O}(h^2). The results with the different recipes for the smoothed sign function and its extensions to softabs, softmin, and softmax are surprising, to say the least.

Here is a structured summary of the options as applied to a minmod limiter, \mbox{minmod}(a,b) = \mbox{sign}(a)\, \max\left[ 0, \min\left( |a|, \mbox{sign}(a)\, b \right) \right]; a code sketch of this definition follows the list:

  1. \mbox{softsign}(x) = \tanh(n x) and \mbox{softabs}(x) = x \tanh(n x). The gradient approximation is u_x \approx \tanh(n)\tanh(2 n)\, u_x + (\mbox{giant mess})\, h\, u_{xx} + {\cal O}(h^2). The constant in front of the gradient approaches one very quickly as n grows. [Figure: tanh-constant]
  2. \mbox{softsign}(x) = \tanh(n x) and \mbox{softabs}(x) = x/\tanh(n x). The gradient approximation is u_x \approx \frac{1}{2}\left(2 \coth(2 n) - \frac{1}{n}\right) \tanh(n)\, u_x + (\mbox{giant mess})\, h\, u_{xx} + {\cal O}(h^2). The constant in front of the gradient approaches one very slowly as n grows. This smoothing is unworkable for limiters. [Figure: one-over-tanh-constant]
  3. \mbox{softsign}(x) = x/(n + |x|) and \mbox{softabs}(x) = x^2/(n + |x|). Putting the mesh dependence in the sign function results in inconsistent gradient approximations. The gradient approximation is u_x \approx \frac{2}{(n+1)(n+2)}\, u_x + \frac{3n + 2n^2}{(1+n)^2 (2+n)^2}\, h\, u_{xx} + {\cal O}(h^2). The leading constant goes slowly to one as n \rightarrow 0.
  4. \mbox{softsign}(x) = x/(n + |x|) and \mbox{softabs}(x) = n h + |x|. The gradient approximation is u_x \approx u_x - n h\, u_x + {\cal O}(h^2). [Figure: linear-constant]
  5. \mbox{softsign}(x) = x/\sqrt{x^2 + n^2} and \mbox{softabs}(x) = x^2/\sqrt{x^2 + n^2}. Putting the mesh dependence in the sign results in inconsistent gradient approximations: u_x \approx u_x + (\mbox{giant unordered mess}) + {\cal O}(h). This makes the approximation utterly useless in this context.
  6. \mbox{softsign}(x) = x/\sqrt{x^2 + n^2 h^2} and \mbox{softabs}(x) = \sqrt{x^2 + n^2 h^2}. The gradient approximation is u_x \approx u_x - \left[ u_x \sqrt{n^2 + \left(\frac{u_{xx}}{u_x}\right)^2} \right] h + {\cal O}(h^2). [Figure: quadratic-constant]

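As promised above, here is the smoothed minmod assembled in code (my own construction, smoothing every occurrence of the sign and absolute value at once; the recipes in the list differ precisely in where the smoothing enters, and the printed deviation from the classical value shows how sensitive the leading constant is to that choice):

```python
# Smoothed minmod assembled from the definition above:
# minmod(a, b) = sign(a) * max(0, min(|a|, sign(a)*b)).
import numpy as np

def classical_minmod(a, b):
    return np.sign(a) * np.maximum(0.0, np.minimum(np.abs(a), np.sign(a) * b))

def smoothed_minmod(a, b, softsign, softabs):
    return softsign(a) * np.maximum(0.0,
                                    np.minimum(softabs(a), softsign(a) * b))

# Recipe 6 ingredients: quadratic regularization with the n**2 * h**2 scaling.
def recipe6(n, h):
    softsign = lambda x: x / np.sqrt(x * x + n**2 * h**2)
    softabs = lambda x: np.sqrt(x * x + n**2 * h**2)
    return softsign, softabs

# Gradient estimate for u(x) = sin(x) from one-sided differences.
h, n, x0 = 1.0e-3, 1.0, 0.5
a = np.sin(x0) - np.sin(x0 - h)     # backward difference
b = np.sin(x0 + h) - np.sin(x0)     # forward difference
ss, sa = recipe6(n, h)
print(classical_minmod(a, b) / h,   # ~ u_x = cos(x0)
      smoothed_minmod(a, b, ss, sa) / h)
```
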
Being useful to others is not the same thing as being equal.

― N.K. Jemisin

Sweby, Peter K. "High Resolution Schemes Using Flux Limiters for Hyperbolic Conservation Laws." SIAM Journal on Numerical Analysis 21, no. 5 (1984): 995-1011.
