We’ve taken the world apart but we have no idea what to do with the pieces.

― Chuck Palahniuk

There are a lot of ways to turn differential equations into a discrete system and numerically solve them. The choice usually comes down to three distinct, but slightly ambiguous, options: finite differences, finite volumes, or finite elements. One important, but fuzzy, piece of knowledge is the actual meaning of the variables you’re solving for in the first place. Some clarity regarding each variable’s detailed identity can come in handy, if not be essential, for a high-fidelity solution. With any luck we can shed some light on this. The place to start is writing down the governing (usually differential) equations. If your problem is geometrically complex, finite element methods have a distinct appeal. The finite element method has a certain “turn the crank” approach that takes away much of the explicit decision-making in discretization, but the decisions and choices run much deeper than this if you really care about the answer.

Tiny details imperceptible to us decide everything!

― W.G. Sebald

I’ve never been a fan of not thinking deeply and specifically about what you are doing. Finite element aficionados seem very keen on avoiding as much thought as possible in discretization, with a philosophy of choosing your element and associated shape functions and letting the chips fall where they may. For simple problems with a lot of regularity (smoothness) this can work well and even be advantageous, but for difficult problems (e.g., hyperbolic PDEs) it can be disastrous. In the end, people would be far better served by putting more thought into the transition from the continuous to the discrete.

My recent posts have been examples of the sorts of details that matter a great deal in determining the quality of computed results. Ideas like convergence, limiting, dissipation, and accuracy all matter to a tremendous degree, and make the difference between stability and instability, high and low fidelity, run-of-the-mill and state-of-the-art, and ultimately high quality. Failure to pay acute attention to the details of the discretization will result in mediocrity.

It should come as no surprise that I don’t particularly care for the finite element method, so in keeping with my tastes I’m focusing on the finite volume and finite difference methods today. There are significant differences between the two that should be taken into account when deriving discrete approximations. Perhaps more interestingly, there is a fairly well-defined way to translate between the two points of view. This translation makes for a useful addition to anyone’s discretization “toolbox”.

Once upon a time there was no such thing as the “finite volume method”; it was simply a special kind of finite difference method. Some finite difference methods employed a discrete conservation principle and could be thought of as directly updating conserved quantities. These distinctions are not terribly important until methods become higher than second-order accurate. Gradually the methodology became distinct enough that the separate name was worth making. The term “finite volume” entered the vernacular in about 1973 and stuck (an earlier paper in 1971 had coined the “finite area” method in 2-D).

The distinction between the two approaches starts with how the equations are updated. A finite difference method updates the equations using the differential form at a point in space. A finite volume method updates them in a manner consistent with an integral conservation principle over a well-defined region. So the variables in a finite difference method are defined at a point, while the values in a finite volume method are defined as the integral (average) of that quantity over a region. Transfers between adjoining regions via fluxes are conserved: what leaves one volume enters the next. Nothing about the finite difference method precludes conservation, but nothing dictates it either. For a finite volume method, conservation is far more intrinsic to the basic formulation.

Consider a conservation law, $\partial_t u + \partial_x f(u)=0$. One might approximate it with either finite differences or finite volumes. Once a decision is made, the approximation approach falls out naturally. In the finite difference approach, one takes the solution at points in space (or, in the case of this PDE, the fluxes $f(u)$) and interpolates these values in some reasonable manner. Then the derivative of the flux, $\partial_x f(u)$, is evaluated. The update equation is $\partial_t u_j = - \partial_x f(u)_j$, which can be used to advance the solution by treating time like an ODE integration. This is often called the “method of lines”.
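The method-of-lines recipe above can be made concrete with a minimal sketch. This is an illustrative example, not anything from a particular code: the advection speed, grid size, centered-difference stencil, and RK4 time integrator are all my assumptions for the demonstration.

```python
import numpy as np

# Hypothetical method-of-lines sketch for u_t + a u_x = 0 with periodic
# boundaries. Point values u_j live at grid points; the flux derivative is
# approximated by a centered difference, and time is treated as an ODE
# integrated with classical RK4 (all choices assumed for illustration).

a = 1.0            # advection speed (assumed)
N = 200            # number of grid points (assumed)
dx = 1.0 / N
x = np.arange(N) * dx

def dudt(u):
    # centered difference of the flux f(u) = a*u, periodic in x
    f = a * u
    return -(np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

def rk4_step(u, dt):
    k1 = dudt(u)
    k2 = dudt(u + 0.5 * dt * k1)
    k3 = dudt(u + 0.5 * dt * k2)
    k4 = dudt(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

u = np.sin(2 * np.pi * x)      # smooth initial condition
dt = 0.4 * dx / abs(a)         # CFL-limited step
t = 0.0
while t < 1.0 - 1e-12:
    step = min(dt, 1.0 - t)
    u = rk4_step(u, step)
    t += step

# after one full period the exact solution returns to the initial data
err = np.max(np.abs(u - np.sin(2 * np.pi * x)))
```

Note there is nothing conservative about this update by construction; it simply evaluates a derivative at each point, which is exactly the distinction drawn above.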

For a finite volume method the approach is demonstrably different. The update makes explicit contact with the notion that the variable is a quantity conserved in space. For a simple PDE like the one above, the update requires fluxes computed at the edges of each cell. Evaluating the fluxes at the edges assures that the variable is conserved discretely. The trick to doing the finite volume method properly is the conversion of the set of cell (element) conserved values of $u$ to point values at the edges, where the fluxes are evaluated. It isn’t entirely simple. Again an interpolation can be utilized to this end, but the interpolant must adhere to the conservation character of the variable: when integrated over the cell, it must return the conserved quantity in that cell precisely. This can be accomplished, for example, with the classical Legendre polynomial basis.
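A minimal sketch shows how the edge fluxes make conservation automatic. The scheme here is the simplest possible choice, first-order upwinding with an assumed positive speed and an assumed Gaussian bump; the point is that the flux leaving one cell's right edge is exactly the flux entering its neighbor, so the discrete total telescopes and is preserved to round-off.

```python
import numpy as np

# Hypothetical finite volume sketch for u_t + f(u)_x = 0 with f(u) = a*u,
# a > 0, periodic boundaries. The unknowns ubar_j are cell averages; fluxes
# are evaluated at cell edges by upwinding (edge value taken from the cell
# on the upwind side). All parameters are assumed for illustration.

a = 1.0
N = 100
dx = 1.0 / N
xc = (np.arange(N) + 0.5) * dx           # cell centers

ubar = np.exp(-100 * (xc - 0.5) ** 2)    # smooth bump, stored as cell averages
total0 = ubar.sum() * dx                 # the discretely conserved total

dt = 0.5 * dx / a                        # CFL-limited time step
for _ in range(50):
    f_edge = a * ubar                    # upwind flux at the right edge of cell j
    # conservative update: each edge flux leaves one cell and enters the next
    ubar = ubar - dt / dx * (f_edge - np.roll(f_edge, 1))

total1 = ubar.sum() * dx
# total1 matches total0 to round-off: conservation is built in, not imposed
```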

Perhaps you’re now asking how to translate between these two points of view. It turns out to be a piece of cake, and another useful technique to place in the proverbial toolbox. The translation can be derived from the interpolations discussed above, and the two directions rightly mirror each other. If one takes a set of control-volume, integrally averaged, values and recovers the corresponding point values, the formula is remarkably simple. Here I will denote control volume values by $\bar{u}_j$ and point values by $u_j$. We can transform to point values via $u_j \approx \bar{u}_j - \frac{\Delta x^2}{24} \partial_{xx} \bar{u}_j + \text{HOT}$. The inverse operation, derived by integrating the pointwise interpolation over a cell, flips the sign: $\bar{u}_j \approx u_j + \frac{\Delta x^2}{24} \partial_{xx} u_j + \text{HOT}$. For higher-order approximations these calculations are a bit more delicate than these formulas imply!
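The translation is easy to check numerically. In this sketch (test function, grid size, and the discrete second-difference stencil are all my choices) the cell averages of $\sin 2\pi x$ are computed exactly from the antiderivative, and the $\Delta x^2/24$ correction recovers cell-center point values to fourth-order accuracy rather than second.

```python
import numpy as np

# Hypothetical verification of the average <-> point translation on
# u(x) = sin(2*pi*x), periodic on [0, 1]. Cell averages come exactly from
# the antiderivative; a standard second difference supplies the correction.

N = 64
dx = 1.0 / N
edges = np.arange(N + 1) * dx
xc = 0.5 * (edges[:-1] + edges[1:])        # cell centers

# exact cell averages: (F(x_{j+1/2}) - F(x_{j-1/2})) / dx with F = -cos(2 pi x)/(2 pi)
F = -np.cos(2 * np.pi * edges) / (2 * np.pi)
ubar = (F[1:] - F[:-1]) / dx

def d2(v):
    # periodic second difference, approximates dx^2 * d_xx to O(dx^2)
    return (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / dx ** 2

# averages -> point values at cell centers: subtract the dx^2/24 correction
u_point = ubar - dx ** 2 / 24 * d2(ubar)
err_pt = np.max(np.abs(u_point - np.sin(2 * np.pi * xc)))

# point values -> averages: note the opposite sign on the correction
u_exact = np.sin(2 * np.pi * xc)
ubar_rec = u_exact + dx ** 2 / 24 * d2(u_exact)
err_avg = np.max(np.abs(ubar_rec - ubar))
```

Dropping the correction entirely leaves an $O(\Delta x^2)$ error; with it, both directions agree with the exact values to roughly $O(\Delta x^4)$, which is the mirroring claimed above.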

For finite element methods we usually have the standard flavor of continuous finite elements. The variables in that case are defined by nodal values, the “shape function” describing their variation in space, and appropriately weighted integrals. A more modern and exciting approach is discontinuous Galerkin, which does not require continuity of the solution across element boundaries. The lowest-order version of this method is equivalent to a low-order finite volume scheme: the variable is the zeroth moment of the solution over a cell. One way of looking at high-order discontinuous Galerkin methods is as taking successive moments of the solution over the cells (elements). This method holds great promise because of its high fidelity and great locality.
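The moment picture can be sketched directly. In this illustrative fragment (the cell endpoints, test function, and quadrature order are my assumptions), projecting the solution onto Legendre polynomials over one cell shows that the zeroth moment is exactly the cell average, i.e., the finite volume variable, while higher moments carry the slope and curvature information a discontinuous Galerkin method keeps.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Hypothetical sketch: Legendre moments of u(x) = sin(2*pi*x) over one cell.
# Moment k is (2k+1)/2 * integral of u(x(xi)) * P_k(xi) over xi in [-1, 1];
# moment 0 reduces to the cell average, the finite volume unknown.

a_cell, b_cell = 0.25, 0.375              # one cell (assumed endpoints)
xi, w = leggauss(8)                       # Gauss points/weights on [-1, 1]
x = 0.5 * (b_cell - a_cell) * xi + 0.5 * (a_cell + b_cell)
u = np.sin(2 * np.pi * x)                 # the solution restricted to the cell

moments = []
for k in range(3):
    Pk = Legendre.basis(k)(xi)            # P_k evaluated at the Gauss points
    moments.append((2 * k + 1) / 2 * np.sum(w * u * Pk))

# the exact cell average from the antiderivative, for comparison
exact_avg = (np.cos(2 * np.pi * a_cell) - np.cos(2 * np.pi * b_cell)) \
            / (2 * np.pi * (b_cell - a_cell))
# moments[0] matches exact_avg; moments[1], moments[2] hold slope/curvature
```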

This is just the start of this exploration, but the key is knowing what your variables really mean.

Little details have special talents in creating big problems!

― Mehmet Murat ildan