Over the past couple of weeks I’ve experienced something very irritating time and time again. Each time I’ve been left more frustrated and angry than before. It has been a continual source of disappointment. I went into a room expecting to learn something and left knowing less than when I entered. What is it? The finite element method.

“If you can’t explain it to a six year old, you don’t understand it yourself.” ― Albert Einstein

In short, the answer to my title is nothing at all, and everything. Nothing is technically wrong with the finite element method, absolutely nothing at all. But while nothing is wrong with the method itself, there is a lot wrong with what it does to the practice of mathematics and scientific computing. More specifically, there isn’t a thing wrong with the method except how people use it, which is too damn abstractly. Much of the time the method is explained in a deep code undecipherable to anyone except a small cadre of researchers working in the field. Explaining finite elements to a six year old is a tall order, but a respectable goal. Too often you can’t explain what you’re doing to a 46-year-old with a PhD unless they are part of the collective of PhDs working directly in the field and received the magic decoder ring during their graduate education.

A common occurrence is for someone to begin their research career with papers that clearly state what they are doing, and then, as the researcher becomes successful, all clarity leaves their writing. I saw a talk at a meeting where a researcher who used to write clearly had simultaneously obscured their presentation while pivoting toward research on easier problems. This is utter madness! The mathematics of finite element research tends to take a method that works well on hard problems, and analyze it on simpler problems while making the whole thing less clear. One of the key reasons to work on simpler problems is to clarify, not complicate. Too often the exact opposite is done.

Sometimes this blog is about working out, hopefully articulately, the stuff that bugs me. I’ve spent most of the last month going to scientific meetings and seeing a lot of technical talks, and one of the things that bugs me the most is finite element methods (FEM). More specifically, the way FEM is presented. There really isn’t a lot wrong with FEM per se; it’s a fine methodology that might even be optimal for some problems. I can’t really say, because its proponents so often do such an abysmal job of explaining what they are doing and why. That is the crux of the matter.

Scientific talks on the finite element method tend to be completely opaque, and I walk out of them knowing less than when I walked in. The talks are often given in a manner that seems to intentionally obscure the topic, with the apparent objective of making the speaker seem much smarter than they actually are. I’m not fooled. The actual effect is to piss me off and cause me to think less of them. Presenting a simple problem in an intentionally abstract and obtuse way is simply a disservice to science. It serves no purpose but to make the simple grandiose and distant. It ultimately hurts the field, deeply.

The point of a talk is to teach, explain and learn, not to make the speaker seem really smart. Most FEM talks are about making the speaker seem smart instead of explaining why something works. The reality is that the simple, clear explanation is the actual hallmark of intellectual virtue. Simplicity is a virtue that seems to be completely off the map with FEM; FEM is about making the simple complex instead. To make matters more infuriating, much of the current research on FEM is focused on attacking the least important and most trivial mathematical problems instead of the difficult problems that are pacing computational science. Computational science is being paced today by issues such as multiphysics (where multiple physical effects interact to define a problem), particularly involving transport equations (defined by hyperbolic PDEs). In addition, uncertainty quantification along with verification and validation is extremely important.

Instead, FEM research is increasingly focused on elliptic PDEs, which are probably the easiest things to solve in the PDE world. In other words, if you can solve an elliptic PDE well, I know very little about your methodology’s capacity to attack the really hard, important problems. It is nice, but not very interesting (the very definition of necessary but insufficient). Frankly, the desire and interest in taking a method designed for solving hyperbolic PDEs, such as discontinuous Galerkin, and applying it to elliptic PDEs is worthwhile, but it should not receive anywhere near the attention I see. It is not important enough to justify the copious attention it is getting.
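For concreteness, the canonical textbook prototypes make the gap plain (this is a standard sketch, not anything from the talks I saw). The elliptic model problem is the Poisson equation, whose solutions are smoother than the data; the hyperbolic model is a nonlinear conservation law, whose solutions can develop discontinuities even from smooth initial data:

```latex
% elliptic prototype: Poisson's equation; the operator smooths the data
-\nabla^2 u = f
% hyperbolic prototype: a nonlinear conservation law, e.g. Burgers'
% equation with f(u) = u^2/2, which forms shocks from smooth initial data
u_t + \nabla \cdot \mathbf{f}(u) = 0
```

The first rewards assumptions of regularity; the second punishes them, which is exactly why it paces the field.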

The effect is that we are focused on the areas of less importance, which has the impact of taking the methodology backwards. The research dollars are focused on less important problems instead of more important ones. Difficult important problems should be the focus of research, not the kind of “Mickey Mouse” stuff I’ve seen the whole month. On top of Mickey Mouse problems, the talks make the topic as complex as possible, and seem to be focused on trying not to explain anything in simple terms.

“Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better.” ― Edsger Wybe Dijkstra

I think Dijkstra was talking about something entirely different, but the point is similar: complexity sells, and that is why it is trotted out time and time again. While it sells, it also destroys the sort of understanding that allows ideas to be extended and modified to solve new problems. The complexity tends to box ideas in rather than making them more general and less specific. There is a lot at stake beyond style; the efficacy of science is harmed by this manufactured lack of simplicity. Ultimately it is the lack of simplicity that works against FEM, not the method itself. This is a failure of the practice of FEM rather than of the ideas embedded within it.

The people who work on FEM tend to significantly overelaborate things. I’m quite close to 100% convinced that the overelaboration is completely unnecessary, and that it actually serves a supremely negative purpose in the broader practice of science. One of the end products is short-changing the FEM itself. In a nutshell, people can solve harder problems with finite volume methods (FVM) than with FEM. The quest for seemingly rigorous mathematics has created a tendency to work toward problems with well-developed math. Instead we need to be inventing math to attack important problems even if the rigor is missing. Additionally, researchers over time have been far more innovative with FVM than with FEM.

The FEM folks usually trot out that bullshit quip that FEM is exactly like FVM with the properly chosen test function. OK, fair enough, FEM is equivalent to FVM in that sense, but this fails to explain the generic lack of innovation in numerical methods arising from the FEM community. In the long run it is the innovations that determine the true power of a method, not the elaborate theories surrounding relatively trivial problems. These elaborations actually undermine methods and feed a cult of complexity that so often defines the practice.
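For the record, the quip fits in a few lines (a standard derivation, sketched here for the 1D conservation law $u_t + f(u)_x = 0$): multiply by a test function $\phi$, integrate by parts, and choose $\phi$ to be the indicator function of a single cell $[x_{i-1/2}, x_{i+1/2}]$. The gradient term vanishes inside the cell and only the interface fluxes survive:

```latex
% weak form: for all test functions \phi,
\int \phi\, u_t \, dx - \int \phi_x\, f(u)\, dx + \text{boundary terms} = 0
% choosing \phi = \mathbf{1}_{[x_{i-1/2},\,x_{i+1/2}]} (piecewise constant):
\frac{d}{dt} \int_{x_{i-1/2}}^{x_{i+1/2}} u \, dx
  + f\big(u(x_{i+1/2})\big) - f\big(u(x_{i-1/2})\big) = 0
% which is exactly the finite-volume update for the cell average \bar{u}_i:
\frac{d\bar{u}_i}{dt} = -\frac{F_{i+1/2} - F_{i-1/2}}{\Delta x}
```

So yes, with piecewise-constant test functions the Galerkin machinery collapses to a flux balance. The equivalence is real; the point is what each community does with it.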

Where FEM excels is in abstracting the geometry away from the method, and in the ability to include geometric detail in the simulation within a unified framework. This is extremely useful and explains the popularity of FEM for engineering analysis, where geometric detail is important, or assumed to be important. Quite often an innovative methodology is shoehorned into FEM after having been invented and perfected in the finite volume (or finite difference) world. Frequently the innovative devices have to be severely modified to fit FEM’s dictums. These modifications usually diminish the overall effectiveness of the innovations relative to their finite volume or finite difference forebears. These innovative devices are necessary to solve the hard multiphysics problems often governed by highly nonlinear hyperbolic (conservation or evolution) equations. I personally would be more convinced by FEM if some of the innovation happened within the FEM framework instead of continually being imported.

Perhaps most distressingly, FEM allows one to engage in mathematical masturbation. I say this with complete sincerity, because the development of methods in FVM is far more procreative: methods are actually born of the activity. Too often FEM leads to mathematical fantasies that have no useful end product aside from lots of self-referential papers in journals, and opaque talks at meetings such as those I’ve witnessed in the last month. For example, computational fluid dynamics (CFD) is dominated by FVM. CFD solvers are predominantly FVM, not FEM, largely for the very reason that innovative methods are derived first and used best in FVM. Without the innovative methods, CFD would not be able to solve many of its most important and challenging problems today.

Mathematically speaking, I think the issue comes down to regularity. For highly regular and well-behaved problems FEM works very well, and it’s better than FVM. In a sense FEM often doubles down on regularity with its test functions. When the solution is highly regular this yields benefits. The issue is that highly regular problems are the easier and less challenging problems to be solved, not the hard, technology-pacing ones. FVM, on the other hand, hedges its bets. Discontinuous Galerkin (DG) is a particular example. It is a really interesting method because it sits between FEM and FVM. The DG community puts a lot of effort into making it an FEM method, with all the attendant disadvantages of assumed regularity. This is the heart of the maddening case of taking a method so well suited to very hard problems and studying it incessantly on very easy problems with no apparent gain in utility. It seems to me that DG methods have actually gone backwards in the last decade due to this practice.
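To be concrete about where DG sits (again, the standard formulation, not any particular speaker’s): the weak form is posed locally on each element $K$, and elements talk to each other only through a numerical flux $\hat{f}$ at the interfaces:

```latex
% DG semi-discrete form on element K, for all local test functions \phi:
\int_K \phi\, u_t \, dx - \int_K \phi_x\, f(u)\, dx
  + \big[\phi\, \hat{f}(u^-, u^+)\big]_{\partial K} = 0
% polynomial degree p = 0 (piecewise constants) recovers the finite-volume
% method; higher p adds assumed regularity local to each element.
```

With $p = 0$ you get FVM back exactly; raising $p$ is a bet on element-local smoothness, which is precisely the double-down-versus-hedge distinction above.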

In a sense the divide is defined by whether you don’t assume regularity and add it back, or you assume it is there and take measures to deal with it when it isn’t. Another good example comes from the use of FEM for hyperbolic PDEs, where conservation form is important. Conservation is essential, and the weak form of the PDE should give conservation naturally. Instead, with the most common Galerkin FEM, if one isn’t careful the implementation can destroy conservation. This should not happen; conservation should be a constraint, an invariant that comes for free. It does with FVM, it doesn’t with FEM, and that is a problem. Simple mistakes should not cause conservation errors. In FVM such a mistake would be structurally impossible because of how the method is coded: the conservation form is built in. In FEM conservation is a special property, which is odd for something built on the weak form of the PDE. This goes directly to the continuous basis selected in the construction of the scheme.
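A minimal sketch of what “structurally impossible” means (my own toy example, not from any particular code): in a finite-volume update every interface flux is added to one cell and subtracted from its neighbor, so the total telescopes and is conserved to round-off no matter what flux function you plug in.

```python
import numpy as np

def upwind_step(u, a, dt, dx):
    """One first-order upwind finite-volume step for u_t + a u_x = 0 (a > 0),
    periodic boundaries via np.roll."""
    # numerical flux through each cell's left interface: F_{i-1/2} = a * u_{i-1}
    flux = a * np.roll(u, 1)
    # each interface flux enters one cell and leaves its neighbor, so the
    # update telescopes: total mass can only change by round-off
    return u - (dt / dx) * (np.roll(flux, -1) - flux)

n = 100
dx = 1.0 / n
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.exp(-100.0 * (x - 0.5) ** 2)   # smooth initial bump

total_before = u.sum() * dx
for _ in range(200):
    u = upwind_step(u, a=1.0, dt=0.5 * dx, dx=dx)   # CFL = 0.5
total_after = u.sum() * dx

# conservation is structural, not accidental
assert abs(total_after - total_before) < 1e-12
```

No matter how you botch the flux formula itself, as long as the same interface value is shared by both neighbors, the total is preserved; that discipline is baked into the data structure of the scheme.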

Another place where the FEM community falls short is stability and accuracy analysis. With all the mathematical brouhaha surrounding the method, one might think that stability and accuracy analysis would be ever-present in FEM practice. Quite the contrary is true. Code and solution verification are common and well practiced in the FVM world and almost invisible in FEM. It makes no sense. A large part of the reason is the abstract mathematical focus of FEM instead of the practical approach of FVM. At the practical end, where engineering and science are being accomplished with the aid of scientific computing, the mathematical energy seems to yield very little. It is utterly baffling.
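The verification practice that is routine in the FVM world is not exotic. Its core is a two-grid estimate of the observed order of accuracy (a generic sketch; the error values below are hypothetical stand-ins for errors measured against a manufactured or exact solution):

```python
import math

def observed_order(e_coarse, e_fine, refinement=2.0):
    """Richardson-style estimate of the observed order of accuracy from
    errors on two grids refined by a constant factor."""
    return math.log(e_coarse / e_fine) / math.log(refinement)

# hypothetical errors for a nominally second-order scheme on grids h and h/2:
# halving h should quarter the error, giving an observed order near 2
p = observed_order(4.0e-3, 1.0e-3)
assert abs(p - 2.0) < 1e-9
```

If the observed order falls short of the design order on a problem with a known solution, something is wrong with the code or the method, which is exactly the kind of check that should be ubiquitous in FEM practice and isn’t.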

“Simplicity is the ultimate sophistication.” ― Leonardo da Vinci

The issue is where the math community spends its time: do they focus on proving things for easy problems, or on expanding the techniques to handle hard problems? Right now, the focus seems to be on making the problem easier and proving things, rather than on expanding the available techniques and creating structures that would work on the harder problems. The difference is rather extreme. The goal should be to solve the hard problems we are offered, not to transform the hard problems into easy problems with existing math. If the math needed for the hard problems isn’t there, we need to invent it and start extending ourselves to provide the rigor we want to see. Too often the opposite path is chosen.

A big issue is the importance and prevalence of problems for which strong convergence can be expected. How much of the work in the world is focused where this doesn’t or can’t happen? How much where it does? Where is the money, and the importance?

I think a much better path for FEM in the future is to focus first on making the style and focus of presentation simple and pedagogical. Secondarily, the focus should be pushed toward solving the harder problems that pace computational science, rather than toys that are amenable to well-defined mathematical analysis. The advantages of FEM are clear; the hardest thing we have to do is make the method clear, comprehensible and extensible.

Gil Strang is a good example of someone presenting the FEM in a clear manner, free of jargon and with an emphasis on understanding.

I fully expect to catch grief over what I’m saying. Rather than just complain, I’d like to spur those working on FEM to both attack harder problems and make their explanations of what they are doing simple. The result will be a better methodology that more people understand. Maybe then the FEM will start to be the source of more innovative numerical methods. Everyone will benefit from this small, but important, change in perspective.

“Any darn fool can make something complex; it takes a genius to make something simple.” ― Pete Seeger