**No.**

The ideal is the enemy of the real.

― Susan Block

In the early days of computational science we were happy just to get things to work for simple physics in one spatial dimension. Over time, our grasp of more difficult, coupled, multi-dimensional physics became ever more bold and expansive. The quickest route to this goal was operator splitting, where simple operators, each single-physics and one-dimensional, were composed into complex operators. Most of our complex multiphysics codes operate in this operator split manner. **Research into doing better almost always entails doing away with this composition of operators, or operator splitting, and doing everything fully coupled.** It is assumed that this is always superior. Reality is more difficult than this proposition, and most of the time the fully coupled or unsplit approach is actually worse: lower accuracy and greater expense with little identifiable benefit. So the question is: should we keep trying to do this?
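To make the composition concrete, here is a minimal sketch (my own illustrative example, not taken from any particular code) of Lie splitting for the scalar model problem u' = -u + u². Each sub-operator has an exact solution, and the split step simply applies one after the other; the convergence check at the end shows the first-order accuracy of the composition:

```python
import math

def lie_split_step(u, h):
    """One Lie (first-order) splitting step for u' = -u + u^2.

    Each sub-problem is advanced with its exact flow:
      A: u' = -u   ->  u * exp(-h)
      B: u' = u^2  ->  u / (1 - h*u)
    The split step is just the composition B(h) after A(h).
    """
    u = u * math.exp(-h)       # exact flow of the decay operator A
    u = u / (1.0 - h * u)      # exact flow of the quadratic operator B
    return u

def integrate(u0, t_final, n_steps):
    h = t_final / n_steps
    u = u0
    for _ in range(n_steps):
        u = lie_split_step(u, h)
    return u

# Exact solution of the unsplit problem u' = u(u - 1), u(0) = 0.5,
# is u(t) = 1 / (1 + exp(t)).
u_exact = 1.0 / (1.0 + math.exp(1.0))
err_h  = abs(integrate(0.5, 1.0, 10) - u_exact)   # h = 0.1
err_h2 = abs(integrate(0.5, 1.0, 20) - u_exact)   # h = 0.05
print(err_h / err_h2)  # ratio near 2: halving h halves the error (first order)
```

The split integrator never sees the full right-hand side at once, which is exactly why it is so cheap to build out of optimized single-operator pieces.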

This is another example where the reality of simulating difficult problems gives a huge home-field advantage to simple approaches. It is much the same as the issues with high-order methods for discretization. **Real problems bring complexities and singularities (shocks, corners, turbulence, etc.), and these relegate results to first-order accuracy or less.** Operator splitting is first-order accurate without extensive and difficult measures, so we have a situation where reality collides with the simplest approach. The truth is that the simple operator split approach is really good and powerful in many, if not most, cases. It is important to recognize when it is not, and when something better really is needed.
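For reference, the least difficult of those "measures" is Strang (symmetric) splitting: a half step of one operator, a full step of the other, then another half step. A small sketch of mine on the same smooth model problem u' = -u + u², again with exact sub-flows, shows the order difference; on a real problem with shocks or corners that advantage largely evaporates:

```python
import math

def flow_A(u, h):
    """Exact flow of u' = -u over time h."""
    return u * math.exp(-h)

def flow_B(u, h):
    """Exact flow of u' = u^2 over time h."""
    return u / (1.0 - h * u)

def lie(u, h):
    """First-order Lie splitting: A then B."""
    return flow_B(flow_A(u, h), h)

def strang(u, h):
    """Second-order Strang splitting: half A, full B, half A."""
    return flow_A(flow_B(flow_A(u, h / 2), h), h / 2)

def err(stepper, n):
    """Error at t = 1 for n steps, against the exact u(1) = 1/(1+e)."""
    h, u = 1.0 / n, 0.5
    for _ in range(n):
        u = stepper(u, h)
    return abs(u - 1.0 / (1.0 + math.e))

lie_ratio    = err(lie, 10) / err(lie, 20)
strang_ratio = err(strang, 10) / err(strang, 20)
print(lie_ratio)     # near 2: first order
print(strang_ratio)  # near 4: second order
```

On smooth problems the symmetrization is nearly free; the point of the paragraph above is that the solutions we actually care about rarely stay smooth enough to collect on it.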

The unsplit, fully coupled approach yields an unambiguous benefit when the solution involves a precise dynamic balance. This is the situation of equal and opposite terms in the equations producing solutions in near equilibrium. **This produces critical points where solutions make complete turns in outcome based on the very detailed nature of the solution.** These situations also produce substantial changes in the effective time scales of the solution: when very fast phenomena combine in this balanced form, the result is a slow time scale. The problem is most acute for steady-state solutions, where such balances are the full essence of the physical solution. This is where operator splitting is problematic and should be avoided.
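A toy version of this failure (my own illustration, not from the literature): take u' = S - Lu with a stiff rate L and a source S chosen so the true steady state is u = S/L = 1, a balance of two large, opposite terms. Lie splitting advances the source exactly, then the sink exactly, and its fixed point is not S/L; for stiff hL it is catastrophically wrong, while an unsplit backward-Euler step reproduces the balance exactly:

```python
import math

S, L = 100.0, 100.0  # large source and sink; the true steady state is S/L = 1

def split_fixed_point(h, n=1000):
    """Iterate the Lie-split step (exact source flow, then exact decay flow)
    to its fixed point: u <- (u + h*S) * exp(-h*L)."""
    u = 1.0
    for _ in range(n):
        u = (u + h * S) * math.exp(-h * L)
    return u

def coupled_fixed_point(h, n=1000):
    """Unsplit backward-Euler step for u' = S - L*u:
    u <- (u + h*S) / (1 + h*L); its fixed point is exactly S/L."""
    u = 1.0
    for _ in range(n):
        u = (u + h * S) / (1.0 + h * L)
    return u

print(split_fixed_point(0.1))    # ~0.00045: the split scheme destroys the balance
print(coupled_fixed_point(0.1))  # 1.0: the coupled scheme preserves it
```

The split steady state is h·S·exp(-hL)/(1 - exp(-hL)), which only approaches S/L as hL goes to zero; at hL = 10 it is off by more than three orders of magnitude. This is the quantitative face of "operator splitting is problematic near precise dynamic balances."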

Such balances are also rarely the entire problem, and are often present only in a localized region of time and space. As such, the benefit of coupling is not present everywhere, and its cost should not be paid by the entire procedure. **Unfortunately, this isn't what people do: once they remove operator splitting and fully couple, they do it everywhere.** A way forward is to apply full coupling only where it has a favorable impact on the solution, in the region of critical points, and to use the more effective, accurate, and efficient operator splitting elsewhere.
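One hedged sketch of what "couple only near the balance" might look like (entirely my own illustrative heuristic, not a published scheme): flag the cells where the two operator contributions are individually large but nearly cancel, and reserve the expensive coupled update for those cells alone.

```python
def balance_indicator(fA, fB, eps=1e-14):
    """Per-cell measure of dynamic balance: near 1 when the two operator
    contributions nearly cancel each other, near 0 when one dominates."""
    cancel = abs(fA + fB)            # small when the terms are in balance
    scale = abs(fA) + abs(fB) + eps  # magnitude of the individual terms
    return 1.0 - cancel / scale

def choose_update(fA_cells, fB_cells, threshold=0.9):
    """Return 'coupled' for cells in near-balance, 'split' elsewhere."""
    return ['coupled' if balance_indicator(a, b) > threshold else 'split'
            for a, b in zip(fA_cells, fB_cells)]

# Cells 0-1 are dominated by one operator (splitting is fine there);
# cell 2 has large, nearly cancelling terms (a dynamic balance).
print(choose_update([5.0, 0.1, 100.0], [0.2, 3.0, -99.5]))
# -> ['split', 'split', 'coupled']
```

The indicator, the threshold, and the cell-wise switch are all assumptions made for illustration; a production version would need hysteresis and care at the split/coupled interfaces. The point is only that the decision can be local and cheap.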

The other reason for not applying coupled methods is their disadvantage for the fundamental approximations. When operators are discretized separately, quite efficient and optimized approaches can be applied. **For example, when solving a hyperbolic equation it can be very effective and efficient to produce an extremely high-order approximation to the equations. For the fully coupled (unsplit) case such approximations are quite expensive, difficult, and complex to produce.** If the solution you are really interested in is first-order accurate, the benefit of the fully coupled case is mostly lost, with the distinct exception of the small part of the solution domain where the dynamic balance is present and the benefits of coupling are undeniable.

This entire dialog is even stronger for multiphysics, where the procedures for solving single physics are highly optimized and powerful. **The fully coupled methods tend to be clunky and horribly expensive, often being defined by dropping the entire system into an implicit solver without regard for the applicability and utility of such an approximation for the problem at hand.** To make matters worse, the implicitness often undermines accuracy in really pernicious ways in the very regions where the coupling is actually necessary. Moreover, the cost of this less accurate approximation is vastly greater, due to the size of the full system and the departure from all the tricks of the trade that lead to efficiency.

A really great path forward is to pursue fully coupled methods only where their benefit is greatest. This is another case where the solution method should be adaptive and locally tailored to the nature of the solution. **One size fits all is almost never the right answer (to anything).** Unfortunately, this whole line of attack is not favored by anyone these days; we seem to be stuck in the worst of both worlds, where the codes used for solving real problems are operator split, and research is focused on coupling without regard for the demands of reality. We need to break out of this stagnation! This is ironic, because stagnation is one of the things that coupled methods excel at!

The secrets of evolution are death and time—the deaths of enormous numbers of lifeforms that were imperfectly adapted to the environment; and time for a long succession of small mutations.

― Carl Sagan
