The Crisis & Economics, Part 5: “Shhh! We’re Working On It”

Posted on 24 July 2014 by Unlearningecon

This is part 5 in my series on how the financial crisis is relevant for economics (parts 1, 2, 3 & 4 are here). Each part explores an argument economists have made against the charge that the crisis exposed fundamental failings of their discipline. This post explores the possibility that macroeconomics, even if it failed before the crisis, has responded to its critics and is moving forward.

#5: “We got this one wrong, sure, but we’ve made (or are making) progress in macroeconomics, so there’s no need for a fundamental rethink.”

Many macroeconomists deserve credit for their mea culpa and subsequent refocus following the financial crisis. Nevertheless, the nature of the rethink, particularly the unwillingness to abandon certain modelling techniques and ideas, leads me to question whether progress can be made without a more fundamental upheaval. To see why, it will help to have a brief overview of how macro models work.

In macroeconomic models, the optimisation of agents means that economic outcomes such as prices, quantities, wages and rents adjust to the conditions imposed by input parameters such as preferences, technology and demographics. A consequence of this is that sustained inefficiency, unemployment and other chaotic behaviour usually occur when something ‘gets in the way’ of this adjustment. Hence economists introduce ad hoc modifications such as sticky prices, shocks and transaction costs to generate sub-optimal behaviour: for example, if a firm’s cost of changing prices exceeds the benefit, prices will not be changed and the outcome will not be Pareto efficient. Since there are countless ways in which the world ‘deviates’ from the perfectly competitive baseline, it is mathematically troublesome (or impossible) to include every possible friction. The result is that macroeconomists tend to decide which frictions are important based on real-world experience: since the crisis, the focus has been on finance. On the surface this sounds fine – who isn’t in favour of informing our models with experience? However, it is my contention that this approach does not offer us any more understanding than experience alone would.
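
To make this concrete, here is a minimal sketch of a menu-cost friction – my own illustration rather than a model from the literature, with an assumed quadratic profit loss and made-up numbers. The firm pays a fixed ‘menu cost’ to change its price, so it adjusts only when the gap between its current and optimal price is large enough to justify that cost:

```python
# Minimal sketch of a menu-cost friction (illustrative assumptions:
# quadratic profit loss in the price gap, fixed cost of adjustment).

def adjust_price(current_price, optimal_price, menu_cost, loss_weight=1.0):
    """Return the firm's new price: adjust only if the gain exceeds the cost."""
    gap = optimal_price - current_price
    benefit_of_adjusting = loss_weight * gap ** 2  # avoided profit loss
    if benefit_of_adjusting > menu_cost:
        return optimal_price  # worth paying the menu cost
    return current_price      # the friction binds: the price stays 'sticky'

print(adjust_price(1.00, 1.02, menu_cost=0.01))  # 1.0 - small gap, stays put
print(adjust_price(1.00, 1.50, menu_cost=0.01))  # 1.5 - large gap, adjusts
```

Small shocks therefore leave prices unchanged and the outcome short of Pareto efficiency – exactly the kind of sub-optimality the modeller set out to generate.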

Perhaps an analogy will illustrate this better. I was once walking past a field of cows as it began to rain, and I noticed some of them start to sit down. It occurred to me that there was no use in their doing this after the storm had started; they are supposed to give us adequate warning by sitting down before it happens. Sitting down during a storm just tells us what we already know. Similarly, although the models used by economists and policymakers did not predict and could not account for the crisis before it happened, economists have since built models that try to do so. They generally do this by attributing the crisis to frictions that revealed themselves to be important during the crisis. Ex post, a friction can always be found to make models behave a certain way, but the models do not make identifying the source of problems before they happen any easier, and they don’t add much afterwards, either – we certainly didn’t need economists to tell us finance was important after 2008. In other words, when a storm comes, macroeconomists promptly sit down and declare that they’ve solved the problem of understanding storms. It would be an exaggeration to call this approach tautological, but it’s certainly not far off.

There is also the open question of whether understanding the impact of a ‘friction’ relative to a perfectly competitive baseline entails understanding its impact in the real world. As theorists from Joe Stiglitz to Yanis Varoufakis have argued, neoclassical economics is trapped in a permanent fight against indeterminacy: the quest to understand things relative to a perfectly competitive, microfounded baseline leads to aggregation problems and intractable complexities that, if included, result in “anything goes” conclusions. To put it another way, the real world is so complex and so full of frictions that whatever mechanics would be driving the perfectly competitive model are swamped. The actions of individual agents are so intertwined that their aggregate behaviour cannot be predicted from each of their ‘objective functions’. Consequently, our knowledge of the real world must be informed either by models which use different methodologies or, more crucially, by historical experience.
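
As a toy illustration of the indeterminacy point – my own construction, not something drawn from Stiglitz or Varoufakis – consider identical agents with a fully specified objective, each responding to the average behaviour of the others. The S-shaped best-response curve below is an arbitrary assumption chosen purely to create multiple equilibria; even so, knowing every agent’s ‘objective function’ tells us nothing about which aggregate outcome the economy settles into:

```python
import math

# Toy model: each agent's spending is a best response to last period's
# average spending. The logistic (S-shaped) response is an arbitrary
# assumption chosen only so that multiple equilibria exist.

def best_response(avg_spending):
    return 1.0 / (1.0 + math.exp(-8.0 * (avg_spending - 0.5)))

def simulate(initial_avg, periods=50):
    avg = initial_avg
    for _ in range(periods):
        avg = best_response(avg)  # everyone reacts to the previous average
    return round(avg, 3)

# Identical agents, identical objectives - but the aggregate outcome
# depends entirely on where the economy happens to start:
print(simulate(0.45))  # settles at the low-spending equilibrium (~0.02)
print(simulate(0.55))  # settles at the high-spending equilibrium (~0.98)
```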

Finally, the ad hoc approach also contradicts another key aspect of contemporary macroeconomics: microfoundations. The typical justification for these is that, to use the words of the ECB, they impose “theoretical discipline” and are “less subject to the Lucas critique” than a simple VAR, an Old Keynesian model or some other, more aggregative framework. Yet even if we take those propositions to be true, the modifications and frictions that are so crucial to making the models more realistic are often not themselves microfounded, sometimes taking the form of entirely arbitrary, exogenous constraints. Even worse is when the mechanism is profoundly unrealistic, such as prices being sticky because firms are randomly unable to change them for some reason. In other words, macroeconomics starts by sacrificing realism in the name of rigour, but reality forces it in the opposite direction, and the end result is that it has neither.
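
The random-repricing mechanism alluded to above is the Calvo device: each period a firm is permitted to reset its price only with some fixed probability, however far its current price has drifted from the one it would choose. A minimal sketch, with an illustrative stickiness parameter theta = 0.75:

```python
import random

# Calvo pricing sketch: each period the firm may reset its price only
# with probability 1 - theta, drawn at random, regardless of how badly
# it wants to adjust. theta = 0.75 is an illustrative value.

def calvo_step(current_price, optimal_price, theta=0.75, rng=random):
    if rng.random() > theta:  # 'permission' arrives with probability 1 - theta
        return optimal_price
    return current_price      # stuck, however large the price gap

# One firm whose optimal price has jumped from 1.0 to 1.5:
price = 1.0
for period in range(10):
    price = calvo_step(price, optimal_price=1.5)
    print(period, round(price, 2))
# On average the firm waits 1 / (1 - theta) = 4 periods before adjusting -
# stickiness by assumption rather than by any economic decision.
```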

Macroeconomists may well defend their approach as just a ‘story-telling’ approach, from which they can draw lessons but which isn’t meant to hold in the same manner as engineering theory. Perhaps this is defensible in itself, but (a) personally, I’d hope for better, and (b) in practice, this seems to mean each economist can pick and choose whichever story they want to tell based on their prior political beliefs. If macroeconomists are content conversing in mathematical fables, they should keep these conversations to themselves and refrain from forecasting with them or using them to inform policy. Until then, I’ll rely on macroeconomic frameworks which are less mathematically ‘sophisticated’, but which generate ex ante predictions covering a wide range of observations, and which do not rely on the invocation of special frictions to explain persistent deviations from those predictions.