Bank of England’s Chief Economists Workshop, 18th May 2010

Response by Andrew Smithers as Discussant to Andrew Lo’s
“WARNING: Physics Envy May Be Hazardous To Your Wealth!”.

I am extremely honoured to be asked to comment on Andrew Lo’s paper and extremely happy to do so, as he has made a most important contribution to an urgent issue.

Our trade, as economists, has had a bad press since the crisis broke. This may not be entirely justified, but it should not be dismissed or ignored.

There is a long tradition among scientists of getting on with the job rather than discussing how it should be done. Lo and Mueller are sensitive to this and take a humble approach, hoping that their paper "…will hold some interest to our academic colleagues" and stating that "this paper can hardly be classified as original research." Such self-effacement is the approved manner. When Sir Peter Medawar gave the Jayne Lectures of the American Philosophical Society in 1968 he remarked that "It is not at all usual for scientists to deliver formal lectures on the nature of scientific method, particularly if they are still engaged in scientific research. Of course it is understood that scientists of a specially elevated kind, e.g. theoretical physicists, may from time to time express quietly authoritative opinions… but that a biologist should speak up where so many physicists and chemists have chosen to remain silent must seem to you to be yet another symptom of the decay of values and the loss, in this modern world, of all sense of the fitness of things."

In the pecking order of the sciences, economics comes well below biology. For economists to talk in public about the methodology of their trade may show that the decay in values has accelerated over the past 40 years, or that it is the sciences at the bottom of the pecking order which have the greatest need to think carefully about how they do things. The latter is my view, and I fully support the implication of Andrew Lo's paper: that the key errors in economics have arisen not through the use of mathematics but from poor methodology.

I would like to give a couple of examples of failure, look at how they could have been avoided and, in particular, consider whether Andrew Lo's suggested taxonomy can help us do better in the future.

The most damaging error has, I think, been correctly recognised by many. It was the widespread, though by no means universal, acceptance of the Efficient Markets Hypothesis. It is also an outstanding example of my thesis that economics has suffered more from bad epistemology than bad maths. In "A Non-Random Walk Down Wall Street", which Andrew Lo wrote with Craig MacKinlay, the authors point out that "The Efficient Markets Hypothesis is, by itself, not a well-defined and empirically refutable hypothesis." Unless, therefore, it is reformulated so that it is empirically testable, it is not a real hypothesis at all. Mathematical models have been produced in which markets can be perfectly efficient without following a random walk, but it has so far at least proved impossible to produce ones which are testable. For economists in such large numbers to have accepted it as if it were both testable and robust under testing was not an error of maths; it was an error of epistemology. In my view it was a truly appalling error, but one which emphasises how essential it is for economists to analyse and discuss how they do their business.

The wide acceptance of this non-hypothesis has done great damage to both the theory and the practice of economics. The resulting belief that markets cannot be valued contributed strongly to the Federal Reserve's determination to ignore the asset bubbles of 2000 and 2007, despite the evidence of the great damage such bubbles have done in the past, and not just the distant past, as Japan's tribulations since its stock market bubble burst at the end of 1989 demonstrate.

The EMH has had a similarly bad influence on theory, notably on the discussion of the equity risk premium. You will all know the famous paper by Mehra and Prescott, "The Equity Premium: A Puzzle", which is still commonly quoted. It is interesting for several reasons. First, it illustrates an intriguing aspect of economics, which is the fame that accrues to papers which show that their models are wrong. I should point out that no discredit attaches to the authors: it was their clear and successful intention to show that a commonly accepted model failed an important test. This was not a case, like the EMH, in which an untestable model was accepted; it was a case in which the model was testable and failed the test. The problem is that many economists seem unwilling to accept the result. Logic says that either the form of the test was wrong or the model was wrong. Yet the idea that the equity risk premium is somehow a useful tool, and that it ought to be lower than it has been, pervades much of financial economics.

As I recently wrote in my book Wall Street Revalued, it is my opinion that the form of the test was wrong. The level of abstraction in the model is such that I may have misunderstood it and, if so, I hope that you will correct me – but of course only if I have! The consumption-based model requires that income from equities be consumed, while the data on equity returns assume that it is not: the so-called return on equities is calculated by assuming that all income is reinvested. This would not matter if equity returns followed a random walk with drift, for the timing of withdrawals of capital would then have no impact on expected future returns. But this is not the case. As demonstrated by Andrew Lo and others, equity returns exhibit negative serial correlation. When that occurs, a steady plan of spending withdraws proportionately more funds when markets are low and prospective returns are high. The pattern of withdrawals will thus affect the return and volatility of the equity portfolio.
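The interaction between withdrawals and serial correlation can be illustrated with a small Monte Carlo sketch. This is my own illustration, not the author's or Lo's: the parameters (an AR(1) autocorrelation of −0.3, 17% volatility, a fixed annual withdrawal of 4% of starting wealth, a 30-year horizon) are assumptions chosen purely for illustration. It compares the dispersion of terminal wealth for a spending portfolio when returns follow a random walk with drift against the negatively autocorrelated (mean-reverting) case, holding the one-year return variance fixed so that only the serial correlation differs.

```python
import numpy as np

def terminal_wealth(rho, seed=0, n_paths=20_000, n_years=30,
                    mu=0.05, sigma=0.17, withdraw=0.04):
    """Terminal wealth of a portfolio paying out a fixed annual amount,
    when log returns follow an AR(1) process with autocorrelation rho.

    rho = 0 gives a random walk with drift; rho < 0 gives the negative
    serial correlation (mean reversion) discussed in the text. The
    innovation is scaled so the unconditional one-year return variance
    is the same for every rho, isolating the serial-correlation effect.
    All parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma * np.sqrt(1.0 - rho**2),
                     size=(n_paths, n_years))
    shock = np.zeros(n_paths)   # AR(1) deviation of the log return
    wealth = np.ones(n_paths)   # start each path with wealth of 1
    for t in range(n_years):
        shock = rho * shock + eps[:, t]
        # Apply the year's return, then take out the fixed withdrawal;
        # wealth is floored at zero (the portfolio can be exhausted).
        wealth = np.maximum(wealth * np.exp(mu + shock) - withdraw, 0.0)
    return wealth

iid = terminal_wealth(rho=0.0)    # random walk with drift
mr = terminal_wealth(rho=-0.3)    # mean-reverting returns
print(f"dispersion of outcomes, random walk:    {iid.std():.3f}")
print(f"dispersion of outcomes, mean-reverting: {mr.std():.3f}")
```

Because the fixed withdrawal removes proportionately more of the portfolio when wealth is low, the spender's experience differs from the fully reinvested index, and under mean reversion the spread of multi-year outcomes is narrower than a random walk with the same annual variance would suggest — which is why assuming reinvestment when testing a consumption-based model can mislead.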

In neither case has the fault been that of the mathematical models used. The acceptance of an untestable hypothesis was a failure of epistemology and, if my diagnosis is correct, the failure to test the equity risk premium correctly arose from a failure in the methodological use of data.

Andrew Lo remarks that "…if, like other scientific endeavours, economics is an attempt to understand, predict and control the unknown through quantitative analysis, the kind of uncertainty affecting economic interactions is critical in determining its successes and failures." This is surely correct and important. The basic technique of economics is the same as that of other sciences, and economics does not cease to be a science, any more than medicine does, because it is often done in an unscientific manner and, indeed, sometimes by quacks. We are first of all seeking to understand. To do this we have to simplify, and this, despite their apparent complexity, is the purpose of constructing mathematical models. But these models are useless unless they can be tested, and they cannot be tested without quantitative analysis. The better they fit the data, the more useful they are, and the process is continuous.

Even in physics, good models are succeeded by better ones. "Nature and Nature's laws lay hid in night: God said, Let Newton be! and all was light." "It did not last: the Devil howling 'Ho! Let Einstein be!' restored the status quo." These two famous couplets remind us that although our aim is understanding, it is also prediction. Only at a rarefied level has the change from Newton to Einstein increased our understanding, but it has certainly increased our predictive power.

A good model must be testable and robust under testing, but it cannot be either unless it is coherent, and this is the essential role of maths. The problem has not been the maths themselves, but a tendency to believe that elegant models must be right and to ignore the need for testing and robustness. Perhaps being taught at school about the beauty of mathematical elegance, as many of us were, has been a drawback. But if we find that our models don't work, we must discard them and build better ones. Andrew Lo's suggestion that we remind ourselves of Frank Knight's distinction between risk and uncertainty, and then build upon it to develop a fuller spectrum of means of understanding risk, seems to me an excellent one, which I hope will prove fruitful in practice.

I am tempted, however, to suggest that his "uncertainty checklist" may be incomplete, in that economic data can from time to time be best understood in terms of patterns which can be fully captured by probability and statistics, interrupted by periods of uncertainty. I give, as an example, the returns on equities, which appear to be mean-reverting in the absence of major capital destruction, usually induced by war. The returns and volatility can be captured by probability and statistics and, to a limited extent, predicted, but only in the absence of major capital destruction, which belongs solely to the world of uncertainty.

An area where I am hopeful that Andrew Lo's taxonomy will prove fruitful is the interaction of finance and macroeconomics. If, as seems to me well-nigh certain, asset prices are an important transmission mechanism by which central bankers use changes in short-term interest rates to influence the real economy, and if this mechanism breaks down when bubbles burst, then we need to avoid bubbles. Avoiding them is not the same as predicting when they will burst, but it does involve knowing when they become dangerous. If we could predict when they would burst, they would cease to exist. This does not mean we should not try to predict them, but it does mean that we should pay more attention to risk analysis. Given the rarity and long-tailed nature of crashes, it is highly likely that the identification of danger levels will fit neither into Frank Knight's category of "randomness that can be fully captured by probability and statistics" nor into pure uncertainty. This may change with another few centuries of data, but in practical terms it must surely fall under Andrew Lo's definition of "partially reducible uncertainty containing a component that cannot be quantified".

In my opening comments I suggested that it is now essential that, as economists, we consider whether we have failed and, if so, how to improve. Andrew Lo and Mark Mueller have set out to do just that and, since such a response has been all too rare, they deserve congratulations for having the courage to try. They also deserve congratulations for their suggestions and for the perspicacity of their analysis.