Thursday, January 31, 2013

Predicting Markets, or Marketing Predictions

A quip commonly attributed to Mark Twain holds that it is better to remain quiet and be thought a fool than to open your mouth and remove all doubt.

Would that the 'gurus' who populate the investment and economics landscape heeded Twain's advice. Of course, that will never happen.

We know from studies of expert judgment that gurus who make nuanced predictions and hedge their bets attract far less attention than experts who spin dramatic predictions with unswerving confidence. As a result, firms are predisposed to encourage gurus to voice strong opinions and divergent views that stand out from the crowd. Unfortunately, the qualities that make some gurus more marketable than others are likely to render them less accurate: balanced experts tend to be more accurate than loud ideologues, and their opinions tend to do less damage when they are wrong.

And even the best experts get it wrong a lot. In fact, they get it wrong more than they get it right. How do we know?

The best and most comprehensive study of expert judgment was performed by Philip Tetlock. In 1985, intrigued by his experience serving on political intelligence committees in the early 1980s, Tetlock set out to discover just how accurate expert forecasters are in their predictions of future events. Over a span of almost 20 years, he asked 284 experts to state their level of confidence that specific outcomes would come to pass. Forecasts were solicited across a wide variety of domains with uncertain outcomes, including economics, politics, climate, military strategy, financial markets, and legal opinions. In all, Tetlock accumulated an astounding 82,000 forecasts.

This represents an incredible body of evidence about expert judgment, and Tetlock's analysis yielded several striking conclusions:

  • Expert forecasts were less well calibrated than one would expect from random guesses
  • Aggregated forecasts were better than any individual forecast, but were still worse than random guesses
  • Experts who appeared in the media most regularly were the least accurate
  • Experts with the most extreme views were also the least accurate
  • Experts exhibited higher forecast calibration outside their fields of expertise
  • Among all 284 experts, not one demonstrated forecast accuracy beyond random guesses

In short, the experts would have delivered better forecasts by flipping coins. But there was a silver lining.
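The notion of 'calibration' used above has a precise meaning: events assigned, say, 70% confidence should occur about 70% of the time. One standard way to score this is the Brier score. The sketch below uses made-up forecasts and outcomes, purely for illustration, to show why confident-but-wrong calls score worse than a coin flip while hedged, directionally sensible calls score better:

```python
# Sketch: scoring forecast calibration with the Brier score.
# All forecasts and outcomes below are hypothetical, for illustration only.

def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    Always guessing 50% scores exactly 0.25; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
guru     = [0.9, 0.9, 0.1, 0.9, 0.9, 0.1, 0.1, 0.9]  # extreme calls, often wrong
hedger   = [0.6, 0.4, 0.5, 0.6, 0.4, 0.5, 0.5, 0.4]  # mild, mostly sensible calls
coin     = [0.5] * len(outcomes)                      # pure coin flip

print(brier_score(guru, outcomes))    # 0.51 -- worse than the coin
print(brier_score(hedger, outcomes))  # about 0.19 -- better than the coin
print(brier_score(coin, outcomes))    # exactly 0.25
```

The asymmetry is the point: an extreme call that misses is penalized far more heavily than a hedged one, which is exactly why loud ideologues tend to score worse than balanced experts.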

Tetlock also tracked some simple, rules-based statistical models alongside the experts to see whether these models would be competitive in terms of forecast calibration. He found that many simple models were substantially better calibrated than the experts, and delivered accuracy well beyond random chance. Chalk another one up for the quants.
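To make the idea of a simple rules-based model concrete: the rule and the data below are toy assumptions of my own, not Tetlock's actual models, but they illustrate how a bare 'persistence' rule (predict that the next period looks like the current one) beats coin flipping whenever the underlying process has any persistence at all:

```python
import random

random.seed(42)

# Toy world: a binary state (e.g. an "up" vs. "down" regime) that
# repeats with 80% probability each period -- a hypothetical process.
state = [1]
for _ in range(999):
    state.append(state[-1] if random.random() < 0.8 else 1 - state[-1])

# Rules-based model: forecast that the next state equals the current one.
n_forecasts = len(state) - 1
persistence_hits = sum(state[t] == state[t - 1] for t in range(1, len(state)))
persistence_acc = persistence_hits / n_forecasts

# Benchmark: random coin-flip forecasts of the same series.
coin_hits = sum(random.randint(0, 1) == state[t] for t in range(1, len(state)))
coin_acc = coin_hits / n_forecasts

print(f"persistence rule: {persistence_acc:.1%}, coin flips: {coin_acc:.1%}")
```

In this toy world the persistence rule lands near 80% accuracy while the coin hovers near 50%: the rule simply harvests whatever structure the process has, with no opinions attached.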

You might be wondering whether any similar studies have been conducted specifically in the area of financial markets. You're in luck, as there have been several.

CXO Advisory has been tracking and publishing gurus' forecasts of market direction since 1998. Recently, CXO published a review of all 6,459 forecasts from the market 'gurus' they tracked from 1998 to 2012. Specifically, the gurus were graded on their ability to call the direction of the market, but were not penalized for missing the magnitude of the move.

Over those 14 years, CXO concluded, the average guru's accuracy in calling the direction of the market was about 47%, or slightly worse than a coin toss. The following chart shows how forecast accuracy stabilized around the 47% mark as the sample size expanded over time. In other words, the experts were less reliable than flipping coins.

Source: CXO Advisory
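Is 47% over 6,459 calls meaningfully different from 50%, or just noise? A quick back-of-envelope check with the normal approximation to the binomial suggests the former. Note that this treats the forecasts as independent, which they are not (many gurus were calling the same markets at the same times), so the true significance is weaker than this calculation implies:

```python
import math

# Back-of-envelope: is 47% accuracy over 6,459 directional calls
# distinguishable from a 50/50 coin toss? Uses the normal approximation
# to the binomial and (unrealistically) assumes independent forecasts.
n = 6459
hits = round(0.47 * n)                     # roughly 3,036 correct calls
p0 = 0.5                                   # coin-toss null hypothesis
mean = n * p0
sd = math.sqrt(n * p0 * (1 - p0))
z = (hits - mean) / sd
p_two_sided = math.erfc(abs(z) / math.sqrt(2))

# z comes out near -4.8: under these assumptions, the gurus' 47% hit
# rate sits reliably below the coin's 50%.
print(f"z = {z:.2f}, two-sided p = {p_two_sided:.1e}")
```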

The evidence does not end there. The following charts, sourced from James Montier's incredibly useful book, Behavioural Investing (2007), show aggregate forecasts from Wall Street's most famous oracles through time, next to the actual trajectory of the forecast variable.

Chart 1. Consensus bond yields forecasts 1 year out vs. actual

Chart 2. Consensus S&P500 level 1 year forecasts vs. actual

Chart 3. Consensus S&P500 aggregate earnings 1 year forecasts vs. actual

In all cases the analysts do a noteworthy job of describing what has just happened, but show no vision whatsoever about what is about to happen next. This applies to interest rates, the level of stock indices, and aggregate earnings.


Do any experts get it right? What about the experts at the Federal Reserve who are in charge of setting interest rates? Can they predict the magnitude or direction of interest rates just six months hence?

A working paper entitled "History of the Forecasters: An Assessment of the Semi-Annual U.S. Treasury Bond Yield Forecast Survey" (Brooks & Gray, 2003) studied the forecasts of Federal Reserve economists, including Alan Greenspan, from 1982 to 2002 to discover whether the group of experts that sets interest rates can effectively forecast their trajectory through time.

Chart 4.
Source: (Brooks & Gray, 2003)

Again we see a strong talent for describing what has just happened, but no talent whatsoever for predicting what will happen next. Just how poor was the forecasting ability of Fed economists, including sitting Fed Chairman Alan Greenspan, over the 20-year survey period?

Chart 5.
Source: (Brooks & Gray, 2003)

The scatter plot above shows that Fed forecasts of interest rates just six months out are negatively correlated with actual outcomes. The r-squared of the regression is just 7%, and the relationship is not statistically significant, so don't bet the farm against the Fed either. The point is that they can't forecast any better than anyone else.
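For readers who want to run this sort of check on their own forecast data, the r and r-squared of a forecast-versus-actual scatter are straightforward to compute. The series below is simulated with a weak negative relationship loosely in the spirit of the scatter described above; it is not the paper's actual data:

```python
import math
import random

def pearson_r(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
# Simulated forecast/outcome pairs with a weak negative relationship
# (hypothetical data, not the Brooks & Gray series).
forecast = [random.gauss(0, 1) for _ in range(200)]
actual = [-0.3 * f + random.gauss(0, 1) for f in forecast]

r = pearson_r(forecast, actual)
print(f"r = {r:.2f}, r-squared = {r * r:.1%}")
```

With the simulation's true correlation near -0.29, the implied r-squared lands in the high single digits, the same ballpark as the 7% reported, which is another way of saying the forecasts explain almost none of the variation in outcomes.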

There is ample evidence that strategists and gurus are unlikely to add much value to the investing process - at least where the goal is to grow your portfolio. Our next article will address another ubiquitous observation in wealth management - overconfidence - and discuss solutions for disillusioned investors looking for a new direction with better odds of success.