The Signal and the Noise

In a sentence: the book explains what causes errors in predictions, and how forecasters in economics, finance, and medicine are overly confident in their predictions and mistake the noise for the signal in the data.

Here are some insights I got from reading the book:

  1. Forecasting the economy (GDP) and stock market returns is essentially very hard given how many factors affect them (the US government produces data on some 45,000 economic indicators). Despite this, most economists and financial forecasters are too confident in their projections and as a result try to predict too precisely.
  2. Humans have a greater need than other animals to find patterns and abstractions; it is a defense mechanism for simplifying a world that is diverse and complex.
  3. However, this tendency to find patterns is subject to our biases, one of which is the knowledge we have already accumulated; the more information we accumulate, the more our biases solidify, so we can never make perfectly objective predictions. (It is for this reason that crowd-sourced forecasts are often 15 to 20 percent more accurate than those of individual experts, since the individual biases of each forecaster get smoothed out in a large group.)
  4. A natural tendency in summarizing data is to look at averages, but the average, like the family with 1.7 children, is a statistical abstraction that does not occur in reality.
  5. Relying on averages also ignores the extreme values in the data, and forecasters often resist considering out-of-sample events in their models. This is the black swan problem popularized by Taleb: mistaking absence of evidence for evidence of absence.
  6. We often fail to distinguish between risk and uncertainty: risk is something that can be measured and priced, while uncertainty is something that cannot be estimated, e.g. the likelihood of Covid-19 occurring or an earthquake happening. In projecting outcomes, we often fail to attach probabilities to them, which is an essential part of forecasting. The virtue of thinking probabilistically is that it forces you to stop and ‘smell the data’, i.e. consider the imperfections in your own thinking. Bayes’ theorem can be used to account for errors in your own predictions.
  7. Another approach is to rely on heuristics: a heuristic approach to problem solving employs rules of thumb when a deterministic solution to a problem is beyond our practical capacities. This approach has been popularized by Taleb, who believes that it is impossible to predict ‘black swan’ events like pandemics and economic meltdowns using models.
  8. However, despite these human biases, it is essential that model predictions be reviewed by humans, who can use their judgement and intuition to make sense of the data and the predictions. This point was also made in the book ‘Prediction Machines – The Simple Economics of Artificial Intelligence’, which reasoned that, in contrast to machines, humans are extremely good at prediction with little data.
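The Bayesian updating mentioned in point 6 can be sketched in a few lines. The numbers below, a hypothetical recession indicator with made-up hit and false-alarm rates, are my own illustrative assumptions, not figures from the book:

```python
# Bayes' theorem: update the probability of a hypothesis as new
# evidence arrives. All numbers are illustrative assumptions.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior P(hypothesis | evidence)."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Example: a forecaster thinks there is a 10% chance of recession (prior).
# A leading indicator flashes red; suppose it does so in 60% of
# pre-recession years but also in 20% of normal years.
posterior = bayes_update(prior=0.10, p_evidence_if_true=0.60,
                         p_evidence_if_false=0.20)
print(round(posterior, 3))  # 0.25
```

The point of the exercise is that the evidence raises the forecaster's probability from 10 percent to 25 percent, rather than flipping it to a confident yes or no.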
 
Side Notes:
 

Overfitting: Overfitting is the mistaking of noise for signal; it results in models that give an overly specific solution to a general problem, and generally in worse predictions. An overfit model is likely when the data is limited and noisy and the understanding of the fundamental relationships is poor. An overfit model will score better on most statistical tests but will perform poorly on out-of-sample tests.
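A minimal sketch of this (my own synthetic data, not an example from the book): a degree-9 polynomial fits ten noisy points drawn from a straight line better in-sample than a straight line does, because it is flexible enough to chase the noise, but it does worse on fresh points from the same line.

```python
# Overfitting demo: a flexible model "explains" the training data
# better but generalizes worse. Synthetic, illustrative data.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
x_test = np.linspace(0.05, 0.95, 10)
signal = lambda x: 2 * x + 1                        # the true (linear) signal
y_train = signal(x_train) + rng.normal(0, 0.3, 10)  # signal + noise
y_test = signal(x_test) + rng.normal(0, 0.3, 10)

def mse(degree):
    """In-sample and out-of-sample mean squared error of a polynomial fit."""
    coeffs = np.polyfit(x_train, y_train, degree)   # least-squares fit
    err = lambda x, y: np.mean((np.polyval(coeffs, x) - y) ** 2)
    return err(x_train, y_train), err(x_test, y_test)

train1, test1 = mse(1)   # simple model: close to the true signal
train9, test9 = mse(9)   # flexible model: interpolates the noise
print(f"degree 1: train={train1:.4f}  test={test1:.4f}")
print(f"degree 9: train={train9:.4f}  test={test9:.4f}")
```

The degree-9 fit scores near-zero training error (it passes through every noisy point), which is exactly the "scores better on statistical tests" trap described above.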

Challenge of Economic Forecasting: The challenge for economic forecasters is to determine cause and effect from economic data, since the economy is complex with so many measurements; the government produces data on some 45,000 economic indicators.

 
Second, the economy is always changing: some economic relationships that held in the period between 1983 and 2006, a subset of the Long Boom sometimes called the Great Moderation, when the economy was in recession just 3 percent of the time, do not hold outside it. The problem with looking at those years is that they contained very little economic volatility: just two relatively mild recessions, in 1990–1991 and in 2001. Economic forecasts made prior to 2008 failed to account for events like the Great Depression and were often calibrated to the Great Moderation years, which were an outlier, historically speaking. The Fed, too, was looking at data from the Great Moderation years to set its expectations for the accuracy of its forecasts.
 
Third, the data that economists have to work with isn't much good either, since there have been only eleven recessions since the end of World War II. When you build a statistical model that seeks to explain eleven outputs (recessions) but has to choose from among four million inputs, you can appreciate the challenge that economic forecasters face.
 
Inaccuracy of Rating Agency Models: A critical assumption in the models the rating agencies used during the 2008 crisis to value pools of mortgages was that each mortgage was independent of the others. The defaults actually experienced were two hundred times more frequent than what the rating agencies had claimed, meaning that their models were off by a mere 20,000 percent.
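Why the independence assumption matters can be shown with a back-of-the-envelope sketch. Assuming a hypothetical 5 percent default probability per mortgage in a pool of five, the chance that all five default changes by orders of magnitude depending on the correlation assumption:

```python
# Chance that ALL mortgages in a five-loan pool default, under a
# hypothetical 5% per-loan default probability (illustrative numbers).
p = 0.05
n = 5

independent = p ** n   # rating-agency style assumption: no correlation
correlated = p         # perfect correlation: the loans sink together

print(f"independent: {independent:.2e}")  # about 1 in 3.2 million
print(f"correlated:  {correlated:.2e}")   # 1 in 20
print(f"ratio: {correlated / independent:,.0f}x")
```

When housing prices fall everywhere at once, defaults are highly correlated, so a model built on the left-hand assumption understates the risk of the pool by a factor of tens of thousands.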
 
Wet Bias in Weather: There is a “wet bias” in consumer-facing forecasts: for instance, when private weather forecasters say there is a 20 percent chance of rain, it has actually rained only about 5 percent of the time. An interesting snippet: weather forecasters rarely predict exactly a 50 percent chance of rain, which might seem wishy-washy and indecisive to consumers; instead, they’ll flip a coin and round up to 60 or down to 40.
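The wet bias can be made concrete with a simple calibration check: group forecasts by their stated probability and compare against how often it actually rained. The records below are made up to mirror the 20-percent-forecast / 5-percent-observed figure above:

```python
# Calibration check: for each stated probability, how often did it
# actually rain? Records are fabricated for illustration.
from collections import defaultdict

# (stated probability, did it rain?)
records = [(0.2, False)] * 19 + [(0.2, True)] * 1 + \
          [(0.7, True)] * 7 + [(0.7, False)] * 3

buckets = defaultdict(list)
for prob, rained in records:
    buckets[prob].append(rained)

for prob, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"forecast {prob:.0%}: rained {observed:.0%} of the time")
```

A well-calibrated forecaster's observed frequencies match the stated probabilities; here the 20 percent forecasts verify only 5 percent of the time, which is the wet bias described above.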
 
Climate and Weather: Climate refers to the long-term equilibriums the planet achieves; weather describes short-term deviations from them. The weather is affected by a number of factors besides man-made emissions. The ENSO cycle (the El Niño–Southern Oscillation) evolves over intervals of about three years at a time: during El Niño years, when the cycle is in full force, the weather is much warmer in much of the Northern Hemisphere; during La Niña years, when the Pacific is cool, the effect on the Northern Hemisphere’s weather is just the opposite. Similarly, volcanoes blast sulfur, a gas that has an anti-greenhouse effect and tends to cool the planet, into the atmosphere.
 
IPCC Forecasts: The Intergovernmental Panel on Climate Change (IPCC) models for global warming have been notoriously wrong; in the 1980s, IPCC models predicted global warming at a rate of 3°C per century at the high end, with a low end of 2°C. However, the actual observed increase in temperature was at a rate of about 1.5°C per century. The IPCC’s 1990 forecast also overestimated the amount of sea-level rise. The forecast of a 3°C rise was predicated on a “business-as-usual” case that assumed there would be no success at all in mitigating carbon emissions.
 
 
 
 
 
