Getting it Right: Professor Studies Expert vs. Crowdsourcing Predictions

November 04, 2013

From anticipated sales of new product launches to forecasts of holiday retail spending, analysts have been known to make predictions that land far from the actual outcome.

After years of practice and an abundance of expertise, how can these predictions be so far off?

Over the last several years, crowdsourcing has become an increasingly popular tool for obtaining predictions from a large group of forecasters on topics ranging from sales and the outcomes of sporting events to fundamental macroeconomic quantities such as growth and inflation. Many organizations, including the Federal Reserve, believe that turning to crowds yields more accurate forecasts, banking on the ‘wisdom of crowds.’

“The public is often shocked when outcomes differ from what was originally predicted, but nothing should ever come as a complete surprise,” said Victor Jose, assistant professor of operations and information management at Georgetown University’s McDonough School of Business.

In his recent study, Trimmed Opinion Pools and the Crowd’s Calibration Problem, Jose and his co-authors highlight the complications of this common methodology and offer a simple tool that could improve the quality of forecasts generated from crowds.

Weakness in Numbers
As crowds grow larger, so does the chance of having a significant number of “non-experts” in the pool of forecasters. Forecasters who lack expertise on a subject are likely to offer random, uneducated predictions or to follow the majority of the group – a phenomenon known as herding. Organizers of crowd forecasts have a difficult time distinguishing experts from non-experts, and when a portion of the crowd is made up of non-experts, issues related to calibration can easily arise.

The Calibration Issue
In the presence of “non-experts” with extreme or redundant forecasts, the crowd as a whole may be either over- or under-confident. This skews the perceived likelihood and magnitude of uncertain events, so leaders working from such predictions can neither plan accurately nor anticipate the risk of extreme outcomes. There are also long-term implications: over a longer horizon, events may occur far more (or less) often than originally predicted.

The Solution
Crowds can still be a useful resource for making predictions if this calibration issue can be addressed. Jose and his co-authors developed a simple tool that trims some of the forecasts collected from a crowd, with the aim of removing much of the information contributed by “non-experts.” Using data from the Federal Reserve Bank’s Survey of Professional Forecasters, they demonstrate how this trimming approach can improve a crowdsourced forecast’s sharpness and accuracy. Firms that use crowdsourcing and care about managing risk can benefit, because managing risk requires attention to the overall distribution of possible outcomes, not simply to averages.
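
To make the trimming idea concrete, the sketch below shows one simplified way it could work, assuming each forecaster reports a normal distribution for the quantity of interest. Forecasters whose means fall in the outer tails of the crowd are dropped before the remaining distributions are averaged. The function name, the 20 percent trim level, and the example numbers are illustrative assumptions, not the exact procedure or data from the study.

    import numpy as np
    from scipy.stats import norm

    def exterior_trimmed_pool(means, stds, trim_fraction=0.2):
        """Exterior-trimmed opinion pool over normal forecast distributions.

        Each forecaster i reports a distribution N(means[i], stds[i]**2).
        Forecasters whose means lie in the outer tails of the crowd are dropped,
        and the combined forecast is the equal-weight average of the remaining CDFs.
        """
        means = np.asarray(means, dtype=float)
        stds = np.asarray(stds, dtype=float)
        order = np.argsort(means)
        k = int(len(means) * trim_fraction)          # forecasters to drop per side
        keep = order[k:len(means) - k] if k > 0 else order

        def pooled_cdf(x):
            # Average of the surviving forecasters' CDFs, evaluated at x.
            return np.mean([norm.cdf(x, m, s) for m, s in zip(means[keep], stds[keep])])

        return pooled_cdf

    # Hypothetical crowd: most forecasters expect roughly 2% growth, two outliers do not.
    means = [2.1, 2.3, 2.2, 2.4, 2.0, 9.5, -4.0]
    stds = [0.5, 0.4, 0.6, 0.5, 0.5, 1.0, 1.0]
    cdf = exterior_trimmed_pool(means, stds, trim_fraction=0.2)
    print(cdf(3.0))   # pooled probability that growth is at most 3%

Because the trimmed pool returns an entire distribution rather than a single number, it lends itself to the kind of risk-focused use the study emphasizes: decision makers can read off tail probabilities, not just an average.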
 

Tags: Victor Jose