Forecast Aggregation via Peer Prediction


  • Juntao Wang Harvard University
  • Yang Liu UC Santa Cruz
  • Yiling Chen Harvard University


Keywords: Forecast Aggregation, Peer Prediction, Wisdom Of Crowds, Human Computation


Crowdsourcing enables the solicitation of forecasts on a variety of prediction tasks from distributed groups of people. How to aggregate the solicited forecasts, which may vary in quality, into an accurate final prediction remains a challenging yet critical question. Studies have found that weighting expert forecasts more heavily in aggregation can improve the accuracy of the aggregated prediction. However, this approach usually requires access to the forecasters' historical performance data, which are often unavailable. In this paper, we study the problem of aggregating forecasts without historical performance data. We propose using peer prediction methods, a family of mechanisms originally designed to truthfully elicit private information in the absence of ground truth verification, to assess the expertise of forecasters, and then using this assessment to improve forecast aggregation. We evaluate our peer-prediction-aided aggregators on a diverse collection of 14 human forecast datasets. Compared with a variety of existing aggregators, our aggregators achieve a significant and consistent improvement in aggregation accuracy, as measured by the Brier score and the log score. Our results reveal the effectiveness of identifying experts to improve aggregation even without historical data.
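The high-level idea can be illustrated with a minimal sketch (not the paper's actual mechanism): score each forecaster by agreement with peers — a simple output-agreement-style proxy for a peer prediction score — then aggregate forecasts as a score-weighted average and evaluate the result with the Brier score. All names and the agreement rule below are illustrative assumptions.

```python
def peer_scores(forecasts):
    """Score each forecast by its average closeness to the other forecasts
    (an output-agreement-style stand-in for a peer prediction score)."""
    n = len(forecasts)
    scores = []
    for i, f in enumerate(forecasts):
        closeness = [1.0 - abs(f - g) for j, g in enumerate(forecasts) if j != i]
        scores.append(sum(closeness) / (n - 1))
    return scores

def weighted_aggregate(forecasts, scores):
    """Score-weighted average of the individual forecasts."""
    total = sum(scores)
    return sum(f * s for f, s in zip(forecasts, scores)) / total

def brier(p, outcome):
    """Brier score for a binary event: squared error of the forecast
    probability against the 0/1 outcome (lower is better)."""
    return (p - outcome) ** 2

# Four forecasters' probabilities for a binary event that occurred (outcome = 1).
forecasts = [0.7, 0.8, 0.2, 0.75]
scores = peer_scores(forecasts)
p_hat = weighted_aggregate(forecasts, scores)
print(round(brier(p_hat, 1), 3))  # → 0.116
```

In this toy example the dissenting forecast (0.2) receives a low peer score, so the weighted aggregate (about 0.660) scores better than the unweighted mean (0.6125, Brier 0.150) — the effect the paper exploits at scale without any historical performance data.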




How to Cite

Wang, J., Liu, Y., & Chen, Y. (2021). Forecast Aggregation via Peer Prediction. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 9(1), 131-142.