Information Elicitation Mechanisms for Statistical Estimation
We study the problem of learning statistical properties from strategic agents who hold private information. Agents must be incentivized to truthfully reveal their information even when it cannot be directly verified, and the reported information must then be aggregated into a statistical estimate. We consider two fundamental estimation tasks: estimating the mean of an unknown Gaussian, and linear regression with Gaussian error. Each agent's private information is a single point in Euclidean space.
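As a point of reference, the two estimation tasks reduce to standard aggregates once truthful reports are in hand. The sketch below (illustrative only; synthetic data, and not the paper's incentive mechanism) shows the sample mean for the Gaussian task and one-dimensional least squares for the regression task, assuming every agent reports her point honestly:

```python
# Illustrative sketch: aggregating truthfully reported points for the two
# estimation tasks. This assumes honest reports; it is not the incentive
# mechanism itself. All parameter values below are made up for the demo.
import random
import statistics

random.seed(0)

# Task 1: each agent's point is a draw from an unknown Gaussian N(mu, sigma^2).
mu, sigma = 3.0, 1.0
reports = [random.gauss(mu, sigma) for _ in range(1000)]
mean_estimate = statistics.fmean(reports)  # sample mean of the reports

# Task 2: linear regression with Gaussian error, y = a*x + b + noise,
# estimated by closed-form 1-D least squares over the reported pairs.
a, b = 2.0, -1.0
xs = [random.uniform(-1, 1) for _ in range(1000)]
ys = [a * x + b + random.gauss(0, 0.5) for x in xs]
x_bar, y_bar = statistics.fmean(xs), statistics.fmean(ys)
a_hat = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
b_hat = y_bar - a_hat * x_bar
```

The mechanisms below are designed so that, in equilibrium, the reports fed into aggregates like these are the agents' unaltered data.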
Our main results are two mechanisms for each of these problems that optimally aggregate the agents' information in the truth-telling equilibrium:
• A minimal (non-revelation) mechanism for large populations — agents only need to report one value, but that value need not be their point.
• A mechanism for small populations that is non-minimal — agents need to answer more than one question.
These mechanisms are “informed truthful” mechanisms, in which reporting unaltered data (truth-telling) (1) forms a strict Bayesian Nash equilibrium and (2) yields strictly higher welfare than any oblivious equilibrium, i.e., one in which agents' strategies are independent of their private signals. We also give a minimal revelation mechanism (each agent reports only her signal) for a restricted setting, and use an impossibility result to show that this restriction is necessary.
We build on the peer prediction literature in the single-question setting; however, most prior work in that area focuses on discrete signals, whereas our setting is inherently continuous, and our mechanisms further simplify the agents' reports.