
Experimental methods provide an attractive solution in these contexts. In experiments,
researchers can observe actions and elicit beliefs, both before and after carefully designed
information interventions. For example, within the context of political activism, Cantoni
et al. (2019); Jarke-Neuert et al. (2021); Hager et al. (2022a,b) all elicit subjects’ beliefs
about others’ planned participation in political events/protests. They then measure how
changing subjects’ information about others’ participation causally affects individual beliefs
and, thence, individuals’ own attendance.1
However, eliciting beliefs is fundamentally different from eliciting simpler variables, such
as willingness-to-pay. This is because beliefs are a probability distribution, often defined
over a large set of outcomes, rather than just a point response (i.e., a real number between
0 and 1). For example, the belief about the proportion of N other survey participants
who participate in a protest is a probability distribution over the N + 1 possible values
{0, 1/N, 2/N, ..., 1}. Yet, for tractability, researchers often have to elicit point responses in
their experimental design and interpret those as a coarse measure of beliefs. This is the
case in the papers cited above, as well as in many others.2
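To make the distinction concrete, here is a minimal sketch (with made-up numbers, not data from any cited study) of a belief distribution over the N + 1 possible proportions, together with two point summaries it could be collapsed into:

```python
from fractions import Fraction

# Hypothetical toy belief: with N = 4 other participants, a belief is a
# full probability distribution over the N + 1 = 5 possible proportions
# {0, 1/4, 2/4, 3/4, 1}, not a single number in [0, 1].
N = 4
support = [Fraction(k, N) for k in range(N + 1)]
weights = [Fraction(4, 10), Fraction(3, 10), Fraction(2, 10),
           Fraction(1, 10), Fraction(0, 10)]
belief = dict(zip(support, weights))

# Two different point summaries of the same distribution:
mean = sum(x * p for x, p in belief.items())   # expected proportion
mode = max(belief, key=belief.get)             # most likely proportion

print(mean, mode)  # 1/4 vs 0: one point response cannot convey both
```

Here the mean (1/4) and the mode (0) differ, so which point response a subject reports depends on which summary the incentives reward.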
In this paper, we discuss the relation between subjects’ point responses and their underlying
belief distribution, and how this mapping depends on the incentives offered to the subject. In
particular, we compare two popular belief elicitation schemes that seem superficially simi-
lar but, as we show in Section 2, incentivize different best responses in belief reporting. We
then show the empirical consequences of such differences, which can include identifying
effects with signs opposite to the true ones, or a lack of identification altogether.
The first scheme we consider rewards subjects for correct guesses within an error band
of the true value: for instance, “Please guess x ∈ [0, 1]. If your guess is within ∆ percentage
points of x, you will earn a bonus payment of 1 currency unit.” Recent examples of
belief elicitation using this scheme include Cantoni et al. (2019); Chen and Yang (2019);
Bursztyn et al. (2020), among others. In Section 2.1, we prove that subjects’ best response
to these incentives is to report the (approximate) mode of the true distribution of x (hence-
forth, modal beliefs). The second scheme, which we recommend to practitioners who wish to
elicit the mean of the belief distribution over x (mean beliefs, henceforth), rewards sub-
jects A − B(x − r)² for reporting r, where A and B are constants. Such an incentive scheme
indeed induces profit-maximizing subjects to report their mean beliefs as a best response.
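A small simulation illustrates the contrast, using an arbitrary right-skewed belief distribution of our own choosing (not data from any of the cited studies). Under the error-band scheme, the expected payoff of a report r is the probability mass within ∆ of r, so the optimizer centers the band on the highest-mass region, near the mode; under the quadratic rule, the expected payoff A − B·E[(x − r)²] is maximized at the report closest to the mean:

```python
import numpy as np

# Hypothetical right-skewed belief over the N + 1 possible proportions
# {0, 1/N, ..., 1}: mass concentrated near 0 (illustrative, not real data).
N = 20
support = np.arange(N + 1) / N
probs = np.exp(-5 * support)
probs /= probs.sum()

# Scheme 1: pay 1 unit if |r - x| <= delta. Expected payoff of report r is
# the belief mass inside the error band around r.
delta = 0.05
band_payoff = [probs[np.abs(support - r) <= delta + 1e-12].sum()
               for r in support]
best_band = support[int(np.argmax(band_payoff))]

# Scheme 2: quadratic rule A - B*(x - r)^2. Expected payoff is
# A - B*(Var(x) + (E[x] - r)^2), maximized at the report nearest E[x].
A, B = 1.0, 1.0
quad_payoff = [A - B * ((support - r) ** 2 * probs).sum() for r in support]
best_quad = support[int(np.argmax(quad_payoff))]

mean_belief = (support * probs).sum()
mode_belief = support[int(np.argmax(probs))]

# The band scheme's best report sits near the mode (0 here); the quadratic
# scheme's best report sits near the mean, which is strictly larger.
print(best_band, mode_belief, best_quad, mean_belief)
```

The two rules thus map the same belief distribution to different optimal reports whenever the distribution is skewed, which is exactly the wedge analyzed in Section 2.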
The difference in elicitation designs is subtle: rewards under both schemes are weakly
increasing in accuracy. But they induce very different mappings from the true belief
distribution (over x) to the optimal report, as mean and mode do not generally coincide. For
1 Such examples go beyond political economy. In another recent example, Bursztyn et al. (2020) evaluate
whether Saudi husbands are more likely to support women working outside the home if they discover that
other husbands do so too.
2 Kendall et al. (2015); Cruz et al. (2020) are some exceptions in the experimental political economy context.