pay for an item at an auction. In peer mechanisms, individuals hold preferences or information
about other participants (their peers). The classic approach to social choice is to aggregate the
preferences of voters about a set of candidates. The voters and candidates are distinct. In peer
mechanisms, the participants are both voters and candidates.
We focus this survey on preventing manipulation in peer mechanisms. We do not include
research that focuses on how to aggregate nominations, rankings, or grades. Examples of this
line of research include Caragiannis et al. (2016, 2020), which consider how counting methods
can aggregate partial rankings, and Wang and Shah (2019), which considers how to aggregate
grades from individuals who have different standards and ranges. We do not include a recent line
of research that studies how to incentivize participants to invite their peers to participate in a
mechanism (Zhao, 2021). We restrict our focus to settings where all participants are aware of the
mechanism and the prize. We do not include research on peer grading that designs mechanisms
to encourage peer graders to exert effort when grading (see Zarkoob et al. (2023) for a recent
example of this line of work). Our focus is on peer grading mechanisms that prevent graders
from improving their own grades or rankings through manipulation.
We include peer prediction mechanisms that have been adapted to evaluating information
about people, such as a person’s need for financial aid or their entrepreneurial ability. Typically,
peer prediction is used for reports about external objects, such as the quality of a product or
the forecast of an event. Not to be confused with the “peer” in peer mechanisms, the “peer”
in peer prediction refers to the way these mechanisms use reports from multiple participants to
incentivize truthful reports without access to ground truth to check the reports. Peer prediction
mechanisms make payments to participants that depend on the participant’s report and the reports
of other participants who evaluate the same target object. We are interested in cases when the
target object is information about another participant. See Faltings and Radanovic (2017) for a
survey of peer prediction mechanisms.
Our survey has some overlap with a recent survey of academic peer review by Shah (2022),
but our surveys make distinct contributions. Rather than focusing on a single application (academic peer review in the case of Shah (2022)), we focus on manipulation in peer mechanisms,
which extends to several other applications, such as poverty targeting and peer grading of student
assignments. Shah’s (2022) survey covers some work on manipulation but does not go into the
same detail as we do. Also, the models we discuss are often different. The models of peer mechanisms we discuss only correspond to models of academic peer review when each author submits one sole-authored paper and is also available as a reviewer. In practice, authors often submit multiple papers, each paper can have multiple authors, authors may not act as reviewers, and reviewers may not submit papers.
We begin the survey with a motivating example to describe why many peer mechanisms
create an opportunity for the participants to manipulate who wins the prize. We then provide
a taxonomy of approaches to prevent manipulation and discuss the range of techniques that researchers have proposed. To highlight the two main theoretical approaches of axiomatic and approximation analysis, we focus on the model of peer selection. We survey the empirical studies
of peer mechanisms and list key lessons the empirical research provides for theory. We conclude
the survey by highlighting several areas in need of further research.
2. Motivating Example
Suppose a group of people compete for a prize by participating in a peer mechanism. The
mechanism determines who wins the prize by asking each participant to nominate one or more