arXiv:2210.01984v3 [cs.AI] 29 May 2024
Manipulation and Peer Mechanisms: A Survey
Matthew Olckers
Macquarie University and the e61 Institute
Toby Walsh
School of Computer Science and Engineering, UNSW Sydney
Abstract
In peer mechanisms, the competitors for a prize also determine who wins. Each competitor may
be asked to rank, grade, or nominate peers for the prize. Since the prize can be valuable, such
as financial aid, course grades, or an award at a conference, competitors may be tempted to
manipulate the mechanism. We survey approaches to prevent or discourage the manipulation of
peer mechanisms. We conclude our survey by identifying several important research challenges.
Keywords: peer mechanism, peer ranking, peer review, peer grading, community-based
targeting
1. Introduction
Imagine that you are competing for a prize at your workplace. The prize is awarded by asking
everyone at your work, including you, to nominate who deserves the prize. The person with the
most nominations wins. Who should you nominate? If you tell the truth and nominate who you
think is most deserving, your nomination could cause you to lose out. Would you be truthful?
Do you think your colleagues would be truthful?
As this example illustrates, when the competitors for a prize also determine who wins, the
competitors may be tempted to manipulate the outcome. A growing research literature, which we
collect under the term “peer mechanisms”, aims to prevent or discourage manipulation in these
situations. This paper surveys both the theoretical and empirical research on peer mechanisms
and offers several research challenges.
The interest in peer mechanisms has been fueled by the variety of high-stakes applica-
tions. The prize can take many forms. Closest to the experience of researchers, the prize could
be an award at a conference. Further afield, the prize could be grades in a course (Topping,
1998), a time slot to use a telescope (Merrifield and Saari, 2009), aid targeted to people in
need (Conning and Kevane, 2002; Alatas et al., 2012), loans for entrepreneurs (Hussam et al.,
2022), a job for freelancers (Kotturi et al., 2020), an award for the best soccer player of the year
(Caragiannis et al., 2019), or even the papacy of the Catholic Church (Mackenzie, 2020).
The variety of applications has inspired a variety of models. Some models, known as peer selection, assume the mechanism designer wishes to select a single participant or a limited number of participants for the prize. Other models, known as peer grading, assume each participant should receive a cardinal score. Besides the mechanism’s output, the models can differ in
many other dimensions, such as what the participants are asked to report, the type of information
participants hold about their peers, and whether the mechanism designer can make payments.
We provide a taxonomy to highlight the differences in each model introduced in the literature, and to identify some common themes. The approaches to prevent manipulation in peer
mechanisms can be grouped into one of three categories:
1. impartial mechanisms where a participant’s report cannot impact their chance of winning
the prize,
2. audits where the mechanism attempts to detect and punish manipulation,
3. rewards for truthful reports.
Although these three approaches are distinct, they are not mutually exclusive. The approaches
can be combined. The context will determine which approach or combination of approaches is
most suitable.
The bulk of the theoretical research has focused on impartial mechanisms. We delve deeper
into these results by focusing on the model of peer selection, where peers nominate each other for
a single prize or several identical prizes. Researchers have taken two main approaches: checking
if impartiality is compatible with desirable axioms and checking how closely impartial mecha-
nisms can approximate the most desirable outcome (such as awarding the prize to the participants
with the most nominations).
The axiomatic and approximation results highlight how flexibility in the number of prizes
is crucial for designing impartial peer mechanisms with desirable properties. If the mechanism
must always award a fixed number of prizes, discouraging manipulation can lead to undesirable
outcomes, such as awarding the prize to a participant who does not receive any nominations.
If the mechanism can award more prizes than planned or choose to leave the prize unassigned,
manipulation can be discouraged while avoiding many of the undesirable outcomes that stem
from a fixed number of prizes. The theoretical results also highlight the importance of random-
ization. In most cases, mechanisms that use randomization provide better approximation results
than mechanisms that use a deterministic rule.
Manipulation in peer mechanisms is not merely a theoretical concern. We survey empir-
ical research that shows that people will try to manipulate peer mechanisms when given the
chance. Employees will reduce the peer grades of coworkers when competing for promotion
(Huang et al., 2019), and entrepreneurs bias peer reports in favor of friends and family when
competing for loans (Hussam et al., 2022). Manipulation has also been shown in the lab. Whether
experimental participants produce art (Balietti et al., 2016) or label envelopes (Carpenter et al.,
2010), they are happy to sabotage their peers to increase their own chance of winning a prize.
The empirical research provides several lessons for the theoretical study of peer mechanisms.
First, participants often make errors in their peer evaluations. Mechanisms that are robust to errors are more likely to succeed in practice. Second, nepotism is common. Most models assume that participants care only about their own chance of winning a prize and not whether other, perhaps related, participants win a prize. Third, small amounts of manipulation may be acceptable.
Rather than aiming to disincentivize manipulation from all participants, mechanisms could allow
some manipulation and still yield good outcomes. Fourth, choosing the optimal manipulation
can be difficult for participants. Complexity may be a useful tool to discourage manipulation.
Models of peer mechanisms differ from both the classic approach to mechanism design and
the classic approach to social choice. The classic approach to mechanism design is to elicit infor-
mation from individuals about themselves, such as the amount an individual would be willing to
pay for an item at an auction. In peer mechanisms, individuals hold preferences or information
about other participants (their peers). The classic approach to social choice is to aggregate the
preferences of voters about a set of candidates. The voters and candidates are distinct. In peer
mechanisms, the participants are both voters and candidates.
We focus this survey on preventing manipulation in peer mechanisms. We do not include
research that focuses on how to aggregate nominations, rankings, or grades. Examples of this
line of research include Caragiannis et al. (2016, 2020), which considers how counting methods
can aggregate partial rankings, and Wang and Shah (2019), which considers how to aggregate
grades from individuals who have different standards and ranges. We do not include a recent line
of research that studies how to incentivize participants to invite their peers to participate in a
mechanism (Zhao, 2021). We restrict our focus to settings where all participants are aware of the
mechanism and the prize. We do not include research on peer grading that designs mechanisms
to encourage peer graders to exert effort when grading (see Zarkoob et al. (2023) for a recent
example of this line of work). Our focus is on peer grading mechanisms that prevent graders
from improving their own grades or rankings through manipulation.
We include peer prediction mechanisms that have been adapted to evaluating information
about people, such as a person’s need for financial aid or their entrepreneurial ability. Typically,
peer prediction is used for reports about external objects, such as the quality of a product or
the forecast of an event. Not to be confused with the “peer” in peer mechanisms, the “peer”
in peer prediction refers to the way these mechanisms use reports from multiple participants to
incentivize truthful reports without access to ground truth to check the reports. Peer prediction
mechanisms make payments to participants that depend on the participant’s report and the reports
of other participants who evaluate the same target object. We are interested in cases when the
target object is information about another participant. See Faltings and Radanovic (2017) for a
survey of peer prediction mechanisms.
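To make the payment structure of peer prediction concrete, the sketch below is our own illustrative Python (not from the survey or from Faltings and Radanovic (2017)). It implements output agreement, one of the simplest peer prediction rules: a participant is paid only when their report matches the report of a randomly drawn peer who evaluated the same target. All names and the dictionary layout are our own assumptions.

```python
import random

def output_agreement_payment(reports, participant, target, reward=1.0, rng=random):
    """Pay `participant` the reward if their report on `target` matches
    the report of a randomly drawn peer who evaluated the same target."""
    peers = [p for p in reports
             if p != participant and target in reports[p]]
    peer = rng.choice(peers)
    return reward if reports[participant][target] == reports[peer][target] else 0.0

# Two evaluators report on participant x (say, x's need for financial aid).
reports = {"a": {"x": "high"}, "b": {"x": "high"}}
assert output_agreement_payment(reports, "a", "x") == 1.0
```

Output agreement rewards truthful reporting only under certain belief conditions, for example when each participant expects the others to report truthfully; the more sophisticated mechanisms covered by Faltings and Radanovic (2017) weaken such assumptions.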
Our survey has some overlap with a recent survey of academic peer review by Shah (2022),
but our surveys make distinct contributions. Rather than focusing on a single application (aca-
demic peer review in the case of Shah (2022)), we focus on manipulation in peer mechanisms,
which extends to several other applications, such as poverty targeting and peer grading of student
assignments. Shahs (2022) survey covers some work on manipulation but does not go into the
same detail as we do. Also, the models we discuss are often different. The models of peer mechanisms we discuss only correspond to models of academic peer review when each author submits
one sole-authored paper and is also available as a reviewer. Authors often submit multiple pa-
pers, each paper can have multiple authors, authors may not act as reviewers, and reviewers may
not submit papers.
We begin the survey with a motivating example to describe why many peer mechanisms
create an opportunity for the participants to manipulate who wins the prize. We then provide
a taxonomy of approaches to prevent manipulation and discuss the range of techniques that re-
searchers have proposed. To highlight the two main theoretical approaches of axiomatic and ap-
proximation analysis, we focus on the model of peer selection. We survey the empirical studies
of peer mechanisms and list key lessons the empirical research provides for theory. We conclude
the survey by highlighting several areas in need of further research.
2. Motivating Example
Suppose a group of people compete for a prize by participating in a peer mechanism. The
mechanism determines who wins the prize by asking each participant to nominate one or more
peers and awarding the prize to the participant who receives the most nominations. Assume there
is only one prize, which cannot be split between multiple participants. In the case of a tie, the
prize is awarded uniformly at random between those with the most nominations.
One temptation is for each participant to nominate themselves, so we may want the mechanism to exclude self-nominations.1 But even when the participants cannot nominate themselves, they may still have opportunities to manipulate who wins.
A simple example with three participants (a, b, and c) demonstrates that they can still manipulate who wins the prize. Suppose that:
a nominates b and c,
b nominates c, and
c nominates a and b.
Since both b and c have two nominations each, the mechanism awards the prize randomly to b or c with equal probability. Let’s consider c’s perspective. All else equal, c can ensure he wins the prize by nominating no one or nominating a. Even if c believes that b is worthy of the prize, c has a strong incentive to manipulate the outcome to increase his chance of winning.
Whether the participants are asked to nominate, rank, or grade their peers, the same temptation remains. To increase their own chance of winning a prize, participants in a peer mechanism are tempted to manipulate their evaluation of their closest competitors. In the example, c’s closest competitor is b. By failing to nominate b, c increases his chance of winning the prize at b’s expense.
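The tallies above can be checked with a short simulation. The sketch below is our own illustrative Python (not from the survey); it takes c’s truthful report to nominate both a and b, consistent with the point that c gains by withholding the nomination of b.

```python
from collections import Counter

def winners(nominations):
    """Return the set of participants tied for the most nominations."""
    tally = Counter(peer for report in nominations.values() for peer in report)
    top = max(tally.values())
    return {p for p, count in tally.items() if count == top}

# Truthful reports: b and c receive two nominations each,
# so the prize goes to b or c with equal probability.
truthful = {"a": ["b", "c"], "b": ["c"], "c": ["a", "b"]}
assert winners(truthful) == {"b", "c"}

# If c withholds the nomination of b, c becomes the unique winner.
manipulated = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
assert winners(manipulated) == {"c"}
```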
3. A Taxonomy of Models
We surveyed the literature for peer mechanisms that address manipulation. Since peer mechanisms are inspired by a variety of applications, there are a variety of different models to describe these applications. In Table 1, we provide a taxonomy of the models we uncovered. We distinguish between different models according to their:
Input: What do the participants need to report about their peers?
Output: What output does the mechanism produce?
Information: What type of information do participants hold about their peers?
We have some notation within Table 1. We use m for the number of peers each participant is asked to evaluate. In most cases, m is small relative to the number of participants. We use k for the number of winners when the mechanism selects multiple winners.
The table also includes columns to categorize the mechanisms according to:
1As Ng and Sun (2003) and Ohseto (2012) have shown, excluding self-evaluations can create problems in aggregating
peer grades. Suppose participants a and b unanimously grade a higher than b, but a uses more generous grades than b.
Excluding self-evaluations will discard a’s generous grade about himself but keep the generous grade he gives to b,
which may cause b to have a higher aggregated grade than a. Ng and Sun (2003) provide theoretical results highlighting
an incompatibility between unanimity and excluding self-evaluations. Ohseto (2012) strengthens Ng and Sun’s (2003)
results to show that if participants can select grades from a finite and large set of real numbers, no aggregation rule
excludes self-evaluations and satisfies monotonicity and unanimity.
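The effect described in the footnote can be reproduced with a small numerical sketch (the grades below are our own illustrative numbers, not taken from Ng and Sun (2003)):

```python
def average(grades):
    return sum(grades) / len(grades)

# Both graders rank a above b (unanimity), but a grades more generously.
grades_by_a = {"a": 10, "b": 9}
grades_by_b = {"a": 5, "b": 4}

# Averaging all grades preserves the unanimous ranking: a stays ahead of b.
assert average([grades_by_a["a"], grades_by_b["a"]]) == 7.5  # a's aggregate
assert average([grades_by_a["b"], grades_by_b["b"]]) == 6.5  # b's aggregate

# Excluding self-evaluations discards a's generous self-grade but keeps
# the generous grade a gives to b, so the aggregate ranking flips.
a_excl = grades_by_b["a"]  # only b grades a: 5
b_excl = grades_by_a["b"]  # only a grades b: 9
assert b_excl > a_excl
```

Here the flip is driven entirely by the difference in grading generosity, not by any disagreement about the ranking.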
Table 1: Taxonomy of peer mechanisms
Model Mechanism
Paper Input Output Information Approach Technique
Holzman and Moulin (2013) Nominate one peer Single winner Subjective Impartial Partition
Mackenzie (2015) Nominate one peer Single winner Subjective Impartial Random dictatorship
Babichenko et al. (2020b) Nominate one peer Single winner Subjective Impartial Expand possible winners
Edelman and Por (2021) Nominate one peer Single winner Subjective Impartial Random dictatorship
Cembrano et al. (2023b) Nominate one peer Single winner Subjective Impartial Permutation
Amorós (2011) Nominate one peer Single winner Common Impartial Fix position
Mackenzie (2020) Nominate one peer At most one winner Subjective Impartial Threshold
Bjelde et al. (2017) Nominate one peer At most two winners Subjective Impartial Permutation
Tamura and Ohseto (2014); Tamura (2016) Nominate one peer Top k winners Subjective Impartial Expand possible winners
Fischer and Klimm (2014, 2015) Nominate one or more peers Single winner Subjective Impartial Permutation
Bousquet et al. (2014) Nominate one or more peers Single winner Subjective Impartial Permutation
Babichenko et al. (2018) Nominate one or more peers Single winner Subjective Impartial Expand possible winners
Caragiannis et al. (2019) Nominate one or more peers Single winner Subjective Impartial Jury
Zhang et al. (2021) Nominate one or more peers Single winner Subjective Impartial Expand possible winners
Caragiannis et al. (2021) Nominate one or more peers Single winner Subjective Impartial Threshold
Cembrano et al. (2022a) Nominate one or more peers Single winner Subjective Impartial Threshold
Ito et al. (2018) Nominate one or more peers Single winner Subjective Reward Peer prediction
Zhao et al. (2023) Nominate one or more peers Single winner Subjective Impartial Expand possible winners
Alon et al. (2011) Nominate one or more peers Top k winners Subjective Impartial Partition
Cembrano et al. (2022b) Nominate one or more peers Top k winners Subjective Impartial Expand possible winners
Wąs et al. (2019) Nominate one or more peers Grade Subjective Impartial Fix grade
Li et al. (2018) Nominate top participant Ranking Common Impartial Fix position
Bao et al. (2021) Nominate network neighbor Single loser Ground truth Audit Compare to nominee