A Cooperative Reinforcement Learning Environment
for Detecting and Penalizing Betrayal
Nikiforos Pittaras
University of Athens
npittaras@di.uoa.gr
Abstract
In this paper we present a Reinforcement Learning environment that leverages
agent cooperation and communication, aimed at detecting, learning and ultimately
penalizing betrayal patterns that emerge in the behavior of self-interested agents.
We provide a description of game rules, along with interesting cases of betrayal
and trade-offs that arise. Preliminary experimental investigations illustrate a)
betrayal emergence, b) deceptive agents outperforming honest baselines and c)
betrayal detection based on classification of behavioral features, which surpasses
probabilistic detection baselines. Finally, we propose approaches for penalizing
betrayal, list directions for future work and suggest interesting extensions of the
environment towards capturing and exploring increasingly complex patterns of
social interactions.
1 Introduction
Establishing truthfulness in AI is a critical open problem in Safety and Alignment efforts [10]. A
powerful AI system that adopts strategies of deception and betrayal, i.e. manipulation of beliefs
and prior assumptions of humans, may be a quick one-way ticket to a treacherous turn. Detection
and diagnosis of betrayal patterns is challenging; poor explainability of black-box agents makes it
difficult to deduce intent, goals and beliefs by inspecting internal model workings and/or operational
outputs [20, 26]. To make matters worse, intelligent agents capable of long-term strategizing would
render human interpretation and recognition of suspicious patterns in action sequences very difficult.
At the same time, instrumentally convergent attributes such as self-preservation and resistance to
corrigibility could result in AI systems that deliberately utilize obfuscation or exhibit deceptive
alignment [1], placing further obstacles in understanding their objectives.
In these settings, Anomaly Detection countermeasures [18] aim to identify, prevent, correct or mitigate
adverse outcomes prior to system deployment. For instance, betrayal detection and quantification can
serve as tripwires and honeypots to avoid future harms, catching systems that exhibit problematic
behavior early on [1]. Additionally, betrayal penalization approaches aim to regularize agents
away from undesirable actions during training. Ideally, this resolution should be interpretable to
human evaluators and generalize well to different problems, agent architectures and domains, having
efficiently internalized concepts of betrayal and deception.
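As an illustration of the regularization idea, a betrayal penalty can be folded directly into the training signal. The sketch below is a minimal, hypothetical shaping scheme; the detector score and the `lambda_penalty` coefficient are illustrative assumptions, not the paper's implementation.

```python
def shaped_reward(raw_reward, betrayal_score, lambda_penalty=0.5):
    """Regularize an agent away from betrayal during training by
    subtracting a penalty proportional to a detector's betrayal
    score in [0, 1] (both names here are hypothetical)."""
    return raw_reward - lambda_penalty * betrayal_score

# An honest step keeps its full reward; a flagged step is discounted.
assert shaped_reward(1.0, 0.0) == 1.0
assert shaped_reward(1.0, 1.0) == 0.5
```

A shaping term like this keeps the base task reward intact while making detected betrayal strictly less profitable, which is the spirit of the penalization avenues discussed later.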
Reinforcement Learning (RL) can provide a tractable avenue for investigating such scenarios [9],
using environments where reliable reward accumulation heavily depends upon cooperation between
agents and complex social interactions occur [17, 7]. In this work, we adopt such an approach,
focused on detecting and penalizing undesirable behaviors of deception and betrayal in a custom,
communication-based navigation task.
Preprint. Under review.
arXiv:2210.12841v1 [cs.LG] 23 Oct 2022
2 Related Work
Previous studies have explored agent communication in a multiagent RL setting; Kajic et al. [13]
investigate message-based navigation similar to the proposed work, while Cao et al. [6] study
communication grounding with respect to game rules in agents of varying degrees of self-interest. In
the work of Kim et al. [14], agents used a world model to predict future agent intents and environment
dynamics to generate, compress and transmit imagined trajectories. Other works explore topological
configurations different from fully-connected communication, such as the learnable hierarchical
approach in Sheng et al. [23], while communication via noisy channels has been investigated in Tung
et al. [24].
Agent deception, betrayal, truthfulness and trustworthiness have been previously investigated in
multiple settings [7]; for instance, Christiano et al. [8] present a challenge of discovering latent
knowledge in an agent that may produce false / unreliable reports, while Usui et al. [25] evaluate
analytic solutions of different strategies in iterated Prisoner’s Dilemmas.
Social dilemmas that gauge cooperation versus self-interest are explored in Leibo et al. [16],
applied via games like “Gather” and “Wolfpack”. “Hidden Agenda” is a team-based game offering
a complex action set including 2D navigation, agent / environment interaction, deception and
trustworthiness estimation via voting, and is investigated by Kopparapu et al. [15]. Asgharnia [3] uses
a hierarchical fuzzy, situation-aware learning scheme to learn and utilize deception against one or
multiple adversaries in a custom environment.
Mitigation approaches include the work of Hughes et al. [11], where reward regularization is
approached by adding an inequity penalty in games with short-term versus long-term dilemmas,
like “Cleanup” and “Harvest”. Jaques et al. [12] use the same setting with a mutual information-based
mechanism that favors influential communication between agents, adopting a correlation
assumption of influence to cooperation. Blumenkamp et al. [4] utilize cooperative policy learning via
a shared differentiable communication channel in three custom environments, investigating adaptation
dynamics when a self-interested adversary is introduced. Finally, Schmid et al. [21] explore using
agents that can explicitly impose penalties in a zero-sum setting, applied in N-player Prisoner’s
Dilemma games with large agent populations.
Given this body of work, the contributions of this work are as follows:
- A betrayal-oriented environment: we design a simple, limited ruleset that can result in the
emergence of complex betrayal behaviors, consolidated in a single-agent RL environment.
- Interpretable betrayal detection: we propose a classification-based detector that utilizes
explainable, behavioral / observational evidence generated during agent play.
- Betrayal penalization: we propose avenues for penalizing detected betrayal during learning.
- Experimental validation: we provide preliminary empirical findings showcasing the emergence
and successful detection of betrayal behaviors in the proposed environment.
- Future work proposals: we suggest pathways for utilizing the rich potential of the environ-
ment in future work, ruleset extensions and additional investigation axes of interest.
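To make the detection contribution concrete, the sketch below trains a plain logistic-regression classifier on synthetic behavioral features and compares it against a chance-level probabilistic baseline. The feature names, labels and model are illustrative stand-ins assumed for the example, not the paper's actual feature set or detector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-episode behavioral features, e.g. [messages sent,
# false reports, reward hoarded]; in this synthetic setup,
# "betrayers" (label 1) are those with elevated false reports.
X = rng.normal(size=(200, 3))
y = (X[:, 1] > 0.0).astype(float)

# Logistic regression fit by plain gradient descent on the log-loss.
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted betrayal probability
    w -= 0.1 * X.T @ (p - y) / len(y)      # gradient step

pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(float)
accuracy = (pred == y).mean()

# A probabilistic baseline that flags betrayal at random is right
# only about half the time on balanced labels.
baseline_acc = ((rng.random(len(y)) > 0.5).astype(float) == y).mean()
```

Because the weight vector is interpretable (one coefficient per behavioral feature), a detector of this form lets human evaluators see which observed behaviors drive a betrayal flag.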
3 Proposed Environment
The proposed environment is built with a focus on betrayal detection and penalization goals expressed
in the literature [2], extending previous work on agent communication in RL settings [13].
It implements an episodic game that consists of a collection of N ≥ 2 gridworlds [G1, . . . , GN],
each paired with a single agent Ai. All worlds are associated with a pool of k ≥ N food items
F = [f1, . . . , fk] that provide variable reward and nutrition to agents upon consumption. The
environment advances in a single-agent, turn-based fashion, using the following rules and mechanics:

- The game is played in rounds, wherein all agents act once in a randomly generated order.
- At the start of each round, food items are randomly allocated and positioned in each world.
- The objective of each agent Ai is to obtain food, which yields reward. Agent Ai may harvest
food by probing a location within its own world Gi, but other worlds are inaccessible.
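The round structure above can be sketched as a minimal environment loop. All class and method names below are hypothetical illustrations of the stated rules, not the paper's implementation.

```python
import random

class BetrayalEnv:
    """Minimal sketch: N >= 2 worlds, one agent each, and a shared
    pool of k >= N food items reallocated every round."""

    def __init__(self, n_agents=3, n_food=5, grid_size=4, seed=0):
        assert n_agents >= 2 and n_food >= n_agents
        self.rng = random.Random(seed)
        self.n_agents, self.n_food, self.grid = n_agents, n_food, grid_size
        self.food = {}  # world index -> list of food positions

    def start_round(self):
        """Randomly allocate food across worlds and draw a turn order
        in which each agent acts exactly once."""
        self.food = {i: [] for i in range(self.n_agents)}
        for _ in range(self.n_food):
            world = self.rng.randrange(self.n_agents)
            pos = (self.rng.randrange(self.grid), self.rng.randrange(self.grid))
            self.food[world].append(pos)
        order = list(range(self.n_agents))
        self.rng.shuffle(order)
        return order

    def probe(self, agent, pos):
        """Agent i may only harvest from its own world G_i; a hit
        consumes the item and yields reward, a miss yields nothing."""
        if pos in self.food[agent]:
            self.food[agent].remove(pos)
            return 1.0
        return 0.0

env = BetrayalEnv()
order = env.start_round()
```

Since each agent can only probe its own world while food lands in all of them, reliable harvesting depends on what other agents communicate, which is where deceptive reports become profitable.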