Conformal Isometry of Lie Group Representation in
Recurrent Network of Grid Cells
Dehong Xu1
xudehong1996@ucla.edu
Ruiqi Gao2
ruiqig@google.com
Wen-Hao Zhang3
Wenhao.Zhang@UTSouthwestern.edu
Xue-Xin Wei4
weixx@utexas.edu
Ying Nian Wu1
ywu@stat.ucla.edu
1Department of Statistics, UCLA 2Google Research, Brain Team
3Lyda Hill Department of Bioinformatics and O'Donnell Brain Institute, UT Southwestern Medical Center
4Departments of Neuroscience and Psychology, UT Austin
Abstract
The activity of the grid cell population in the medial entorhinal cortex (MEC)
of the mammalian brain forms a vector representation of the self-position of the
animal. Recurrent neural networks have been proposed to explain the properties
of the grid cells by updating the neural activity vector based on the velocity
input of the animal. In doing so, the grid cell system effectively performs path
integration. In this paper, we investigate the algebraic, geometric, and topological
properties of grid cells using recurrent network models. Algebraically, we study
the Lie group and Lie algebra of the recurrent transformation as a representation
of self-motion. Geometrically, we study the conformal isometry of the Lie group
representation where the local displacement of the activity vector in the neural
space is proportional to the local displacement of the agent in the 2D physical
space. Topologically, the compact abelian Lie group representation automatically
leads to the torus topology commonly assumed and observed in neuroscience. We
then focus on a simple non-linear recurrent model that underlies the continuous
attractor neural networks of grid cells. Our numerical experiments show that
conformal isometry leads to hexagon periodic patterns in the grid cell responses
and our model is capable of accurate path integration. Code is available at
https://github.com/DehongXu/grid-cell-rnn.
1 Introduction
Grid cells [25, 19, 41, 29, 28, 12] in the mammalian dorsal medial entorhinal cortex (MEC) exhibit
striking hexagon grid patterns when the agent (e.g., a rodent) navigates in 2D open environments
[20, 25, 17, 6, 38, 5, 7, 11, 34, 1]. It has been hypothesized that the grid cell system performs path
integration [10, 15, 25, 16, 32, 24, 35, 27]. That is, the grid cells integrate the self-motion of the
animal over time to keep track of the animal’s own location in space. This can be implemented by
a recurrent neural network that takes the velocity of the self-motion as input, and transforms the
activities of the grid cells based on the velocity inputs. The animal’s self-position can then be decoded
from the activities of the grid cells.
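As a concrete illustration of this idea, the sketch below integrates a sequence of velocities in a toy neural code and decodes the final position. The linear code $v(x) = Ax$, the additive update, and all variable names are illustrative placeholders only, not the recurrent model studied in this paper.

```python
import numpy as np

# Toy illustration of path integration: update a neural code from velocities
# only, then decode position. The linear code v(x) = A x and the update rule
# below are placeholders, not the recurrent model studied in this paper.
rng = np.random.default_rng(0)
D = 32                                     # hypothetical number of grid cells
A = rng.normal(size=(D, 2)) / np.sqrt(D)   # toy embedding of 2D position

def v_of_x(x):
    return A @ x                           # toy position code

def F(v, dx):
    return v + A @ dx                      # toy transformation driven by velocity

x_true = np.zeros(2)
v = v_of_x(x_true)
for _ in range(1000):
    dx = rng.normal(scale=0.02, size=2)    # self-motion within one unit time
    x_true = x_true + dx                   # ground-truth position (for checking)
    v = F(v, dx)                           # neural update uses velocity only

# Decode position from the neural code by least squares.
x_decoded, *_ = np.linalg.lstsq(A, v, rcond=None)
print(np.allclose(x_decoded, x_true))      # True: path integration is exact here
```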
Collectively, the activities of the grid cell population form a vector in the high-dimensional neural
activity space. This provides a representation of the self-position of the agent in space. The
recurrent network transforms the activity vector based on the movement velocity of the agent, so
that the transformation is a representation of self-motion, when considered from the perspective of
representational learning. The vector and the transformation together form a representation of the 2D
Euclidean group, which is an abelian additive Lie group.

Equal contributions. Preprint. Under review. arXiv:2210.02684v2 [q-bio.NC] 7 Nov 2022
In a recent paper, [21] studied the group representation property and the isotropic scaling or conformal
isometry property for the general transformation model. In the context of linear transformation models,
they connected this property to the hexagon periodic patterns of the grid cell response maps. With the
conformal isometry property of the transformation of the recurrent neural network, the change of the
activity vector in the neural space is proportional to the input velocity of the self-motion in the 2D
physical space. [21] justified this condition in terms of robustness to errors or noise in the neurons.
Although [21] studied the general transformation model theoretically, they focused numerically on a
prototype linear recurrent network model, which has an explicit algebraic and geometric structure in
the form of a matrix group of rotations.
In this paper, we study conformal isometry in the context of the non-linear recurrent model that
underlies the hand-crafted continuous attractor neural network (CANN) [6, 7, 34, 1]. In particular, we
will focus on the vanilla version of the recurrent network that is linear in the vector representation of
self-position and additive in the input velocity, followed by an element-wise non-linear rectification
(such as a ReLU non-linearity). This model has the appealing simplicity of being additive in the input
velocity before rectification. We also explore more complex variants of non-linear recurrent networks,
such as the long short-term memory network (LSTM) [26]. Such models have been studied in recent
work [9, 3, 37, 8].
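A minimal sketch of such a vanilla recurrent cell is given below. The specific parameterization (weight shapes, bias, and renormalization to unit norm) is an assumption made for illustration; the exact architecture and training details used in our experiments may differ.

```python
import numpy as np

# Sketch of a vanilla non-linear recurrent cell: linear in the activity vector v,
# additive in the velocity dx, followed by element-wise ReLU. The parameter
# shapes, bias, and unit-norm renormalization are illustrative assumptions.
rng = np.random.default_rng(0)
D = 128                                   # hypothetical number of grid cells

W = 0.05 * rng.normal(size=(D, D))        # recurrent weights on v(x)
B = 0.05 * rng.normal(size=(D, 2))        # weights on the 2D velocity input
b = np.zeros(D)                           # bias

def rnn_step(v, dx):
    """v(x + dx) = ReLU(W v(x) + B dx + b), then renormalize to unit norm."""
    u = np.maximum(W @ v + B @ dx + b, 0.0)
    return u / (np.linalg.norm(u) + 1e-12)

v = rng.random(D)
v /= np.linalg.norm(v)
v = rnn_step(v, np.array([0.01, -0.02]))  # one recurrent update
```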
Our numerical experiments show that the conformal isometry condition enables the model to learn a
highly structured, multi-scale hexagon grid code, consistent with the properties of experimentally
observed grid cells in rodents. In addition, the learned model is capable of accurate path integration
over long distances. Our results generalize the previous results on linear network models in [21] to an
important class of non-linear neural network models in theoretical neuroscience that are more
physiologically realistic.
The main contributions of our paper are as follows. (1) We investigate the algebraic, geometric, and
topological properties of the transformation models of grid cells. (2) We study a simple non-linear
recurrent network that underlies the hand-crafted continuous attractor networks for grid cells, and our
numerical experiments suggest that conformal isometry is linked to hexagonal periodic patterns of
grid cells.
2 Lie group representation and conformal isometry
2.1 Representations of self-position and self-motion
We start by introducing the basic components of our model. Let $x = (x_1, x_2) \in \mathbb{R}^2$ denote the position
of the agent. Let $\Delta x = (\Delta x_1, \Delta x_2)$ be the input velocity of the self-motion, i.e., the displacement of the
agent within a unit time, so that the agent moves from $x$ to $x + \Delta x$ after the unit time.
We assume $v(x) = (v_i(x), i = 1, \ldots, D)$ to be the vector representation of self-position $x$, where
each element $v_i(x)$ can be interpreted as the activity of a grid cell when the agent is at position $x$.
$(v_i(x), \forall x)$ corresponds to the response map of grid cell $i$. $D$ is the dimensionality of $v$, i.e., the
number of grid cells. We refer to the space of $v$ as the "neural space". We normalize $\|v(x)\| = 1$ in
our experiments.
The set $(v(x), \forall x \in \mathbb{R}^2)$ forms a 2D manifold, or an embedding of $\mathbb{R}^2$, in the $D$-dimensional neural
space. We will refer to $(v(x), \forall x \in \mathbb{R}^2)$ as the "coding manifold".
With self-motion $\Delta x$, the vector representation $v(x)$ is transformed to $v(x + \Delta x)$ by a general
transformation model:
$$v(x + \Delta x) = F(v(x), \Delta x) = F_{\Delta x}(v(x)), \tag{1}$$
where by simplifying $F(\cdot, \Delta x)$ to $F_{\Delta x}(\cdot)$ in notation, we emphasize that the transformation $F$ is
dependent on $\Delta x$. While $v(x)$ is a representation of $x$, $F_{\Delta x}$ is a representation of $\Delta x$. $(v(x), \forall x)$ and
$(F_{\Delta x}(\cdot), \forall \Delta x)$ together form a representation of the 2D additive Euclidean group $\mathbb{R}^2$, which is
an abelian Lie group. Specifically, we have the following group representation condition for the
transformation model:
Condition 1. (Algebraic condition on Lie group representation). For any $x$, we have (1) $F_0(v(x)) =
v(x)$, and (2) $F_{\Delta x_1 + \Delta x_2}(v(x)) = F_{\Delta x_2}(F_{\Delta x_1}(v(x))) = F_{\Delta x_1}(F_{\Delta x_2}(v(x)))$ for any $\Delta x_1$ and $\Delta x_2$.
Condition 1(1) requires that the points of the coding manifold $(v(x), \forall x)$ are fixed points of $F_0$, the
transformation with $\Delta x = 0$. If $F_0$ is further a contraction off the coding manifold, then $(v(x), \forall x)$
are the attractor points of $F_0$. Condition 1(2) requires that moving in one step with displacement
$\Delta x_1 + \Delta x_2$ should be the same as moving in two steps with displacements $\Delta x_1$ and $\Delta x_2$ respectively.
The group representation condition is a necessary condition for any valid transformation model
(Equation (1)) of grid cells.
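Condition 1(2) can be probed numerically for any candidate transformation model by comparing a one-step move against the corresponding two-step move, as in the sketch below. The function names are illustrative, and using this discrepancy as a training penalty is only one possible choice rather than a prescribed part of our method.

```python
import numpy as np

# Numerical probe of Condition 1(2): compare moving by dx1 + dx2 in one step
# against moving by dx1 then dx2. For a learned network the discrepancy is
# expected to be small only after training; it could also serve as a penalty
# term (an assumption, not necessarily the training objective used here).
def composition_error(F, v, dx1, dx2):
    one_step = F(v, dx1 + dx2)
    two_steps = F(F(v, dx1), dx2)
    return np.linalg.norm(one_step - two_steps)

# For a toy additive model F(v, dx) = v + A dx, the error is exactly zero.
rng = np.random.default_rng(0)
A = rng.normal(size=(16, 2))
F_toy = lambda v, dx: v + A @ dx
print(composition_error(F_toy, rng.random(16),
                        np.array([0.1, 0.0]), np.array([0.0, 0.2])))  # ~0
```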
Group representation is a central theme in modern mathematics and physics [42]. However, most
of the transformations studied in mathematics and physics are linear transformations that form
matrix groups, and the coding manifold $(v(x), \forall x)$ is often made implicit. [22] focused on matrix
groups, with $F_{\Delta x}(v(x)) = M(\Delta x) v(x)$, so that $M(\Delta x_1 + \Delta x_2) v(x) = M(\Delta x_1) M(\Delta x_2) v(x) =
M(\Delta x_2) M(\Delta x_1) v(x)$. [21] studied the general transformation model theoretically, but then focused
on the linear transformation model in their numerical experiments. Since the transformations in
recurrent neural networks are usually non-linear, we will focus on non-linear transformation models
in this paper.
2.2 Conformal embedding and conformal isometry
For an infinitesimal self-motion $\delta x$, it is straightforward to derive a first-order Taylor expansion of
the transformation model in Equation (1) with respect to $\delta x$:
$$v(x + \delta x) = F_0(v(x)) + F_0'(v(x)) \delta x + o(|\delta x|) = v(x) + f(v(x)) \delta x + o(|\delta x|), \tag{2}$$
where $f(v(x)) = \frac{\partial F_{\Delta x}(v(x))}{\partial \Delta x^\top} \big|_{\Delta x = 0}$ is a $D \times 2$ matrix.
While $(F_{\Delta x}, \forall \Delta x \in \mathbb{R}^2)$ forms an abelian Lie group, its derivative with respect to $\Delta x$ at $0$, i.e., $f$,
spans its Lie algebra. Both $F_{\Delta x}$ and $f$ are transformations acting on the coding manifold $(v(x), \forall x)$.
We identify the conformal isometry condition of the Lie group representation as follows:
Condition 2. (Geometric condition on conformal embedding and conformal isometry).
$$f(v(x))^\top f(v(x)) = s^2 I_2, \quad \forall x, \tag{3}$$
where $I_2$ is the $2 \times 2$ identity matrix. That is, the two column vectors of $f(v(x))$ are of equal norm $s$,
and are orthogonal to each other.
Under the condition above, $v(x + \delta x) - v(x) \approx f(v(x)) \delta x$ is conformal to $\delta x$, i.e., the 2D local
Euclidean space of $(\delta x)$ in the physical space is embedded conformally as another 2D local Euclidean
space $(f(v(x)) \delta x)$ in the neural activity space. We only need to replace the two orthogonal axes for
$\delta x$ in the 2D physical space by the two column vectors of $f(v(x))$ in the neural activity space.
An equivalent statement of the above condition is
$$\|v(x + \delta x) - v(x)\| = s \|\delta x\| + o(\|\delta x\|), \quad \forall x, \delta x. \tag{4}$$
That is, the displacement in the neural space is proportional to the displacement in the 2D physical
space.
Note that since our analysis is local, $s$ may depend on $x$. If $s$ is a global constant, then the coding
manifold $(v(x), \forall x)$ has a flat intrinsic geometry (imagine folding a piece of paper without
stretching it).
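Condition 2 can be checked numerically at a given position $x$: approximate $f(v(x))$ by finite differences of the transformation at $\Delta x = 0$, and measure how far $f(v(x))^\top f(v(x))$ is from a scaled identity. The sketch below does this for a generic transformation $F$; the step size and the trace-based estimate of $s^2$ are illustrative choices, not taken from our experiments.

```python
import numpy as np

# Finite-difference check of Condition 2 at one position: approximate the
# D x 2 matrix f(v(x)) = dF_dx(v(x))/d(dx) at dx = 0, then measure how far
# f^T f is from s^2 I_2. Step size and the trace-based estimate of s^2 are
# illustrative choices.
def jacobian_wrt_dx(F, v, eps=1e-5):
    cols = []
    for j in range(2):
        e = np.zeros(2)
        e[j] = eps
        cols.append((F(v, e) - F(v, -e)) / (2.0 * eps))
    return np.stack(cols, axis=1)         # D x 2

def conformal_isometry_gap(F, v):
    f = jacobian_wrt_dx(F, v)
    G = f.T @ f                           # 2 x 2 metric of the local embedding
    s2 = 0.5 * np.trace(G)                # estimate of s^2
    return np.linalg.norm(G - s2 * np.eye(2))

# For F(v, dx) = v + 0.7 * Q dx with orthonormal columns of Q, the gap is ~0:
# the local embedding is exactly conformally isometric with s = 0.7.
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(16, 2)))
F_toy = lambda v, dx: v + 0.7 * (Q @ dx)
print(conformal_isometry_gap(F_toy, np.zeros(16)))
```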
[21] studied the conformal isometry property in a local polar coordinate system. In our definition
above, we use a 2D Cartesian coordinate system, which is more convenient for the non-linear recurrent
model where $\Delta x$ enters the model additively. In the context of linear recurrent networks, [21] linked
conformal isometry to the hexagon grid patterns of grid cells.
2.3 2D torus, 2D periodicity, and hexagon grid patterns
The 2D torus topology is commonly assumed a priori in the continuous attractor neural networks
(CANN) for grid cells [6, 7, 34, 1]. The torus topology has been recently supported by analyzing