EFFICIENT, PROBABILISTIC ANALYSIS OF COMBINATORIAL NEURAL CODES
Thomas F. Burns
Neural Coding and Brain Computing Unit
OIST Graduate University, Okinawa, Japan
thomas.burns@oist.jp
Irwansyah
Department of Mathematics
University of Mataram
West Nusa Tenggara, Indonesia
irw@unram.ac.id
ABSTRACT
Artificial and biological neural networks (ANNs and BNNs) can encode inputs in the form of combinations of individual neurons' activities. These combinatorial neural codes present a computational challenge for direct and efficient analysis due to their high dimensionality and often large volumes of data. Here we improve the computational complexity – from factorial to quadratic time – of direct algebraic methods previously applied to small examples and apply them to large neural codes generated by experiments. These methods provide a novel and efficient way of probing algebraic, geometric, and topological characteristics of combinatorial neural codes and provide insights into how such characteristics are related to learning and experience in neural networks. We introduce a procedure to perform hypothesis testing on the intrinsic features of neural codes using information geometry. We then apply these methods to neural activities from an ANN for image classification and a BNN for 2D navigation to, without observing any inputs or outputs, estimate the structure and dimensionality of the stimulus or task space. Additionally, we demonstrate how an ANN varies its internal representations across network depth and during learning.
1 INTRODUCTION
To understand the world around them, organisms' biological neural networks (BNNs) encode information about their environment in the dynamics of spikes varying over time and space. Artificial neural networks (ANNs) use similar principles, except instead of transmitting spikes they usually transmit a real-valued number in the range of [0, 1] and their dynamics are typically advanced in a step-wise, discrete manner. Both BNNs and ANNs adjust their internal structures, e.g., connection strengths between neurons, to improve their performance in learned tasks. This leads them to encode input data into internal representations, which they then transform into task-relevant outputs, e.g., motor commands. Combinatorial neural coding schemes, i.e., schemes which encode information in the collective activity of neurons (also called 'population coding'), are widespread in BNNs (Averbeck et al., 2006; Osborne et al., 2008; Schneidman et al., 2011; Froudarakis et al., 2014; Bush et al., 2015; Stevens, 2018; Beyeler et al., 2019; Villafranca-Faus et al., 2021; Burns et al., 2022; Hannagan et al., 2021) and have long been utilized in ANNs, e.g., in associative memory networks (Little, 1974; Hopfield, 1982; Tsodyks & Feigel'man, 1988; Adachi & Aihara, 1997; Krotov & Hopfield, 2016).
Advances in mathematical neuroscience (Curto & Itskov, 2008; Curto et al., 2019) have led to the development of analyses designed to understand the combinatorial properties of neural codes and their mapping to the stimulus space. Such analyses were initially inspired by the combinatorial coding seen in place cells (Moser et al., 2008), where neurons represent physical space in the form of ensemble and individual activity (Brown & Alex, 2006; Fenton et al., 2008). Place fields, the physical spatial areas encoded by place cells, can be arranged such that they span multiple spatial dimensions, e.g., 3D navigation space in bats (Yartsev & Ulanovsky, 2013). They can also encode for 'social place' (Omer et al., 2018), the location of conspecifics. Just as these spatial and social dimensions of place (external stimuli) may be represented by combinatorial coding, so too may other dimensions in external stimuli, such as in vision (Fujii & Ito, 1996; Panzeri & Schultz, 2001; Averbeck et al., 2006; Froudarakis et al., 2014; Fetz, 1997).
In place cells, the term receptive field (RF) or place field may intuitively be thought of as a physical place. In the context of vision, for example, we may think of RFs less spatially and more abstractly as representing stimulus features or dimensions along which neurons may respond more or less strongly, e.g., features such as orientation, spatial frequency, or motion (Niell & Stryker, 2008; Juavinett & Callaway, 2015). Two neurons which become activated simultaneously upon a visual stimulus moving to the right of the visual field may be said to share the RF of general rightward motion, for example. We may also think of RFs even more abstractly as dimensions in general conceptual spaces, such as the reward–action space of a task (Constantinescu et al., 2016), visual attributes of characters or icons (Aronov et al., 2017), olfactory space (Bao et al., 2019), the relative positions people occupy in a social hierarchy (Park et al., 2021), and even cognition and behaviour more generally (Bellmund et al., 2018).
In the method described in Curto et al. (2019), tools from algebra are used to extract the combinatorial structure of neural codes. The types of neural codes under study are sets of binary vectors $C \subseteq \mathbb{F}_2^n$, where there are $n$ neurons in states $0$ (off) and $1$ (on). The ultimate structure of this method is the canonical form of a neural code, $CF(C)$. The canonical form may be analysed topologically, geometrically, and algebraically to infer features such as the potential convexity of the receptive fields (RFs) which gave rise to the code, or the minimum number of dimensions those RFs must span in real space. Such analyses are possible because $CF(C)$ captures the minimal essential set of combinatorial descriptions which describe all existing RF relationships implied by $C$. RF relationships (whether and how RFs intersect or are contained by one another in stimulus space) are considered to be implied by $C$ by assuming that if two neurons become activated or spike simultaneously, they likely receive common external input in the form of common stimulus features or common RFs. Given sufficient exploration of the stimulus space, it is possible to infer topological features of the global stimulus space by only observing $C$ (Curto & Itskov, 2008; Mulas & Tran, 2020). To the best of our knowledge, these methods have only been developed and used for small examples of BNNs. Here we apply them to larger BNNs and to ANNs (by considering the co-activation of neurons during single stimulus trials).
Despite the power and broad applicability of these methods (Curto & Itskov, 2008; Curto et al., 2019; Mulas & Tran, 2020), two major problems impede their usefulness: (1) the computational time complexity of the algorithms to generate $CF(C)$ is factorial in the number of codewords, $O(n \cdot m!)$, where $n$ is the number of neurons and $m$ is the number of codewords (in most datasets of interest, $n \ll m$), limiting their use in large, real-world datasets; and (2) there is no tolerance for noise in $C$, nor consideration given towards the stochastic or probabilistic natures of neural firing. We address these problems by: (1) introducing a novel method for improving the time complexity to quadratic in the number of neurons, $O(n^2)$, by computing the generators of $CF(C)$ and using these to answer the same questions; and (2) using information geometry (Nakahara & Amari, 2002; Amari, 2016) to perform hypothesis testing on the presence/absence of inferred geometric or topological properties of the stimulus or task space. As a proof of concept, we apply these new methods to data from a simulated BNN for spatial navigation and a simple ANN for visual classification, both of which may contain thousands of codewords.
2 PRELIMINARIES
Before describing our own technical developments and improvements, we first outline some of the
key mathematical concepts and objects which we use and expand upon in later sections. For more
detailed information, we recommend referring to Curto & Itskov (2008); Curto et al. (2019).
2.1 COMBINATORIAL NEURAL CODES
Let $\mathbb{F}_2 = \{0, 1\}$, $[n] = \{1, 2, \ldots, n\}$, and $\mathbb{F}_2^n = \{a_1 a_2 \cdots a_n \mid a_i \in \mathbb{F}_2 \text{ for all } i\}$. A codeword is an element of $\mathbb{F}_2^n$. For a given codeword $c = c_1 c_2 \cdots c_n$, we define its support as $\mathrm{supp}(c) = \{i \in [n] \mid c_i \neq 0\}$, which can be interpreted as the unique set of active neurons in a discrete time bin which corresponds to that codeword. A combinatorial neural code, or a code, is a subset of $\mathbb{F}_2^n$. The support of a code $C$ is defined as $\mathrm{supp}(C) = \{S \subseteq [n] \mid S = \mathrm{supp}(c) \text{ for some } c \in C\}$, which can be interpreted as the collection of all sets of active neurons represented by the codewords in $C$.
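To ground this notation, the following minimal sketch (our own illustration, not code accompanying the paper) computes $\mathrm{supp}(c)$ and $\mathrm{supp}(C)$ for a code stored as binary row vectors; neurons are 0-indexed for programming convenience.

```python
import numpy as np

def supp(c):
    """supp(c): the set of active-neuron indices of codeword c (0-indexed)."""
    return frozenset(np.flatnonzero(c))

def code_support(C):
    """supp(C): the set of all supports realised by codewords in C."""
    return {supp(c) for c in C}

# Example: three codewords over n = 4 neurons.
C = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1]])
print(code_support(C))
# {frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 3})}
```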
Let $\Delta$ be a subset of $2^{[n]}$. The subset $\Delta$ is an abstract simplicial complex if for any $S \in \Delta$ and any $S' \subseteq S$, we have $S' \in \Delta$. In other words, $\Delta \subseteq 2^{[n]}$ is an abstract simplicial complex if it is closed under inclusion. So, the simplicial complex for a code $C$ can be defined as $\Delta(C) = \{S \subseteq [n] \mid S \subseteq \mathrm{supp}(c) \text{ for some } c \in C\}$.
A set $S$ in a simplicial complex is referred to as an $(|S| - 1)$-simplex. For instance, a set with cardinality 1 is called a 0-simplex (geometrically, a point), a set with cardinality 2 is called a 1-simplex (geometrically, an edge), and so on. Let $S$ be an $m$-simplex in $\Delta$. Any $S' \subseteq S$ is called a face of $S$.
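The definition of $\Delta(C)$ can be made concrete in the same style. The sketch below (again our own illustration) closes the codeword supports under inclusion by enumerating all subsets of each support; this enumeration is exponential in the support size, so it illustrates the definition rather than a scalable computation.

```python
from itertools import combinations

def simplicial_complex(supports):
    """Delta(C) = { S : S is a subset of supp(c) for some c in C }."""
    delta = set()
    for S in supports:
        for k in range(len(S) + 1):
            for face in combinations(sorted(S), k):
                delta.add(frozenset(face))
    return delta

delta = simplicial_complex({frozenset({0, 1}), frozenset({1, 2})})
# A set S with |S| = m + 1 is an m-simplex, e.g. frozenset({0, 1}) is a 1-simplex.
print(sorted(map(sorted, delta), key=len))
# [[], [0], [1], [2], [0, 1], [1, 2]]  (up to ordering within each length)
```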
2.2 SIMPLICIAL COMPLEXES AND TOPOLOGY
Let $C \subseteq \mathbb{F}_2^n$ be a code and $\Delta(C)$ be the corresponding simplicial complex of $C$. From now on, we will use $\Delta$ to denote the corresponding simplicial complex of a code $C$. Define $\Delta_m$ as the set of $m$-simplices in $\Delta$. Define
$$C_m = \left\{ \sum_{S \in \Delta_m} \alpha_S S \;\middle|\; \alpha_S \in \mathbb{F}_2,\; S \in \Delta_m \right\}.$$
The set $C_m$ forms a vector space over $\mathbb{F}_2$ whose basis elements are all the $m$-simplices in $\Delta_m$. Now, define the chain complex $C_\bullet(\Delta, \mathbb{F}_2)$ to be the sequence $\{C_m\}_{m \geq 0}$. For any $m \geq 1$, define a linear transformation $\partial_m : C_m \to C_{m-1}$, where for any $\sigma \in \Delta_m$, $\partial_m(\sigma) = \sum_{i=0}^{m} \sigma_i$, with $\sigma_i \in \Delta_{m-1}$ a face of $\sigma$, for all $i = 0, \ldots, m$. Moreover, the map $\partial_m$ can be extended linearly to all elements in $C_m$ as follows:
$$\partial_m\!\left( \sum_{S \in \Delta_m} \alpha_S S \right) = \sum_{S \in \Delta_m} \alpha_S \, \partial_m(S).$$
Define the $m$-th mod-2 homology group of $\Delta$ as
$$H_m(\Delta, \mathbb{F}_2) = \frac{\mathrm{Ker}(\partial_m)}{\mathrm{Im}(\partial_{m+1})}$$
for all $m \geq 1$, and
$$H_0(\Delta, \mathbb{F}_2) = \frac{C_0}{\mathrm{Im}(\partial_1)}.$$
Note that $H_m(\Delta, \mathbb{F}_2)$ is also a vector space over $\mathbb{F}_2$ for all $m \geq 0$. So, the mod-2 $m$-th Betti number $\beta_m(\Delta)$ of a simplicial complex $\Delta$ is the dimension of $H_m(\Delta, \mathbb{F}_2)$; it gives the number of $m$-dimensional holes in the geometric realisation of $\Delta$.
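Because everything above is over $\mathbb{F}_2$, the Betti numbers reduce to rank computations on binary boundary matrices: $\beta_m = \dim \mathrm{Ker}(\partial_m) - \mathrm{rank}(\partial_{m+1})$, with $\mathrm{Ker}(\partial_0) = C_0$. The sketch below (our own illustration, assuming the complex is represented as a set of frozensets as in the previous sketch) builds each $\partial_m$ and computes ranks by Gaussian elimination over $\mathbb{F}_2$.

```python
import numpy as np

def rank_gf2(A):
    """Rank of a binary matrix over F_2 via Gaussian elimination."""
    A = (A % 2).astype(np.uint8)
    n_rows, n_cols = A.shape
    rank = 0
    for col in range(n_cols):
        pivot = next((r for r in range(rank, n_rows) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]        # move the pivot row up
        for r in range(n_rows):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]                    # eliminate mod 2
        rank += 1
    return rank

def betti_numbers(delta, max_dim):
    """beta_0, ..., beta_max_dim of a complex given as a set of frozensets."""
    # Group simplices by dimension m (an m-simplex has cardinality m + 1).
    simplices = {m: sorted(sorted(S) for S in delta if len(S) == m + 1)
                 for m in range(max_dim + 2)}
    ranks = {}
    for m in range(1, max_dim + 2):
        rows = {tuple(S): i for i, S in enumerate(simplices[m - 1])}
        cols = simplices[m]
        if not rows or not cols:
            ranks[m] = 0
            continue
        D = np.zeros((len(rows), len(cols)), dtype=np.uint8)
        for j, S in enumerate(cols):
            for i in range(len(S)):                # the m + 1 faces of S
                D[rows[tuple(S[:i] + S[i + 1:])], j] = 1
        ranks[m] = rank_gf2(D)
    # beta_m = dim Ker(boundary_m) - rank(boundary_{m+1}).
    return [len(simplices[m]) - ranks.get(m, 0) - ranks[m + 1]
            for m in range(max_dim + 1)]

# Hollow triangle (a discrete circle): one component, one 1-dimensional hole.
hollow = {frozenset(s) for s in [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]}
print(betti_numbers(hollow, max_dim=1))  # [1, 1]
```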
2.3 CANONICAL FORM
Let $\sigma$ and $\tau$ be subsets of $[n]$, where $\sigma \cap \tau = \emptyset$. A polynomial of the form $\prod_{i \in \sigma} x_i \prod_{j \in \tau} (1 - x_j) \in \mathbb{F}_2[x_1, \ldots, x_n]$ is called a pseudo-monomial. In a given ideal $J \subseteq \mathbb{F}_2[x_1, \ldots, x_n]$, a pseudo-monomial $f$ in $J$ is said to be minimal if there is no pseudo-monomial $g$ in $J$ with $\deg(g) < \deg(f)$ such that $f = gh$ for some $h \in \mathbb{F}_2[x_1, \ldots, x_n]$. For a given code $C \subseteq \mathbb{F}_2^n$, we can define a neural ideal related to $C$ as $J_C = \langle \rho_{c'} \mid c' \in \mathbb{F}_2^n \setminus C \rangle$, where $\rho_{c'}$ is the pseudo-monomial $\prod_{i \in \mathrm{supp}(c')} x_i \prod_{j \notin \mathrm{supp}(c')} (1 - x_j)$. The set of all minimal pseudo-monomials in $J_C$, denoted by $CF(J_C)$ or simply $CF(C)$, is called the canonical form of $J_C$. Moreover, it can be shown that $J_C = \langle CF(C) \rangle$. Therefore, the canonical form $CF(C)$ gives a simple way to infer the RF relationships implied by all codewords in $C$. One way to calculate $CF(C)$ is by using a recursive algorithm described in Curto et al. (2019). For a code $C = \{c_1, \ldots, c_{|C|}\}$, this algorithm works by successively constructing the canonical forms $CF(\emptyset), CF(\{c_1\}), CF(\{c_1, c_2\}), \ldots, CF(C)$. At each stage, the algorithm evaluates polynomials, checks divisibility conditions, and adds or removes polynomials from the intermediate canonical form.
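To illustrate the objects involved (though not the recursive algorithm itself), the sketch below (our own illustration; the function names are hypothetical) represents a pseudo-monomial by its index pair $(\sigma, \tau)$, enumerates the generators $\rho_{c'}$ of $J_C$ directly from the definition, and implements the divisibility test used when checking minimality. Note that this direct enumeration visits all $2^n - |C|$ non-codewords and quickly becomes impractical as $n$ grows.

```python
from itertools import product

def neural_ideal_generators(C, n):
    """rho_{c'} for every non-codeword c' in F_2^n \\ C, as (sigma, tau) pairs."""
    codewords = {tuple(c) for c in C}
    gens = []
    for c in product((0, 1), repeat=n):
        if c not in codewords:
            sigma = frozenset(i for i in range(n) if c[i] == 1)
            tau = frozenset(i for i in range(n) if c[i] == 0)
            gens.append((sigma, tau))  # encodes prod_{i in sigma} x_i * prod_{j in tau} (1 - x_j)
    return gens

def divides(f, g):
    """Standard divisibility test for pseudo-monomials f = (sigma_f, tau_f),
    g = (sigma_g, tau_g): f divides g iff both index sets nest."""
    return f[0] <= g[0] and f[1] <= g[1]

# Example over n = 3 neurons: 2^3 - |C| = 4 generators for J_C.
C = [(1, 1, 0), (0, 1, 1), (1, 0, 0), (0, 0, 0)]
gens = neural_ideal_generators(C, n=3)
print(len(gens))  # 4
# e.g. c' = (1, 1, 1) yields sigma = {0, 1, 2}, tau = {}, i.e. x_0 * x_1 * x_2.
```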
3 METHODS
Our main methodological contributions are: (1) improving the computational complexity of the analyses relying on computing $CF(C)$ (see Algorithm 1); and (2) using information geometry to test whether identified algebraic or topological features are statistically significant.