
For place cells, the term receptive field (RF), or place field, may intuitively be thought of as a physical
place. In the context of vision, by contrast, we may think of RFs less spatially and more abstractly as
representing stimulus features or dimensions along which neurons may respond more or less strongly,
e.g., features such as orientation, spatial frequency, or motion (Niell & Stryker, 2008; Juavinett &
Callaway, 2015). For example, two neurons that activate simultaneously when a visual stimulus moves
to the right of the visual field may be said to share an RF of general rightward motion.
We may also think of RFs even more abstractly as dimensions in general conceptual spaces, such
as the reward–action space of a task (Constantinescu et al., 2016), visual attributes of characters or
icons (Aronov et al., 2017), olfactory space (Bao et al., 2019), the relative positions people occupy
in a social hierarchy (Park et al., 2021), and even cognition and behaviour more generally (Bellmund
et al., 2018).
In the method described in Curto et al. (2019), tools from algebra are used to extract the combinatorial
structure of neural codes. The types of neural codes under study are sets of binary vectors
$C \subset \mathbb{F}_2^n$, where there are $n$ neurons, each in state $0$ (off) or $1$ (on). The central object of this method
is the canonical form of a neural code, $\mathrm{CF}(C)$. The canonical form may be analysed topologically,
geometrically, and algebraically to infer features such as the potential convexity of the receptive
fields (RFs) which gave rise to the code, or the minimum number of dimensions those RFs must
span in real space. Such analyses are possible because $\mathrm{CF}(C)$ captures the minimal essential set
of combinatorial descriptions of all existing RF relationships implied by $C$. RF relationships
(whether and how RFs intersect or contain one another in stimulus space) are
considered to be implied by $C$ under the assumption that if two neurons activate or spike simultaneously,
they likely receive common external input in the form of common stimulus features or
common RFs. Given sufficient exploration of the stimulus space, it is possible to infer topological
features of the global stimulus space by observing only $C$ (Curto & Itskov, 2008; Mulas &
Tran, 2020). To the best of our knowledge, these methods have only been developed and used for
small examples of biological neural networks (BNNs). Here we apply them to larger BNNs and to
artificial neural networks (ANNs), by considering the co-activation of neurons during single stimulus trials.
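To make the notion of co-activation codewords concrete, the sketch below is our own minimal illustration (not the paper's actual pipeline): it binarizes a per-trial activation matrix by thresholding each neuron's response, yielding one binary codeword per trial; the distinct codewords form the code $C$. The threshold value and toy activations are illustrative assumptions.

```python
def extract_code(rates, threshold):
    """Binarize a (trials x neurons) activation matrix: a neuron is 'on' (1)
    in a trial if its activation exceeds the threshold. Returns the set of
    distinct binary codewords, i.e. the code C as a subset of F_2^n."""
    return {tuple(int(r > threshold) for r in row) for row in rates}

# Toy activations for 4 trials of n = 3 neurons (illustrative values).
rates = [
    [0.9, 0.1, 0.0],
    [0.8, 0.7, 0.0],
    [0.0, 0.6, 0.9],
    [0.9, 0.2, 0.1],  # same active set as trial 1, so same codeword
]
C = extract_code(rates, threshold=0.5)
print(sorted(C))  # → [(0, 1, 1), (1, 0, 0), (1, 1, 0)]
```

Note that repeated activation patterns collapse to a single codeword, so $|C|$ counts distinct co-activation patterns rather than trials.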
Despite the power and broad applicability of these methods (Curto & Itskov, 2008; Curto et al.,
2019; Mulas & Tran, 2020), two major problems impede their usefulness: (1) the computational time
complexity of the algorithms to generate $\mathrm{CF}(C)$ is factorial in the number of codewords, $O(nm!)$¹,
limiting their use in large, real-world datasets; and (2) there is no tolerance for noise in C, nor
consideration given towards the stochastic or probabilistic natures of neural firing. We address these
problems by: (1) introducing a novel method for improving the time complexity to quadratic in the
number of neurons, $O(n^2)$, by computing the generators of $\mathrm{CF}(C)$ and using these to answer the
same questions; and (2) using information geometry (Nakahara & Amari, 2002; Amari, 2016) to
perform hypothesis testing on the presence/absence of inferred geometric or topological properties
of the stimulus or task space. As a proof of concept, we apply these new methods to data from a
simulated BNN for spatial navigation and a simple ANN for visual classification, both of which may
contain thousands of codewords.
2 PRELIMINARIES
Before describing our own technical developments and improvements, we first outline some of the
key mathematical concepts and objects which we use and expand upon in later sections. For more
detailed information, we recommend referring to Curto & Itskov (2008); Curto et al. (2019).
2.1 COMBINATORIAL NEURAL CODES
Let $\mathbb{F}_2 = \{0, 1\}$, $[n] = \{1, 2, \ldots, n\}$, and $\mathbb{F}_2^n = \{a_1 a_2 \cdots a_n \mid a_i \in \mathbb{F}_2 \text{ for all } i\}$. A codeword is
an element of $\mathbb{F}_2^n$. For a given codeword $c = c_1 c_2 \cdots c_n$, we define its support as $\mathrm{supp}(c) = \{i \in [n] \mid c_i \neq 0\}$,
which can be interpreted as the unique set of active neurons in a discrete time bin which
corresponds to that codeword. A combinatorial neural code, or a code, is a subset of $\mathbb{F}_2^n$. The support
of a code $C$ is defined as $\mathrm{supp}(C) = \{S \subseteq [n] \mid S = \mathrm{supp}(c) \text{ for some } c \in C\}$, which can be
interpreted as all sets of active neurons represented by all corresponding codewords in $C$.
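These definitions translate directly into code. The following sketch (with a toy code of our own choosing, not data from the paper) computes the support of individual codewords and of a small code on $n = 3$ neurons:

```python
def supp(c):
    """Support of a codeword: the set of active-neuron indices
    (1-indexed, matching the convention [n] = {1, ..., n})."""
    return frozenset(i + 1 for i, ci in enumerate(c) if ci != 0)

def code_support(C):
    """Support of a code: the set of supports of all its codewords."""
    return {supp(c) for c in C}

# A toy code C in F_2^3, including the all-zero codeword.
C = {(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 1)}

print(sorted(sorted(s) for s in code_support(C)))
# → [[], [1], [1, 2], [2, 3]]
```

The all-zero codeword contributes the empty support, corresponding to a time bin in which no neuron fires; `frozenset` is used so that supports can themselves be collected into a set.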
¹ $n$ is the number of neurons and $m$ is the number of codewords. In most datasets of interest, $n \ll m$.