
information-theoretic metrics can efficiently detect their interaction in dynamical brain networks, and they are widely used in neuroscience [7]. For instance, they have been applied to quantify information encoding and decoding in neural systems [8–11], to measure visual information flow in biological neural networks [12,13], and to characterize color information processing in the neural cortex [14]. However, although functional connectivity has already become a major research topic in neuroscience [15,16], systematic studies on the information flow, or on the redundancy and synergy amongst brain regions, remain limited. One extreme type of redundancy is full synchronization, where the state of one neural signal may be used to predict the state of any other neural signal; this concept of redundancy is thus viewed as an extension of the standard notion of correlation to more than two variables [17]. Synergy, on the other hand, refers to statistical dependencies that govern the whole but not its constituent components [18]. High-order brain functions are assumed to require synergies, which provide simultaneous local independence and global cohesion, but such synergies are hindered under high-synchronization conditions such as epileptic seizures [19]. Most functional connectivity approaches to date have concentrated on pairwise relationships between two regions. The conventional measures used to estimate indirect functional connectivity among brain regions are the Pearson correlation (CC) [20] and mutual information (I) [8,21–23]. However, real brain network relationships are often complex, involving more than two regions, and the pairwise dependencies measured by correlation or mutual information cannot reflect these multivariate dependencies. Therefore, recent studies in neuroscience focus on the development of information-theoretic measures that can handle more than two regions simultaneously, such as the Total Correlation [24,25].
Total Correlation (TC) [26] (also known as multi-information [27–29]) describes the amount of dependence observed in the data and, by definition, can be applied to multiple multivariate variables. Its use to describe functional connectivity in the brain was first proposed as an empirical measure in [24], but in [25] the superiority of TC over mutual information was proved analytically. The consideration of low-level vision models allows one to derive analytical expressions for the TC as a function of the connectivity. These analytical results show that pairwise I cannot capture the effect of different intra-cortical inhibitory connections, while the TC can. Similarly, in analytical models with feedback, synergy can be shown using TC, while it is not so obvious using mutual information [25]. Moreover, these analytical results allow one to calibrate computational estimators of TC.
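For concreteness, the standard definition of TC [26] is the multivariate generalization of mutual information (see Section 2.1): given variables $X_1, \ldots, X_n$,

$TC(X_1, \ldots, X_n) = \sum_{i=1}^{n} H(X_i) - H(X_1, \ldots, X_n),$

which is non-negative, vanishes only when all the variables are independent, and coincides with the mutual information $I(X_1;X_2)$ when $n = 2$.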
In this work we build on these empirical and theoretical results [24,25] to infer, for the first time, a larger-scale (whole-brain) network based on TC. As opposed to [24,25], where the number of considered nodes was limited to the range of tens and the analysis focused on specialized subsystems, here we consider wider recordings [30,31], so we use signals coming from hundreds of nodes across the whole brain. Additionally, we apply our analysis to data of the same scale for regular and altered brains¹. We also show the possibility of using this kind of wide-range network as a biomarker. From the technical point of view, here we use Correlation Explanation (CorEx) [32,33] to estimate TC in these high-dimensional scenarios. Furthermore, graph theory and clustering [15,16] are used to represent the relationships between the considered regions.
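To give a feeling for what a TC estimator computes, the following minimal sketch (not the CorEx estimator used in this paper) exploits the fact that, under a Gaussian assumption, TC has a closed form: it equals $-\frac{1}{2}\log\det R$, where $R$ is the correlation matrix of the signals. The helper name and the toy data are our own illustration:

    import numpy as np

    def gaussian_total_correlation(x):
        # TC in nats of the columns of x (shape: n_samples x n_regions),
        # e.g. one fMRI time series per column. For a multivariate
        # Gaussian, TC = -0.5 * log det(R), with R the correlation matrix.
        r = np.corrcoef(x, rowvar=False)
        _, logdet = np.linalg.slogdet(r)   # numerically stable log-determinant
        return -0.5 * logdet

    # Toy check: a redundant triplet (third signal is a noisy copy of the
    # first) has large TC; three independent signals have TC close to 0.
    rng = np.random.default_rng(0)
    a, b = rng.standard_normal(10_000), rng.standard_normal(10_000)
    c = a + 0.1 * rng.standard_normal(10_000)
    print(gaussian_total_correlation(np.column_stack([a, b, c])))        # clearly > 0
    print(gaussian_total_correlation(rng.standard_normal((10_000, 3))))  # approx. 0

CorEx goes beyond this Gaussian shortcut: it builds a lower bound on TC through latent factors, which is what keeps the estimation tractable for the non-Gaussian, high-dimensional signals considered here.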
The rest of this paper is organized as follows: Section 2 introduces the necessary information-theoretic concepts and explains CorEx. Sections 3 and 4 present two synthetic experiments showing that the CorEx results are trustworthy. Section 5 estimates large-scale connectomes from fMRI datasets that involve more than 100 regions across the whole brain. Moreover, we show how the analysis of these large-scale networks based on TC may indicate brain alterations. Sections 6 and 7 give a general discussion and the conclusion of the paper, respectively.
2 Total Correlation as neural connectivity descriptor
2.1 Definitions and Preliminaries
Mutual Information: Given two multivariate random variables $X_1$ and $X_2$, the mutual information between them, $I(X_1;X_2)$, can be calculated as the difference between the sum of the individual entropies, $H(X_i)$, and the entropy of the variables considered jointly as a single system, $H(X_1, X_2)$ [34]:

$I(X_1;X_2) = H(X_1) + H(X_2) - H(X_1, X_2)$    (1)

where for each (multivariate) random variable $v$ the entropy is $H(v) = \langle -\log_2 p(v) \rangle$, and the brackets represent expectation values over the random variables. Mutual information can also be seen as the information shared by the two variables, or as the reduction of uncertainty in one variable given information about the other [35].
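As a quick numerical illustration of Eq. 1 (a plug-in sketch on binned data, not the estimator used later in the paper; the helper names are ours), the three entropy terms can be obtained from a 2D histogram:

    import numpy as np

    def entropy_bits(counts):
        # Shannon entropy (bits) of the empirical distribution given by counts.
        p = counts[counts > 0] / counts.sum()
        return -np.sum(p * np.log2(p))

    def mutual_information_bits(x, y, bins=32):
        # I(X;Y) = H(X) + H(Y) - H(X,Y), as in Eq. 1, on binned data.
        # Plug-in estimates are biased for small samples, but they
        # illustrate the definition.
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        h_x = entropy_bits(joint.sum(axis=1))   # marginal histogram of x
        h_y = entropy_bits(joint.sum(axis=0))   # marginal histogram of y
        return h_x + h_y - entropy_bits(joint.ravel())

    rng = np.random.default_rng(0)
    x = rng.standard_normal(50_000)
    print(mutual_information_bits(x, x + rng.standard_normal(50_000)))  # dependent: > 0
    print(mutual_information_bits(x, rng.standard_normal(50_000)))      # independent: ~ 0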
Mutual information is better than linear correlation: For Gaussian sources, mutual information reduces to linear correlation because the entropy factors in Eq. 1 just depend on $|\langle X_1 \cdot X_2^\top \rangle|$. However, for more general (non-Gaussian) sources, mutual information cannot be reduced to covariance and cross-covariance matrices. In these (more realistic)
¹ http://fcon_1000.projects.nitrc.org/indi/ACPI/html/