Multi-modal Dynamic Graph Network: Coupling
Structural and Functional Connectome for Disease
Diagnosis and Classification
1st Yanwu Yang
Harbin Institute of Technology
at Shenzhen
Shenzhen, China
Peng Cheng Laboratory
Shenzhen, China
2nd Xutao Guo
Harbin Institute of Technology
at Shenzhen
Shenzhen, China
Peng Cheng Laboratory
Shenzhen, China
3rd Zhikai Chang
Harbin Institute of Technology
at Shenzhen
Shenzhen, China
4th Chenfei Ye
Harbin Institute of Technology
at Shenzhen
Shenzhen, China
5th Yang Xiang*
Peng Cheng Laboratory
Shenzhen, China
6th Ting Ma*
Harbin Institute of Technology
at Shenzhen
Shenzhen, China
Peng Cheng Laboratory
Shenzhen, China
* Corresponding authors.
Abstract—Multi-modal neuroimaging technology has greatly improved diagnostic efficiency and accuracy by providing complementary information for discovering objective disease biomarkers. Conventional deep learning methods, e.g. convolutional neural networks, overlook relationships between nodes and fail to capture topological properties in graphs. Graph neural networks have proven to be of great importance in modeling brain connectome networks and relating disease-specific patterns. However, most existing graph methods explicitly require known graph structures, which are not available for the sophisticated brain system. Especially in heterogeneous multi-modal brain networks, it remains a great challenge to model interactions among brain regions while accounting for inter-modal dependencies. In this study, we propose a Multi-modal Dynamic Graph Convolution Network (MDGCN) for structural and functional brain network learning. Our method benefits from modeling inter-modal representations and relating attentive multi-modal associations into dynamic graphs with a compositional correspondence matrix. Moreover, a bilateral graph convolution layer is proposed to aggregate multi-modal representations in terms of multi-modal associations. Extensive experiments on three datasets demonstrate the superiority of our proposed method in disease classification, with accuracies of 90.4%, 85.9%, and 98.3% in predicting Mild Cognitive Impairment (MCI), Parkinson’s disease (PD), and schizophrenia (SCHZ), respectively. Furthermore, our statistical evaluations of the correspondence matrix are highly consistent with previously reported biomarker evidence.
Index Terms—Graph Neural Network, Multi-modal Graph
Network, Diagnosis, Dynamic Graph
I. INTRODUCTION
Recently, computer-aided diagnosis technologies using advanced neuroimaging developments have been widely adopted in medical scenarios, e.g., disease diagnosis and medical image segmentation. Among these neuroimaging tools, functional
Magnetic Resonance Imaging (fMRI) and Diffusion Tensor
Imaging (DTI) have become promising candidates for brain
study. Functional MRI is a stimulus-free acquisition used
to track changes in co-activation across brain regions. DTI
captures the directional diffusion of water molecules as a proxy
for structural connectivity. The derived functional and structural connectivity makes it feasible to model the brain as a network by representing brain parcellations along with their structural or functional connections. The brain connectome provides a more holistic view by modeling the entire human brain, and it characterizes individual behavior, cognition, and mental health [1]. There is mounting evidence that functional and structural connectivity can be used to identify predictive biomarkers for brain disorders such as Alzheimer’s disease (AD), schizophrenia (SCZ), and Parkinson’s disease (PD) [2]–[4].
Medical image-based diagnosis is a challenging task due
to the sophisticated structure of brain systems and subtle
lesions, which might be overlooked by medical experts [5].
Processing neuroimages from multiple modalities makes it feasible to assess and develop distinctive biomarkers from complementary perspectives. Previous studies link functional signals with structural pathways and suggest that functional connectivity and structural connectivity might mediate each other [6]–[8]. Recently, state-of-the-art graph neural networks (GNNs) have achieved promising performance in learning from multi-modal graph-structured data [8]–[10]. However, most existing GNNs build graphs on the originally derived connectivity and fail to sufficiently model sophisticated associations among nodes. This issue is aggravated further when modeling multi-modal brain networks, since heterogeneous structures and representations exist across modalities. Most existing studies ignore these issues and achieve sub-optimal results.
To this end, we propose a Multi-modal Dynamic Graph Convolution Network (MDGCN) to model multi-modal complementary associations through dynamic graphs. Our network allows for tighter coupling of context between modalities by representing the functional and structural connectomes dynamically and providing a compositional space for reasoning. Specifically, we first parse both the functional and structural connectomes into dynamic graphs with embedded representations as nodes. A correspondence factor matrix is introduced to capture the correspondence between each pair of nodes across modalities, and it serves as the adjacency matrix. Multi-modal representations are then aggregated by a Bilateral Graph Convolution (BGC) layer for complementary message passing. Extensive experiments on three datasets demonstrate that our proposed method outperforms other baselines in predicting Mild Cognitive Impairment (MCI), Parkinson’s disease (PD), and schizophrenia (SCHZ), with accuracies of 90.4%, 85.9%, and 98.3%, respectively.
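For illustration only, the following minimal PyTorch sketch captures this idea under our own simplifying assumptions; it is not the authors' released implementation, and all module and variable names (e.g., MDGCNSketch, BilateralGraphConv, embed_f) are hypothetical.

```python
import torch
import torch.nn as nn


class BilateralGraphConv(nn.Module):
    """Hypothetical bilateral aggregation: each modality gathers messages
    from the other modality, weighted by a correspondence matrix."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj_f = nn.Linear(dim, dim)
        self.proj_s = nn.Linear(dim, dim)

    def forward(self, h_f, h_s, corr):
        # corr: (M, M) correspondence between functional and structural nodes
        h_f_new = torch.relu(self.proj_f(h_f + corr @ h_s))
        h_s_new = torch.relu(self.proj_s(h_s + corr.T @ h_f))
        return h_f_new, h_s_new


class MDGCNSketch(nn.Module):
    """Minimal sketch of the multi-modal dynamic-graph idea (not the paper's code)."""

    def __init__(self, num_regions: int, dim: int, num_classes: int):
        super().__init__()
        self.embed_f = nn.Linear(num_regions, dim)  # functional node embedding
        self.embed_s = nn.Linear(num_regions, dim)  # structural node embedding
        self.bgc = BilateralGraphConv(dim)
        self.cls = nn.Linear(2 * dim, num_classes)

    def forward(self, x_f, x_s):
        # x_f, x_s: (M, M) functional / structural connectivity matrices
        h_f, h_s = self.embed_f(x_f), self.embed_s(x_s)
        # dynamic correspondence between every pair of nodes across modalities
        corr = torch.softmax(h_f @ h_s.T / h_f.shape[-1] ** 0.5, dim=-1)
        h_f, h_s = self.bgc(h_f, h_s, corr)
        readout = torch.cat([h_f.mean(dim=0), h_s.mean(dim=0)], dim=-1)
        return self.cls(readout)


# Example: 90 regions, 64-dimensional embeddings, binary diagnosis
model = MDGCNSketch(num_regions=90, dim=64, num_classes=2)
logits = model(torch.randn(90, 90), torch.randn(90, 90))
```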
The rest of the paper is structured as follows. Section II reviews related methods on connectome studies and multi-modal models. Section III introduces the details of the proposed model. Section IV describes the experiments of the proposed model on disease classification over three datasets and reports the experimental results. Section V concludes the work.
II. RELATED WORKS
A. Brain connectome network study
With the flexibility to uncover complex biological mechanisms using rs-fMRI and DTI, deep learning methods have been widely adopted to examine and analyze connectome patterns. Convolutional neural networks (CNNs) and graph neural networks (GNNs) have become useful tools for brain connectome embedding, where high-dimensional neuroimaging features are embedded into a low-dimensional space that preserves their context while capturing topological attributes. BrainNetCNN was proposed to treat brain connectome networks as grid-like data and measure topological locality in the connectome [11], and it has achieved promising performance in disease diagnosis and phenotype prediction.
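For intuition, treating an M x M connectivity matrix as grid-like input amounts to feeding it to an image-style CNN. The sketch below is a minimal illustration of that idea only; it is not the actual BrainNetCNN architecture, which replaces square kernels with specialized edge-to-edge and edge-to-node filters, and the region count M = 90 is assumed.

```python
import torch
import torch.nn as nn

# Plain 2-D CNN over an M x M connectivity matrix, treated as a one-channel image.
# This is NOT BrainNetCNN itself, which uses cross-shaped edge-to-edge /
# edge-to-node filters instead of square kernels.
M = 90  # number of brain regions (assumed)
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),  # binary diagnosis logits
)
connectome = torch.randn(1, 1, M, M)  # one subject's connectivity matrix
logits = model(connectome)            # shape: (1, 2)
```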
Apart from convolutional neural networks, graph neural networks retain a state that represents information about each node's neighbors and provide a powerful way to explore dependencies between nodes. However, applying a graph network directly to the brain connectome is problematic. On the one hand, brain networks have sophisticated and non-linear structures. For example, most existing methods use the derived functional connectivity, which is a linear measure between two brain regions, as the adjacency matrix; such derived linear connectivities fail to model complex associations between brain regions. On the other hand, graph convolution networks explicitly require a known graph structure, which is not available for the brain connectome. Several strategies have been proposed to tackle the unknown-structure issue [12]–[14]. In particular, dynamic graph convolution methods have been proposed to model graph structures adaptively, characterizing intrinsic brain connectome representations and achieving promising prediction performance [9], [15]. Nevertheless, there is still a lack of studies tackling multi-modal connectome graphs.
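To make the criticism concrete: the "linear" connectivity in question is typically the Pearson correlation between regional time series, thresholded into an adjacency matrix. A minimal sketch, assuming input time series of shape (regions, timepoints) and a hypothetical threshold value:

```python
import numpy as np

def fc_adjacency(timeseries: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Pearson-correlation functional connectivity, thresholded into a binary
    adjacency matrix, a common and purely linear construction."""
    fc = np.corrcoef(timeseries)   # (M, M) correlation between region time series
    np.fill_diagonal(fc, 0.0)      # drop self-connections
    return (np.abs(fc) > threshold).astype(float)

# Example: 90 regions, 200 time points of simulated BOLD signal
adj = fc_adjacency(np.random.randn(90, 200))
```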
B. Multi-modal connectome learning
Existing multi-modal connectome learning methods can be categorized into two classes: feature learning methods and deep learning methods. Compared with feature learning methods [16]–[18], which leverage feature selection to identify disease-related features, deep learning methods are able to capture intrinsic, meaningful representations and achieve better performance. [19] devised a calibration mechanism to fuse fMRI and DTI information into edges. [20] proposed to perform a two-layer convolution on the fMRI and DTI data simultaneously. [8] regularizes convolution on functional connectivity with the structural graph Laplacian. However, most of these studies lack the ability to sufficiently model complementary associations between modalities, since they do not perform joint compositional reasoning over both functional and structural connectome networks.
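As a hedged illustration of this kind of structural regularization (the exact formulation in [8] may differ), functional node features can be penalized for varying sharply across structurally connected regions via the normalized graph Laplacian of the structural network:

```python
import numpy as np

def normalized_laplacian(struct_adj: np.ndarray) -> np.ndarray:
    """L = I - D^{-1/2} A D^{-1/2} for a structural adjacency matrix A."""
    deg = struct_adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-8)))
    return np.eye(struct_adj.shape[0]) - d_inv_sqrt @ struct_adj @ d_inv_sqrt

def laplacian_smoothness(features: np.ndarray, struct_adj: np.ndarray) -> float:
    """Penalty tr(H^T L H): large when node features vary sharply across
    structurally connected regions; added to the loss as a regularizer."""
    L = normalized_laplacian(struct_adj)
    return float(np.trace(features.T @ L @ features))

# Example: random symmetric structural network and 16-dimensional node features
A = np.random.rand(90, 90)
A = ((A + A.T) / 2 > 0.8).astype(float)
penalty = laplacian_smoothness(np.random.randn(90, 16), A)
```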
III. METHOD
The proposed Multi-modal Dynamic Graph Convolution Network (MDGCN) aims at parsing multi-modal representations into dynamic graphs and performing graph aggregation for message passing. In this section, we first introduce the brain graph definition and then detail the proposed method.
A. Preliminaries
Brain network graph: The brain networks derived from neuroimages are usually symmetric positive definite (SPD) matrices $X \in \mathbb{R}^{M \times M}$, where $M$ denotes the number of brain regions. Each element $x_{i,j}$ denotes a co-variance or connectivity strength between two regions. The brain network is usually formulated as an undirected graph $G = (V, E, H)$, where $V$ is a finite set of vertices with $|V| = M$ and $E \in \mathbb{R}^{M \times M}$ denotes the edges in the graph. The nodes and edges are represented by the derived SPD matrix $X$. For each vertex $v_i$, the node feature vector $h_i$ is constructed from the $i$-th row (or column) of the SPD matrix, $h_i = \{x_{i,k} \mid k = 1, 2, \ldots, M\}$. The edges are represented by the matrix directly, with each element assigned by $e_{i,j} = x_{i,j}$.
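Translating this definition directly into code, a minimal sketch (function name illustrative) builds node features from the rows of the connectivity matrix and edges from its entries:

```python
import numpy as np

def build_brain_graph(X: np.ndarray):
    """Build (node_features, edges) from an M x M connectivity matrix X:
    node i's feature vector h_i is the i-th row of X, and e_ij = x_ij."""
    assert X.shape[0] == X.shape[1], "connectivity matrix must be square"
    node_features = X.copy()   # (M, M): row i is h_i
    edges = X.copy()           # (M, M): e_ij = x_ij
    return node_features, edges
```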
Multi-modal brain graph: The multi-modal brain graphs are constructed from the functional and structural brain networks derived from fMRI and DTI respectively. An input $\hat{G}$ is expressed by a tuple of graphs as $\hat{G} = \{G_s, G_f\}$, where $G_s$ and $G_f$ denote the structural and functional brain network graphs respectively. Formally, given a set of graphs $\{\hat{G}_1, \hat{G}_2, \ldots, \hat{G}_N\}$ with a few labeled graph instances, the aim of the study is to decide the state of the unlabeled graphs as a graph classification task.
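Concretely, each sample is therefore a pair of graphs built from one subject's fMRI and DTI connectivity matrices, together with an optional diagnostic label; an illustrative container with hypothetical field names is sketched below.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class MultiModalBrainGraph:
    """One subject: functional graph G_f and structural graph G_s, each stored
    as its M x M connectivity matrix, plus an optional diagnostic label."""
    G_f: np.ndarray              # functional connectivity (fMRI-derived)
    G_s: np.ndarray              # structural connectivity (DTI-derived)
    label: Optional[int] = None  # None for unlabeled subjects

# Graph classification: predict `label` for the unlabeled tuples in the set.
dataset = [MultiModalBrainGraph(np.eye(90), np.eye(90), label=1)]
```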