
Federated learning (FL) [1,2] is a distributed machine learning paradigm in
which clients collaboratively train a global model while keeping their data
local. The basic iterative steps of FL are: (i) each client trains its model on
local data; (ii) the server collects and aggregates the client models into a
global model, then delivers the global model back to the clients. The data
exchanged between clients and server consists of trained models rather than the
original data, which avoids leaking data privacy. The aggregation algorithm is
a crucial component of this process, playing an important role in unlocking the
potential of the data and improving the performance of the global model.
FedAvg [1], the pioneering work, is a simple and effective aggregation
algorithm that uses the relative sizes of the local datasets as the aggregation
weights of the local models. FedProx [3] constrains the divergence between local
and global models by modifying the local training loss. FedMA [4] matches and
averages hidden elements with similar feature-extraction signatures to
construct the shared global model in a layer-wise manner. Federated learning
has attracted attention from scholars across a growing number of research
fields.
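The FL loop and the dataset-size weighting of FedAvg described above can be sketched as follows; this is a minimal illustration with NumPy arrays standing in for model parameters, not the implementation used in this paper.

```python
import numpy as np

def fedavg(local_weights, local_sizes):
    """Aggregate client models by dataset-size-weighted averaging (FedAvg).

    local_weights: list of dicts mapping parameter name -> np.ndarray
    local_sizes:   list of local dataset sizes n_k
    """
    total = sum(local_sizes)
    return {
        name: sum((n_k / total) * w_k[name]
                  for w_k, n_k in zip(local_weights, local_sizes))
        for name in local_weights[0]
    }

# Toy example: two clients, one parameter tensor each.
clients = [{"w": np.array([0.0, 0.0])}, {"w": np.array([1.0, 1.0])}]
sizes = [1, 3]  # the second client holds 3x more data
global_model = fedavg(clients, sizes)
# global "w" = 0.25*[0, 0] + 0.75*[1, 1] = [0.75, 0.75]
```

One round of FL then consists of each client training locally, the server calling an aggregation function like this, and the result being broadcast back to the clients.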
In medical image segmentation, since [5] and [6] explored the feasibility of
FL for brain tumor segmentation (BraTS), FL for medical image segmentation has
been in full swing. Liu et al. [7] proposed FedDG, which makes the model
generalize to unseen target domains via episodic learning in a continuous
frequency space for retinal fundus image segmentation. Xia et al. [8] proposed
Auto-FedAvg, in which the aggregation weights are dynamically adjusted
according to the data distribution, to accelerate training and achieve better
performance on COVID-19 lesion segmentation. Zhang et al. [9] proposed
SplitAVG, which counters the performance drops caused by data heterogeneity in
FL through network-splitting and feature-map concatenation strategies on the
BraTS task. Moreover, the first computational competition on federated
learning, the Federated Tumor Segmentation (FeTS) Challenge¹ [10], was held to
measure the performance of different aggregation algorithms on glioma
segmentation [11,12,13,14]. Mächler et al. [15] proposed FedCostWAvg, which
achieves a notable improvement over FedAvg by incorporating the decrease of the
cost function during the last round, and won the challenge. However, most of
these methods consider only a single granularity or add regularization terms to
the aggregation method, without taking finer-granularity factors into account,
which limits the performance of the global model.
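The FedCostWAvg weighting just mentioned can be sketched as follows. This is our reading of [15]: the mixing coefficient `alpha` and the exact normalization are assumptions to be checked against that paper, not a reference implementation.

```python
def fedcostwavg_weights(sizes, cost_prev, cost_now, alpha=0.5):
    """Sketch of FedCostWAvg-style weights: mix each client's dataset-size
    share with the relative decrease of its cost function in the last round.
    """
    n = sum(sizes)
    # Ratio > 1 means the client's cost decreased during the last round.
    k = [cp / cn for cp, cn in zip(cost_prev, cost_now)]
    K = sum(k)
    return [alpha * (n_j / n) + (1 - alpha) * (k_j / K)
            for n_j, k_j in zip(sizes, k)]

# Equal dataset sizes, but the first client's loss halved last round,
# so it receives a larger aggregation weight.
w = fedcostwavg_weights([100, 100], cost_prev=[1.0, 1.0], cost_now=[0.5, 1.0])
```

The design intuition is that clients making faster training progress are rewarded with more influence on the global model, on top of the plain FedAvg size share.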
Different from the above methods, in this paper we propose a novel aggregation
strategy, FedGraph, which explores the aggregation algorithm of FL from the
topological perspective of neural networks. After the server collects the
local models, FedGraph explores the internal correlations between local models
through three aspects, from coarse to fine: the proportion of each local
dataset size, the topological structure of the model graphs, and the model
weights. The dataset-size proportion factor is similar to FedAvg. We compute
the topological correlation by mapping the local models into topological
graphs. Meanwhile, the finer-grained correlations between model weights are
also taken into account.
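As a purely hypothetical sketch of how three per-client factors at different granularities could be mixed into aggregation weights, consider the following; the function name, the mixing coefficients `gammas`, and the normalization are all illustrative assumptions, not the formulation proposed in this paper.

```python
import numpy as np

def combine_granularities(size_w, topo_w, param_w, gammas=(1/3, 1/3, 1/3)):
    """Combine three per-client factors (dataset-size share, topology
    similarity, parameter similarity) into normalized aggregation weights.
    Each factor is normalized to sum to 1, then mixed with coefficients
    `gammas` (hypothetical; chosen here for illustration only).
    """
    factors = [np.asarray(size_w, dtype=float),
               np.asarray(topo_w, dtype=float),
               np.asarray(param_w, dtype=float)]
    mixed = sum(g * f / f.sum() for g, f in zip(gammas, factors))
    return mixed / mixed.sum()  # final weights sum to 1

# Two clients: the second has more data and higher similarity scores.
w = combine_granularities([1, 3], [0.5, 0.5], [0.2, 0.8])
```

The resulting weights can then be used in place of the plain size proportions of FedAvg when the server averages the local models.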
Through the weighted combination of three different granularity factors from
¹ https://fets-ai.github.io/Challenge/