TPGNN: Learning High-Order Information in
Dynamic Graphs via Temporal Propagation
Zehong Wang
School of Mathematics
University of Leeds
Leeds, United Kingdom
mm22zw@leeds.ac.uk
Qi Li
Department of Computer Science
and Engineering, Shaoxing University
Shaoxing, China
liqi0713@foxmail.com
Donghua Yu
Department of Computer Science
and Engineering, Shaoxing University
Shaoxing, China
donghuayu163@163.com
Abstract—Temporal graphs are a powerful tool for modeling
evolving systems with dynamic interactions. In this paper, we
address an important yet neglected problem in temporal graph
analysis—how to efficiently and effectively incorporate information
from high-order neighboring nodes. We identify two challenges
in learning high-order information from temporal graphs, i.e.,
computational inefficiency and over-smoothing, which cannot
be solved by conventional techniques applied on static graphs.
To address these challenges, we propose a novel temporal
propagation-based graph neural network, called TPGNN. Our
model consists of two distinct components: a propagator and a
node-wise encoder. The propagator propagates messages from
the anchor node to its temporal neighbors within k hops and
simultaneously updates their states, achieving efficient
computation. The node-wise encoder adopts
a transformer architecture to learn node representations by
explicitly modeling the importance of memory vectors preserved
on the node itself, thus mitigating over-smoothing. Since the
encoding process does not require querying temporal neighbors,
our model dramatically reduces inference time.
Extensive experiments on temporal link prediction and node
classification demonstrate the superiority of TPGNN over state-
of-the-art baselines in terms of efficiency and robustness.
Index Terms—Continuous-time dynamic graph, temporal
graph, graph neural network, high-order information.
I. INTRODUCTION
Graphs serve as a fundamental tool for modeling complex
systems, representing elements as nodes and their interac-
tions as edges. The task of learning network representations
to mine knowledge and discover patterns from graphs has
attracted significant research attention across diverse domains,
including recommender systems [1], drug discovery [2], traffic
prediction [3], and academic networks [4], [5]. Graph neural
networks (GNNs) [6]–[8] are one of the most influential
techniques in graph mining, providing a powerful capacity
to jointly model both graph topology and semantics. GNNs
have achieved state-of-the-art (SOTA) performance in various
downstream tasks, such as node classification, node clustering,
and link prediction.
To model the dynamic nature of real-world networks, a
significant amount of research has focused on dynamic graphs
where each edge is associated with a timestamp [9]–[11].
For example, in recommender systems, users and items can
appear or disappear over time, and attributes on them may
also vary across different timestamps. One intuitive approach
to represent dynamic graphs is the continuous-time dynamic
graph (CTDG) model, where a graph is represented as chrono-
logically arranged edges. Existing algorithms [10], [12] pro-
pose temporal subgraph-aggregation methods to model the
dynamics preserved in CTDGs. However, to speed up training
and inference in downstream tasks, these algorithms only
aggregate messages from low-order neighbors (e.g., 1-hop or
2-hop), which fails to capture the knowledge preserved in
high-order neighborhoods. Consequently, they inevitably fall
short of achieving optimal performance.
In this paper, we address the challenge of efficiently and
effectively learning high-order information in dynamic graphs.
One major obstacle is the computational inefficiency in the
aggregation process. To better understand this limitation, we
analyze a representative CTDG-based algorithm, the temporal
graph attention network (TGAT) [9]. TGAT consists of
two main phases: (1) subgraph generation from a batch of
interactions, and (2) recursive graph convolution with time
encoding to learn node representations. We argue that the
computational inefficiency arises from the aggregation step in
graph convolution (i.e., phase 2), which cannot be parallelized
for all nodes in the computational graph. Specifically, the
message cannot be propagated to low-order nodes until the
aggregation on high-order neighbors is complete, resulting in
exponential expansion of the computational graph, especially
when the model is deep. Moreover, the intermediate state of
message passing must be saved, leading to heavy memory
consumption.
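To make the cost concrete, the following is a minimal sketch of this recursive aggregation pattern (our illustration, not TGAT's actual code; `sample_neighbors`, `aggregate`, and the feature table are simplified stand-ins):

```python
import random

# Simplified stand-ins (not TGAT's real components): node features, a
# temporal neighbor sampler, and a trivial mean aggregator.
FEATURES = {v: [float(v), 1.0] for v in range(100)}

def sample_neighbors(node, t, n):
    # Hypothetical sampler: up to n (neighbor, timestamp) pairs with
    # timestamps strictly before the query time t.
    return [(random.randrange(100), t - 1.0) for _ in range(n)]

def aggregate(self_feat, neighbor_embs):
    # Toy aggregator (TGAT would use temporal attention instead).
    stacked = [self_feat] + neighbor_embs
    return [sum(col) / len(stacked) for col in zip(*stacked)]

def embed(node, t, depth, n_neighbors=10):
    """Recursive k-hop temporal aggregation (TGAT-style phase 2).

    Each call fans out into n_neighbors recursive calls, so the total
    work is O(n_neighbors ** depth), and a layer cannot be aggregated
    before all higher-order layers finish: the exponential, hard-to-
    parallelize computational graph described above."""
    if depth == 0:
        return FEATURES[node]
    neighbor_embs = [embed(v, tv, depth - 1, n_neighbors)
                     for v, tv in sample_neighbors(node, t, n_neighbors)]
    return aggregate(FEATURES[node], neighbor_embs)
```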
To address this issue, some methods have proposed directly
modeling high-order information using skip-connections or
hypergraphs. However, these methods are limited to static
graphs and cannot be applied to temporal graphs. For instance,
some studies [13], [14] have proposed establishing interactions
between the anchor node (i.e., either endpoint of an
interaction) and its multi-hop neighbors, so as to uniformly
aggregate messages from k-hop neighborhoods in a single
layer. However, for dynamic graphs, assigning timestamps
to pseudo-links becomes a challenging task. Similarly, some
works [15] have modeled high-order information using hyper-
graphs in static networks, but applying this to dynamic graphs
would introduce additional computation overhead, which con-
tradicts our original objective. Thus, decoupling the update
process from the model depth remains crucial for learning
efficiently from high-order information in dynamic
graphs.
While leveraging high-order information has proven
useful in GNNs, we still encounter another essential
problem—over-smoothing. Over-smoothing refers to the phe-
nomenon where increasing the model depth smooths the
node embeddings, rendering nodes indistinguishable and lead-
ing to a decline in performance. This contradicts the intuition
that deeper models should be more expressive. Over-smoothing
occurs because the receptive field of a deep model can cover
all nodes in the network.
As a result, the learned node representations fail to focus
on the low-order neighbors that might provide more dis-
criminative information, leading to similar node embeddings.
Consequently, the downstream classifier, which predicts labels
based on node embeddings, misclassifies nodes with similar
representations yet different labels. Thus, alleviating the over-
smoothing problem by emphasizing the importance of low-
order neighbors is essential for learning robust and discrim-
inative node representations in dynamic graphs, particularly
when incorporating high-order information.
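The effect is easy to reproduce; the toy demonstration below (our illustration, unrelated to any specific model) applies repeated mean aggregation on a small ring graph and shows the spread of node embeddings shrinking toward zero:

```python
import numpy as np

n = 6
# Ring graph with self-loops: connected, so deep averaging mixes everything.
A = np.eye(n)
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
P = A / A.sum(axis=1, keepdims=True)  # row-normalized mean aggregation

X = np.random.default_rng(0).normal(size=(n, 4))  # random node features
for layer in range(1, 33):
    X = P @ X                                     # one aggregation layer
    if layer in (1, 4, 16, 32):
        # Distance of each embedding from the global mean: this spread
        # decays with depth, i.e., the nodes become indistinguishable.
        print(layer, float(np.linalg.norm(X - X.mean(axis=0))))
```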
To address the aforementioned deficiencies in CTDG em-
bedding, we propose a novel temporal propagation-based
graph neural network (TPGNN). Inspired by APAN [16],
TPGNN leverages two key components to learn node repre-
sentations. The propagator propagates messages from anchor
nodes to their k-hop neighbors and updates the states of
those neighbors in parallel. The node-wise encoder incor-
porates a layer-aware transformer to model the importance of
messages from different layers, mitigating the over-smoothing
problem by identifying the most influential layer. Additionally,
the node-wise encoder aggregates node representations based
on node-preserving memories, allowing for decoupling of
inference time and model depth. Our extensive experiments
on three real-world datasets demonstrate the effectiveness
and robustness of TPGNN, showing superior performance
compared to SOTA algorithms. We highlight our contributions
as follows:
• We propose a novel temporal propagation-based graph
neural network (TPGNN) to effectively and efficiently
learn high-order information in dynamic graphs while
addressing the issue of over-smoothing.
• We design two distinct components, a propagator and a
node-wise encoder. The propagator updates the states of
neighbors at different layers in parallel, and the node-wise
encoder aggregates node representations from node-preserved
memories to prevent over-smoothing (a simplified sketch of
both components follows this list).
• We conduct experiments on three real-world datasets to
demonstrate the efficiency and robustness of TPGNN.
Extensive experiments also show that our model is
not sensitive to crucial hyper-parameters such as model
depth and batch size.
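To make the design concrete before the formal description in Section III, here is a deliberately simplified sketch of the two components (our assumptions: one memory vector per layer per node, exponential-moving-average state updates, and softmax layer weights in place of the layer-aware transformer; `neighbors_by_hop` is a hypothetical precomputed lookup):

```python
import numpy as np

K, D = 3, 16   # propagation depth (hops) and memory dimension
memories = {}  # node id -> (K, D) array: one memory vector per layer

def get_mem(v):
    return memories.setdefault(v, np.zeros((K, D)))

def propagate(msg, neighbors_by_hop):
    """Propagator (simplified): push an interaction message from the anchor
    nodes outward, updating the layer-k memories of all k-hop temporal
    neighbors. Updates within each hop are independent, so they can run
    in parallel, unlike recursive aggregation."""
    for k in range(K):
        for w in neighbors_by_hop[k]:
            mem = get_mem(w)
            mem[k] = 0.9 * mem[k] + 0.1 * msg  # toy moving-average update

def encode(v, layer_logits):
    """Node-wise encoder (simplified): combine the node's own K memories
    with per-layer weights (the full model uses a layer-aware transformer).
    No temporal neighbors are queried, so inference cost does not grow
    with model depth."""
    w = np.exp(layer_logits - layer_logits.max())
    w /= w.sum()                                # softmax over the K layers
    return (w[:, None] * get_mem(v)).sum(axis=0)

# Usage sketch: an interaction emits a message that is propagated to the
# anchors' multi-hop neighborhoods; any node can then be encoded on its own.
propagate(np.ones(D), {0: [1, 2], 1: [3, 4], 2: [5]})
print(encode(3, np.zeros(K)))
```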
II. RELATED WORK
In recent years, graph representation learning has gained
significant attention, particularly for static graphs [6]–[8],
[17]–[20]. To extend these algorithms to continuously evolving
scenarios, researchers have focused on developing temporal
graph representation learning methods. One intuitive approach
is to split a temporal graph into a sequence of chronological
snapshots, and various methods have been proposed to cap-
ture the evolving information across all snapshots, including
matrix factorization [21], triadic closure process [22], and
random walk [23]. Recurrent neural networks (RNNs) [24]
and transformers [1] have also been utilized to model the
chronologically sequential effect across all snapshots to en-
hance expressiveness. For example, DynGEM [25] updates
snapshot representations based on the node representations
of the previous snapshot, while DySAT [26] proposes a
hierarchical attention mechanism to preserve structural and
temporal properties to overcome the time dependency inherent
in RNNs.
The previous methods for temporal graph representation
learning have mostly focused on discrete approaches, where
the temporal information is captured by dividing a temporal
graph into a series of snapshots. However, this approach over-
looks the dynamics within each snapshot. To address this limi-
tation, some recent works model the temporal process of
continuously evolving events directly. For instance, DyRep [27] and
its variants [28], [29] use temporal Hawkes processes, while
CTDNE [30], FiGTNE [31], and CAW-N [11] employ time-
respecting random walks, time-reinforced random walks, and
causal anonymous walks, respectively. GNN-based methods
[12], [32], [33] leverage time encoding to preserve temporal
information. TGAT [9] uses a GAT-like [8] architecture with
continuous time encoding to encode node representations,
TGN [10] leverages GRU [34] to update node representations,
and HVGNN [35] extends temporal GNN to hyperbolic space.
However, these methods generally aggregate messages from 1-
hop or 2-hop neighbors, failing to model high-order informa-
tion. HIT [28] proposes to construct temporal hyper-graphs to
predict high-order patterns, introducing extra computational
cost. APAN [16] reverses the direction of message
passing to reduce inference time. In particular, the messages
are propagated from the anchor node to multi-hop neighbors
and each node maintains a mailbox queue. Our proposed
TPGNN aims to overcome the limitations of previous methods
by effectively and efficiently learning high-order information
while mitigating over-smoothing.
III. PROPOSED MODEL
In this paper, we present the TPGNN model, which consists
of a propagator and a node-wise encoder, to efficiently gain
knowledge from high-order neighbors. The overall architec-
ture of TPGNN is illustrated in Figure 1.
A. Preliminary
A CTDG G = {V, E} consists of a node set V and a time-
sensitive edge set E. The edge set is represented as a series
of chronologically arranged edges, each associated with a
timestamp.
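Concretely, a CTDG can be stored as a time-ordered stream of interactions; the minimal representation below is our illustration of such a structure, not a format prescribed by the model:

```python
from typing import NamedTuple, Tuple

class Event(NamedTuple):
    src: int                      # source node u of the interaction
    dst: int                      # destination node v
    t: float                      # timestamp of the interaction
    feat: Tuple[float, ...] = ()  # optional edge attributes

# The edge set E is a chronologically sorted event stream; the node set V
# is implicit as the union of all endpoints.
edges = sorted([Event(0, 1, 1.0), Event(1, 2, 2.5), Event(0, 2, 3.1)],
               key=lambda e: e.t)
nodes = {e.src for e in edges} | {e.dst for e in edges}
```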