
RulE: Knowledge Graph Reasoning with Rule Embedding
Xiaojuan Tang,1,3 Song-Chun Zhu,1,2,3 Yitao Liang,*1,3 Muhan Zhang*1,3
1Institute for Artificial Intelligence, Peking University 2Tsinghua University
3National Key Laboratory of General Artificial Intelligence, BIGAI
xiaojuan@stu.pku.edu.cn, {muhan,yitaol,s.c.zhu}@pku.edu.cn
{tangxiaojuan,sczhu,liangyitao,mhzhang}@bigai.ai
Abstract
Knowledge graph (KG) reasoning, i.e., inferring
missing facts from observed ones, is an important
problem for knowledge graphs. In this paper,
we propose a novel and principled framework
called RulE (short for Rule Embedding) to
effectively leverage logical rules to enhance
KG reasoning. Unlike knowledge graph embedding
(KGE) methods, RulE learns rule embeddings from
existing triplets and first-order rules by jointly
representing entities, relations, and logical
rules in a unified embedding space. Based
on the learned rule embeddings, a confidence
score can be calculated for each rule, reflect-
ing its consistency with the observed triplets.
This allows us to perform logical rule infer-
ence in a soft way, thus alleviating the brit-
tleness of logic. On the other hand, RulE
injects prior logical rule information into the
embedding space, enriching and regularizing
the entity/relation embeddings, which improves
the performance of KGE on its own as well. RulE
is conceptually simple and empirically effective. We
conduct extensive experiments to verify each
component of RulE. Results on multiple bench-
marks reveal that our model outperforms the
majority of existing embedding-based and rule-
based approaches. The code is released at
https://github.com/XiaojuanTang/RulE.
1 Introduction
Knowledge graphs (KGs) usually store millions
of real-world facts and are used in a variety of ap-
plications (Wang et al.,2018;Bordes et al.,2014;
Xiong et al.,2017). Examples of knowledge graphs
include Freebase (Bollacker et al.,2008), Word-
Net (Miller,1995) and YAGO (Suchanek et al.,
2007). They represent entities as nodes and re-
lations among entities as edges. Each edge en-
codes a fact in the form of a triplet (head entity,
relation, tail entity). However, KGs are usually
highly incomplete, making their downstream tasks
more challenging. Knowledge graph reasoning,
which predicts missing facts by reasoning over
existing facts, has thus become a popular research
area in artificial intelligence.
*Corresponding authors
There are two prominent lines of work in this
area: knowledge graph embedding (KGE) and rule-
based KG reasoning. Knowledge graph embed-
ding (KGE) methods such as TransE (Bordes et al.,
2013), RotatE (Sun et al.,2019) and BoxE (Ab-
boud et al.,2020) embed entities and relations
into a latent space and compute the score for each
triplet to quantify its plausibility. KGE is effi-
cient and robust to noise. However, it only uses
zeroth-order (propositional) logic to encode exist-
ing facts (e.g., “Alice is Bob’s wife.”) without
explicitly leveraging first-order (predicate) logic.
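To make the triplet-scoring idea concrete, here is a minimal TransE-style sketch; the entities, relation, and embedding values below are made up for illustration and are not trained:

```python
import numpy as np

# TransE (Bordes et al., 2013) models a triplet (h, r, t) as plausible
# when h + r is close to t in the embedding space. The vectors here are
# hand-picked so that alice + wife_of equals bob exactly.
emb = {
    "alice":   np.array([0.1, 0.2, 0.3, 0.4]),
    "bob":     np.array([0.5, 0.1, 0.2, 0.3]),
    "wife_of": np.array([0.4, -0.1, -0.1, -0.1]),
}

def transe_score(h, r, t):
    """Higher (less negative) score => more plausible triplet."""
    return -np.linalg.norm(emb[h] + emb[r] - emb[t])

# (alice, wife_of, bob) scores 0 (perfect fit); the reversed triplet scores lower.
print(transe_score("alice", "wife_of", "bob"))
print(transe_score("bob", "wife_of", "alice"))
```

Training then adjusts the embeddings so that observed triplets score high and corrupted ones score low; note that such scores encode only the observed facts, not general rules.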
First-order logic uses the universal quantifier
to represent generally applicable logical rules,
e.g., “∀x, y: x is y’s wife → y is x’s husband”.
Those rules are not specific to particular entities
(e.g., Alice and Bob) but are generally applicable to
all entities. The other line of work, rule-based KG
reasoning, in contrast, explicitly applies logic rules
to infer new facts (Galárraga et al.,2013,2015;
Yi et al.,2018;Sadeghian et al.,2019;Qu et al.,
2020). Unlike KGE, logical rules can achieve inter-
pretable reasoning and generalize to new entities.
However, the brittleness of logical rules greatly
harms prediction performance. Consider the
logical rule (x, works in, y) → (x, lives in, y)
as an example. It is mostly correct. Yet, if
somebody works in New York but actually lives
in New Jersey, the rule still infers the wrong
fact in an absolute, all-or-nothing way.
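The brittleness issue can be illustrated with a toy sketch contrasting hard rule firing with confidence-weighted (soft) firing; the relation names and the 0.8 confidence value are hypothetical, not part of the proposed method:

```python
# Toy contrast between hard and soft application of the rule
# (x, works_in, y) -> (x, lives_in, y). Names and the 0.8
# confidence are hypothetical, chosen only for illustration.

facts = {("bob", "works_in", "new_york")}

def hard_infer(facts):
    """Classic first-order rule: every conclusion is asserted as true."""
    return {(h, "lives_in", t) for (h, r, t) in facts if r == "works_in"}

def soft_infer(facts, confidence=0.8):
    """Soft inference: each conclusion only carries the rule's confidence."""
    return {(h, "lives_in", t): confidence
            for (h, r, t) in facts if r == "works_in"}

print(hard_infer(facts))  # conclusion asserted absolutely
print(soft_infer(facts))  # the same conclusion, scored 0.8 instead of asserted
```

A soft score lets downstream reasoning discount a conclusion when other evidence contradicts it, which is exactly where an absolute rule fails.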
Since these two lines of work complement each
other, each offsetting the other’s weaknesses
with its own merits, it is natural to study how
to integrate logical rules with KGE methods in a
principled manner.
If we view this integration in a broader context,
embedding-based reasoning can be seen as a neural
method, while rule-based reasoning can be seen
arXiv:2210.14905v3 [cs.AI] 20 May 2024