complex semantics, and thus, instruction decoding and execution are key factors for KGQA. Figure 1 shows an example where suboptimal initial instructions lead to incorrect KG traversals. While some methods (Miller et al., 2016; Zhou et al., 2018; Sun et al., 2018; Xu et al., 2019) attempt to improve the quality of the instructions, they are mainly designed to tackle specific question types, such as 2-hop or 3-hop questions, or show poor performance on complex questions (Sun et al., 2019).
Our method, termed REAREV (Reason & Revise), introduces a new approach to KGQA reasoning with respect to both instruction execution and decoding. To improve instruction execution, we do not use instructions in a pre-defined (possibly incorrect) order, but allow our method to decide on the execution order on the fly. We achieve this by emulating breadth-first search (BFS) with GNN reasoners. The BFS strategy treats the instructions as a set, and the GNN decides which instructions to accept. To improve instruction decoding, we reason over the KG to obtain KG-aware information and use this information to adapt the initial instructions. We then restart the reasoning with the new instructions, which are conditioned on the underlying KG semantics. To the best of our knowledge, adaptive reasoning that emulates graph search algorithms with GNNs has not been previously proposed for KGQA.
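The Reason & Revise loop described above can be illustrated with a minimal sketch. Everything here is a toy stand-in, not the actual architecture: the KG, the entity/relation embeddings, the max-over-instructions edge scoring, and the 0.5 mixing weight in `revise` are all illustrative choices; the real method uses learned GNN reasoners and learned instruction decoders. The sketch only shows the two structural ideas: (i) the instructions act as a set, with the best-matching instruction accepted per edge at each BFS-like hop, and (ii) KG-aware information is folded back into the instructions before reasoning restarts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy KG: 5 entities, directed edges carrying relation embeddings (hypothetical).
num_entities, dim = 5, 8
edges = [(0, 1), (1, 2), (1, 3), (3, 4)]
rel_emb = {e: rng.normal(size=dim) for e in edges}
ent_emb = rng.normal(size=(num_entities, dim))

def reason(instructions, seed_entity, hops=2):
    """BFS-style GNN emulation: instructions are treated as a SET.
    At every hop, each edge takes its best match over all instructions,
    so no fixed instruction execution order is imposed."""
    p = np.zeros(num_entities)
    p[seed_entity] = 1.0                      # frontier distribution over entities
    for _ in range(hops):
        new_p = np.zeros(num_entities)
        for (h, t), r in rel_emb.items():
            match = max(float(ins @ r) for ins in instructions)
            new_p[t] += p[h] * np.exp(match)  # expand frontier along matching edges
        p = new_p / (new_p.sum() + 1e-9)
    return p

def revise(instructions, entity_scores):
    """Adaptive decoding: fold KG-aware information (a score-weighted
    entity summary) back into each instruction before re-reasoning."""
    kg_summary = entity_scores @ ent_emb
    return [ins + 0.5 * kg_summary for ins in instructions]

instructions = [rng.normal(size=dim) for _ in range(2)]  # stand-in for decoded instructions
scores = reason(instructions, seed_entity=0)   # Reason
instructions = revise(instructions, scores)    # Revise with KG-aware information
scores = reason(instructions, seed_entity=0)   # Reason again with adapted instructions
answer = int(scores.argmax())
```

In this toy graph, two hops from entity 0 can only reach entities 2 and 3, so the answer distribution concentrates there regardless of which instruction wins each edge match; what the revision step changes is how that mass is split.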
We empirically show that REAREV performs effective reasoning over KGs and outperforms other state-of-the-art methods. For KGQA with complex questions, REAREV achieves an improvement of 4.1 percentage points in Hits@1 over the best competing approach.
Our contributions are summarized below:
• We improve instruction decoding via adaptive reasoning, which updates the instructions with KG-aware information.
• We improve instruction execution by emulating the breadth-first search algorithm with graph neural networks, which provides robustness to the instruction ordering.
• We achieve state-of-the-art (or near state-of-the-art) performance on three widely used KGQA datasets: WebQuestions (Yih et al., 2015), Complex WebQuestions (Talmor and Berant, 2018), and MetaQA (Zhang et al., 2018).
2 Related Work
There are two mainstream approaches to solving KGQA: (i) parsing the question into executable KG queries, such as SPARQL, and (ii) grounding question and KG representations in a common space for reasoning.
Regarding the first case, early methods (Berant et al., 2013; Reddy et al., 2014; Bast and Haussmann, 2015) rely on pre-defined question templates to synthesize queries, which requires strong domain knowledge. Recent methods (Yih et al., 2015; Abujabal et al., 2017; Luo et al., 2018; Bhutani et al., 2019; Lan et al., 2019; Lan and Jiang, 2020; Qiu et al., 2020b; Sun et al., 2021; Das et al., 2021) use deep learning techniques to automatically generate such executable queries. However, they require ground-truth executable queries as supervision (which are costly to obtain), and their performance is limited when the KG has missing links (non-executable queries).
Methods in the second category alleviate the need for ground-truth queries by learning natural language and KG representations to reason in a common space. These methods match question representations to KG facts (Miller et al., 2016; Xu et al., 2019; Atzeni et al., 2021) or to KG structure representations (Zhang et al., 2018; Han et al., 2021; Qiu et al., 2020a). More related to our work, GraftNet (Sun et al., 2018), PullNet (Sun et al., 2019), and NSM (He et al., 2021) enhance the reasoning with graph-based reasoners. Our approach aims at improving graph-based reasoning via adaptive instruction decoding and execution.
Researchers have also considered the problem of performing KGQA over incomplete graphs (Min et al., 2013), where important information is missing. These methods rely either on KG embeddings (Saxena et al., 2020; Ren et al., 2021) or on side information, such as a text corpus (Sun et al., 2018, 2019; Xiong et al., 2019; Han et al., 2020), to infer the missing information. However, they offer marginal improvement over other methods when the KG is mostly complete.
KGQA has also been adapted to specific domains, such as QA over temporal (Mavromatis et al., 2021; Saxena et al., 2021) and commonsense (Speer et al., 2017; Talmor et al., 2019; Lin et al., 2019; Feng et al., 2020; Yasunaga et al., 2021) KGs.