ing dLTMs in Soar agents has enabled reasoning complexity
that wasn’t possible earlier (Xu and Laird, 2010; Mohan
and Laird, 2014; Kirk and Laird, 2014; Mininger and Laird,
2018).
However, a crucial question remains unanswered: how is general world knowledge in semantic memory acquired? We posit that this knowledge is acquired in two distinct ways. Kirk and Laird (2014, 2019) explore the view that
semantic knowledge is acquired through interactive instruction when natural language describes relevant declarative knowledge. An example concept is the goal of tower-of-hanoi: a small block is on a medium block and a large block is below the medium block. Here, the trainer provides the definition of the concept declaratively, and the definition is later operationalized so that it can be applied both to recognize the existence of a tower and to select actions while solving tower-of-hanoi. In this paper, we explore an alternative view that this knowledge is acquired through examples demonstrated as part of instruction. We augment
Soar dLTMs with a new concept memory that aims at ac-
quiring general knowledge about the world by collecting
and analyzing similar experiences, functionally bridging
episodic and semantic memories.
1.2. Algorithms for analogical processing
To design the concept memory, we leverage the computational processes that underlie analogical reasoning and generalization in the Companions cognitive architecture: the Structure Mapping Engine (SME; Forbus et al., 2017) and the Sequential Analogical Generalization Engine (SAGE; McLure et al., 2015). Analogical matching, retrieval, and generalization are the foundation of the Companions cognitive architecture. In "Why we're so smart", Gentner claims that what makes human cognition superior to that of other animals is the following: "First, relational concepts are critical to higher-order cognition, but relational concepts are both non-obvious in initial learning and elusive in memory retrieval. Second, analogy is the mechanism by which relational knowledge is revealed. Third, language serves both to invite learning relational concepts and to provide cognitive stability once they are learned" (Gentner, 2003). Gentner's observations provide a compelling case for exploring analogical processing as a basis for concept learning. Our
approach builds on the analogical concept learning work
done in Companions (Hinrichs and Forbus, 2017). Previous analogical learning work includes spatial prepositions (Lockwood, 2009), spatial concepts (McLure et al., 2015), physical reasoning problems (Klenk et al., 2011), and activity recognition (Chen et al., 2019). This diversity of
reasoning tasks motivates our use of analogical processing
to develop an architectural concept memory. Adding to this line of research, our work shows that a variety of conceptual knowledge can be learned within a single system. Furthermore, such a system can be applied within an interactive task learning session not only to recognize concepts but also to act on them in the environment.
1.3. Concept formation and its interaction with complex cognitive phenomena
Our design exploration of an architectural concept mem-
ory is motivated by the interactive task learning problem
(ITL; Gluck and Laird 2019) in embodied agents. ITL
agents rely on natural interaction modalities, such as embodied dialog, to learn new tasks. Conceptual knowledge, language, and task performance are inextricably tied: language is a medium through which conceptual knowledge about the world is communicated and learned. Task
performance is aided by the conceptual knowledge about
the world. Consequently, embodied language processing
(ELP) for ITL provides a set of functional requirements
that an architectural concept memory must address. Em-
bedding concept learning within the ITL and ELP contexts
is a significant step forward from previous explorations in
concept formation. Prior approaches have studied concept
formation independently of how they will be used in a com-
plex cognitive system, often focusing on the problems of
recognizing the existence of a concept in input data and
organizing concepts into a similarity-based hierarchy. We study concept formation within the context of higher-order cognitive phenomena. We posit that concepts are learned through interactions with an interactive trainer who structures a learner's experience. The input from the trainer helps group concrete experiences together, and a generalization process distills common elements to form a concept definition.
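The grouping-and-distillation process described above can be sketched as follows. This is an illustrative simplification, not the paper's mechanism: facts are modeled as tuples, and "generalization" is reduced to intersecting the facts shared by all trainer-grouped examples, whereas SAGE performs probabilistic analogical generalization.

```python
# Illustrative sketch only: concept formation as distilling the facts
# common to a trainer-grouped set of concrete examples. Fact tuples and
# set intersection are simplifying assumptions, not SAGE's algorithm.

def generalize(examples):
    """Return the relational facts shared by every example of a concept."""
    common = set(examples[0])
    for example in examples[1:]:
        common &= set(example)  # keep only facts present in every example
    return common

# Two demonstrated instances of a hypothetical "tower" goal.
ex1 = [("on", "small", "medium"), ("on", "medium", "large"),
       ("color", "small", "red")]
ex2 = [("on", "small", "medium"), ("on", "medium", "large"),
       ("color", "small", "blue")]

concept = generalize([ex1, ex2])
# Incidental facts (block color) drop out; the shared relational
# structure of the tower remains as the concept definition.
```

In this toy version, the trainer's role is simply to decide which experiences belong in the same example list; the generalization step then removes whatever varies across them.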
1.4. Theoretical Commitments, Claims, and Contribu-
tions
Our work is implemented in Soar and, consequently, brings to bear the theoretical postulates that the architecture implements. More specifically, we build upon the following
theoretical commitments:
1. Diverse representation of knowledge: In the past decade, CMC architectures have adopted the view that architectures for general intelligence implement diverse methods for knowledge representation and reasoning. This view has been productive not only in studying an increasing variety of problems but also in integrating advances in AI algorithmic research into the CMC framework. We contribute to this view by exploring how algorithms for analogical processing can be integrated into a CMC architecture.
2. Deliberate access of conceptual knowledge: Following CMC architectures, we assume that declarative, conceptual knowledge is accessed through deliberation over when and how to use that knowledge. The architectures incorporate well-defined interfaces, i.e., buffers in working memory that contain information as well as an operation that the declarative memory must execute on that information. Upon reasoning, information may be stored, accessed, or projected (described in further detail in Section 4).
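The buffer-mediated access pattern in commitment 2 can be sketched as follows. All names here (the dictionary-as-buffer, the `operation`/`result` slots, the `store`/`access` operations) are hypothetical illustrations, not Soar's actual command interface.

```python
# Hypothetical sketch of deliberate declarative-memory access: the agent
# writes a command into a working-memory buffer (operation + content), and
# the memory module fills in the buffer's result slot. The interface names
# are illustrative assumptions, not Soar's real buffer structure.

class DeclarativeMemory:
    def __init__(self):
        self.store = {}

    def process(self, buffer):
        """Execute the operation the agent deliberately placed in the buffer."""
        op = buffer["operation"]
        if op == "store":
            self.store[buffer["key"]] = buffer["value"]
            buffer["result"] = "stored"
        elif op == "access":
            # Retrieval is also deliberate: it happens only when requested.
            buffer["result"] = self.store.get(buffer["key"])

memory = DeclarativeMemory()

# The agent deliberately stores a fact...
cmd = {"operation": "store", "key": "tower-goal",
       "value": ("on", "small", "medium")}
memory.process(cmd)

# ...and later deliberately retrieves it via the same buffer interface.
query = {"operation": "access", "key": "tower-goal"}
memory.process(query)
```

The point of the sketch is that the memory never pushes knowledge into reasoning unprompted; every store and retrieval is an explicit operation the agent chooses to issue.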