
A Trainable Sequence Learner that Learns and
Recognizes Two-Input Sequence Patterns
Jan Hohenheim
University of Zurich
Zurich, Switzerland
jan@hohenheim.ch
Tommaso Stecconi
IBM Research GmbH
Zurich Research Laboratory
Work carried out at University of Zurich
Zurich, Switzerland
tec@zurich.ibm.com
Zhaoyu Devon Liu
CUHK BME
Chinese University of Hong Kong
Hong Kong, China
zhyliu@link.cuhk.edu.hk
Pietro Palopoli
D-ITET
ETH Zurich
Zurich, Switzerland
ppalopoli@student.ethz.ch
Abstract—We present two designs for an analog circuit that can
learn to detect a temporal sequence of two inputs. The training
phase is done by feeding the circuit with the desired sequence and,
after the training is completed, each time the trained sequence
is encountered again the circuit will emit a signal of correct
recognition. Sequences are in the order of tens of nanoseconds.
The first design can reset the trained sequence on runtime but
assumes very strict timing of the inputs. The second design can
only be trained once but is lenient in the input’s timing.
Index Terms—sequence, learning, analog, design, circuit, coincidence detector
I. INTRODUCTION
Sequence pattern recognition is both central to how our
brain works and important for many modern AI applications
such as speech recognition, speaker identification, automatic
medical diagnosis or general classification. Since the human
brain performs pattern learning and recognition with extreme
energy efficiency, parallelism, and relatively good speed, the
need to replicate these advantages in silicon becomes apparent.
Some similar works already exist. Liu et al. [1] designed
a multi-terminal transistor that can behave as a sequence
detector by tuning the time delay between pulses fed into the
transistor terminals. However, unlike our brain, this system
cannot train itself on a given input sequence.
As a starting point, we present an idea of what a simple
sequence pattern learning and recognition chip could look like
and provide two possible implementation schematics.
II. LEARNING ALGORITHM
Inputs are expected to be pulses coming from two different
sources that arrive at some regular interval, determined by
the design used. We will call them Signal A (or just A) and
Signal B (or just B). Upon entering the circuit, both are delayed
by the same default amount of time. The goal is to make them overlap in
time (learning phase) so that later the correct recognition can
occur. This is done by treating A as a reference and shifting
B’s delay up or down to make it coincide with A inside the
circuit.
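The alignment goal above can be sketched as a toy model in Python. The pulse times, the default delay value, and the tolerance are hypothetical illustration values, not parameters of the actual circuit:

```python
# Toy model of the alignment goal: A's delay stays fixed while B's
# delay is adjusted until both pulses reach the comparison point
# together (within some coincidence tolerance).
A_DELAY_NS = 10.0  # default delay applied to Signal A (assumed value)

def aligned(t_a, t_b, b_delay_ns, tolerance_ns=0.5):
    """Return True if the delayed pulses overlap within `tolerance_ns`."""
    return abs((t_a + A_DELAY_NS) - (t_b + b_delay_ns)) < tolerance_ns

# Example: B arrives 4 ns after A, so B's delay must end up ~4 ns
# shorter than the default for the pulses to coincide.
print(aligned(0.0, 4.0, A_DELAY_NS))        # False: not yet aligned
print(aligned(0.0, 4.0, A_DELAY_NS - 4.0))  # True: aligned after training
```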
This report was produced in collaboration with the Institute of Neuroinfor-
matics (University of Zurich and ETH Zurich)
Fig. 1. Overview of the shared logic behind the circuits
To do so, it is necessary to analyze the extent of the delay
between the inputs, which is done by using a coincidence
detector inspired by the Jeffress model of sound localization
[2]. This model works by letting the inputs move through a
series of delaying nodes from opposite sides. They will meet
at a node determined by their temporal offset. If they meet in
the middle, it means that they came in simultaneously. Since
our inputs are assumed to always come with some offset, a
meeting in the middle means that we have successfully tuned
B's delay and the training is finished.
Before that point, the inputs will meet at a different node,
indicating how far we are from our goal. For example, if we wish to
detect the sequence "AB", B will come in after A, so the
signals will meet at one of the nodes on the right in Fig. 1,
meaning that we must aim to decrease B's delay. This task is
carried out by a delay adjustment unit specific to the design
in question. In either case, the modified delay element is a row of
eight tunable delay units [3], whose delay is inversely proportional
to a shared bias voltage.
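The node-meeting logic and the resulting delay update can be sketched as a behavioral toy model in Python. The node count, step size, default delay, and update rule are assumptions for illustration; the real circuit implements this with analog tunable delay elements and a bias voltage, not discrete arithmetic:

```python
# Behavioral toy model of the Jeffress-style coincidence detector and
# the delay-adjustment feedback. All constants are assumed values.

N_NODES = 9          # delay line with a well-defined middle node
MID = N_NODES // 2   # index 4: pulses meeting here are aligned
STEP = 1.0           # delay added per node, in arbitrary time units
DEFAULT_DELAY = 8.0  # initial delay applied to both A and B

def meeting_node(effective_offset):
    """Index of the node where the two pulses coincide.

    A enters the line from the left, B from the right; each node adds
    STEP of travel time. `effective_offset` is how much later B enters
    the line than A (negative if B enters earlier)."""
    return min(
        range(N_NODES),
        key=lambda i: abs(i * STEP - (effective_offset + (N_NODES - 1 - i) * STEP)),
    )

def train(offset_ab):
    """Shift B's delay until the pulses meet at the middle node.

    `offset_ab` is the raw arrival offset of B after A. One node of
    displacement from the middle corresponds to 2*STEP of offset, so
    each update moves B's delay by that amount per displaced node."""
    b_delay = DEFAULT_DELAY
    for _ in range(16):
        node = meeting_node(offset_ab + b_delay - DEFAULT_DELAY)
        if node == MID:
            break
        # B late -> meeting right of MID -> shorten B's delay, and vice versa.
        b_delay += (MID - node) * 2 * STEP
    return b_delay

# B arrives 4 time units after A: training shortens B's delay by 4.
print(train(4.0))  # -> 4.0
```

Note that with discrete nodes the resolution is limited to one STEP of offset; the analog delay line in the circuit does not have this quantization.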
III. SEQUENCE LEARNER DESIGNS
A. Design A
1) Usage:
Main inputs: Vin1 and Vin2
Auxiliary inputs: Vreset to reset the learned sequence
Constant inputs: Vdelay at 1.18 V
arXiv:2210.12193v1 [cs.NE] 21 Oct 2022