
Figure 2: An example illustrating minutiae correspondences between a pair of synthetic fingerprints [1] of the same finger. A total of 23 minutiae points are in correspondence (shown with green lines). Correspondences were automatically established using the graph matching algorithm from [6].
ent fingers of the same person). Rigorous statistical analyses have demonstrated that these central tenets are indeed backed by strong evidence [3, 4].
Both of these tenets have been established primarily via the use of long-standing discriminative features widely known as fingerprint minutiae [5]. Minutiae points are anomalous key-points located throughout the fingerprint's friction ridge pattern. These anomalies occur as either i) terminations or ii) bifurcations (see (b) of Figure 1 for examples). Furthermore, each minutiae point is a 3-tuple (x, y, θ), where (x, y) indicates the location of the minutiae point and θ is the direction of the ridge flow at that location.
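In code, one convenient representation of a single minutia point is a small named tuple; the class name, field names, and coordinate values below are purely illustrative, not drawn from any cited implementation:

```python
import math
from typing import NamedTuple


class Minutia(NamedTuple):
    """One minutia key-point: location (x, y) in pixels and
    ridge-flow direction theta in radians."""
    x: float
    y: float
    theta: float


# A fingerprint's minutiae template is a variable-length,
# unordered set of such points (values are made up for illustration):
template = {
    Minutia(120.0, 85.0, math.pi / 4),
    Minutia(203.5, 140.2, 1.9),
}
```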
In almost all state-of-the-art (SOTA) fingerprint recogni-
tion systems, a full minutiae set (shown in (a) of Figure 1)
is first extracted from a given fingerprint. Subsequently,
this variable length, unordered set of minutiae key-points
is compared to another set of minutiae key-points extracted
from an enrolled fingerprint image using graph matching
techniques (Figure 2). At the most basic level, if a similarity score aggregated from corresponding points exceeds a specified threshold, the fingerprint pair is determined to be a genuine match; otherwise, it is deemed a non-matching imposter pair.
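As an illustrative sketch only (deliberately much simpler than the graph matching algorithm of [6]), the comparison step can be approximated with greedy nearest-neighbor pairing of (x, y, θ) tuples under distance and angle tolerances; `dist_tol`, `angle_tol`, and the decision threshold are arbitrary values chosen for illustration:

```python
import math


def angle_diff(a: float, b: float) -> float:
    """Smallest absolute difference between two angles in radians."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)


def match_score(set_a, set_b, dist_tol=15.0, angle_tol=math.pi / 6):
    """Greedily pair each minutia (x, y, theta) in set_a with the nearest
    unused minutia in set_b within the tolerances. The score is the
    fraction of possible correspondences actually established."""
    unused = list(set_b)
    matches = 0
    for (x, y, t) in set_a:
        best, best_d = None, dist_tol
        for i, (x2, y2, t2) in enumerate(unused):
            d = math.hypot(x - x2, y - y2)
            if d <= best_d and angle_diff(t, t2) <= angle_tol:
                best, best_d = i, d
        if best is not None:
            unused.pop(best)  # each enrolled minutia is used at most once
            matches += 1
    return matches / max(1, min(len(set_a), len(set_b)))


def is_genuine(set_a, set_b, threshold=0.4):
    """Threshold the aggregated score: genuine match vs. imposter pair."""
    return match_score(set_a, set_b) >= threshold
```

Real minutiae matchers additionally handle rotation, translation, and distortion between the two impressions, which is what makes the graph matching step expensive.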
Although the success of minutiae-based features has led to minutiae and automated fingerprint recognition being nearly synonymous terms, minutiae-based fingerprint matching systems do have several significant limitations:
Inefficiency: Minutiae matching is computationally expensive. First, minutiae must be detected. Oftentimes, descriptors are also extracted for each minutiae point. Then, these variable-length, unordered sets need to be compared via expensive graph matching techniques. In contrast, most SOTA face recognition systems, which rely on deep feature representations, require only a dot product between fixed-length embeddings of query and enrollment images to compute a match score (d multiplications and d − 1 additions for a d-dimensional embedding). Vulnerability: Matching fingerprint templates² in a secure, encrypted manner is extremely challenging. In contrast, fully homomorphic encryption (FHE) schemes on deep embeddings have now shown the ability to match biometric templates in the encrypted domain in real time [7, 8].
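The efficiency contrast can be made concrete: scoring a pair of fixed-length embeddings is a single dot product, i.e., d multiplications and d − 1 additions for d-dimensional vectors. A minimal sketch, assuming the embeddings have already been L2-normalized so that the dot product equals the cosine similarity:

```python
def embedding_score(query, enrolled):
    """Match score between two fixed-length embeddings.

    For d-dimensional vectors this costs exactly d multiplications
    and d - 1 additions, versus the combinatorial cost of graph
    matching over variable-length minutiae sets.
    """
    assert len(query) == len(enrolled), "embeddings must share dimension d"
    return sum(q * e for q, e in zip(query, enrolled))
```

With L2-normalized inputs the score lies in [−1, 1], and a single global threshold separates genuine from imposter pairs.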
These limitations of the minutiae template, together with the demonstrated success of SOTA deep networks in extracting highly discriminative biometric embeddings from faces [9, 10], have initiated exploration of alternative representations to fingerprint minutiae. In particular, works from [11, 12,
13, 14, 15, 16] all explore the use of deep networks to embed fingerprint images into a compact, discriminative, fixed-length fingerprint representation. These works show
tremendous promise in terms of both the accuracy and speed
needed to either supplant or at least complement the es-
tablished minutiae template. For example, Engelsma et al.
showed in [17] that a 192-dim deep fingerprint embedding
could reach near parity with COTS matchers in terms of
authentication and search accuracy on NIST SD4 [18] and
NIST SD14 [19] datasets, while matching 3 to 4 orders of magnitude faster (300 milliseconds vs. 27,000 milliseconds search time on a gallery of 1.1 million
fingerprints). On resource-constrained devices, and for civil ID and law enforcement databases containing hundreds of millions of images, these improvements in search time are invaluable.
Given the complementary strengths of the minutiae template (human interpretability, statistical understanding, and interoperability) and the deep fingerprint templates (accuracy and speed), a natural idea is to learn a fingerprint embedding which distills knowledge of fingerprint minutiae into the parameters of the deep network. Rather than completely discarding the minutiae template, we lean on this domain knowledge to learn more discriminative and generalizable deep fingerprint embeddings. Indeed, the models in [17, 13, 15] all aim to do just that, using Convolutional Neural Networks (CNNs) in combination with distillation of minutiae domain knowledge.
Inspired by the merging of minutiae domain knowledge into deep CNNs, in this work we explore the first-ever use of a Vision Transformer (ViT [20]) to learn a fixed-length embedding from a fingerprint image. Similar to prior work (which utilized CNN building blocks), we build on top of the vanilla ViT with a strategy for incorporating minutiae domain knowledge into the network's parameters. The use of the multi-headed self-attention blocks (MHSA [21]) built into ViT for the extraction of fingerprint embeddings is
²We use the terms template, representation, and embedding throughout to denote a set of features extracted from a fingerprint image.