Minutiae-Guided Fingerprint Embeddings via Vision Transformers
Steven A. Grosz
Michigan State University
groszste@msu.edu
Joshua J. Engelsma*
Rank One Computing
josh.engelsma@rankone.io
Rajeev Ranjan
Amazon
rvranjan@amazon.com
Naveen Ramakrishnan
Amazon
navramk@amazon.com
Manoj Aggarwal
Amazon
manojagg@amazon.com
Gerard G. Medioni
Amazon
medioni@amazon.com
Anil K. Jain
Michigan State University
jain@msu.edu
Abstract
Minutiae matching has long dominated the field of fin-
gerprint recognition. However, deep networks can be used
to extract fixed-length embeddings from fingerprints. To
date, the few studies that have explored the use of CNN
architectures to extract such embeddings have shown great
promise. Inspired by these early works, we propose the first
use of a Vision Transformer (ViT) to learn a discrimina-
tive fixed-length fingerprint embedding. We further
demonstrate that by guiding the ViT to focus on local,
minutiae-related features, we can boost the recognition
performance. Finally, we show that by fusing embeddings learned
by CNNs and ViTs we can reach near parity with a com-
mercial state-of-the-art (SOTA) matcher. In particular, we
obtain a TAR=94.23% @ FAR=0.1% on the NIST SD 302
public-domain dataset, compared to a SOTA commercial
matcher which obtains TAR=96.71% @ FAR=0.1%. Ad-
ditionally, our fixed-length embeddings can be matched or-
ders of magnitude faster than the commercial system (2.5
million matches/second compared to 50K matches/second).
We make our code and models publicly available to encour-
age further research on this topic: github.com/tba.
1. Introduction
Over the past several decades, fingerprint recognition
systems have become pervasive across the globe in a num-
ber of different applications, from mobile phone unlock to
national ID programs [2]. The widespread adoption of Au-
tomated Fingerprint Identification Systems (AFIS) can be
*This author’s affiliation was with Amazon at the time of writing this
paper, but is now with Rank One Computing.
Figure 1: The most prevalent fingerprint representation is
comprised of a variable-length, unordered minutiae (key-point)
set. (a) A full minutiae set from a computer generated
(synthetic) fingerprint [1]. Each minutia point has a
location (x, y) and an orientation θ indicating the position
and direction, respectively. (b) Examples of the two types
of fingerprint minutiae (Termination and Bifurcation).
primarily attributed to two major tenets:
Accuracy: According to the ongoing Proprietary Fingerprint
Template III (PFT III¹) evaluations conducted by the
National Institute of Standards and Technology (NIST), fin-
gerprint recognition systems are now able to obtain recog-
nition accuracies across multiple operational datasets (col-
lected for various use-cases) in excess of 99%.
Scientific Understanding: Fingerprints were long believed
to be both permanent (retaining the same high accuracy over
time) and unique (different for every person, even different
fingers of the same person). Rigorous statistical analyses
have demonstrated that these central tenets are indeed
backed by strong evidence [3, 4].

¹https://www.nist.gov/itl/iad/image-group/proprietary-fingerprint-template-pft-iii

arXiv:2210.13994v2 [cs.CV] 26 Oct 2022

Figure 2: An example illustrating minutiae correspondences
between a pair of synthetic fingerprints [1] of the same
finger. A total of 23 minutiae points are in correspondence,
shown with green lines. Correspondences were automatically
established using the graph matching algorithm from [6].
Both of these tenets have been established primarily
via the use of long-standing discriminative features widely
known as fingerprint minutiae [5]. Minutiae points are
anomalous key-points located throughout the fingerprint’s
friction ridge pattern. These anomalies occur as either i)
terminations or ii) bifurcations (see (b) of Figure 1 for ex-
amples). Furthermore, each minutia point is a 3-tuple
(x, y, θ), where (x, y) indicates the location of the minutia
point and θ is the direction of the ridge flow at the minutia
point's location.
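As an illustration, the minutia 3-tuple described above can be modeled directly in code. The following is a hypothetical Python sketch; the class name, fields, units (pixels, radians), and the example values are illustrative, not taken from the paper:

```python
from typing import NamedTuple, List

class Minutia(NamedTuple):
    """One minutia key-point: location (x, y) in pixels and
    ridge-flow direction theta in radians."""
    x: float
    y: float
    theta: float

# A fingerprint's minutiae template is a variable-length,
# unordered collection of such 3-tuples (values here are made up).
minutiae_set: List[Minutia] = [
    Minutia(x=120.0, y=85.5, theta=1.57),
    Minutia(x=201.3, y=150.2, theta=0.35),
    Minutia(x=88.7, y=240.1, theta=2.80),
]
```

Because the set is unordered and its length varies from impression to impression, two such templates cannot be compared element-wise; this is what necessitates the graph matching described next.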
In almost all state-of-the-art (SOTA) fingerprint recogni-
tion systems, a full minutiae set (shown in (a) of Figure 1)
is first extracted from a given fingerprint. Subsequently,
this variable length, unordered set of minutiae key-points
is compared to another set of minutiae key-points extracted
from an enrolled fingerprint image using graph matching
techniques (Figure 2). At the most basic level, if a similarity
score aggregated from the corresponding points exceeds a
specified threshold, the fingerprint pair is determined to be
a genuine match; otherwise, it is deemed a non-matching
imposter pair.
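To make the thresholding step concrete, here is a deliberately simplified Python sketch. It pairs minutiae greedily by location and angle rather than with the graph matching used in real systems, and the tolerance and threshold values are hypothetical:

```python
import math

def count_correspondences(set_a, set_b, dist_tol=15.0, angle_tol=0.5):
    """Greedily pair minutiae (x, y, theta) whose locations and
    angles fall within the given tolerances. A crude stand-in
    for the graph matching used by production matchers."""
    used = set()
    matched = 0
    for xa, ya, ta in set_a:
        for j, (xb, yb, tb) in enumerate(set_b):
            if j in used:
                continue
            if math.hypot(xa - xb, ya - yb) <= dist_tol and abs(ta - tb) <= angle_tol:
                used.add(j)
                matched += 1
                break
    return matched

def is_genuine_pair(set_a, set_b, threshold=12):
    """Declare a genuine match when enough minutiae correspond."""
    return count_correspondences(set_a, set_b) >= threshold
```

Real matchers aggregate a weighted similarity score over the established correspondences rather than a raw count, and the decision threshold is tuned to a target false accept rate.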
Although the success of minutiae-based features has led
to minutiae and automated fingerprint recognition being
nearly synonymous terms, minutiae-based fingerprint
matching systems do have several significant limitations:
Inefficiency: Minutiae matching is computationally expensive.
First, minutiae must be detected. Oftentimes, descriptors
are also extracted for each minutia point. Then, these
variable-length, unordered sets must be compared via expensive
graph matching techniques. In contrast, most SOTA
face recognition systems, which rely on deep feature
representations, require only a dot product between fixed-length
embeddings of the query and enrollment images to compute a
match score (d multiplications and d − 1 additions for a
d-dimensional embedding).
Vulnerability: Matching fingerprint
templates² in a secure, encrypted manner is extremely
challenging. In contrast, fully homomorphic encryption
(FHE) schemes on deep embeddings have now shown the
ability to match biometric templates in the encrypted domain
in real-time [7, 8].
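By contrast, scoring a pair of fixed-length embeddings is a single dot product. A minimal Python sketch (the 4-dimensional vectors are invented for illustration; real embeddings are typically L2-normalized so the dot product equals cosine similarity):

```python
def match_score(query, enrolled):
    """Dot product of two d-dimensional embeddings:
    d multiplications and d - 1 additions."""
    assert len(query) == len(enrolled)
    return sum(q * e for q, e in zip(query, enrolled))

# Toy 4-dim embeddings (made up); identical unit vectors score 1.0.
q = [0.5, 0.5, 0.5, 0.5]
e = [0.5, 0.5, 0.5, 0.5]
score = match_score(q, e)  # 4 * 0.25 = 1.0
```

Because this operation is a fixed-size inner product, it vectorizes trivially, which is what makes millions of comparisons per second feasible on commodity hardware.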
These limitations of the minutiae template, together with
the demonstrated success of SOTA deep networks at extracting
highly discriminative biometric embeddings from faces [9,
10], have initiated exploration of alternative representations
to fingerprint minutiae. In particular, works from [11, 12,
13, 14, 15, 16] all explore the use of deep networks to
embed fingerprint images into a compact, discriminative,
fixed-length fingerprint representation. These works show
tremendous promise in terms of both the accuracy and speed
needed to either supplant or at least complement the es-
tablished minutiae template. For example, Engelsma et al.
showed in [17] that a 192-dim deep fingerprint embedding
could reach near parity with COTS matchers in terms of
authentication and search accuracy on NIST SD4 [18] and
NIST SD14 [19] datasets while matching at 3 or 4 orders
of magnitude faster speed (300 milliseconds search time vs.
27,000 milliseconds search time on a gallery of 1.1 million
fingerprints). On resource-constrained devices, and for civil
ID and law enforcement databases with hundreds of millions
of images, these improvements in search time
are invaluable.
Given the complementary strengths of the minutiae
template (human interpretability, statistical understanding,
and interoperability) and of deep fingerprint templates
(accuracy and speed), a natural idea is to learn a fingerprint embedding
which somehow distills knowledge of fingerprint minutiae
into the parameters of the deep network. Rather than com-
pletely discarding the minutiae template, we lean on this
domain knowledge to learn more discriminative and gener-
alizable deep fingerprint embeddings. Indeed, the models
in [17, 13, 15] all aim to do just that using Convolutional
Neural Networks (CNN) in combination with distillation of
minutiae domain knowledge.
Inspired by the merging of minutiae domain knowledge
into deep CNNs, in this work, we explore the first ever use
of a Vision Transformer (ViT [20]) to learn a fixed-length
embedding from a fingerprint image. Similar to prior work
(which utilized CNN building blocks), we build on top of
the vanilla ViT with a strategy for incorporating minutiae
domain knowledge into the network’s parameters. The use
of the multi-headed self-attention blocks (MHSA [21]) built
into ViT for the extraction of fingerprint embeddings is
²We use the terms template, representation, and embedding
throughout to denote a set of features extracted from a fingerprint image.