Charged Particle Tracking in Real-Time Using a
Full-Mesh Data Delivery Architecture and Associative
Memory Techniques
Sudha Ajuhaa, Ailton Akira Shinodaa, Lucas Arruda Ramalhoa, Guillaume Baulieub,
Gaelle Boudoulb, Massimo Casarsac, Andre Cascadana, Emyr Clementd, Thiago Costa
de Paivaa, Souvik Dase, Suchandra Duttaf, Ricardo Eusebig, Giacomo Fedih, Vitor
Finotti Ferreiraa, Kristian Hahni, Zhen Huj, Sergo Jindarianij*, Jacobo Konigsberge,
Tiehui Liuj, Jia Fu Lowe, Emily MacDonaldk, Jamieson Olsenj, Fabrizio Pallah, Nicola
Pozzobonl, Denis Rathjensg, Luciano Ristorij, Roberto Rossinl, Kevin Sungi, Nhan
Tranj, Marco Trovatoi, Keith Ulmerk, Mario Vaza, Sebastien Viretb, Jin-Yuan Wuj,
Zijun Xum, and Silvia Zorzettii
aUNESP - Sao Paulo State University, Sao Paulo, Brazil
bInstitut de Physique Nucleaire de Lyon (IPNL), Lyon, France
cINFN Sezione di Trieste, Trieste, Italy
dUniversity of Bristol, Bristol, United Kingdom
eUniversity of Florida, Gainesville, Florida, USA
fSaha Institute of Nuclear Physics, HBNI, Kolkata, India
gTexas A&M University, College Station, Texas, USA
hINFN Sezione di Pisa, Pisa, Italy
iNorthwestern University, Evanston, Illinois, USA
jFermi National Accelerator Laboratory, Batavia, Illinois, USA
kUniversity of Colorado Boulder, Boulder, Colorado, USA
lINFN Sezione di Padova, Università di Padova, Padova, Italy
mPeking University, Beijing, China
October 7, 2022
Abstract
We present a flexible and scalable approach to address the challenges of charged
particle track reconstruction in real-time event filters (Level-1 triggers) in collider
physics experiments. The method described here is based on a full-mesh architecture
for data distribution and relies on the Associative Memory approach to implement
a pattern recognition algorithm that quickly identifies and organizes hits associated
with the trajectories of particles originating from particle collisions. We describe a
successful implementation of a demonstration system composed of several innovative
hardware and algorithmic elements. The implementation of a full-size system relies
on the assumption that an Associative Memory device with sufficient pattern density
becomes available in the future, either as a dedicated ASIC or a modern
FPGA. We demonstrate excellent performance in terms of track reconstruction
efficiency, purity, momentum resolution, and processing time measured with data
from a simulated LHC-like tracking detector.
*Corresponding author: sergo@fnal.gov
arXiv:2210.02489v1 [hep-ex] 5 Oct 2022
Contents
1 Introduction
2 Overview of the Approach
2.1 Data Processing Stages
2.2 Assumptions
3 Data Delivery
3.1 Trigger Towers
3.2 Full-Mesh Architecture
4 Pattern Recognition and Track Fitting
4.1 Superstrip Definition
4.2 Generation of the Pattern Bank
4.3 Pattern Bank Optimization
4.4 Track Fitting
4.5 Duplicate Removal
5 Demonstration System
6 Results
6.1 System Latency
6.2 Tracking Performance
7 Additional Studies
7.1 AM + Hough Transformation
7.2 Using Local Bend Information in AM
8 Summary
9 Acknowledgements
A Demonstration Hardware
1 Introduction
Most high-energy physics experiments, such as those carried out at hadron colliders,
are exposed to ever-increasing luminosity. Due to the large size, complexity, and high
granularity of their various detector components, the amount of data acquired by these
experiments can be staggering. As a result, recording and processing every collision event
is practically impossible. It is also not necessary since most of the events are not of
interest to the physics program. Collider experiments have dealt with this difficulty by
implementing real-time event selection methods, called triggers, that reduce the amount
of data to levels manageable for storage and offline processing while recording most of the
events relevant for physics analyses. These triggers are critical to the success of the physics
program of the experiments, as they allow selecting only events with which important
measurements or discoveries can be made. To this end, the real-time reconstruction
of all physics objects in each event has to be carried out as fast and as accurately as
possible. Triggers are typically designed so that decisions are made in sequential steps.
Each successive step, or level, handles a significantly reduced event rate with respect to
the previous one and has access to more detailed information.
For example, the trigger systems of the current Large Hadron Collider (LHC) experi-
ments will continue to face increasingly complex requirements. In order to vastly expand
the physics reach of the ATLAS and CMS experiments, the LHC and the experimental
collaborations are pursuing an immense upgrade program with an expected completion
date of 2029. This will usher in the so-called HL-LHC (High-Luminosity LHC) era, during
which the luminosity will increase by a factor of five relative to the Run 2 instantaneous
luminosity, resulting in an average number of proton-proton collisions per bunch crossing,
referred to as pileup, of approximately 200. To contend with such a serious increase in
proton-proton collision rates, and the correspondingly high detector occupancy, several
components of the ATLAS and CMS experiments will have to be upgraded. In particular,
the trigger systems need to undergo a significant redesign [1–3]. Looking even further into
the future, the anticipated detector complexity and beam background conditions at the
high-energy colliders proposed beyond the LHC (FCC-hh, SppC, muon collider) will be
orders of magnitude higher than those at the HL-LHC, further exacerbating the problem
of real-time reconstruction.
The ability to reconstruct trajectories of charged particles in the trigger system and
to measure their transverse momenta (pT) with high precision can be a powerful asset as
this significantly improves the pT measurement of leptons and jets, which are otherwise
measured much less accurately. This in turn means that the set of collision events with
such objects required to pass a certain pT threshold in the trigger will be selected more
accurately, providing control over the trigger selection rate. An alternative to this would
be to require higher pT thresholds, which would lead to a reduction of events relevant
to the experiment’s physics program. Therefore, the implementation of a track trigger
system at HL-LHC and future collider experiments is important to the preservation, and
even the expansion, of programs that include extensive explorations at the electroweak
scale and the study of new processes with potential for discoveries.
The implementation of a Level-1 (L1) track trigger system is extremely challenging
due to the high rate of collision events, the amount of data produced in each event,
and the requirement of a processing time of only a few microseconds. The expected data rates from
the detector to the trigger electronics at the HL-LHC experiments are of the order of
100 Terabits per second (Tb/s) [1]. Prior to the start of track reconstruction, the data
need to be organized and distributed so that all energy deposits from charged particles,
referred to as hits, from a particular region of the tracker and from the same bunch
crossing are present on the same electronics processor board at the same time. To solve
this complex problem, a massively parallel data distribution and processing architecture
is needed. Such an architecture has to have the capability to simultaneously handle events
originating from different bunch crossings (time multiplexing) as well as different physical
detector regions within the same bunch crossing (regional multiplexing). It also has to
be flexible and scalable in order to accommodate possible future changes, upgrades, or
unforeseen operating conditions. The challenge is to reconstruct all tracks above a low pT
threshold (of about 2 GeV) with high efficiency, purity, and momentum resolution that
are as close as possible to the quality of the offline reconstruction.
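As a concrete illustration of the multiplexing scheme described above, the following sketch (our own, with an assumed tower grid and time-slice count, not the authors' firmware) shows the sorting task the data delivery stage must solve: every hit, tagged with its bunch crossing (BX) and detector coordinates, must end up in the buffer of the one processor responsible for that detector region and that time slice.

```python
import math
from collections import defaultdict

# Hypothetical parameters for illustration only; Section 3.1 defines the
# actual trigger-tower segmentation used in the demonstration.
N_TOWERS_ETA, N_TOWERS_PHI = 6, 8
N_TIME_SLICES = 4

def tower_index(eta, phi, eta_min=-2.4, eta_max=2.4):
    """Regional multiplexing: map a hit's (eta, phi) to a trigger-tower index."""
    ieta = min(max(int((eta - eta_min) / (eta_max - eta_min) * N_TOWERS_ETA), 0),
               N_TOWERS_ETA - 1)
    iphi = int(phi % (2.0 * math.pi) / (2.0 * math.pi) * N_TOWERS_PHI) % N_TOWERS_PHI
    return ieta * N_TOWERS_PHI + iphi

def deliver(hits):
    """Time plus regional multiplexing: group hits so that each
    (time slice, tower) buffer receives all hits from one detector region,
    one bunch crossing at a time. hits: iterable of (bx, eta, phi, payload)."""
    buffers = defaultdict(list)
    for bx, eta, phi, payload in hits:
        buffers[(bx % N_TIME_SLICES, tower_index(eta, phi))].append((bx, payload))
    return buffers
```

In a real system, hits close to a tower boundary would also have to be copied to the neighboring tower so that tracks crossing the boundary are not lost; the full-mesh architecture of Section 3 provides the all-to-all connectivity that makes this routing pattern practical at the required bandwidth.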
In what follows, we describe a platform to demonstrate track reconstruction at the
L1 trigger for experiments running in an environment similar to what is expected at
the HL-LHC. Section 2 provides a general description of the approach and the specific
assumptions made for the development of the presented solution. Sections 3 and 4 provide
an overview of the full-mesh data delivery scheme and the Associative Memory (AM)+
FPGA approach, and include the studies and novel algorithmic concepts tried in order to
optimize the performance of the system. Section 5 describes the hardware and firmware
used for the demonstration setup, with its results presented in Section 6. Additional
simulation studies to further improve the performance of such a system are presented in
Section 7, and the paper concludes with a summary in Section 8.
The approach presented in this paper is one of three approaches described in the
Technical Design Report of the Phase-2 upgrade of the CMS tracker [1]. However, the
focus of this paper is more on the conceptual features of the approach and its scalability,
which can serve as guiding principles for addressing L1 tracking needs of future high-energy
collider experiments. The studies presented here were performed with a demonstration
setup at Fermilab.
2 Overview of the Approach
2.1 Data Processing Stages
The approach described in this paper is directly applicable to a generic tracking detector
with silicon sensors; it could, however, also be applied to other pattern recognition
problems in which execution speed is paramount.
Our approach utilizes a full-mesh architecture for efficient data distribution and relies
on AM, implemented in fast electronics chips [4], to handle pattern recognition and the
rapid growth of pattern complexity with hit occupancy. In this paper, we refer to this
approach as AM+FPGA; however, it should be noted that the full-mesh architecture is
generally compatible with other (non-AM) pattern recognition schemes.
An approach based on AM was implemented successfully in the CDF experiment [5]
during Run 2 of the Tevatron in a real-time trigger that selected tracks originating from
secondary vertices produced by the decay of bottom and charm hadrons [6, 7]. The
challenge is that real-time track reconstruction at the HL-LHC is much more complex
than what the CDF experiment had to contend with: not only is the collision rate much
higher, but also the number of electronic channels is orders of magnitude larger.
The AM+FPGA approach can be described as a sequence of three processing stages:
data delivery, pattern recognition, and track fitting. In the data delivery stage, the data
from the tracker back-end electronics are formatted and distributed so that all hits that
originate in a certain geometric region of the detector and belong to the same bunch
crossing are brought simultaneously to a Data Organizer (DO) unit for common processing.
The pattern recognition stage involves selecting those hits that were potentially created by
the same charged particle. To do that, each layer of the tracking detector is subdivided
into coarse groupings of silicon strips (superstrips) which are then linked across layers
into patterns that correspond to probable trajectories of charged particles through the
detector. The set of the most likely patterns produced by charged tracks is obtained from
simulation and loaded into the Associative Memory, implemented in an ASIC or FPGA.
The AM provides a very fast way to simultaneously recognize all those patterns that were
produced by the tracks created in the collisions and reject all hits that do not belong to
any of those patterns. Each pattern recognized by the Associative Memory is grouped
together with all the hits that fall within that pattern to form a road. The roads are sent
to the track fitting stage, where the hits are used to extract track parameters from the
coordinates of the stubs [7].

Figure 1: One quadrant of the CMS tracker geometry used for the demonstration. Only
the Outer Tracker modules, shown in red and blue, provide data for L1 track
reconstruction.
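Before turning to track fitting, the following compact software model (ours, with an assumed superstrip width and layer count; the AM hardware operates very differently) may help make the road-finding step concrete: hits are reduced to superstrip IDs, and any bank pattern whose superstrips are sufficiently populated is promoted to a road carrying its hits.

```python
from collections import defaultdict

# Illustrative model of the pattern recognition step, not the AM chip logic.
# A pattern is one superstrip per layer; the bank of likely patterns is
# assumed to come from simulation, as described in Section 4.2.
N_LAYERS = 6
SUPERSTRIP_WIDTH = 32  # assumed number of strips grouped into one superstrip

def superstrip(strip):
    """Coarse superstrip ID of a hit within its layer."""
    return strip // SUPERSTRIP_WIDTH

def find_roads(hits, pattern_bank, min_layers=N_LAYERS):
    """Return roads: patterns whose superstrips are all (or, if min_layers is
    lowered, nearly all) hit, each grouped with the hits falling inside it.
    hits: (layer, strip, payload) triples; pattern_bank: N_LAYERS-tuples of
    superstrip IDs. The AM device compares every stored pattern in parallel
    as hits stream in; this loop reproduces the roads, not the speed."""
    ss_hits = defaultdict(list)
    for layer, strip, payload in hits:
        ss_hits[(layer, superstrip(strip))].append((layer, strip, payload))
    roads = []
    for pattern in pattern_bank:
        hit_groups = [ss_hits[(layer, ss)] for layer, ss in enumerate(pattern)]
        if sum(1 for g in hit_groups if g) >= min_layers:
            roads.append((pattern, [h for g in hit_groups for h in g]))
    return roads
```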
Track fitting is implemented in FPGAs [8] downstream from the pattern recognition
stage. The Combination Builder (CB) generates all possible combinations of hits within
each road. The hits from each combination are then input to a number of linearized χ2
track fitters (TF). The coefficients used in the linearized fits are pre-determined from a
principal component analysis (PCA) of simulated tracks. Mis-reconstructed tracks that
do not correspond to particles are referred to as fake tracks and are suppressed by
requiring a good fit quality. Some tracks may be reconstructed multiple times in this
procedure. Such duplicate tracks are removed by selecting the track with the best χ2
probability from sets of tracks that contain hits in common.
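The sketch below (ours; the matrix shapes and the pre-computed constants A, B, and x0 are assumptions standing in for the PCA-derived coefficients mentioned above) illustrates both the linearized fit and the duplicate removal that follows it.

```python
import numpy as np

def linear_fit(x, A, B, x0):
    """Linearized chi2 fit: track parameters and fit quality from the
    flattened hit-coordinate vector x. p = A (x - x0) estimates the helix
    parameters; r = B (x - x0) gives the residual components orthogonal to
    the track-parameter subspace, pre-scaled to unit variance."""
    d = x - x0
    params = A @ d
    residuals = B @ d
    chi2 = float(residuals @ residuals)
    return params, chi2

def remove_duplicates(tracks):
    """Among tracks sharing any hit, keep the one with the best fit quality.
    tracks: list of (chi2, hit_ids, params); lowest chi2 stands in here for
    the best-chi2-probability criterion described in the text (equivalent
    when all fits have the same number of degrees of freedom)."""
    kept, used_hits = [], set()
    for chi2, hit_ids, params in sorted(tracks, key=lambda t: t[0]):
        if used_hits.isdisjoint(hit_ids):
            kept.append((chi2, hit_ids, params))
            used_hits.update(hit_ids)
    return kept
```

Because the fit reduces to two small matrix-vector products with fixed coefficients, it maps naturally onto FPGA arithmetic resources, which is what makes the microsecond-scale latency budget reachable.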
2.2 Assumptions
In this section, we describe the detector configuration and provide key specifications for
the demonstration. While it is important to state them here, we stress that the presented
approach is largely independent of the specifics of the tracker geometry and of the data
formatting schemes in the detector front-end electronics.
We use a right-handed coordinate system, with the origin at the nominal collision
point, the x-axis pointing to the center of the collider ring, the y-axis pointing up (perpen-
dicular to the collider plane), and the z-axis along the counterclockwise beam direction.
The polar angle θ is measured from the positive z-axis and the azimuthal angle (φ) is
measured from the positive x-axis in the x-y plane. The radius (r) denotes the distance
from the z-axis and the pseudorapidity η is defined as η = −ln[tan(θ/2)].
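As a quick numerical check of this definition (our example, not from the paper):

```python
import math

def eta(theta):
    """Pseudorapidity from the polar angle: eta = -ln(tan(theta / 2))."""
    return -math.log(math.tan(theta / 2.0))

print(eta(math.pi / 2.0))                    # 0.0: perpendicular to the beam axis
print(eta(2.0 * math.atan(math.exp(-1.0))))  # 1.0: edge of the barrel region studied below
```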
For the purpose of the specific study presented here, we consider a geometry consist-
ing of six cylindrical barrel layers in the central region, with modules aligned along the
beam direction, and five endcap discs on each side of the barrel, with modules aligned
perpendicular to the beam direction. This geometry, shown in Fig. 1, was initially
proposed for the HL-LHC upgrade of the CMS tracker and later modified [1]. This
paper focuses primarily on track reconstruction in the barrel part of such a detector, cor-
responding to the pseudorapidity region |η| < 1.0. However, the techniques can be easily
extended into the endcap region.
The vast majority of hits in hadron collisions originate from particles with low trans-