DEPARTMENT OF PHYSICS
MAY 2021
Future of computing at the Large Hadron Collider
Author:
Dhananjay Saikumar
Academic Supervisors:
Dr Daniel O'Hanlon &
Prof Jonas Rademacker
FOR SUBMISSION IN 2021
arXiv:2210.13213v1 [hep-ph] 27 Sep 2022
Abstract
High energy physics (HEP) experiments at the LHC generate data at a rate of O(10) Terabits per second. This data rate
is expected to increase exponentially as experiments are upgraded to achieve higher collision energies. The
increasing size of particle physics datasets, combined with plateauing single-core CPU performance, is expected to create
a four-fold shortage in computing power by 2030. This makes it necessary to investigate alternate computing architectures to
cope with the next generation of HEP experiments. This study provides an overview of different computing techniques used
in the LHCb experiment (trigger, track reconstruction, vertex reconstruction, particle identification). Furthermore, this research
led to the creation of three event reconstruction algorithms for the LHCb experiment. These algorithms are benchmarked on
various computing architectures: the CPU, the GPU, and a new type of processor called the IPU, each containing roughly
O(10), O(1000), and O(1000) cores respectively. This research indicates that multi-core architectures such as GPUs and IPUs
are better suited for computationally intensive tasks within HEP experiments.
Contents
1. Introduction
2. Physics at the LHCb
2.1. CP violation
2.2. The Cabibbo-Kobayashi-Maskawa matrix
2.3. BSM signatures
2.4. Schematic of the LHCb detector
3. Computing architectures
3.1. Central Processing Unit
3.2. Graphics Processing Unit
3.3. Intelligence Processing Unit
3.4. Tensor Processing Unit
4. Trigger system
5. Event reconstruction
5.1. Track reconstruction
5.1.1. Vertex locator (Velo)
5.1.2. Upstream tracker (UT)
5.1.3. Sci-Fi tracker
5.1.4. Muon detector
5.1.5. Kálmán filter
5.2. Vertexing
5.3. Particle identification
5.3.1. RICH
5.3.2. Calorimeter
6. Algorithms and Results
6.1. The predictive combinatorial seeding algorithm
6.1.1. Design
6.1.2. Results
6.2. The artificial retina algorithm
6.2.1. Design
6.2.2. Results
6.3. Convolutional neural network based track reconstruction
6.3.1. Supervised Machine Learning basics
6.3.2. Neural Networks
6.3.3. Design
6.3.4. Results
6.4. The trackless vertex finder
6.4.1. Design
6.4.2. Results
7. Discussion
7.1. Computing performance and limitations
7.1.1. ML optimization
7.1.2. Parallelism
7.1.3. TensorFlow (frontend vs backend)
7.2. Other computing avenues in HEP
8. Summary and Conclusions
References
1 Introduction
The Large Hadron Collider (LHC), located in Geneva,
Switzerland, is the largest particle accelerator ever built to
date, designed and constructed by the European Organization
for Nuclear Research (CERN). The LHC is made up of a 27-
kilometre underground tunnel filled with strong magnets that
guide and boost beams of protons up to velocities just 3.1 m/s
slower than the speed of light [1]. Proton beams travelling
in opposite directions are designed to collide in four distinct
regions corresponding to the four major particle physics
experiments at the LHC (see figure 1):
A Toroidal LHC Apparatus (ATLAS): The largest general-purpose
experiment at the LHC, designed to investigate a wide
spectrum of physics, ranging from the Higgs boson and CP
violation1 to BSM2 physics, dark matter, and more [2].
Compact Muon Solenoid (CMS): Another general-purpose
experiment at the LHC, which works in conjunction with
the ATLAS experiment (e.g. the joint discovery of the
Higgs boson in 2012). It shares the scientific goals of
ATLAS; however, the CMS detector follows a completely
different design approach [3].
Large Hadron Collider beauty (LHCb): LHCb is a specialised
experiment designed to study the effects of CP violation
in beauty-hadron systems (shedding light on the matter-antimatter
asymmetry observed in the universe), measuring
forward-backward asymmetry in FCNC3 decays,
searching for BSM phenomena, and more [4].
A Large Ion Collider Experiment (ALICE): ALICE is designed
to study heavy-ion physics. Collisions of Pb-Pb nuclei
allow ALICE to investigate the fifth state of matter: the
quark-gluon plasma (QGP). In this state, quarks and gluons
are deconfined, resulting in conditions similar to those a
fraction of a second after the Big Bang [5].
Figure 1: The top of the figure represents the LHC and the positions
of the four major experiments. The bottom of the figure depicts the
smaller accelerators which inject hadrons into the 27 km LHC ring.
Figure taken from Ref. [6].
1 Charge conjugation parity symmetry
2 Physics beyond the Standard Model
3 Flavour-changing neutral currents
The energy of the colliding particle beams is transformed
into matter via Einstein's mass–energy relation, eq (1), in the
COM4 frame, creating a wide shower of particles.

E = mc^2    (1)
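As a quick worked illustration of the scale eq (1) implies (an example added here, not taken from the report), the proton's rest energy and the Lorentz factor of a Run 2 beam proton can be computed directly; the constants are CODATA values, and 6.5 TeV is the per-beam energy corresponding to the 13 TeV Run 2 collision energy mentioned below:

```python
# Energy equivalent of a proton's rest mass via E = m c^2.
M_PROTON = 1.672621924e-27   # proton rest mass, kg (CODATA)
C = 299_792_458.0            # speed of light, m/s (exact)
EV = 1.602176634e-19         # joules per electronvolt (exact)

rest_energy_j = M_PROTON * C ** 2
rest_energy_gev = rest_energy_j / EV / 1e9
print(f"proton rest energy ~ {rest_energy_gev:.3f} GeV")  # ~ 0.938 GeV

# A 6.5 TeV beam proton carries ~6900 times its rest energy, so almost
# all of the collision energy is available to create new particles.
gamma = 6.5e12 * EV / rest_energy_j
print(f"Lorentz factor ~ {gamma:.0f}")
```

The enormous Lorentz factor is why each collision produces the "wide shower of particles" described above rather than just the original protons.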
Each collision is referred to as an event. In every event, the
position (hits) and momentum of newly created particles are
measured by the detectors which surround the interaction re-
gion. The raw data from the detector is used to reconstruct the
collision events in a process known as event reconstruction,
which involves identifying the particles, their energy, momen-
tum, trajectory, and their point of origin, hence measuring the
process that took place at the collision [7]. The first run of
the LHC (2009) achieved 7 TeV of collision energy, subsequently
increased to 13 TeV during Run 2 (2015). The LHC
was shut down in 2018 for additional upgrades and is scheduled
to resume operations in 2022 (Run 3) with an expected
collision energy of 14 TeV [8].
The amount of data generated in each event is expected to
increase as HEP5 experiments are upgraded, since more matter
is created at higher energies. The four major experiments
at the LHC during Run 3 are expected to generate data at a
rate of O(10) Terabit/s [9]. Storing all of this data in real
time for post-analysis is logistically infeasible, which is why
experiments at the LHC use a trigger system. By partially
reconstructing the events in real time, a trigger system selects
and stores interesting datasets for detailed post-analysis. The
LHC is expected to generate 40 million proton-proton (pp)
bunch collisions per second (40 MHz). Hardware-level triggers
in ATLAS and CMS aim to reduce this throughput rate
down to O(1) kHz [7]. In contrast, the upgraded LHCb
experiment will use a software-based trigger running entirely on
GPUs operating at the full bunch collision rate [10].
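The selection idea behind a trigger can be sketched in a few lines: cheaply reconstruct a summary quantity per event, then keep only events passing a cut. The event fields, the transverse-momentum threshold, and the toy distribution below are all invented for illustration; they are not LHCb's actual trigger criteria.

```python
# Toy sketch of a software trigger: partially reconstruct each event
# and retain only those passing a selection cut.
import random

random.seed(42)

def partial_reconstruction(event):
    """Cheap partial reconstruction: leading transverse momentum (pT)."""
    return max(event["track_pt"], default=0.0)

def trigger(events, pt_threshold_gev=2.0):
    """Keep events whose leading track exceeds the pT threshold."""
    return [e for e in events if partial_reconstruction(e) > pt_threshold_gev]

# Simulated stream: tracks drawn from a falling (exponential) pT
# spectrum, so most events are soft and get discarded.
events = [{"track_pt": [random.expovariate(1.0) for _ in range(10)]}
          for _ in range(10_000)]
kept = trigger(events)
print(f"kept {len(kept)} of {len(events)} events")
```

A real trigger differs in scale and sophistication (full track and vertex reconstruction at 40 MHz), but the structure, partial reconstruction followed by a selection, is the same.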
Figure 2: A pie chart of projected CPU resources (by computa-
tional task) required by ATLAS in 2030.
In addition to the immense computing resources required
by the trigger, HEP experiments also ubiquitously use Monte
Carlo (MC) simulations, mainly for modelling the collision
4Center of momentum
5High Energy Physics
events, detector response, detector calibration, and more (see
figure 2 for CPU resource allocation by task). A steep rise in
computing resources is needed to keep up with the exponentially
increasing size of particle physics datasets. Future experiments
would not only face an engineering challenge but
a computational one as well, since these experiments would
be limited by how efficiently their computing resources can be
used. A four-fold shortage in computing power (figure 3) is
forecast by 2030 [11]. This is because, although the transistor
density of a CPU6 has been increasing over time in line with
Moore's law (albeit at a decelerating pace), single-core CPU
performance has plateaued since the mid-2000s due to
constraints on power density [12].
Figure 3: Estimated CPU resources needed (blue points) by the
ATLAS experiment from 2018 to 2028. The solid line represents
the resources expected to be available under a flat funding scenario,
assuming a 20% yearly gain in capacity per unit cost. Other experiments
at the LHC follow a similar resource-forecast trend. Figure taken from Ref. [11].
Due to these performance constraints, particle physics
experiments such as ATLAS, COMET, ALICE, CMS, and
LHCb [13–15] are investigating and implementing alternate
computing techniques and multi-core architectures (e.g. the
software-based GPU trigger at LHCb) to cope with the ever-increasing
computing demands of HEP experiments. These include
heterogeneous systems where computing architectures
such as GPUs7 and FPGAs8 work in conjunction with the
CPUs. A modern CPU contains O(10) powerful cores built
on the MIMD9 design, whereas a GPU contains O(1000)
simpler cores built on the SIMD10 design, making the GPU
ideal for running multiple identical computations in parallel.
FPGAs, on the other hand, are integrated circuits that are not
hard-etched, unlike CPUs and GPUs; this allows FPGAs to be
reprogrammed for specific computations, often resulting in far
superior performance compared to their traditional counterparts.
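The SIMD advantage can be felt even on a CPU. NumPy's vectorized operations apply one instruction across a whole array, which is a loose analogy for how a GPU's many simple cores run identical computations over different data. The hit-radius computation below is a made-up toy task, not code from the experiments discussed here.

```python
# SIMD-style data parallelism vs scalar iteration: same computation
# (distance of each hit from the beamline), two execution styles.
import time
import numpy as np

hits = np.random.default_rng(0).random((1_000_000, 2))  # toy (x, y) hit positions

# Scalar, MIMD-style: one Python-level iteration per element.
t0 = time.perf_counter()
radii_scalar = [float((x * x + y * y) ** 0.5) for x, y in hits]
t_scalar = time.perf_counter() - t0

# Vectorized, SIMD-style: one operation over the entire array at once.
t0 = time.perf_counter()
radii_vector = np.hypot(hits[:, 0], hits[:, 1])
t_vector = time.perf_counter() - t0

assert np.allclose(radii_scalar, radii_vector)  # identical results
print(f"scalar {t_scalar:.3f}s vs vectorized {t_vector:.3f}s")
```

The vectorized form is typically one to two orders of magnitude faster here, which hints at why embarrassingly parallel reconstruction tasks map well onto GPU-class hardware.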
6 Central Processing Unit
7 Graphics Processing Unit
8 Field-programmable gate arrays
9 Multiple instruction, multiple data
10 Single instruction, multiple data
The urgency of coping with the ever-increasing need for
computing resources in HEP experiments plays a central role
in this research project, which aims to:
– Give an overview of computing techniques used in HEP
experiments (with a focus on the LHCb experiment).
– Develop HEP algorithms, using traditional and machine-learning-based
techniques.
– Implement and benchmark these algorithms on multiple
computing architectures such as CPUs, GPUs, and a new
kind of processor: the IPU11.
The report is organised as follows: section 2 gives an
overview of the physics at the LHCb. Section 3 describes
the computing architectures used in this research project, followed
by a description of the trigger system in section 4.
Section 5 outlines the different stages of event reconstruction,
followed by section 6, which describes the algorithms developed
in this study and their performance. Finally, section 7 discusses
the implications of these results, their limitations, and further
work that can be carried out, and section 8 concludes the study.
2 Physics at the LHCb
The LHCb experiment is designed to investigate decay
channels and oscillations of beauty and charm hadron systems
with a particular focus on CP violating phenomena, as well as
searches for anomalies in rare decays which indicate physics
beyond the standard model.
2.1 CP violation
The standard model of particle physics (SM) encapsulates
our current best understanding of three of the four fundamental
forces in the universe (the weak, strong, and electromagnetic
forces, excluding gravity). The SM classifies elementary particles
according to their charges and describes their fundamental
interactions. In particle physics, charge conjugation parity
symmetry states that the laws of physics must remain invariant
when a system's spatial coordinates are flipped (parity inversion),
eq (2), and every particle is replaced by its antiparticle
(charge conjugation), eq (3). This symmetry was first observed
to be broken in 1964 in the decays of neutral kaons [16].
\hat{P}: (\hat{x}, \hat{y}, \hat{z}) \to (-\hat{x}, -\hat{y}, -\hat{z})    (2)

\hat{C}\,|\Psi\rangle_{e^-} = |\Psi\rangle_{e^+}    (3)
The three generations of quarks in the SM naturally gener-
ate CP violating phenomena in both weak and strong inter-
actions. CP violation in weak interactions is described via
11 Intelligence Processing Unit