The Software Stack That Won the
Formula Student Driverless Competition
Andres Alvarez1, Nico Denner2, Zhe Feng2, David Fischer1, Yang Gao1,
Lukas Harsch2, Sebastian Herz1, Nick Le Large1, Bach Nguyen1, Carlos Rosero1,
Simon Schaefer1, Alexander Terletskiy1, Luca Wahl1, Shaoxiang Wang1, Jonona Yakupova1, Haocen Yu2
Abstract— This report describes our approach to design and evaluate a software stack for a race car capable of achieving competitive driving performance in the different disciplines of the Formula Student Driverless. By using a 360° LiDAR and optionally three cameras, we reliably recognize the plastic cones that mark the track boundaries at distances of around 35 m, enabling us to drive at the physical limits of the car. Using a GraphSLAM algorithm, we are able to map these cones with a root-mean-square error of less than 15 cm while driving at speeds of over 70 km/h on a narrow track. The high-precision map is used in the trajectory planning to detect the lane boundaries using Delaunay triangulation and a parametric cubic spline. We calculate an optimized trajectory using a minimum curvature approach together with a GGS-diagram that takes the aerodynamics at different velocities into account. To track the target path with accelerations of up to 1.6 g, the control system is split into a PI controller for longitudinal control and a model predictive controller for lateral control. Additionally, a low-level optimal control allocation is used. The software is realized in ROS C++ and tested in a custom simulation, as well as on the actual race track.
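As a rough illustration of the lane-boundary detection summarized above, the following Python sketch builds a centerline from mapped cone positions via Delaunay triangulation and a parametric cubic spline. It is only a sketch under stated assumptions: the stack itself is realized in ROS C++, and the function name, the use of SciPy, and the simple midpoint ordering are our illustrative choices, not the team's implementation.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import CubicSpline


def centerline_from_cones(blue, yellow):
    """Estimate a track centerline from mapped cone positions.

    blue, yellow: (N, 2) arrays of left/right boundary cones.
    Returns a parametric cubic spline s -> (x, y), with s in [0, 1].
    (Illustrative sketch, not the team's actual planner.)
    """
    cones = np.vstack([blue, yellow])
    is_blue = np.arange(len(cones)) < len(blue)
    tri = Delaunay(cones)

    # Keep triangulation edges that connect a blue cone to a yellow
    # cone; their midpoints lie between the two track boundaries.
    midpoints = set()
    for simplex in tri.simplices:
        for i, j in ((0, 1), (1, 2), (0, 2)):
            a, b = simplex[i], simplex[j]
            if is_blue[a] != is_blue[b]:
                midpoints.add(tuple((cones[a] + cones[b]) / 2.0))

    # Order midpoints along the track (here simply by x, which works
    # for a straight segment; a closed loop would need a
    # nearest-neighbour ordering instead).
    pts = np.array(sorted(midpoints))
    s = np.linspace(0.0, 1.0, len(pts))
    return CubicSpline(s, pts)  # vector-valued spline (x(s), y(s))
```

On a straight test segment with cones at y = ±1.5 m, every blue–yellow edge midpoint lies on y = 0, so the fitted spline reproduces the centerline exactly.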
I. INTRODUCTION
In the Formula Student competitions, based on extensive
rules and guidelines similar to Formula SAE, student teams
throughout the world design and manufacture an open-wheel,
single-seater race car. Originally consisting of only combustion vehicles, the competition has since been extended with an electric category and, starting in 2017, with an autonomous category (Formula Student Driverless) as well. Points are awarded for various aspects, the most substantial of which are the quality of the engineering design as well as the on-track performance. One of the most technically challenging
disciplines, Autocross, consists of an unknown, closed-loop
and narrow track of around 200 m to 300 m length outlined
by yellow and blue plastic cones, which must be completed
as quickly as possible without hitting any of the cones.
While on track, any interaction with, or remote control of
the vehicle, is forbidden.
Founded in 2006 by students of the Karlsruhe Institute of Technology, the team KA-RaceIng developed their 5th autonomous car for the 2021 competition. The KIT21d is shown in Figure 1. It features a carbon fiber-reinforced polymer (CFRP) chassis that is equipped with four electric motors with a maximum power of 80 kW in total, a 470 V battery with a capacity of 5.2 kWh, and weighs 214 kg.

Fig. 1. The KIT21d driving at Formula Student Germany 2021. Photo credit: FSG Partenfelder.

1,2 Karlsruhe Institute of Technology and KA-RaceIng e.V., firstname.lastname@ka-raceing.de
Corresponding author: simon.schaefer@ka-raceing.de
II. DESIGN GOALS
Having finished 2nd overall three years in a row at Formula Student Germany between 2017 and 2019, we set a 1st place overall at all events as our main goal for 2020/2021. In the Autonomous System, we focused on two points to achieve this goal.
Increased robustness in localization and path-planning
The analysis of data collected during the test days and events
showed that our car was regularly on the verge of taking a
wrong turn. The planned trajectory was corrected only at the last second, meaning we drove at the absolute limit.
To drive any faster without making trade-offs in safety, we
needed a correct trajectory much further ahead. To achieve
this, improvements were needed in the first three modules of
the autonomous pipeline:
1) Perception: In 2019, cones were first detected at a
distance of approximately 30 m, with the median lying
at around 20 m. We set the goal to increase both figures
by at least 10 m, while maintaining a false-positive rate
near zero.
2) SLAM: To complete the 40 m perception range goal, SLAM needed to be able to handle the increased number of landmarks by utilizing a parallelized architecture.

arXiv:2210.10933v1 [cs.RO] 20 Oct 2022

Fig. 2. System Overview
3) Planning: The generation of a correct path depends on
interpreting the mapped landmarks correctly. Our goal
this year was to evaluate new algorithms and compare
them to last year’s method in terms of accuracy in
difficult situations and computation time.
Increasing average speed
1) On straights: To increase acceleration, we set the goal
of implementing a traction control system.
2) In corners: To use as much of the track width as possible, the precision of the pose estimation and path tracking had to be increased. Additionally, torque vectoring and active yaw rate control were required to ensure stability in highly dynamic situations.
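The longitudinal side of the control split mentioned in the abstract (a PI controller for longitudinal control) can be illustrated with a minimal discrete PI loop. This is a sketch only: the gains, time step, and output clamping are placeholder assumptions, not the team's tuned controller, and the real traction control operates on the low-level actuation.

```python
class PIController:
    """Minimal discrete PI speed controller (illustrative sketch;
    gains and limits are placeholders, not the team's values)."""

    def __init__(self, kp, ki, dt, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def step(self, target_speed, current_speed):
        # Proportional term on the speed error, integral term on its
        # accumulated sum; output is clamped to the actuator range
        # (anti-windup is omitted for brevity).
        error = target_speed - current_speed
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        return max(self.out_min, min(self.out_max, u))
```

Called once per control cycle with the target and measured speed, this returns a normalized throttle/brake command in [-1, 1].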
III. SYSTEM OVERVIEW
The autonomous system software runs centrally on a multi-core x86 processing unit (Autonomous Computing Unit, ACU), which provides the necessary computational power to run our autonomous system in real time. If cameras are used, this x86 CPU is complemented by the Coral Edge TPU machine learning co-processor used for running an image classification neural network. The sensors shown in Figure 2 are connected directly to the ACU via USB3, Ethernet or CAN. All actuation values are sent via CAN directly to the Electronic Control Unit (ECU), which manages the electrical system of the car and continuously performs safety checks on the complete system. Figure 2 provides a high-level overview of the communication in our autonomous system. The system is implemented using the Robot Operating System (ROS) framework in the Melodic Morenia release. Most components are implemented in C++, except for some smaller modules realized in Python. The central processing pipeline starts with perception. The cones detected by the perception system are processed by the SLAM algorithm, which localizes the vehicle and builds a map of its surroundings. On this map, the target trajectory is planned and then realized by the motion control system. This whole process is constantly monitored by the supervisor node, which performs health and sanity checks on the other nodes to ensure a safe drive. Additionally, our simulation is capable of testing all the path planning and control parts of the pipeline outside of the car, aiding us in fine-tuning the system and reducing necessary test time.
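The pipeline order described above can be sketched as one sequential loop. In the real stack each stage runs as a separate ROS node with the supervisor monitoring them concurrently, so the class below is only a structural illustration with hypothetical method names and dummy payloads.

```python
from dataclasses import dataclass, field


@dataclass
class Pipeline:
    """Structural sketch of the processing chain: perception -> SLAM
    -> planning -> control, with a supervisor-style sanity check."""

    log: list = field(default_factory=list)

    def perception(self, scan):
        # Detect cones from raw sensor data (dummy payload here).
        self.log.append("perception")
        return [("cone", x) for x in scan]

    def slam(self, cones):
        # Localize the vehicle and update the landmark map.
        self.log.append("slam")
        return {"pose": (0.0, 0.0), "map": cones}

    def planning(self, state):
        # Plan a target trajectory on the current map.
        self.log.append("planning")
        return ["waypoint-%d" % i for i in range(len(state["map"]))]

    def control(self, trajectory):
        # Turn the trajectory into actuation commands.
        self.log.append("control")
        return {"throttle": 0.2, "steering": 0.0}

    def supervisor_ok(self):
        # Sanity check: every stage ran exactly once, in order.
        return self.log == ["perception", "slam", "planning", "control"]

    def step(self, scan):
        return self.control(self.planning(self.slam(self.perception(scan))))
```

One call to `step` corresponds to one pass of sensor data through the whole chain; the supervisor check then confirms all stages executed in order.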
IV. PERCEPTION
The perception system is responsible for recognizing the
position and color of the cones that define the race track. The
pipeline takes advantage of the precise location information provided by the LiDAR. Additionally, rich semantic information provided by the cameras can be included if necessary.
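A minimal sketch of how the LiDAR's precise positions and the cameras' semantic color labels could be combined: associate each LiDAR cone with the nearest camera detection within a gating distance. The function name, the `max_dist` threshold, and the greedy nearest-neighbour association are illustrative assumptions, not the team's actual fusion logic.

```python
import numpy as np


def fuse_color(lidar_cones, camera_dets, max_dist=0.5):
    """Attach camera color labels to precise LiDAR cone positions.

    lidar_cones: (N, 2) cone positions from the LiDAR pipeline.
    camera_dets: list of ((x, y), color) tuples from the cameras,
                 with coarser positions but semantic color labels.
    Returns a list of ((x, y), color), where color is 'unknown' if no
    camera detection lies within max_dist of the LiDAR cone.
    """
    fused = []
    for pos in np.asarray(lidar_cones, dtype=float):
        color, best = "unknown", max_dist
        for cam_pos, cam_color in camera_dets:
            # Euclidean distance between LiDAR and camera detections.
            d = float(np.hypot(*(pos - np.asarray(cam_pos, dtype=float))))
            if d <= best:
                color, best = cam_color, d
        fused.append(((pos[0], pos[1]), color))
    return fused
```

This keeps the LiDAR's accurate position for every cone and degrades gracefully to an unlabeled cone when the cameras are unavailable, matching the optional role of the cameras described above.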
A. LiDAR System
The KIT21d uses one Hesai Pandar40P, a mechanically rotating LiDAR operating at 10 Hz, to acquire precise position