Wheel Impact Test by Deep Learning: Prediction of Location and
Magnitude of Maximum Stress
Seungyeon Shin1,a, Ah-hyeon Jin1,a, Soyoung Yoo1,3, Sunghee Lee3,
ChangGon Kim2, Sungpil Heo2,
Namwoo Kang1,3,*
1Cho Chun Shik Graduate School of Mobility, KAIST, 34051, Daejeon, South Korea
2Hyundai Motor Company, 445706, Hwaseong-Si, Gyeonggi-Do, South Korea
3Narnia Labs, 34051, Daejeon, South Korea
*Corresponding author: nwkang@kaist.ac.kr
a Contributed equally to this work.
Abstract
To ensure vehicle safety, the impact performance of wheels must be verified through a wheel impact test during wheel development. However, manufacturing and testing a real wheel requires significant time and money because developing an optimal wheel design involves numerous iterations of modifying the design and verifying its safety performance. Accordingly, physical wheel impact tests have been replaced by computer simulations such as finite element analysis (FEA); however, FEA still incurs high computational costs for modeling and analysis and requires FEA experts. In this study,
we present an aluminum road wheel impact performance prediction model based on deep learning that
replaces computationally expensive and time-consuming 3D FEA. For this purpose, 2D disk-view
wheel image data, 3D wheel voxel data, and barrier mass values used for the wheel impact test were
utilized as inputs to predict the magnitude of the maximum von Mises stress, its corresponding location, and the stress distribution of the 2D disk-view. The input data were first compressed into a latent space with a 3D convolutional variational autoencoder (cVAE) and a 2D convolutional autoencoder (cAE). Fully connected layers were then used to predict the impact performance, and a decoder was used to predict the stress distribution heatmap of the 2D disk-view. The proposed model can replace the impact test in the early wheel-development stage by predicting the impact performance in real time, and it can be used without domain knowledge, thereby reducing the time required for the wheel development process.
1. Introduction
To ensure vehicle safety, vehicle wheels must be developed to be sufficiently durable to meet safety requirements. Accordingly, a strict impact test must be performed during wheel development to assess impact damage. However, completing a wheel design involves verifying wheel safety through the wheel impact test, which takes significant time and money owing to trial and error during development. Therefore, vehicle manufacturers need a way to shorten the wheel design and manufacturing stages through rapid impact analysis of various design proposals.
Although the actual wheel impact test has been replaced by computer simulations such as finite
element analysis (FEA), as proposed by Chang and Yang (2009), it is still time-consuming because the
computationally expensive simulation process needs to be repeatedly executed, and FEA experts are
required to inspect the wheel performance. Accordingly, recent studies have suggested methods for
replacing FEA through deep learning methodologies in various applications such as stress distribution
of the aorta (Liang et al., 2018), stress prediction for bottom-up SLA 3D printing processes (Khadilkar
et al., 2019), stress prediction of arterial walls (Madani et al., 2019), stress field prediction of
cantilevered structures (Nie et al., 2020), and natural frequency prediction of 2D wheel images (Yoo et
al., 2021). These approaches can contribute considerably to accelerating the analysis process. However, they still have limitations in real-world problems because their application domains are largely restricted to simplified 2D cases. A more
detailed review is presented in Section 2.
In this study, we present a 3D wheel impact performance prediction model based on deep
learning that can replace 3D FEA of the aluminum road wheel impact test used in real-world product
development processes. The objective of this study is to replace the computationally expensive 3D FEA process for wheel impact analysis and to provide the impact performance of a wheel design in the conceptual design stage, thereby reducing the time required for wheel development. Synthetic 3D
wheel data were generated through the 3D wheel CAD automation process (Oh et al., 2019; Yoo et al.,
2021; Jang et al., 2022) using 2D disk-view images (spoke designs) and rim cross-sections, and the
impact performance results were collected through FEA impact test simulations. Using these data, we constructed a real-time prediction model that predicts the magnitude of the maximum von Mises stress, its corresponding location, and the overall stress distribution of the 2D disk-view.
The novelty of this study is as follows. First, this is the first study to apply deep learning to vehicle system impact tests. Second, various types of data were used as input and output through a multimodal autoencoder architecture to improve prediction accuracy. Third, to overcome the data shortage problem, a 3D convolutional variational autoencoder (cVAE) is used for transfer learning to extract important features of the 3D wheels. With the proposed model, the impact performance of a wheel design can be checked in real time, even in the conceptual design stage, by predicting the magnitude of the maximum von Mises stress; the location of the maximum stress is also identified, indicating which parts of the wheel design need to be reinforced. The overall von Mises stress distribution of the 2D disk-view is also predicted, providing additional information to the designer. Accordingly, this method can be easily utilized by general designers without engineering expertise, thereby enabling rapid impact performance inspection of various design proposals. The same process can be applied to any product that requires an impact test, not only wheels.
The remainder of this paper is organized as follows. Section 2 summarizes the related studies,
and Section 3 presents the data collection and preprocessing steps for training the model and the
architecture of the proposed model. The prediction results are discussed in Section 4. Finally, Section
5 presents conclusions, limitations, and directions for future work.
2. Related Work
Recently, studies have been conducted to solve various real-world engineering problems using deep learning, such as autonomous driving (Grigorescu et al., 2020), smart factories (Essien and Giannetti, 2020), and environmental engineering (Ostad-Ali-Askari et al., 2017; Ostad-Ali-Askari and Shayan, 2021). In addition, deep learning can be used to develop various products efficiently, as the product development process requires repeated iterations to obtain an optimal design (Kim et al., 2022; Shin et al., 2022; Yoo and Kang, 2021). In particular, design optimization for
product development is time-consuming and computationally costly owing to repeated simulations such
as FEA and computational fluid dynamics (CFD), which are essential for inspecting the safety of a
product. Therefore, studies have recently emerged to replace the simulation process by applying various
deep learning methodologies (Deng et al., 2020; Lee et al., 2020; Qian & Ye, 2021; Zheng et al., 2021).
This section focuses on deep learning studies that predict the stress distribution in structures.
Liang et al. (2018), an early study that applied deep learning to replace FEA, predicted the aortic wall stress distribution according to the shape of the thoracic aorta. For this purpose, the aorta shape was encoded through principal component analysis (PCA), and the stress distribution of the aorta was predicted with a neural network. Madani et al. (2019) predicted the maximum von Mises stress value and the corresponding location for 2D arterial cross-sectional images to replace finite element simulation using machine learning. Nie et al. (2020) predicted the stress distribution of a 2D linear elastic cantilevered structure subjected to a static load to accelerate structural analysis. In that study, two networks, SCSNet and StressNet, were proposed, and the von Mises stress distribution was predicted from various structures, external forces, and displacement boundary conditions.
However, the aforementioned studies make predictions in the 2D domain and have limitations when applied to actual product development. In real-world problems, high-dimensional data such as 3D data must be considered. However, models for high-dimensional data are difficult to train and require considerable training data. Therefore, appropriate data representations and training methods for high-dimensional data
must be devised to replace 3D simulations with deep learning.
Similar to our study, Khadilkar et al. (2019) proposed two CNN-based networks to predict the layer-wise stress distribution for the bottom-up SLA 3D printing process. In particular, their 2-stream CNN, which exhibits the highest performance of the two, takes as input a binary image of the cross-section and a point cloud of the 3D model built up to the previous layer to predict the stress distribution of the layer cross-section. The 2D image passes through convolutional layers, and the 3D point cloud is fed into the network as a feature vector extracted by PointNet (Qi et al., 2017), which is added to the image features. This method uses inputs similar to ours. However, our proposed method predicts the maximum von Mises stress value and its corresponding location as well as the 2D stress distribution using voxel-based 3D data. Khadilkar et al. (2019) predicted the stress distribution for 2D domains, whereas our proposed methodology predicts the 3D coordinates of the maximum von Mises stress location in the 3D domain, enabling the replacement of the existing 3D FEA.
Two major issues must be considered for deep learning in the 3D domain. First, 3D deep learning requires a large amount of training data. However, collecting a sufficient amount of 3D data in practice is difficult; therefore, a model must achieve high accuracy even with limited data. Second, the choice of 3D data representation is important. In the field of 3D deep learning, representation methods such as point clouds, meshes, and voxels are commonly used.
The point-cloud-based method represents a shape through a set of points distributed near the surface of the 3D shape (Bello et al., 2020). However, the point cloud method has the disadvantage that expressing the details of a shape is difficult because the representation is sparse. The mesh-based method represents a 3D shape using polygonal faces defined by vertices. However, this method is sensitive to the quality of the input mesh, and the surface patches of the shape may not be properly stitched. The voxel-based method uses volumetric data to express a 3D shape as a grid of cubes. However, the voxel method requires a large amount of memory because it represents both the occupied and non-occupied parts (Ahmed et al., 2018). The voxel resolution must be increased to express the 3D shape in more detail, but the higher the resolution, the more memory and parameters are required. Several studies have been
conducted to solve the computational cost problem of the voxel method. For example, some studies
have been performed to express 3D shapes with high resolution using an octree-based method (Häne et
al., 2017; Riegler et al., 2017; Tatarchenko et al., 2017).
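To make the cubic growth concrete, the following minimal Python sketch (ours, not taken from the paper) allocates dense occupancy grids at a few resolutions and reports their memory footprint; at 256^3, a single float32 grid already occupies 64 MiB before any network parameters are counted.

```python
# Minimal sketch (not from the paper): the memory footprint of a dense voxel
# occupancy grid grows cubically with resolution.
import numpy as np

for res in (32, 64, 128, 256):
    grid = np.zeros((res, res, res), dtype=np.float32)  # dense occupancy grid
    print(f"resolution {res:>3}: {grid.size:>11,} cells, "
          f"{grid.nbytes / 2**20:7.2f} MiB as float32")
```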
In this study, we propose a multimodal autoencoder architecture that uses multiple modalities in parallel as input and output. As in Bachmann et al. (2022), such architectures are typically used to handle multiple sources such as images, text, and audio. We constructed a prediction model based on this multimodal autoencoder architecture, which uses inputs and outputs of different dimensions in parallel, to overcome the data shortage problem and to reduce computational cost by using latent vectors that lower the dimensionality of the high-dimensional data. To this end, we used voxel-based 3D wheel data and extracted features of the 3D CAD data using a 3D CNN-based convolutional variational autoencoder (cVAE). Training was carried out on the latent vectors of the 3D CAD data and the 2D wheel image data obtained through the pretrained 3D cVAE model and the 2D convolutional autoencoder (cAE) model, and accurate results were obtained even with only 2,501 3D CAD samples.
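As a rough illustration of this idea, the PyTorch sketch below shows one possible form of a 3D convolutional VAE encoder that compresses a voxelized wheel into a latent vector; the 64^3 input resolution, channel widths, and 128-dimensional latent space are our own assumptions for demonstration, not the paper's hyperparameters.

```python
# Hedged PyTorch sketch of a 3D convolutional VAE encoder for voxelized wheels.
# The 64^3 resolution, channel counts, and latent size (128) are assumptions.
import torch
import torch.nn as nn

class VoxelVAEEncoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv3d(16, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv3d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 8 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(64 * 8 * 8 * 8, latent_dim)

    def forward(self, voxels: torch.Tensor):
        # voxels: (batch, 1, 64, 64, 64) binary occupancy grid
        h = self.features(voxels)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return z, mu, logvar

# Usage: encode a small batch of dummy voxel grids into 128-dimensional latents.
encoder = VoxelVAEEncoder()
z, mu, logvar = encoder(torch.rand(2, 1, 64, 64, 64))
print(z.shape)  # torch.Size([2, 128])
```

In a transfer-learning setting such as the one described above, an encoder like this would first be pretrained as part of a full VAE (with a matching 3D decoder and a reconstruction plus KL loss) and then reused, frozen or fine-tuned, as a feature extractor for the downstream prediction task.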
3. Deep Learning Framework for Wheel Impact Test
3.1. Overall Framework
The entire study process comprises four stages. In Stage 1, the 3D road wheel CAD datasets were automatically generated. Spoke designs were collected from the Internet, provided by Hyundai Motors, or generated using topology optimization. The rim cross-sections of the 3D wheels were also collected, and six representative designs were selected for use in 3D CAD generation. In Stage 2, wheel impact analysis was performed using the generated 3D wheel data. Based on the analysis results, post-processing was performed to remove outliers, and the magnitude of the maximum von Mises stress and its location coordinates were extracted. In Stage 3, the 3D cVAE and 2D cAE, which are dimensionality-reduction models used to improve the performance of the proposed model, were developed. These models were used to reduce the dimensions of the input data. Finally, Stage 4 is the phase of developing a deep learning model that predicts the magnitude of the maximum von Mises stress, the corresponding location coordinates (x, y, and z), and the overall von Mises stress distribution in the 2D disk-view. The proposed model architecture and hyperparameters were selected by conducting various experiments while varying the inputs and outputs of the model. The performance of the proposed model was then confirmed by conducting transfer learning with an actual wheel used in practice. The overall process proposed in this study is illustrated in Figure 1.
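To make Stages 3 and 4 more concrete, the sketch below shows one plausible shape of the prediction model: the two latent vectors and the barrier mass are concatenated, passed through fully connected layers, and split into three heads for the maximum stress magnitude, its (x, y, z) location, and the 2D disk-view stress heatmap. The layer widths, the 128-dimensional latents, and the 128x128 heatmap resolution are illustrative assumptions, not the paper's actual settings.

```python
# Hedged sketch of a Stage 4 prediction model: latent vectors of the 3D voxel
# wheel and 2D disk-view image plus the barrier mass are mapped to (i) the
# maximum von Mises stress magnitude, (ii) its (x, y, z) location, and (iii) a
# 2D disk-view stress heatmap. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ImpactPredictor(nn.Module):
    def __init__(self, voxel_latent: int = 128, image_latent: int = 128):
        super().__init__()
        fused = voxel_latent + image_latent + 1                  # +1 for barrier mass
        self.trunk = nn.Sequential(nn.Linear(fused, 256), nn.ReLU(),
                                   nn.Linear(256, 256), nn.ReLU())
        self.max_stress = nn.Linear(256, 1)                      # scalar magnitude
        self.location = nn.Linear(256, 3)                        # (x, y, z) coordinates
        self.heatmap_decoder = nn.Sequential(                    # 2D stress distribution
            nn.Linear(256, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 64
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),              # 64 -> 128
        )

    def forward(self, z_voxel, z_image, barrier_mass):
        h = self.trunk(torch.cat([z_voxel, z_image, barrier_mass], dim=1))
        return self.max_stress(h), self.location(h), self.heatmap_decoder(h)

# Usage with dummy latents and one barrier mass value per sample (shape (batch, 1)).
model = ImpactPredictor()
stress, loc, heatmap = model(torch.rand(2, 128), torch.rand(2, 128), torch.rand(2, 1))
print(stress.shape, loc.shape, heatmap.shape)  # (2, 1) (2, 3) (2, 1, 128, 128)
```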
3.2. Stage 1: Generating 3D Wheel Data
Many detailed 3D wheel models like those used in practice are needed to construct an accurate wheel impact performance prediction model. However, because sufficiently detailed 3D wheel data are difficult to obtain, a large amount of synthetic concept wheel data was generated and used for training. The 3D CAD automation framework proposed by Yoo et al. (2021) was used herein. The framework comprises a stage that handles 2D spoke designs (disk-view images) and rim cross-sections and a stage that converts them into 3D CAD models. This process automatically generated a large number of 3D road wheel CAD models. First, the 2D spoke designs and rim cross-sectional images were prepared; these were collected as explained in Sections 3.2.1 and 3.2.2.
3.2.1 Disk-view spoke design data collection and preprocessing
First, 2D disk-view spoke design images for 3D CAD generation were collected in various ways; the three main collection methods were as follows. First, 603 binary wheel images available on the Internet were collected, and topology optimization using the collected images as reference designs was performed to obtain 177 generative-design wheels. Topology optimization was performed on wheel pieces, as shown in Figure 2, and the pieces were rotated to assemble a complete wheel, solving the conventional problem of generative design wherein the symmetry of the generated result is not guaranteed (a rough illustration of this piece-rotation step is sketched below). Here, 10 types of equal wheel pieces, dividing the wheel into 4 to 13 equal pieces, were used, and generative design
Figure 1. Overall framework of the proposed method
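As a rough, self-contained illustration of the piece-rotation idea described in Section 3.2.1 (our sketch, not the authors' pipeline), the snippet below rotates a binary wheel-piece image by multiples of 360/N degrees about the image center and takes the element-wise maximum, yielding a rotationally symmetric full disk-view; the piece image, its size, and the choice of N are hypothetical.

```python
# Illustrative sketch (not the authors' code): assemble a rotationally symmetric
# full disk-view image from one binary wheel-piece image by rotating it N times
# and taking the element-wise maximum (union of the rotated copies).
import numpy as np
from scipy.ndimage import rotate

def assemble_wheel(piece: np.ndarray, n_pieces: int) -> np.ndarray:
    """piece: 2D binary array centered on the wheel axis; n_pieces: e.g. 4 to 13."""
    wheel = np.zeros_like(piece, dtype=float)
    for k in range(n_pieces):
        rotated = rotate(piece.astype(float), angle=360.0 * k / n_pieces,
                         reshape=False, order=1)
        wheel = np.maximum(wheel, rotated)      # union of all rotated copies
    return (wheel > 0.5).astype(np.uint8)       # back to a binary image

# Usage with a dummy 256x256 piece image and 8-fold symmetry.
dummy_piece = np.zeros((256, 256), dtype=np.uint8)
dummy_piece[120:136, 128:250] = 1               # crude radial "spoke" strip
full_wheel = assemble_wheel(dummy_piece, n_pieces=8)
print(full_wheel.shape, int(full_wheel.sum()))
```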