Topology Optimization via Machine Learning and Deep Learning:
A Review
Seungyeon Shin1,a, Dongju Shin1,2,a, and Namwoo Kang1,2,*
1 Cho Chun Shik Graduate School of Mobility,
Korea Advanced Institute of Science and Technology
2 Narnia Labs
*Corresponding author: nwkang@kaist.ac.kr
a Contributed equally to this work.
A previous version of this manuscript was presented at the 2021 World Congress on Advances in Structural
Engineering and Mechanics (ASEM21) (Seoul, Korea, August 23-26, 2021) (Shin et al., 2021).
Abstract
Topology optimization (TO) is a method of deriving an optimal design that satisfies given load and boundary
conditions within a design domain. This method enables effective design without requiring an initial design, but its
use has been limited by high computational costs. At the same time, machine learning (ML) methodologies, including
deep learning, have made great progress in the 21st century, and accordingly, many studies have applied ML to TO to
enable effective and rapid optimization. Therefore, this study reviews and analyzes previous research on ML-based
TO (MLTO). Two different perspectives are used to review the studies: (1) the TO perspective and (2) the ML
perspective. The TO perspective addresses why ML is used for TO, while the ML perspective addresses how ML is
applied to TO. In addition, the limitations of current MLTO research and future research directions are examined.
1. Introduction
Topology optimization (TO) is a field of design optimization that determines the optimal material layout under
certain constraints on loads and boundaries within a given design space. This method allows the optimal
distribution of materials with desired performance to be determined while meeting the design constraints of the
structure (Bendsøe, 1989). TO is notable in that, unlike conventional optimization approaches, it does not require a
meaningful initial design. Due to this advantage, various TO methodologies have
been studied to date (Bendsøe, 1989; Rozvany et al., 1992; Mlejnek, 1992; Allaire et al., 2002; Wang et al., 2003;
Xie & Steven, 1993). The following four TO methodologies are described in detail in Appendix A: density-based
method (i.e., the solid isotropic material with penalization (SIMP) method), evolutionary structural optimization
(ESO) method, level-set method (LSM), and moving morphable component (MMC) method.
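As a brief reference ahead of Appendix A, the density-based approach is most commonly stated as the classical minimum-compliance SIMP problem below. This is the standard formulation from the literature (e.g., Bendsøe, 1989), not one specific to any single reviewed study:

\[
\begin{aligned}
\min_{\boldsymbol{\rho}} \quad & c(\boldsymbol{\rho}) = \mathbf{U}^{T}\mathbf{K}(\boldsymbol{\rho})\mathbf{U} = \sum_{e=1}^{N} (\rho_{e})^{p}\,\mathbf{u}_{e}^{T}\mathbf{k}_{0}\mathbf{u}_{e} \\
\text{s.t.} \quad & V(\boldsymbol{\rho})/V_{0} \leq f, \qquad \mathbf{K}(\boldsymbol{\rho})\mathbf{U} = \mathbf{F}, \qquad 0 < \rho_{\min} \leq \rho_{e} \leq 1,
\end{aligned}
\]

where \(\rho_{e}\) is the density of element \(e\), \(p\) is the penalization exponent (typically \(p = 3\)), \(\mathbf{K}\) is the global stiffness matrix assembled from the penalized element stiffnesses \(\mathbf{k}_{0}\), \(\mathbf{U}\) and \(\mathbf{F}\) are the global displacement and force vectors, and \(f\) is the prescribed volume fraction.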
Recent TO methods address various industrial applications. Examples include TO for patient-specific
osteosynthesis plates (Park et al., 2021), microscale lattice parameter (i.e., the strut diameter) optimization for TO
(Cheng et al., 2019), homogenization of 3D TO with microscale lattices (Zhang et al., 2021b), and multiscale TO
for additive manufacturing (AM) (Kim et al., 2022). Other notable works for more complex TO problems include
a multilevel approach to large-scale TO accounting for linearized buckling criteria (Ferrari & Sigmund, 2020),
the localized parametric level-set method applying a B-spline interpolation method (Wu et al., 2020), the
systematic TO approach for simultaneously designing morphing functionality and actuation in three-dimensional
wing structures (Jensen et al., 2021), the parametrized level-set method combined with the MMA algorithm to
solve nonlinear heat conduction problems with regional temperature constraints (Zhuang et al., 2021b), and the
parametric level-set method for non-uniform mesh of fluid TO problems (Li et al., 2022).
However, although the aforementioned TO methodologies can produce good conceptual designs, one of the
main challenges in performing TO is its high computational cost. The overall cost of the computational scheme is
dominated by finite element analysis (FEA), which is required to compute the sensitivities at each iteration of the
optimization process. The required FEA time increases as the mesh size increases (e.g., when the mesh size is increased by 125
times, the required time increases by 4,137 times (Liu & Tovar, 2014)). Amid this computational challenge,
performing TO for a fine (high-resolution) topological mesh can take a few hours to days (Rade et al., 2020).
Furthermore, the 3D TO process incurs much higher computational costs, with the required number of iterations
increasing in the order of SIMP, BESO, and the level-set method (Yago et al., 2022). Therefore, various
studies have aimed to reduce the cost of solving these analysis equations in TO (Amir et al., 2009; Amir
studies have aimed to reduce the computation of solving these analysis equations in TO (Amir et al., 2009; Amir
et al., 2010). For instance, by using an approximate approach to solve the nested analysis equation in the minimum
compliance TO problem, Amir and Sigmund (2011) reduced the computational cost by one order of magnitude
for an FE mesh with 40,500 elements.
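To make the source of this cost concrete, the sketch below outlines a generic density-based TO loop in Python. The helper functions (assemble_stiffness, solve_fea, compute_compliance_sensitivity, update_density) are hypothetical placeholders standing in for an FEA solver and an optimality-criteria or MMA update; the structure illustrates the standard nested-analysis pattern rather than any specific implementation from the cited works. The FEA solve inside the loop is the step whose cost grows rapidly with mesh resolution.

import numpy as np

def simp_style_to(mesh, loads, bcs, vol_frac, max_iter=200, tol=1e-3):
    # Generic density-based TO loop (illustrative sketch only; helpers are hypothetical).
    rho = np.full(mesh.num_elements, vol_frac)       # uniform initial density field
    for it in range(max_iter):
        # FEA step: dominates the per-iteration cost and grows quickly with mesh size.
        K = assemble_stiffness(mesh, rho)            # hypothetical: global stiffness K(rho)
        U = solve_fea(K, loads, bcs)                 # hypothetical: solve K U = F
        # Sensitivity analysis and density update are comparatively cheap.
        dc = compute_compliance_sensitivity(mesh, rho, U)   # hypothetical: d(compliance)/d(rho)
        rho_new = update_density(rho, dc, vol_frac)  # hypothetical: e.g., OC or MMA step
        if np.max(np.abs(rho_new - rho)) < tol:      # stop once the design barely changes
            return rho_new
        rho = rho_new
    return rho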
Similarly, aiming to address this computational challenge, various methods have been developed to accelerate
TO: Limkilde et al. (2018) discussed the computational complexity of TO, while Ferrari and Sigmund (2020)
conducted large-scale TO with reduced computational cost; Martínez-Frutos et al. (2017) performed efficient
computation using GPUs, while Borrvall and Petersson (2001) and Aage et al. (2015) attempted to accelerate TO
by parallel computing. Despite the aforementioned efforts to reduce TO computing time, the computational costs
remain high. This challenge has encouraged several researchers to develop ML-based TO to accelerate TO.
Figure 1 Overview of AI, ML, and DL (Kaluarachchi et al., 2021)
Artificial intelligence (AI) includes any technology that allows machines to emulate human behavior, and
machine learning (ML) is a subset of AI, aimed at learning meaningful patterns from data by using statistical
methods (Figure 1). Deep learning (DL), a subset of ML inspired by the neuron structure of the human brain,
seeks to improve learning ability by training hierarchical neural network (NN) structures
consisting of multiple layers directly from data (LeCun et al., 2015; Goodfellow et al., 2016).
Data-based TO methodologies address the high computational cost of conventional TO, which stems from design
iterations repeated thousands of times, and many studies have been conducted to improve
TO algorithms through AI (Sosnovik & Oseledets, 2019; Oh et al., 2019; Banga et al., 2018; Guo et al., 2018;
Cang et al., 2019; Chandrasekhar & Suresh, 2021c). TO methods applying AI can quickly and effectively optimize
the initial topology. In this study, the term ML-based TO (MLTO) is used throughout to refer to studies that
incorporate ML or DL; in particular, we focus on methodologies that use DL.
This paper reviews the MLTO studies developed to date and analyzes their characteristics.
Sections 2 and 3 present the existing studies from a TO point of view and an ML point of view, respectively,
and analyze their characteristics. Section 2 groups and introduces studies according to their purpose,
i.e., why ML is applied to TO from a TO perspective. Section 3 groups and introduces studies according to the
ML method, i.e., how TO can be converted into ML problems from an ML perspective. Section 4 presents the
limitations of MLTO studies and desirable future directions for the field. Appendices A and B
introduce the fundamental background of TO and the basic DL methodologies required to understand this study.
Figure 2 presents the analytical perspectives on MLTO described in Sections 2 and 3.
Figure 2 MLTO Review Framework
2. TO Perspective: Why Use ML
From the perspective of TO, studies that focus on ML/DL to improve the existing TO techniques have seven
main purposes: acceleration of iteration, non-iterative optimization, meta-modeling, dimensionality reduction of
design space, improvement of optimizer, generative design (design exploration), and postprocessing. Each
purpose is shown in Figure 3, and the studies corresponding to each purpose are organized in Table 1.
Figure 3 Visualization of the methodology by MLTO purpose
Table 1 Classification of relevant studies by MLTO purpose
Purpose of MLTO in TO perspective and corresponding research:

Acceleration of Iteration: Banga et al., 2018; Lin et al., 2018; Sosnovik & Oseledets, 2019; Bi et al., 2020; Kallioras et al., 2020; Kallioras & Lagaros, 2021a

Non-Iterative Optimization: Rawat & Shen, 2018; Cang et al., 2019; Li et al., 2019a; Rawat & Shen, 2019a; Rawat & Shen, 2019b; Sharpe & Seepersad, 2019; Yu et al., 2019; Zhang et al., 2019; Abueidda et al., 2020; Almasri et al., 2020; Kollmann et al., 2020; Rade et al., 2020; Keshavarzzadeh et al., 2021; Nie et al., 2021; Lew & Buehler, 2021; Qiu et al., 2021a

Meta-Modeling: Patel & Choi, 2012; Aulig & Olhofer, 2014; Xia et al., 2017; Zhou & Saitou, 2017; Doi et al., 2019; Li et al., 2019b; Sasaki & Igarashi, 2019a; Sasaki & Igarashi, 2019b; Takahashi et al., 2019; White et al., 2019; Asanuma et al., 2020; Deng et al., 2020; Keshavarzzadeh et al., 2020; Lee et al., 2020; Chi et al., 2021; Kim et al., 2021; Qian & Ye, 2021; Zheng et al., 2021

Dimensionality Reduction of Design Space: Ulu et al., 2016; Guo et al., 2018; Li et al., 2019c

Improvement of Optimizer: Bujny et al., 2018; Hoyer et al., 2019; Lei et al., 2019; Lynch et al., 2019; Deng & To, 2020; Jiang et al., 2020; Chandrasekhar & Suresh, 2021a; Chandrasekhar & Suresh, 2021b; Chandrasekhar & Suresh, 2021c; Halle et al., 2021; Zehnder et al., 2021; Zhang et al., 2021a

Generative Design: Oh et al., 2018; Gillhofer et al., 2019; Jiang & Fan, 2019a; Jiang & Fan, 2019b; Jiang et al., 2019; Oh et al., 2019; Shen & Chen, 2019; Wen et al., 2019; Greminger, 2020; Ha et al., 2020; Kallioras & Lagaros, 2020; Malviya, 2020; Sun & Ma, 2020; Wang et al., 2020; Wen et al., 2020; Blanchard-Dionne & Martin, 2021; Kallioras & Lagaros, 2021b; Kudyshev et al., 2021; Li et al., 2021; Sim et al., 2021; Yamasaki et al., 2021; Jang et al., 2022

Postprocessing: Yildiz et al., 2003; Chu et al., 2016; Strömberg, 2019; Karlsson et al., 2020; Napier et al., 2020; Strömberg, 2020; Wang et al., 2021; Xue et al., 2021

Others: Liu et al., 2015; Gaymann & Montomoli, 2019; Hayashi & Ohsaki, 2020; Kumar & Suresh, 2020; Qiu et al., 2021b; Brown et al., 2022
2.1. Acceleration of Iteration
The optimization process of iterative TO can be divided into two stages: the early stage, in which the design
changes significantly, and the latter stage, in which the design changes slowly until convergence. Studies applying
ML techniques focus on the latter stage of the optimization process, which is especially time-consuming. Thus, the
approaches proposed in studies that aim to accelerate iterative TO consist of two steps:
(1) Iterative TO is performed up to an intermediate stage of the iteration.
(2) From the intermediate stage onward, the optimal topology is directly predicted by ML using the intermediate
topology obtained in (1).
This process can accelerate the time-consuming and costly latter stage of the conventional optimization process.
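As a rough illustration of this two-stage scheme (a minimal sketch, not the implementation of any cited work), the Python snippet below runs iterative TO only for an early block of iterations and then hands the intermediate density field, together with its most recent update (the two inputs used by Sosnovik & Oseledets, 2019), to a trained model that predicts the converged topology. The helpers run_iterative_to and predictor are hypothetical placeholders.

import numpy as np

def accelerated_to(mesh, loads, bcs, vol_frac, predictor, n_initial_iters=30):
    # Stage 1: conventional iterative TO, stopped early at an intermediate iteration.
    rho_history = run_iterative_to(mesh, loads, bcs, vol_frac,
                                   max_iter=n_initial_iters)   # hypothetical: list of density fields per iteration
    rho_mid = rho_history[-1]                    # intermediate (non-converged) density field
    delta = rho_history[-1] - rho_history[-2]    # most recent design update, used as a second input channel
    # Stage 2: a trained model (e.g., an encoder-decoder CNN) maps the intermediate
    # state directly to a prediction of the converged topology.
    x = np.stack([rho_mid, delta], axis=0)       # shape: (channels, ny, nx)
    rho_final = predictor(x)                     # hypothetical model interface
    return np.clip(np.round(rho_final), 0.0, 1.0)  # binarize to a 0/1 material layout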
Figure 4 Architecture of acceleration of iterative optimization from a representative paper
(Sosnovik & Oseledets, 2019)