DEEP NURBS—ADMISSIBLE PHYSICS-INFORMED NEURAL NETWORKS

HAMED SAIDAOUI, LUIS ESPATH¹ & RAÚL TEMPONE²,³,⁴

¹School of Mathematical Sciences, University of Nottingham, Nottingham, NG7 2RD, United Kingdom.
²Department of Mathematics, RWTH Aachen University, Gebäude-1953 1.OG, Pontdriesch 14-16, 161, 52062 Aachen, Germany.
³King Abdullah University of Science & Technology (KAUST), Computer, Electrical and Mathematical Sciences & Engineering Division (CEMSE), Thuwal 23955-6900, Saudi Arabia.
⁴Alexander von Humboldt Professor in Mathematics for Uncertainty Quantification, RWTH Aachen University, Germany.
E-mail address: hamed.saidaoui@kaust.edu.sa.
Date: July 30, 2024.
arXiv:2210.13900v2 [math.NA] 29 Jul 2024
Abstract. In this study, we propose a new numerical scheme for physics-informed neural networks (PINNs) that enables precise and inexpensive solutions of partial differential equations (PDEs) on arbitrary geometries while strongly enforcing Dirichlet boundary conditions. The proposed approach combines admissible NURBS parameterizations (admissible in the calculus-of-variations sense, that is, satisfying the boundary conditions), required to define the physical domain and the Dirichlet boundary conditions, with a PINN solver; the boundary conditions are therefore automatically satisfied in this novel Deep NURBS framework. Furthermore, our sampling is carried out in the parametric space and mapped to the physical domain. This parametric sampling acts as an importance sampling scheme, since points concentrate in regions where the geometry is more complex. We verified our new approach on two-dimensional elliptic PDEs posed on arbitrary geometries, including non-Lipschitz domains. Compared to the classical PINN solver, the Deep NURBS estimator achieves remarkably high accuracy for all the studied problems; moreover, for most of the studied PDEs a desirable accuracy was obtained using only one hidden layer. This novel approach paves the way for more effective solutions of high-dimensional problems by allowing for a more realistic physics-informed statistical-learning framework for solving PDEs.
AMS subject classifications: 35L65
Contents
1. Introduction
2. Mathematical background
2.1. Construction of admissible NURBS parameterizations
3. Deep NURBS
3.1. Partial differential equation and method overview
3.2. Physics-Informed Neural Networks (PINNs)
4. Numerical results
4.1. Physical domain with corner singularity
4.2. Annulus geometry
4.3. Square with a hole
5. Conclusion
6. Declarations: Funding and/or Conflicts of interest/Competing interests & Data availability
7. Additional Declaration
References
1. Introduction
Deep Learning (DL) has exhibited unprecedented advancement over the last two decades [1]. Its usage in numerous
disciplines has resulted in diverse successful implementations, such as language processing [2, 3] and image
recognition [4, 5]. With the exception of a few studies (for example, [6, 7, 8, 9, 10, 11]), the application of machine learning (ML), and of neural networks (NNs) in particular, to scientific problems has not been as structured as in the aforementioned fields.
Among these few efforts was the seminal work of Lagaris [6], who applied very simple NN models to solve several ordinary and partial differential equations (PDEs) of different orders (the accuracy was order-dependent, though). This concept has been revived very recently by many researchers [12, 13, 14, 15], who used a very similar idea and coined the term "physics-informed neural networks (PINNs)" (two such examples are the deep Ritz and deep Galerkin methods of the last two references, respectively). Since then, PINNs have gained considerable fame because of their simple implementation and a formulation that allows NNs to be combined with already established theories. PINNs have enabled a shift to unsupervised ML owing to the physical laws and constraints embedded in their framework. Targeting the solution of PDEs, PINNs have proven very efficient for many complicated and challenging problems [16, 17] and for different types of PDEs. Furthermore, the convenience of the accompanying sampling methods makes them adequate for problems with high-dimensional domains. The name PINNs is usually reserved for methods that treat the collocation points as training data with a discrete loss term; note that other variants relying on the same principle coexist, e.g., deep Ritz [14, 18] and deep Galerkin [15]. The first method uses the weak form of the PDE, in which the variational problem (energy minimization) can be solved using stochastic optimizers and Monte Carlo sampling techniques. In the deep Galerkin scheme, the gradients are handled using a tailored stochastic approach.
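As an illustration of the first of these variants, consider the following minimal sketch (our own, not the authors' implementation) of a deep Ritz training step for the Poisson problem $-\Delta u = f$ on the unit square, whose energy $E[u] = \int_D \big(\tfrac{1}{2}|\nabla u|^2 - fu\big)\,dx$ is estimated by Monte Carlo over uniformly sampled collocation points; the network architecture and the source term are placeholder assumptions:

```python
import tensorflow as tf

# Hypothetical deep Ritz sketch for -Δu = f on D = (0, 1)²; boundary terms
# are omitted here (in practice, a penalty or an admissible ansatz handles them).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="tanh", input_shape=(2,)),
    tf.keras.layers.Dense(1),
])
f = lambda x: tf.ones((tf.shape(x)[0], 1))  # placeholder source term f ≡ 1
optimizer = tf.keras.optimizers.Adam(1e-3)

@tf.function
def ritz_step(n_points=1024):
    x = tf.random.uniform((n_points, 2))  # uniform Monte Carlo samples in D
    with tf.GradientTape() as outer:
        with tf.GradientTape() as inner:
            inner.watch(x)
            u = model(x)
        grad_u = inner.gradient(u, x)  # ∇u via auto-differentiation
        # Monte Carlo estimate of E[u] (up to the constant factor |D| = 1)
        energy = tf.reduce_mean(
            0.5 * tf.reduce_sum(grad_u ** 2, axis=1, keepdims=True) - f(x) * u
        )
    grads = outer.gradient(energy, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return energy
```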
Many variants beyond these three primary methods have been proposed and proven effective in specific cases. Inverse problems have been tackled using stochastic differential equations (SDEs) in [19]. Lu et al. [20] proposed a method for learning nonlinear operators, while Cai et al. [16] implemented a modified version of PINNs to infer electroconvection in multiphysics systems. A comprehensive review of ML for fluid mechanics is presented in [21]. The mathematical formulation of PINNs has been discussed in several relevant studies from an uncertainty quantification perspective. Mishra et al. [22] provided an estimate of the generalization error, primarily deriving bounds for quadrature-based and random-sampling-based PINNs. Fang et al. [23] established convergence rates for NN-based PDE solvers; to approximate the derivatives, they combined the NNs with the differential operator (as is done in finite element methods).
Given the trend of applying ML tools to scientific and engineering problems, one naturally wonders what makes PINNs and their variants (deep Ritz and deep Galerkin) as efficient, and as vulnerable, as they are (or are considered to be). To address this question, one should examine what sets these methods apart. Surely their simple concept makes them easy to implement, and for complicated tasks this simplicity plays an important role in their success. One also has to admit that the nature of the loss function, which incorporates the system's physical laws, contributes to the convergence rate toward the sought solution. Auto-differentiation, being an exact differentiation tool, is in turn at the heart of PINNs' success.

However, PINNs exhibit many limitations when it comes to problems with discontinuities and computationally demanding problems [14]. Such problems are very demanding in terms of NN depth and computational work, and these two aspects make the use of PINNs challenging. From this perspective, one might ask what has to change for PINNs to cope with discontinuities without becoming extremely expensive. Is it the way the loss function is posed (weak form [14, 18] or strong form [12]), or is it related to the way the gradients are calculated (entirely auto-differentiation-based or hybrid [23])? Examining the previous references, one can conclude that none of these aspects alone suffices to mitigate the complexity of the NN architecture.
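To make the role of auto-differentiation concrete, the following sketch (our own illustration, not code from the cited works) evaluates a strong-form Laplacian residual with nested gradient tapes in TensorFlow:

```python
import tensorflow as tf

def laplacian_residual(model, x, f):
    """Pointwise strong-form residual -Δu(x;θ) - f(x), computed with
    nested auto-differentiation (exact, up to floating-point error)."""
    with tf.GradientTape() as t2:
        t2.watch(x)
        with tf.GradientTape() as t1:
            t1.watch(x)
            u = model(x)                 # (n, 1) network output
        grad_u = t1.gradient(u, x)       # (n, 2) first derivatives [u_x, u_y]
    hess = t2.batch_jacobian(grad_u, x)  # (n, 2, 2) Hessian of u
    lap_u = tf.linalg.trace(hess)        # u_xx + u_yy, shape (n,)
    return -lap_u - tf.squeeze(f(x), axis=-1)
```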
Shin et al. [24] demonstrated that the sequence of minimizers generated in the stochastic optimization of PINNs converges strongly to the solution of the PDE in $C^0$. However, if the boundary conditions are satisfied exactly, the minimizers converge to the sought solution in $H^1$ instead of $C^0$. This insight has been supported by recent works [25, 13] and, originally, by the work of Lagaris [6]. The Lagaris approach was to use functions that naturally fulfill the boundary conditions. Although this approach proved very efficient, both computationally and in terms of accuracy, it is not practical for complex geometries with non-homogeneous boundaries. From this last point, we follow the concept described in Lagaris' paper. Nevertheless, we use admissible (in the sense of calculus of variations, that is, satisfying the boundary conditions) non-uniform rational B-spline (NURBS) parameterizations, as similarly used in IsoGeometric Analysis, as an efficient approach to impose Dirichlet boundary conditions.
Figure 1. Exemplification of the Deep NURBS method. From left to right: (a) admissible NURBS (ansatz) φ(x), (b) neural network NN(x), and (c) the product of the first two, that is, the admissible neural network NN(x)φ(x), for the homogeneous Dirichlet case.
To our knowledge, our work is the first to impose boundary conditions using admissible NURBS in the PINNs framework. We call our method Deep NURBS, a combination of deep neural networks (DNNs) and NURBS. We use the weak form of the loss function (energy term), and we implement our algorithms in TensorFlow [26]. Figure 1 shows the admissible NURBS (which acts as an ansatz), the NN, and the product of the two, rendering the admissible NN that ultimately yields the Deep NURBS method. Moreover, despite being an ML-based method, our method is demonstrably robust against slight changes in the NN parameters; this allows us to avoid the computational costs that come with hyperparameter optimization. Lastly, note that the NURBS parameterization does not change the overall computational cost: our Deep NURBS method increases the approximability of the neural network without increasing the overall cost for a fixed neural network.
The remainder of this study is organized as follows. We overview NURBS parameterizations in Section 2. In Section 3, we discuss our approach based on combining DNNs with NURBS. Section 4 is devoted to discussing our results of applying Deep NURBS to various problems with non-trivial complex geometries. Finally, we wrap up this study with a conclusion.
2. Mathematical background

Our notation is as follows: scalar and vector fields are denoted by lower-case letters and lower-case boldface letters, respectively. The gradient and Laplacian operators are denoted by $\nabla$ and $\Delta$, respectively. The physical domain is denoted as $D$ with boundary $\partial D$. Moreover, we let $\partial D := \partial D_D \cup \partial D_N$, where Dirichlet boundary conditions are considered on $\partial D_D$ while Neumann boundary conditions are considered on $\partial D_N$. We construct our approximation on the Sobolev space of all square-integrable functions on $D$ with square-integrable derivatives. Thus, we denote the Lebesgue space as $L^2$ and $\boldsymbol{L}^2$ for scalar-valued and vector-valued functions, respectively. Similarly, we denote the Sobolev spaces of scalar- and vector-valued functions as $H^k$ and $\boldsymbol{H}^k$, respectively, where the norm of the $k$th gradient is $L^2$.
Let us consider two cases. Let $\mathcal{L}[\boldsymbol{u}(x;\theta)]$ be (a) the pointwise residual of a PDE or (b) the energy density of the underlying system, given an ansatz $\boldsymbol{u}$ parametrized by $\theta \in \mathbb{R}^Q$, where $Q$ is the total number of degrees of freedom (e.g., the total number of neural network parameters). Next, let $\boldsymbol{u}$ be the solution of this PDE with values $\boldsymbol{u}_D$ on $\partial D_D$, while its normal derivative $\partial_n \boldsymbol{u}$ takes the value $\boldsymbol{g}_N$ on $\partial D_N$. In a learning type of framework, we may state problems (a) and (b) as follows:

(1)   $\theta^* := \arg\min_{\theta \in \mathbb{R}^Q} \|\mathcal{L}[\boldsymbol{u}(\,\cdot\,;\theta)]\|^2_{L^2(D)} + \text{Dirichlet terms on } \partial D_D + \text{Neumann terms on } \partial D_N.$
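In practice, the boundary terms in (1) are often realized as penalties. The following minimal sketch (our own hypothetical illustration, not the authors' code) shows such a penalized discrete loss for the Dirichlet part; `residual_fn` stands in for the pointwise residual $\mathcal{L}[u(\,\cdot\,;\theta)]$, and the penalty weight is an assumption:

```python
import tensorflow as tf

def penalized_loss(residual_fn, model, x_in, x_b, u_d, penalty=100.0):
    """Discrete version of (1): mean-square PDE residual over interior
    collocation points x_in, plus a penalty enforcing u ≈ u_D on the
    Dirichlet points x_b. residual_fn maps (model, x_in) to pointwise
    residual values, e.g. -Δu - f for a Poisson problem. Neumann terms
    would be added analogously and are omitted here."""
    interior = tf.reduce_mean(tf.square(residual_fn(model, x_in)))
    dirichlet = tf.reduce_mean(tf.square(model(x_b) - u_d))
    return interior + penalty * dirichlet
```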
Note that such a Dirichlet term is imposed only in a weak sense: the authors of Ref. [14], for instance, use a penalization term to impose the Dirichlet boundary conditions. Instead, our approach is to impose the Dirichlet boundary conditions in a strong sense. To this end, we consider a parameterization of our domain $D$,

(2)   $x := \chi(\xi),$
where we assume that $\chi$ is a bijective mapping with $x \in D$, and $\xi$ is a parameter that lives in the parameter space $\Xi$. We here use the notion of admissible fields: a family of functions is said to be admissible if the Dirichlet boundary conditions are satisfied. Consider the following family of admissible vector-valued parameterizations:

(3)   $\mathcal{Z} := \{\boldsymbol{\zeta} \mid \boldsymbol{\zeta} \in \boldsymbol{H}^k(D) \wedge \boldsymbol{\zeta}(x) = \boldsymbol{u}_D(x),\ \forall x = \chi(\xi) \in \partial D_D\}.$

Moreover, we define a family $\mathcal{P}$ of auxiliary smooth scalar-valued functions with vanishing boundary values, i.e., $\varphi : \chi(\Xi) \mapsto \mathbb{R}$ with $\varphi(\chi(\xi)) = 0$ for all $\xi = \chi^{-1}(x)$ such that $x = \chi(\xi) \in \partial D_D$. Furthermore, we require $\varphi \in H^k(D)$. Thus,

(4)   $\mathcal{P} := \{\varphi \mid \varphi \in H^k(D) \wedge \varphi(x) = 0,\ \forall x = \chi(\xi) \in \partial D_D\}.$
The problem statement (1) now reads

(5)   $\theta^* := \arg\min_{\theta \in \mathbb{R}^Q} \|\mathcal{L}[\varphi(\,\cdot\,)\boldsymbol{u}(\,\cdot\,;\theta) + \boldsymbol{\zeta}(\,\cdot\,)]\|^2_{L^2(D)} + \text{Neumann terms on } \partial D_N,$

given an arbitrary $\varphi \in \mathcal{P}$ and $\boldsymbol{\zeta} \in \mathcal{Z}$.
Furthermore, for the scalar case, that is, when we replace the vector-valued function $\boldsymbol{u}$ with the scalar-valued function $u$, expression (5) takes the following form:

(6)   $\theta^* := \arg\min_{\theta \in \mathbb{R}^Q} \|\mathcal{L}[\varphi(\,\cdot\,)u(\,\cdot\,;\theta) + \zeta(\,\cdot\,)]\|^2_{L^2(D)} + \text{Neumann terms on } \partial D_N,$

given an arbitrary $\varphi \in \mathcal{P}$ and $\zeta \in \mathcal{Z}$, where here $\mathcal{Z}$ is

(7)   $\mathcal{Z} := \{\zeta \mid \zeta \in H^k(D) \wedge \zeta(x) = u_D(x),\ \forall x = \chi(\xi) \in \partial D_D\}.$
If we instead restrict attention to the scalar case with homogeneous boundary conditions, the function $\varphi$ becomes our admissible field, and expression (6) specializes to

(8)   $\theta^* := \arg\min_{\theta \in \mathbb{R}^Q} \|\mathcal{L}[\varphi(\,\cdot\,)u(\,\cdot\,;\theta)]\|^2_{L^2(D)} + \text{Neumann terms on } \partial D_N,$

given an arbitrary $\varphi \in \mathcal{P}$.
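A minimal sketch of the admissible ansatz in (8): the network output is multiplied by a function $\varphi$ that vanishes on $\partial D_D$, so the resulting field satisfies the homogeneous Dirichlet conditions by construction. Here, purely for illustration, the hand-picked polynomial $\varphi(x, y) = x(1-x)\,y(1-y)$ on the unit square plays the role that the admissible NURBS parameterization plays in our method:

```python
import tensorflow as tf

def phi(x):
    """Illustrative admissible factor: vanishes on the boundary of (0, 1)²."""
    return x[:, 0:1] * (1.0 - x[:, 0:1]) * x[:, 1:2] * (1.0 - x[:, 1:2])

nn = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="tanh", input_shape=(2,)),
    tf.keras.layers.Dense(1),
])

def u_admissible(x):
    # φ(x)·NN(x;θ): satisfies the homogeneous Dirichlet conditions exactly,
    # so problem (8) needs no boundary penalty term.
    return phi(x) * nn(x)
```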
We will demonstrate how $\varphi$ and $\zeta$ may be constructed using NURBS parameterizations.
2.1. Construction of admissible NURBS parameterizations. Now, let us consider the subsets of NURBS functions $\mathcal{P}_\lambda \subset \mathcal{P}$ and $\mathcal{Z}_\lambda \subset \mathcal{Z}$, which will act as an ansatz. The Cox–de Boor recursive formulation [27, 28], based on [29], is usually adopted to evaluate B-spline basis functions. In $d$ dimensions, B-splines are obtained by considering a knot vector $\Xi = \bigotimes_{\jmath=1}^{d} \Xi_\jmath$ defined over the parametric space, with polynomial degree $p_\jmath$ along the parametric direction $\xi_\jmath$. Finally, to map from the parametric space to the physical one, we use $n_\jmath + 1$ control points. The NURBS basis functions and physical domain are discussed below.

Bases: B-splines in the $\jmath$th spatial direction are given by

(9)   $N_{i,0}(\xi) = \begin{cases} 1 & \text{if } \xi_i \le \xi < \xi_{i+1}, \\ 0 & \text{otherwise}, \end{cases} \qquad N_{i,p}(\xi) = \dfrac{\xi - \xi_i}{\xi_{i+p} - \xi_i}\, N_{i,p-1}(\xi) + \dfrac{\xi_{i+p+1} - \xi}{\xi_{i+p+1} - \xi_{i+1}}\, N_{i+1,p-1}(\xi),$

with

(10)   $\Xi_\jmath = \{\underbrace{0, \ldots, 0}_{p_\jmath+1},\ \xi^{\jmath}_{p_\jmath+1}, \ldots, \xi^{\jmath}_{s_\jmath - p_\jmath - 1},\ \underbrace{1, \ldots, 1}_{p_\jmath+1}\},$

where $s_\jmath = n_\jmath + p_\jmath + 1$; $n_\jmath$ is the number of basis functions and $p_\jmath$ their degree along direction $\jmath$.
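A short, self-contained sketch of the recursion (9) (our own illustrative implementation, following the usual convention that terms with vanishing denominators are dropped, i.e., $0/0 := 0$):

```python
import numpy as np

def bspline_basis(i, p, xi, knots):
    """Cox–de Boor recursion: value of the i-th B-spline of degree p at xi."""
    if p == 0:
        return 1.0 if knots[i] <= xi < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:  # drop term when denominator vanishes
        left = (xi - knots[i]) / (knots[i + p] - knots[i]) \
            * bspline_basis(i, p - 1, xi, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - xi) / (knots[i + p + 1] - knots[i + 1]) \
            * bspline_basis(i + 1, p - 1, xi, knots)
    return left + right

# Open knot vector of the form (10) with p = 2 and n = 4 basis functions:
knots = np.array([0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0])
values = [bspline_basis(i, 2, 0.25, knots) for i in range(4)]
assert abs(sum(values) - 1.0) < 1e-12  # partition of unity inside [0, 1)
```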