A Novel Adaptive Causal Sampling Method for Physics-
Informed Neural Networks
Jia Guo1, Haifeng Wang1 and Chenping Hou1,*
1College of Science, National University of Defense Technology, Changsha 410073,
P.R. China.
Abstract. Physics-Informed Neural Networks (PINNs) have become an attractive machine learning method for obtaining solutions of partial differential equations (PDEs). Training PINNs can be seen as a semi-supervised learning task: in forward problems, exact values are available only at initial and boundary points, while the collocation points sampled over the whole spatio-temporal domain carry no exact labels, which brings training difficulties. Thus the selection of collocation points and the sampling method are crucial in training PINNs. Existing sampling methods are of fixed and dynamic types; in the more popular latter type, sampling is usually controlled by the PDE residual loss. We point out that considering only the residual loss in adaptive sampling is not sufficient: sampling should also obey temporal causality. We therefore introduce temporal causality into adaptive sampling and propose a novel adaptive causal sampling method to improve the performance and efficiency of PINNs. Numerical experiments on several PDEs with high-order derivatives and strong nonlinearity, including the Cahn-Hilliard and KdV equations, show that the proposed sampling method can improve the performance of PINNs with few collocation points. We demonstrate that with such a relatively simple sampling method, prediction performance can be improved by up to two orders of magnitude compared with state-of-the-art results, with almost no extra computational cost, especially when points are limited.
Key words: Partial differential equation, Physics-Informed Neural Networks, Residual-based
adaptive sampling, Causal sampling.
1 Introduction

*Corresponding author. Email addresses: guojia14@nudt.edu.cn (J. Guo), wanghaifeng20@nudt.edu.cn (H. Wang), hcpnudt@hotmail.com (C. Hou)
http://www.global-sci.com/ Global Science Preprint
arXiv:2210.12914v1 [cs.LG] 24 Oct 2022

Many natural phenomena and physics laws can be described by partial differential equations (PDEs), which are powerful modeling tools in quantum mechanics, fluid dynamics, phase-field modeling, etc. Traditional numerical methods play an important role in solving many kinds of PDEs in the science and engineering fields. However, complex nonlinear problems always require the delicate design of schemes, heavy preparation for mesh generation, and expensive computational costs. Due to the rapid development of machine
learning, data-driven methods attract much attention not only in traditional areas of computing, such as computer vision (CV) and natural language processing (NLP), but also in scientific computing, which has motivated the new field of scientific machine learning (SciML) [1] [2] [3] [4] [5]. Data-driven methods for solving PDEs utilize machine learning tools to learn the nonlinear mapping from inputs (spatio-temporal data) to outputs (solutions of PDEs), which omits heavy preparation and improves computational efficiency. Physics-Informed Neural Networks (PINNs) [6] are a representative work in this field. They have received extensive attention, and much recent work based on PINNs [7] [8] [9] [10] [11] has been put forward.
PINNs are a class of machine learning algorithms whose loss function is specially designed based on the given PDEs with initial and boundary conditions. The automatic differentiation technique [12] is utilized in PINNs to calculate the exact derivatives of the variables. By embedding prior physics information into the machine learning method, PINNs enhance interpretability and thus are not a class of pure black-box models. In machine learning terms, PINNs can be seen as semi-supervised learning algorithms: they are trained not only to minimize the mean squared error between the predictions at initial and boundary points and their given exact values, but also to satisfy the PDEs at collocation points. The former is easy to implement, while the latter needs further exploration. Therefore the selection and sampling of collocation points are vital for the prediction accuracy and efficiency of PINNs. The traditional sampling method of PINNs is to sample uniform or random collocation points before training, which is a fixed sampling method. Since then, several adaptive sampling methods have been proposed, including RAR [13], adaptive sampling [14], the bc-PINN method [15], importance sampling [16], RANG [17], RAD and RAR-D [18], and Evo and Causal Evo [19].
Though the importance of sampling for PINNs has been recognized to a certain extent in these works, temporal causality has not been emphasized in sampling, especially for solving time-dependent PDEs. Wang et al. [20] proposed a causal training algorithm for PINNs that injects designed residual-based temporal causal weights into the loss function. This algorithm ensures that the loss at earlier times is minimized first, which respects temporal causality. However, in [20], the collocation points are sampled evenly and kept fixed in each spatio-temporal sub-domain, which is not suitable in many situations. We argue that collocation points should also be sampled on the foundation of respecting temporal causality. This argument stems from traditional numerical schemes: the designed iterative schemes calculate the solution from the initial moment to the next moment according to the time step. Similarly, the sampling method should also obey this temporal causality guideline.
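The residual-based causal weighting of [20] that motivates this guideline can be written down in a few lines. In this sketch, the per-time-slab residual losses and the causality parameter `eps` are illustrative stand-ins, not values from the paper:

```python
import math

def causal_weights(slab_losses, eps=1.0):
    """Temporal causal weights in the style of Wang et al. [20]:
    w_i = exp(-eps * sum of residual losses over all earlier time slabs),
    so later slabs are down-weighted until earlier ones are minimized."""
    weights, cumulative = [], 0.0
    for loss in slab_losses:
        weights.append(math.exp(-eps * cumulative))
        cumulative += loss
    return weights

# Illustrative residual losses for four consecutive time slabs.
w = causal_weights([2.0, 1.0, 0.5, 0.1], eps=1.0)
```

With large residual losses at early times, the weights of later slabs are close to zero, so training concentrates on the earliest unresolved slab first; the method proposed in this paper carries the same principle over to sampling.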
Motivated by traditional numerical schemes and temporal causality, we propose a novel adaptive causal sampling method in which collocation points are adaptively sampled according to both the PDE residual loss and a temporal causal weight. This idea originates from the adaptivity mechanism of the finite element method (FEM), which improves computational efficiency, and from the temporal causality of traditional numerical schemes, which obeys the temporal order.
In this paper, we mainly focus on sampling methods for PINNs. Our specific contributions are as follows:

• We analyze the failure of adaptive sampling and show that sampling should obey temporal causality; otherwise it leads to sampling confusion and trivial solutions of PINNs.

• We introduce temporal causality into sampling and propose a novel adaptive causal sampling method.

• We investigate our proposed method in several numerical experiments and obtain better results than state-of-the-art sampling methods, with almost no extra computational cost and with few points, which shows the high efficiency of our proposed method and its potential for computationally complex problems, such as large-scale and high-dimensional problems.
The structure of this paper is organized as follows. Section 2 briefly introduces the main procedure of PINNs and analyzes the necessity of introducing temporal causality into sampling. Section 3 proposes a novel adaptive causal sampling method. In Section 4, we investigate several numerical experiments to demonstrate the performance of the proposed method. The final section concludes the whole paper.
2 Background
In this section, we first provide a brief overview of PINNs. Then, by investigating an
illustrative example, we analyze the necessity of temporal causality in sampling.
2.1 Physics-Informed Neural Networks
PINNs are a class of machine learning algorithms in which prior physics information, including the initial and boundary conditions and the corresponding PDE form, is embedded into the loss function. Here we consider the general form of nonlinear PDEs with initial and boundary conditions
$$
\begin{aligned}
u_t + \mathcal{N}[u] &= 0, && t \in [0,T],\ x \in \Omega, \\
u(0,x) &= g(x), && x \in \Omega, \\
\mathcal{B}[u] &= 0, && t \in [0,T],\ x \in \partial\Omega,
\end{aligned}
\tag{2.1}
$$
where $x$ and $t$ are the space and time coordinates respectively, and $u$ is the solution of the PDE system (2.1). $\Omega$ is the computational domain and $\partial\Omega$ represents its boundary. $\mathcal{N}[\cdot]$ is a nonlinear differential operator, $g(x)$ is the initial function, and $\mathcal{B}[\cdot]$ represents the boundary operator, which includes periodic, Dirichlet, Neumann boundary conditions, etc.
The universal approximation theorem [21] guarantees that there exists a deep neural network $\hat{u}_\theta(t,x)$, with nonlinear activation function $\sigma$ and tunable parameters $\theta$ (namely weights and biases), such that the PDE solution $u(t,x)$ can be approximated by it. The prediction of the solution $u(t,x)$ can then be converted into the optimization problem of training a deep learning model. The aim of training PINNs is to minimize the loss function and find the optimal parameters $\theta$. The usual form of the loss function in PINNs is composed of three parts with tunable coefficients
$$
\mathcal{L}(\theta) = \lambda_{ic}\mathcal{L}_{ic}(\theta) + \lambda_{bc}\mathcal{L}_{bc}(\theta) + \lambda_{res}\mathcal{L}_{res}(\theta),
\tag{2.2}
$$
where
$$
\begin{aligned}
\mathcal{L}_{ic}(\theta) &= \frac{1}{N_{ic}} \sum_{i=1}^{N_{ic}} \big|\hat{u}_\theta(0, x_{ic}^i) - g(x_{ic}^i)\big|^2, \\
\mathcal{L}_{bc}(\theta) &= \frac{1}{N_{bc}} \sum_{i=1}^{N_{bc}} \big|\mathcal{B}[\hat{u}_\theta](t_{bc}^i, x_{bc}^i)\big|^2, \\
\mathcal{L}_{res}(\theta) &= \frac{1}{N_{res}} \sum_{i=1}^{N_{res}} \Big|\frac{\partial \hat{u}_\theta}{\partial t}(t_{res}^i, x_{res}^i) + \mathcal{N}[\hat{u}_\theta](t_{res}^i, x_{res}^i)\Big|^2.
\end{aligned}
\tag{2.3}
$$
$\{0, x_{ic}^i\}_{i=1}^{N_{ic}}$, $\{t_{bc}^i, x_{bc}^i\}_{i=1}^{N_{bc}}$ and $\{t_{res}^i, x_{res}^i\}_{i=1}^{N_{res}}$ are the initial data, boundary data and residual collocation points respectively, which are the inputs of PINNs. In the loss function, the calculation of derivatives, e.g. $\partial\hat{u}_\theta/\partial t$, can be performed via automatic differentiation [12]. Besides, the gradients with respect to the network parameters $\theta$ are also computed via this technique. Moreover, the hyper-parameters $\lambda_{ic}$, $\lambda_{bc}$, $\lambda_{res}$ are usually tuned by users or by automatic algorithms in order to balance the training of the different loss terms.
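As a concrete illustration, the composite loss (2.2)-(2.3) can be sketched in a few lines of PyTorch. Here the network architecture, the Burgers-type stand-in for $\mathcal{N}[u] = u\,u_x$, the initial function, and the weights $\lambda$ are all hypothetical choices, and the boundary term is omitted for brevity:

```python
import torch

# Small fully-connected surrogate u_hat_theta(t, x); depth and width are placeholders.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

def residual_loss(t, x):
    """L_res of (2.3) for a hypothetical Burgers-type operator N[u] = u * u_x."""
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    u = net(torch.stack([t, x], dim=-1)).squeeze(-1)
    ones = torch.ones_like(u)
    # Exact derivatives via automatic differentiation, as in [12].
    u_t = torch.autograd.grad(u, t, grad_outputs=ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, grad_outputs=ones, create_graph=True)[0]
    return ((u_t + u * u_x) ** 2).mean()

def ic_loss(x_ic, g):
    """L_ic of (2.3): mean squared error against the initial function g."""
    t0 = torch.zeros_like(x_ic)
    u0 = net(torch.stack([t0, x_ic], dim=-1)).squeeze(-1)
    return ((u0 - g(x_ic)) ** 2).mean()

# Composite loss (2.2); the lambda weights are user-tuned placeholders.
lam_ic, lam_res = 100.0, 1.0
t_res, x_res = torch.rand(256), torch.rand(256) * 2 - 1
x_ic = torch.rand(64) * 2 - 1
loss = lam_ic * ic_loss(x_ic, lambda x: -torch.sin(torch.pi * x)) \
     + lam_res * residual_loss(t_res, x_res)
loss.backward()  # gradients w.r.t. theta, also via automatic differentiation
```

The residual term drives the inner computational graph (`create_graph=True`) so that gradients with respect to $\theta$ can flow through $u_t$ and $u_x$ in the outer `backward()` call.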
2.2 Analysis of sampling
2.2.1 Existing sampling methods
In the original PINNs [6], collocation points are uniformly or randomly sampled before the whole training procedure, which can be seen as a fixed type of sampling. This type is quite useful for solving some PDEs; however, for more complicated PDEs it has difficulties in both prediction accuracy and convergence efficiency. To improve sampling performance for PINNs, the adaptive idea has been adopted, forming the adaptive type of sampling. Residual-based adaptive refinement (RAR) was first proposed by Lu [13]; it adds new collocation points in areas with large PDE residuals. This kind of adaptive sampling method pursues improved prediction accuracy by automatically sampling more collocation points.
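The selection rule behind such residual-based refinement can be sketched as follows; the candidate pool, the residual function, and the number of added points are hypothetical placeholders, not the settings of [13]:

```python
import random

def rar_select(candidates, residual_fn, k):
    """RAR-style selection: from a random candidate pool, keep the k
    points with the largest PDE residual magnitude to add to training."""
    ranked = sorted(candidates, key=lambda p: abs(residual_fn(*p)), reverse=True)
    return ranked[:k]

random.seed(0)
pool = [(random.random(), random.uniform(-1.0, 1.0)) for _ in range(1000)]
# Hypothetical residual that is large near x = 0 (e.g. a steep front there).
new_points = rar_select(pool, lambda t, x: 1.0 / (abs(x) + 1e-2), k=10)
```

Because the candidates are ranked purely by residual magnitude, the selected points cluster wherever the residual is largest, regardless of their position in time; this is exactly the behavior the present paper argues must additionally respect temporal causality.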
Compared with these sampling methods, which keep increasing the number of sampled points, we aim to achieve better accuracy on the foundation of a limited and small number of points, which