An efficient neural-network and finite-difference hybrid
method for elliptic interface problems with applications
Wei-Fan Hu1,4, Te-Sheng Lin2,4, Yu-Hau Tseng3, and Ming-Chih Lai2
1Department of Mathematics, National Central University, Taoyuan 32001, Taiwan
2Department of Applied Mathematics, National Yang Ming Chiao Tung University,
Hsinchu 30010, Taiwan
3Department of Applied Mathematics, National University of Kaohsiung,
Kaohsiung 81148, Taiwan
4National Center for Theoretical Sciences, National Taiwan University, Taipei
10617, Taiwan
March 6, 2023
Abstract
A new and efficient neural-network and finite-difference hybrid method is developed
for solving the Poisson equation in a regular domain with jump discontinuities on
embedded irregular interfaces. Since the solution has low regularity across the
interface, applying a finite difference discretization to this problem requires an
additional treatment accounting for the jump discontinuities. Here, we aim to ease
that extra implementation effort by means of machine learning methodology. The key
idea is to decompose the solution into singular and regular parts. The neural
network learning machinery incorporating the given jump conditions finds the
singular solution, while the standard five-point Laplacian discretization is used
to obtain the regular solution with the associated boundary conditions. Regardless
of the interface geometry, these two tasks require only supervised learning for
function approximation and a fast direct solver for the Poisson equation, making
the hybrid method easy to implement and efficient. The two- and three-dimensional
numerical results show that the present hybrid method preserves second-order
accuracy for the solution and its derivatives, and it is comparable with the
traditional immersed interface method in the literature. As an application, we
solve the Stokes equations with singular forces to demonstrate the robustness of
the present method.
Key words: Neural networks, sharp interface method, fast direct solver, elliptic interface
problem, Stokes equations
arXiv:2210.05523v4 [math.NA] 3 Mar 2023
1 Introduction
In this paper, we aim to solve a d-dimensional (d = 2 or 3) elliptic interface problem defined
in a regular domain Ω ⊂ R^d, which is separated by an embedded interface Γ such that the
subdomains inside and outside the interface are denoted by Ω⁻ and Ω⁺, respectively. Along
the interface Γ, there exist jump discontinuities that the solution must satisfy. With
the associated boundary condition, the problem takes the form
\[
\begin{aligned}
\Delta u(\mathbf{x}) &= f(\mathbf{x}), && \mathbf{x} \in \Omega^- \cup \Omega^+, && (1)\\
\llbracket u(\mathbf{x}) \rrbracket &= \gamma(\mathbf{x}), \quad \llbracket \partial_n u(\mathbf{x}) \rrbracket = \rho(\mathbf{x}), && \mathbf{x} \in \Gamma, && (2)\\
u(\mathbf{x}) &= u_b(\mathbf{x}), && \mathbf{x} \in \partial\Omega. && (3)
\end{aligned}
\]
Here, the jump J·K indicates the quantity approached from the Ω⁺ side minus that from the Ω⁻
side; the shorthand ∂ₙu represents the normal derivative ∇u · n, in which n is the normal
vector pointing from Ω⁻ to Ω⁺. Notice that, here, the underlying differential equation is
subject to a Dirichlet-type boundary condition for illustration purposes, while other types
of boundary condition (Neumann or Robin) would not change the main ingredients presented
here. Since the Poisson equation is considered in Eq. (1), we simply call the above problem
the Poisson interface problem hereafter.
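To make the sign convention in (2) concrete, the following small numerical check uses a hypothetical test case of our own choosing (a circular interface of radius 0.5 with u⁻ = x² + y² inside and u⁺ = 0 outside; this is not one of the paper's examples):

```python
import numpy as np

# Sign-convention check on a hypothetical example: circular interface r = 0.5,
# with u^- = x^2 + y^2 inside (Omega^-) and u^+ = 0 outside (Omega^+).
theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
xg, yg = 0.5 * np.cos(theta), 0.5 * np.sin(theta)   # points on Gamma
nx, ny = np.cos(theta), np.sin(theta)               # normal from Omega^- to Omega^+

# Jump [u] = (value from Omega^+ side) - (value from Omega^- side):
gamma = 0.0 - (xg**2 + yg**2)                       # u^+ - u^- on Gamma

# Jump [du/dn]: grad u^+ = 0, grad u^- = (2x, 2y):
rho = 0.0 - (2.0 * xg * nx + 2.0 * yg * ny)
```

Here γ ≡ -0.25 and ρ ≡ -1 on the interface, illustrating that J·K is the outside value minus the inside value, with the normal pointing outward from Ω⁻.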
As seen from Eqs. (1)-(3), the solution and its partial derivatives have jumps across the
interface. So, when applying the finite difference discretization to this problem, an
additional treatment accounting for those jump discontinuities must be employed at the grid
points near the interface. Over the past few decades, different discretization methodologies
have been successfully developed to capture those jump conditions sharply or to improve
the overall numerical accuracy, such as the immersed interface method (IIM) [12, 13, 16, 18],
the ghost fluid method (GFM) [6, 22], and the Voronoi interface method [7], to name a few.
Different approaches for solving interface problems, such as the immersed finite element
method (IFEM) [8, 10] or other methods, can be found in [20] and the references therein.
On the other hand, much attention has recently been paid to applying deep neural
networks (DNNs) to solve elliptic interface problems, rather than using traditional
numerical methods to solve such problems. Despite the success of the two mainstream deep
learning approaches (Physics-Informed Neural Networks (PINNs) [25, 26] and the deep
Ritz method [5]) in solving partial differential equations with smooth solutions, learning
methods based on these two frameworks for solving elliptic interface problems with jump
discontinuities remain to be improved. The main and intrinsic difficulty may be attributed
to the fact that the usual activation functions used in DNNs are generally smooth; thus,
DNN function approximators seem to be incapable of representing discontinuous functions.
To approximate such discontinuous solutions (or functions) and tackle the elliptic
interface problems, multiple independent networks need to be established and linked with
each other by imposing the jump conditions; see, e.g., piecewise DNNs [11], interfaced
neural networks [27], and the deep unfitted Nitsche method [9]. The resulting prediction errors in
their test examples reach magnitudes of O(10⁻³) to O(10⁻⁴) in the relative L² norm.
Moreover, training these DNN models comes at the cost of having to train a separate neural
network in each subdomain independently. Very recently, the authors of this paper
proposed a Discontinuity Capturing Shallow Neural Network (DCSNN) [14] that allows a
single network to represent piecewise smooth functions via a simple augmentation
technique. The network is completely shallow (one hidden layer), so the resulting number of
trainable parameters is moderate (only a few hundred), and it attains prediction accuracy
as low as O(10⁻⁷) in the relative L² norm for all tests in both 2D and 3D elliptic interface
problems. Note that the above neural network methods are all completely mesh-free, but
their convergence still requires further investigation.
In this work, we propose a novel hybrid method that combines neural network learning
machinery and traditional finite difference methods to solve the Poisson interface problem
(1)-(3). The entire computation only comprises a supervised learning task of function
approximation and a fast direct solver of the Poisson equation, which can be easily and
directly implemented regardless of interface geometry. Here, we want to emphasize that
it is not our intention to replace traditional numerical methods such as the immersed
interface method (IIM) or immersed finite element method (IFEM) nor to compete with
them in every aspect. Instead, we want to provide an alternative (especially from the
implementation aspect) to solve Poisson interface problems with non-homogeneous jump
conditions in which the advantages of using fast Poisson solver and machine learning can
be fully exploited. As is known, the IIM and non-body-fitted IFEM need some complicated
treatments to handle the non-homogeneous jump conditions near the interface, especially in
the 3D case. In the present hybrid method, however, these interface conditions can be easily
incorporated into a function constructed by supervised learning, and thus a regular finite
difference scheme can be exploited. The numerical experiments for 2D and 3D Poisson
interface problems in Section 3 indicate that the proposed method can achieve accuracy
similar to that of the IIM.
The rest of the paper is organized as follows. In Section 2, we present the methodology
and list some features, including error analysis of the hybrid scheme. Numerical results
for the Poisson interface problems and Stokes equations with singular forces are given in
Sections 3 and 4, respectively, followed by some concluding remarks and future work in
Section 5.
2 Hybrid neural-network and finite-difference methodology
By taking advantage of machine learning techniques, our goal is to design an
easy-to-implement fast solver for the Poisson interface problem (1)-(3). To this end, we propose
a novel hybrid method that exploits the advantages of neural network learning machinery
and the traditional finite difference method. As we can see from the jump conditions in (2), the
solution u is non-smooth across the interface. Thus, we start by decomposing the solution
into
\[
u(\mathbf{x}) = v(\mathbf{x}) + w(\mathbf{x}), \qquad (4)
\]
where v and w represent the singular (non-smooth) and regular (smooth) parts of u, respectively.
More precisely, we require w to be fairly smooth over the entire domain Ω,
so that the zero jumps JwK = J∂ₙwK = JΔwK = 0 on the interface are all satisfied. Now
the singular solution v is responsible for carrying all the discontinuities across the interface;
thus, we construct this discontinuous function by assuming
\[
v(\mathbf{x}) =
\begin{cases}
V(\mathbf{x}), & \mathbf{x} \in \Omega^-,\\
0, & \mathbf{x} \in \Omega^+,
\end{cases}
\qquad (5)
\]
where V is a smooth function to be found. Using the above definition and plugging the
decomposition (4) into the jump conditions (2) and the differential equation (1), the unknown
function V must satisfy the following constraints along the interface:
\[
-V(\mathbf{x}) = \gamma(\mathbf{x}), \quad -\partial_n V(\mathbf{x}) = \rho(\mathbf{x}), \quad -\Delta V(\mathbf{x}) = \llbracket f(\mathbf{x}) \rrbracket, \qquad \mathbf{x} \in \Gamma. \qquad (6)
\]
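The minus signs in (6) can be read off directly from the decomposition: since w has zero jumps and v vanishes on the Ω⁺ side while v = V on the Ω⁻ side, each jump of u is carried entirely by v,

```latex
\llbracket u \rrbracket
  = \llbracket v \rrbracket + \llbracket w \rrbracket
  = \bigl(v^{+} - v^{-}\bigr) + 0
  = 0 - V = -V = \gamma ,
\qquad
\llbracket \partial_n u \rrbracket = -\partial_n V = \rho ,
\qquad
\llbracket \Delta u \rrbracket = \llbracket f \rrbracket = -\Delta V ,
```

where the last identity uses Δu = f on both sides of the interface.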
Note that this function is not unique, in the sense that there exist infinitely many functions
defined in the domain Ω that satisfy the restrictions (6). To find V, we leverage the
expressive power of neural networks. Here, we simply employ a shallow (one hidden
layer) fully-connected feedforward neural network to approximate V, and learn the function
via the supervised learning model. Specifically, given a dataset with M training data points
{x_Γ^i ∈ Γ}_{i=1}^M and the target outputs γ(x_Γ^i), ρ(x_Γ^i) and Jf(x_Γ^i)K, we find V(x) by minimizing
the following mean squared error loss consisting of the residuals of the conditions in Eq. (6):
\[
\mathrm{Loss}(\mathbf{p}) = \frac{1}{M} \sum_{i=1}^{M} \Bigl[ \bigl( V(\mathbf{x}_\Gamma^i; \mathbf{p}) + \gamma(\mathbf{x}_\Gamma^i) \bigr)^2 + \bigl( \partial_n V(\mathbf{x}_\Gamma^i; \mathbf{p}) + \rho(\mathbf{x}_\Gamma^i) \bigr)^2 + \bigl( \Delta V(\mathbf{x}_\Gamma^i; \mathbf{p}) + \llbracket f(\mathbf{x}_\Gamma^i) \rrbracket \bigr)^2 \Bigr], \qquad (7)
\]
where p collects all trainable parameters (weights and biases) in the network. To train the
above loss model, we adopted the Levenberg-Marquardt (LM) method [23], a full-batch
optimization algorithm which is particularly efficient for least squares losses. We should
also mention that the partial derivatives of the target function V(x) in the loss function (7)
can be computed easily by automatic differentiation [2].
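As a concrete illustration of this training step, the sketch below builds a shallow tanh network for V, assembles the residuals of (6) in closed form (for a one-hidden-layer tanh network the derivatives are available analytically, so we use those in place of automatic differentiation), and fits them with SciPy's Levenberg-Marquardt driver. The circular interface, its jump data, and all names here are our own illustrative choices, not the paper's setup:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical training data: circular interface r = 0.5, with jump data
# chosen so that V(x, y) = x^2 + y^2 is one admissible solution of (6).
M = 40
th = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
X = 0.5 * np.column_stack([np.cos(th), np.sin(th)])     # interface points
n = np.column_stack([np.cos(th), np.sin(th)])           # unit normals
gamma = -0.25 * np.ones(M)       # target for -V on Gamma
rho = -1.0 * np.ones(M)          # target for -dV/dn on Gamma
jump_f = -4.0 * np.ones(M)       # target for -Laplacian(V) on Gamma

N = 20  # hidden neurons: V(x; p) = sum_j c_j * tanh(w_j . x + b_j)

def residuals(p):
    W = p[:2 * N].reshape(N, 2)
    b, c = p[2 * N:3 * N], p[3 * N:]
    t = np.tanh(X @ W.T + b)          # (M, N) hidden activations
    dt = 1.0 - t**2                   # tanh'
    V = t @ c
    dnV = np.sum(((dt * c) @ W) * n, axis=1)            # grad V . n
    lapV = (-2.0 * t * dt * c) @ np.sum(W**2, axis=1)   # tanh'' * |w_j|^2 terms
    # Residuals of the three conditions in Eq. (6), as in the loss (7):
    return np.concatenate([V + gamma, dnV + rho, lapV + jump_f])

rng = np.random.default_rng(0)
p0 = 0.5 * rng.standard_normal(4 * N)
sol = least_squares(residuals, p0, method="lm")  # full-batch Levenberg-Marquardt
```

Minimizing the sum of squared residuals is equivalent, up to a constant factor, to minimizing the mean squared error loss (7).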
Once V is available, we can obtain w by solving the following Poisson equation:
\[
\Delta w(\mathbf{x}) = \Delta u(\mathbf{x}) - \Delta v(\mathbf{x}) =
\begin{cases}
f(\mathbf{x}) - \Delta V(\mathbf{x}), & \mathbf{x} \in \Omega^-,\\
f(\mathbf{x}), & \mathbf{x} \in \Omega^+,
\end{cases}
\qquad (8)
\]
\[
w(\mathbf{x}) = u_b(\mathbf{x}), \qquad \mathbf{x} \in \partial\Omega. \qquad (9)
\]
Notice that, using the last jump constraint for V in Eq. (6), one can immediately see
that the right-hand side function of (8) is continuous on the entire domain.
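A fast direct solver for (8)-(9) with the five-point Laplacian can be realized via the type-I discrete sine transform, which diagonalizes the discrete operator. Below is a minimal sketch on the unit square with homogeneous Dirichlet data for brevity (non-homogeneous boundary values would be folded into the right-hand side at boundary-adjacent nodes in the usual way); the function name and grid are our own choices:

```python
import numpy as np
from scipy.fft import dstn, idstn

def fast_poisson(g):
    """Solve Delta w = g with the five-point Laplacian on (0,1)^2, w = 0 on
    the boundary. g: (n, n) right-hand side at interior nodes x_i = i*h."""
    n = g.shape[0]
    h = 1.0 / (n + 1)
    k = np.arange(1, n + 1)
    lam = (2.0 * np.cos(np.pi * k * h) - 2.0) / h**2   # 1D eigenvalues
    ghat = dstn(g, type=1)                             # diagonalize via DST-I
    what = ghat / (lam[:, None] + lam[None, :])        # divide by 2D eigenvalues
    return idstn(what, type=1)

# Manufactured check: w = sin(pi x) sin(pi y), so Delta w = -2 pi^2 w.
n = 63
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
Xg, Yg = np.meshgrid(x, x, indexing="ij")
w_exact = np.sin(np.pi * Xg) * np.sin(np.pi * Yg)
w = fast_poisson(-2.0 * np.pi**2 * w_exact)
err = np.max(np.abs(w - w_exact))
```

The forward transform, pointwise division by the eigenvalues, and inverse transform cost O(n² log n) in total, which is what makes the regular-part solve cheap regardless of the interface geometry.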