As advocated in [53, 31], data-driven scientific machine learning problems can be viewed in terms of the
amount of data that is available and the amount of physics that is known. They are broadly classified into
three categories in [31]: (i) those with “lots of physics and small data” (e.g. forward PDE problems), (ii)
those with “some physics and some data” (e.g. inverse PDE problems), and (iii) those with “no physics and
big data” (e.g. general PDE discovery). The authors of [31] point out that problems in the second category are typically the most interesting and representative of real applications, where the physics is partially known and only sparse measurements are available. One illustrative example comes from multiphase flows, where the conservation laws (conservation of mass and momentum) and thermodynamic principles (the second law of thermodynamics, Galilean invariance) lead to a thermodynamically consistent phase field model, but with an incomplete sys-
tem of governing equations [15, 14]. One has the freedom to choose the form of the free energy, the wall
energy, the form and coefficients of the constitutive relation, and the form and coefficient of the interfacial
mobility [12, 13, 58]. Different choices lead to different specific models, all of which are thermodynamically consistent. These models cannot be distinguished by the thermodynamic principles alone, but they can be differentiated by experimental measurements.
The development of machine learning techniques for solving inverse PDE problems has attracted a great
deal of interest recently, with a variety of contributions from different researchers. In [44] a method for
estimating the parameters in nonlinear PDEs is developed based on Gaussian processes, where the state variable at two consecutive snapshots is assumed to be known. The physics-informed neural network (PINN) method is introduced in the influential work [45] for solving forward and inverse nonlinear PDEs. The residuals of the PDE, the boundary and initial conditions, and the measurement data are encoded into the loss function as soft constraints (a generic form of this composite loss is sketched below), and the neural network is trained by gradient-descent or back-propagation type algorithms. The PINN method has significantly influenced subsequent developments and stimulated applications in many related areas (see e.g. [36, 46, 29, 38, 11, 50, 9, 35, 56, 30, 43], among
others). A hybrid finite element and neural network method is developed in [1], where the finite element method (FEM) is used to solve the underlying PDE and a neural network is employed to represent the unknown PDE coefficient. A conservative PINN method is proposed in [29] together with domain decomposition
for simulating nonlinear conservation laws, in which the flux continuity is enforced along the sub-domain
interfaces, and interesting results are presented for a number of forward and inverse problems. This method
is further developed and extended in a subsequent work [28] with domain decompositions in both space and
time; see [30] for a recent study applying this extended technique to supersonic flows. Interesting applications
are described in [46, 9], where the PINN technique is employed to infer the 3D velocity and pressure fields
based on scattered flow visualization data or Schlieren images from experiments. In [20] a distributed PINN
method based on domain decomposition is presented, and the loss function is optimized by a gradient descent
algorithm. For nonlinear PDEs, the method solves a related linearized equation with certain variables fixed
at their initial values [20]. An auxiliary PINN technique is developed in [59] for solving nonlinear integro-differential equations, in which auxiliary variables are introduced to represent the anti-derivatives, thus avoiding the computation of integrals (an illustration is given below). We also refer the reader to e.g. [11, 53, 37, 33] (among others) for inverse applications of neural networks in other related fields.
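To fix ideas regarding the composite loss mentioned above, consider a generic PINN-type formulation of an inverse problem (the notation and weighting here are ours and only schematic, not necessarily those of [45]). Let $u_\theta$ denote the neural-network approximation with trainable parameters $\theta$, let $\mathcal{N}_\lambda[u] = 0$ denote the PDE with unknown parameters $\lambda$ and $\mathcal{B}[u] = 0$ the boundary/initial conditions, and let $(z_k, d_k)$ denote the measurement data. A typical loss function reads
\[
L(\theta, \lambda) = \frac{w_r}{N_r} \sum_{i=1}^{N_r} \big| \mathcal{N}_\lambda[u_\theta](x_i) \big|^2
+ \frac{w_b}{N_b} \sum_{j=1}^{N_b} \big| \mathcal{B}[u_\theta](y_j) \big|^2
+ \frac{w_d}{N_d} \sum_{k=1}^{N_d} \big| u_\theta(z_k) - d_k \big|^2,
\]
where the $x_i$ and $y_j$ are collocation points in the domain and on the domain boundary, respectively, and $w_r$, $w_b$, $w_d$ are penalty weights. For the inverse problem, $(\theta, \lambda)$ are trained jointly by minimizing $L$.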
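The auxiliary-variable idea of [59] can likewise be illustrated with a simple example (the specific equation below is our hypothetical illustration, not taken from [59]). For the integro-differential equation
\[
\frac{du}{dx}(x) + \int_0^x u(s)\, ds = f(x),
\]
introducing the auxiliary variable $v(x) = \int_0^x u(s)\, ds$, represented by an additional network output, converts the problem into the purely differential system $u' + v = f$ and $v' = u$ with $v(0) = 0$, so that no integral needs to be evaluated during training.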
In the current work we consider the use of randomized neural networks, also known as extreme learning
machines (ELM) [25] (or random vector functional link (RVFL) networks [42]), for solving inverse PDE
problems. ELM was originally developed for linear classification and regression problems. It is characterized
by two ideas: (i) randomly assigned but fixed (non-trainable) hidden-layer coefficients, and (ii) trainable
linear output-layer coefficients determined by linear least squares or by using the Moore-Penrose inverse [25].
This technique has been extended to scientific computing in the past few years, for function approximations
and for solving ordinary and partial differential equations (ODE/PDE); see e.g. [57, 41, 21, 16, 17, 10, 22, 51,
19], among others. Random-weight neural networks are universal function approximators. As established
by the theoretical results of [27, 26, 39], a single-hidden-layer feed-forward neural network (FNN) having
random but fixed (not trained) hidden units can approximate any continuous function to any desired degree
of accuracy, provided that the number of hidden units is sufficiently large.
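To make ideas (i) and (ii) above concrete, the following minimal sketch (our illustration for a simple one-dimensional regression problem, not the PDE setting considered in this paper; all variable names are ours) trains an ELM with a single linear least-squares solve:

    import numpy as np

    # Hypothetical training data for a 1D regression problem.
    x = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
    y = np.sin(2.0 * np.pi * x)

    M = 100                                  # number of hidden units
    rng = np.random.default_rng(0)
    W = rng.uniform(-1.0, 1.0, size=(1, M))  # random hidden weights (fixed, not trained)
    b = rng.uniform(-1.0, 1.0, size=(1, M))  # random hidden biases (fixed, not trained)

    Phi = np.tanh(x @ W + b)                 # hidden-layer feature matrix

    # Output-layer coefficients by linear least squares (equivalently,
    # beta = pinv(Phi) @ y via the Moore-Penrose inverse).
    beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)

    y_pred = Phi @ beta                      # ELM prediction on the training points

Since only the output-layer coefficients beta are solved for, training reduces to a single linear least-squares problem. For linear forward PDEs, an analogous least-squares system arises from enforcing the equation and the boundary/initial conditions at collocation points, which underlies the ELM/locELM approaches cited above.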
In this paper we present a method for solving inverse PDE problems based on randomized neural networks. It extends the local extreme learning machine (locELM) technique, originally developed in [16] for forward PDEs, to inverse problems. Because of the coupling between the unknown PDE parameters (referred to as the inverse parameters hereafter) and the solution field, the inverse PDE problem is fully nonlinear with respect to the unknowns, even though the associated forward PDE may be linear. For example, if the constant diffusion coefficient of a linear diffusion equation is unknown, the equation involves the product of this coefficient and the solution derivatives, and is therefore nonlinear with respect to the combined set of unknowns. We