many kinds of PDEs in the science and engineering fields. However, complex nonlinear problems often require the delicate design of numerical schemes, heavy preparation for mesh generation, and expensive computation. Owing to the rapid development of machine learning, data-driven methods have attracted much attention not only in traditional areas of computer science, such as computer vision (CV) and natural language processing (NLP), but also in scientific computing, which has motivated the new field of scientific machine learning (SciML) [1, 2, 3, 4, 5]. Data-driven methods for solving PDEs use machine learning tools to learn the nonlinear mapping from inputs (spatio-temporal data) to outputs (solutions of PDEs), which avoids heavy preparation and improves computational efficiency. Physics-Informed Neural Networks (PINNs) [6] are a representative line of work in this field. They have received extensive attention, and much follow-up work based on PINNs [7, 8, 9, 10, 11] has been proposed.
PINNs are a class of machine learning algorithms in which the loss function is specially designed based on the given PDEs together with their initial and boundary conditions. The automatic differentiation technique [12] is employed in PINNs to compute the exact derivatives of the network output with respect to its inputs. By embedding physical prior information into the machine learning method, PINNs gain interpretability and are thus not purely black-box models. From the perspective of machine learning, PINNs can be viewed as semi-supervised learning algorithms: they are trained not only to minimize the mean squared error between the predictions at initial and boundary points and the given exact values, but also to satisfy the PDEs at collocation points. The former is easy to implement, while the latter requires further care. Therefore, the selection and sampling of collocation points are vital for the predictive accuracy and efficiency of PINNs.
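To make the role of collocation points concrete, the composite loss typically takes the form below (stated here as a generic sketch in the spirit of the standard formulation of [6]; the notation is illustrative rather than specific to a particular equation)

\[
\mathcal{L}(\theta) \;=\; \underbrace{\frac{1}{N_b}\sum_{i=1}^{N_b}\bigl|u_\theta(x_b^i, t_b^i) - g^i\bigr|^2}_{\text{initial/boundary data}} \;+\; \underbrace{\frac{1}{N_r}\sum_{j=1}^{N_r}\bigl|\mathcal{N}[u_\theta](x_r^j, t_r^j)\bigr|^2}_{\text{PDE residual}},
\]

where $u_\theta$ is the network prediction, $\mathcal{N}[\cdot]$ denotes the PDE operator, $\{(x_b^i, t_b^i), g^i\}_{i=1}^{N_b}$ are the initial and boundary points with their exact values, and $\{(x_r^j, t_r^j)\}_{j=1}^{N_r}$ are the collocation points at which the residual is evaluated.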
The traditional sampling approach for PINNs is to draw uniform or random collocation points before training, which is a fixed sampling method. Several adaptive sampling methods have subsequently been proposed, including RAR [13], adaptive sampling [14], the bc-PINN method [15], importance sampling [16], RANG [17], RAD and RAR-D [18], and Evo and Causal Evo [19].
Although these works have raised the importance of sampling for PINNs to a certain extent, temporal causality has not been emphasized in sampling, especially when solving time-dependent PDEs. Wang et al. [20] proposed a causal training algorithm for PINNs by incorporating designed residual-based temporal causal weights into the loss function. This algorithm ensures that the loss at earlier times is minimized first, which respects temporal causality.
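For reference, a representative form of such residual-based causal weights (we recall here the construction attributed to [20]; the temporal discretization $\{t_i\}_{i=1}^{N_t}$ and the causality parameter $\varepsilon>0$ follow that work and should be read as assumed notation) is

\[
w_i \;=\; \exp\!\Bigl(-\varepsilon \sum_{k=1}^{i-1} \mathcal{L}_r(t_k, \theta)\Bigr), \qquad
\mathcal{L}_r(\theta) \;=\; \frac{1}{N_t}\sum_{i=1}^{N_t} w_i\, \mathcal{L}_r(t_i, \theta),
\]

where $\mathcal{L}_r(t_i, \theta)$ is the residual loss restricted to time $t_i$. The weight $w_i$ remains small until the residual losses at all earlier times have been sufficiently reduced, so later times are emphasized only once earlier times are well resolved.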
However, in [20], the collocation points are sampled uniformly and kept fixed within each spatio-temporal sub-domain, which is not suitable in many situations. We argue that collocation points should also be sampled on the foundation of respecting temporal causality. This argument stems from traditional numerical schemes: carefully designed iterative schemes march the solution forward from the initial moment to subsequent moments according to the time step. In the same spirit, the sampling method should also obey this temporal causality guideline.
Motivated by traditional numerical schemes and temporal causality, we propose a novel adaptive causal sampling method in which collocation points are adaptively sampled according to both the PDE residual loss and the temporal causal weight. This idea is original