A Reduced Basis Ensemble Kalman Method
Francesco A. B. Silva1*, Cecilia Pagliantini1, Martin Grepl2
and Karen Veroy1
1*Department of Mathematics and Computer Science, Eindhoven
University of Technology, Eindhoven, 5600 MB, The Netherlands.
2Institute of Geometry and Practical Mathematics, RWTH
Aachen University, Aachen, 52056, Germany.
*Corresponding author(s). E-mail(s): f.a.b.silva@tue.nl;
Contributing authors: c.pagliantini@tue.nl; grepl@igpm.rwth-aachen.de; k.p.veroy@tue.nl
Abstract
In the process of reproducing the state dynamics of parameter-dependent distributed systems, data from physical measurements can be incorporated into the mathematical model to reduce the parameter uncertainty and, consequently, improve the state prediction. Such a data assimilation process must deal with the data and model misfit arising from experimental noise as well as from model inaccuracies and uncertainties. In this work, we focus on the ensemble Kalman method (EnKM), a particle-based iterative regularization method designed for a posteriori analysis of time series. The method is gradient-free and, like the ensemble Kalman filter (EnKF), relies on a sample of parameters, or particle ensemble, to identify the state that best reproduces the physical observations while preserving the physics of the system as described by the best-knowledge model. We consider systems described by parameterized parabolic partial differential equations and employ model order reduction (MOR) techniques to generate surrogate models of different accuracy with uncertain parameters. Their use in combination with the EnKM involves the introduction of a model bias, which constitutes a new source of systematic error. To mitigate its impact, an algorithm adjustment is proposed that accounts for a prior estimate of the bias in the data. The resulting reduced basis ensemble Kalman method (RB-EnKM) is tested in different conditions, including different ensemble sizes and increasing levels of experimental noise. The results are compared to those obtained with the standard EnKF and with the unadjusted algorithm.
arXiv:2210.02279v1 [math.NA] 5 Oct 2022
Keywords: Inverse Problems, Ensemble Kalman Method, Model Order
Reduction, Representation Error
1 Introduction
The problem of estimating model parameters of static and dynamical systems is encountered in many applications, from earth sciences to engineering. In this work we focus on the parameter estimation of dynamical systems described by parameterized parabolic partial differential equations (pPDEs). Here, we assume that limited and noise-polluted knowledge of the solution is available at multiple time instances through local measurements.

For solving this kind of inverse problem, numerous deterministic and stochastic methods have been proposed. Among them, a widely used technique is the so-called ensemble Kalman filter (EnKF) [1], a recursive filter employing a series of measurements to obtain improved estimates of the variables involved in the process. The idea of using the EnKF to reconstruct the parameters of dynamical systems traces back to [2, 3], in which trivial artificial dynamics for the parameters were assumed to make the estimation possible. This was naturally accompanied by efforts to improve the performance of the method in terms of stability, by introducing covariance inflation [4, 5] and localization [4, 6], and in terms of computational cost. Relevant to the latter have been the development of multi-level methods [7], the use of model order reduction techniques [8], and the introduction of further surrogate modeling techniques [9]. The use of approximate models inevitably led to the study of the impact of model error on the EnKF [10, 11], alongside other data assimilation methods [12, 13].
Although ensemble Kalman methods were originally intended for sequential data assimilation, i.e., for real-time applications, they have proved reliable also for asynchronous data assimilation [14]. The first paper proposing to adapt the EnKF to retrospective data analysis was [15]. For the analysis, the data are employed all at once at the end of an assimilation window, a feature shared with a series of methods, e.g., variational methods [16] such as 4D-VAR [17] and other smoothers [18]. Compared to those approaches, the EnKF is particularly appealing since it does not require the computation of Fréchet derivatives, a major complication for data assimilation algorithms.

In [19], Iglesias et al. introduced what they called the ensemble Kalman method, an EnKF-based asynchronous data assimilation algorithm. Depending on the design of the algorithm, this method has connections to Bayesian data assimilation [20] and to maximum likelihood estimation [21]. In particular, in the latter case, the method constitutes an ensemble-based implementation of so-called iterative regularization methods [22]. In the case of perfect models, the EnKM has already been analyzed in depth in [20, 23], and convergence and identifiability enhancements have been proposed in [24, 25]. Due to the iterative nature of the EnKM, dealing with high-dimensional parametric problems is often computationally challenging. In [26] a multi-level strategy has been proposed to improve the computational performance of the method.
In this work we propose an algorithm, called the Reduced Basis Ensemble Kalman Method (RB-EnKM), that leverages the computational efficiency of surrogate models obtained with MOR techniques to solve asynchronous data assimilation problems via ensemble Kalman methods. The use of the EnKM allows us to avoid adjoint problems, which are often difficult to reduce and intrinsically depend on the choice of measurement positions. Model order reduction, already employed in other data assimilation problems [27, 28], is used as a key tool for accelerating the method. However, the use of approximate models within the EnKM introduces a model error that could hinder the convergence of the method. In this work, we propose to deal with this error by including a prior estimate of the bias in the data. Specifically, we incorporate empirical estimates of the mean and covariance of the bias in the Kalman gain. In some instances, those quantities can be computed at negligible cost by employing the same training set used for the construction of the reduced model.
The paper is structured as follows. In Section 2 we introduce the asynchronous data assimilation problem together with the standard ensemble Kalman method (Algorithm 1). Subsequently, in Section 3.1, we present an overview of reduced basis (RB) methods and describe how to use them in combination with the ensemble Kalman method to derive the RB-EnKM (Algorithm 2). In Section 4, we test the new method on two numerical examples. In the first example, we estimate the diffusivity in a linear advection-dispersion problem in 2D (Section 4.1), while in the second, we estimate the hydraulic log-conductivity in a non-linear hydrological problem (Section 4.2). In both cases, we compare the behavior of the full order and reduced order models in different conditions. Section 5 provides conclusions and considerations on the proposed method and its numerical performance.
2 Problem Formulation
Let $\mathcal{U}$ be a given function space and let $\mathcal{P} \subset \mathbb{R}^{N_p}$, with $N_p \in \mathbb{N}^+$, be a set of model parameters. We consider the pPDE: for any parameter $\mu \in \mathcal{P}$, find $u(\cdot,\cdot;\mu) \in \mathcal{U}$ such that $\partial_t u(x,t;\mu) = \mathcal{F}_{\mu} u(x,t;\mu)$ for any $x \in \mathbb{R}^d$ and $t \in I := (0,T] \subset \mathbb{R}^+$. Here $\mathcal{F}_{\mu}$ is a generic parameterized differential operator and $\partial_t$ is the first-order partial time derivative. This pPDE provides the constraint to the inverse problem of estimating the unknown parameter $\mu^{\star} \in \mathcal{P}$ from data or observations given by

$$y(\mu^{\star},\eta) = \mathcal{L}\, u(x,t;\mu^{\star}) + \eta \quad \text{s.t.} \quad \partial_t u(x,t;\mu^{\star}) = \mathcal{F}_{\mu^{\star}} u(x,t;\mu^{\star}). \tag{1}$$
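For intuition, the observation model (1) can be mimicked in a few lines: a toy closed-form solution stands in for the pPDE solution, a linear observation operator samples it at a few sensor locations, and Gaussian noise is added. The toy dynamics, sensor positions, and parameter values below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "solution" u(x, T; mu) = exp(-mu * x): a stand-in for the pPDE
# solution at final time; mu plays the role of the unknown parameter.
def u(x, mu):
    return np.exp(-mu * x)

# Linear observation operator L: point evaluations at sensor locations.
sensors = np.array([0.1, 0.4, 0.7])
def observe(mu):
    return u(sensors, mu)

# Data y(mu_true, eta): measurements polluted by noise with Sigma = sigma^2 I.
mu_true, sigma = 2.0, 0.01
eta = sigma * rng.standard_normal(sensors.size)
y = observe(mu_true) + eta
print(y.shape)  # one entry per sensor: (3,)
```

The observed vector thus lives in $\mathbb{R}^{N_m}$ with $N_m = 3$, matching the dimension of the noise realization.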
Here, $\mathcal{L} : \mathcal{U} \to \mathbb{R}^{N_m}$, with $N_m \in \mathbb{N}^+$, maps the space of the solutions to the space of the measurements, simulating the observation process, and $\eta$ is an unknown realization of a Gaussian random variable with zero mean and given covariance $\Sigma \in \mathbb{R}^{N_m \times N_m}$. Note that both the observed data $y$ and the additive noise $\eta$ are $N_m$-dimensional vector-valued quantities and that $\Sigma$ is a symmetric positive-definite matrix defining the norm $\|\cdot\|^2_{\Sigma^{-1}} := \|\Sigma^{-1/2}\,\cdot\,\|^2_2$ on $\mathbb{R}^{N_m}$, where $\|\cdot\|_2$ is the Euclidean norm.
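As a numerical aside, the weighted norm $\|v\|^2_{\Sigma^{-1}}$ can be evaluated without forming $\Sigma^{-1}$ explicitly by using a Cholesky factorization of $\Sigma$; this is a minimal sketch with an arbitrary example covariance (not one from the paper):

```python
import numpy as np

# Example covariance: symmetric positive definite (arbitrary illustration).
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
v = np.array([1.0, -1.0])

# ||v||^2_{Sigma^{-1}} = v^T Sigma^{-1} v, computed via a Cholesky solve
# instead of an explicit inverse, which is cheaper and more stable.
L = np.linalg.cholesky(Sigma)   # Sigma = L L^T
w = np.linalg.solve(L, v)       # w = L^{-1} v, so w^T w = v^T Sigma^{-1} v
norm_sq = float(w @ w)

# Cross-check against the direct (explicit-inverse) formula.
assert np.isclose(norm_sq, v @ np.linalg.inv(Sigma) @ v)
print(norm_sq)
```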
To solve this inverse problem, we must explicitly solve the pPDE (1). This is done using a suitable discretization, in space and time, of the differential operator $\mathcal{F}_{\mu}$. To this end, we introduce an approximation space $\mathcal{V}_h \subset \mathcal{U}$ so that the approximate problem reads: find $u_h(\mu) = u_h(\cdot,\cdot;\mu) \in \mathcal{V}_h$ such that

$$\partial_t u_h(x,t;\mu) = \mathcal{F}^h_{\mu} u_h(x,t;\mu). \tag{2}$$

The discretization of the pPDE can be chosen according to the specific problem of interest. In all numerical examples proposed in this work, we employ a space-time Petrov–Galerkin discretization of (1) with piecewise polynomial trial and test spaces, as described in Section 4, and we assume (2) to be sufficiently accurate that we can take $y(\mu^{\star},\eta) = \mathcal{L}\, u_h(x,t;\mu^{\star}) + \eta$.
To characterize the observation of the solution, we introduce the forward response map $\mathcal{G} : \mathcal{P} \to \mathbb{R}^{N_m}$ defined as $\mathcal{G}(\mu) := \mathcal{L}\, u_h(x,t;\mu)$ for any solution of the pPDE (2). Although the use of the map $\mathcal{G}$ results in a more compact notation, omitting its dependence on the solution of the pPDE conceals a key aspect of the method, i.e., the mapping from the parameter vector to the corresponding space-time pPDE solution. For this reason, and because it makes it harder to introduce the problem discretization, it will be used with caution.
2.1 The Ensemble Kalman Method
The data assimilation problem presented above can be recast as a minimization problem for the cost functional $\Phi(\mu\,|\,y) := \|y(\mu^{\star},\eta) - \mathcal{L}\, u_h(x,t;\mu)\|^2_{\Sigma^{-1}}$, representing the misfit between the experimental data $y(\mu^{\star},\eta)$ and the forward response. The optimal parameter estimate $\mu_{\mathrm{opt}}(y)$ is thus given by

$$\mu_{\mathrm{opt}}(y) = \arg\min_{\mu \in \mathcal{P}} \Phi(\mu\,|\,y) \quad \text{s.t.} \quad \partial_t u_h(x,t;\mu) = \mathcal{F}^h_{\mu} u_h(x,t;\mu). \tag{3}$$

This is equivalent to a maximum likelihood estimation, given the likelihood function $l(\mu\,|\,y) = \exp\{-\tfrac{1}{2}\Phi(\mu\,|\,y)\}$ associated with the probability density function of the data $y\,|\,\mu$, i.e., the probability of observing $y$ if $\mu$ is the parametric state. The shape of the function follows from the probability density function of the Gaussian noise realization.
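The misfit $\Phi(\mu\,|\,y)$ and the associated likelihood can be written directly from their definitions; the forward map and data below are hypothetical placeholders in the same toy spirit as above, not the paper's examples:

```python
import numpy as np

# Hypothetical forward map G(mu) for a 3-sensor setup (illustrative only).
def G(mu):
    return np.exp(-mu * np.array([0.1, 0.4, 0.7]))

Sigma_inv = np.eye(3) / 0.01**2   # Sigma = sigma^2 I with sigma = 0.01
y = G(2.0)                        # noiseless data, for simplicity

def Phi(mu):
    r = y - G(mu)                 # data-model misfit vector
    return float(r @ Sigma_inv @ r)   # ||y - G(mu)||^2_{Sigma^{-1}}

def likelihood(mu):
    return np.exp(-0.5 * Phi(mu))

# Phi vanishes (and the likelihood peaks) at the data-generating parameter.
print(Phi(2.0), Phi(2.5) > Phi(2.0))
```

Minimizing $\Phi$ and maximizing $l$ are therefore the same problem, which is what justifies the maximum-likelihood reading of (3).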
Among the various methods proposed to solve this optimization problem, the EnKM relies on a sequence of parameter ensembles $E_n$, with $n \in \mathbb{N}^+$, to estimate the minimum of the cost functional. Each ensemble consists of a collection $\{\mu_n^{(j)}\}_{j=1}^{J}$ of $J \in \mathbb{N}^+$ parameter vectors $\mu_n^{(j)}$, hereby named ensemble members or particles, whose interaction, guided by the experimental measurements, causes them to cluster around the solution of the problem as the iterations proceed. At the beginning of each iteration, the solution of the pPDE and its observations are computed for each $j \in \{1,\dots,J\}$. Subsequently, the ensemble is updated based on the empirical correlation among the parameters and between parameters and measurements, as well as on the misfits between the experimental measurements $y(\mu^{\star},\eta)$ and the particle measurements $\mathcal{L}\, u_h(x,t;\mu_n^{(j)})$. A single iteration, equivalent to the one in [19], is formalized in the following pseudo-algorithm:
Algorithm 1 Iterative ensemble method for inverse problems.

Let $E_0$ be the initial ensemble with elements $\{\mu_0^{(j)}\}_{j=1}^{J}$ sampled from a distribution $\Pi_0(\mu)$. For $n = 0, 1, \dots$

(i) Prediction step. Compute the measurements of the solution for each particle in the last updated ensemble:

$$\mathcal{G}(\mu_n^{(j)}) = \mathcal{L}\, u_h(x,t;\mu_n^{(j)}) \quad \text{s.t.} \quad \partial_t u_h(x,t;\mu_n^{(j)}) = \mathcal{F}^h_{\mu_n^{(j)}} u_h(x,t;\mu_n^{(j)}) \quad \text{for all } j \in \{1,\dots,J\}. \tag{4}$$

(ii) Intermediate step. From the last updated ensemble measurements and parameters, define the sample means and covariances:

$$P_n = \frac{1}{J}\sum_{j=1}^{J} \mathcal{G}(\mu_n^{(j)})\,\mathcal{G}(\mu_n^{(j)})^{\top} - \overline{\mathcal{G}}_n \overline{\mathcal{G}}_n^{\top} \quad \text{with} \quad \overline{\mathcal{G}}_n = \frac{1}{J}\sum_{j=1}^{J} \mathcal{G}(\mu_n^{(j)}), \tag{5}$$

$$Q_n = \frac{1}{J}\sum_{j=1}^{J} \mu_n^{(j)}\,\mathcal{G}(\mu_n^{(j)})^{\top} - \overline{\mu}_n \overline{\mathcal{G}}_n^{\top} \quad \text{with} \quad \overline{\mu}_n = \frac{1}{J}\sum_{j=1}^{J} \mu_n^{(j)}. \tag{6}$$

(iii) Analysis step. Update each particle in the ensemble:

$$\mu_{n+1}^{(j)} = \mu_n^{(j)} + Q_n (P_n + \Sigma)^{-1}\big(y_n^{(j)} - \mathcal{G}(\mu_n^{(j)})\big) \quad \text{with} \quad y_n^{(j)} \sim \mathcal{N}(y,\Sigma). \tag{7}$$
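The three steps of Algorithm 1 can be sketched compactly in NumPy. This is an illustrative toy, not the paper's implementation: the forward map, sensor positions, and noise level are assumptions (a scalar parameter observed through three point measurements), but the prediction, intermediate, and analysis steps follow (4)-(7):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy forward response map G: scalar parameter -> 3 measurements.
x = np.array([0.1, 0.4, 0.7])
def G(mu):
    return np.exp(-mu * x)

sigma = 0.01
Sigma = sigma**2 * np.eye(3)
mu_true = 2.0
y = G(mu_true) + sigma * rng.standard_normal(3)   # experimental data

J = 50
ens = rng.uniform(0.5, 4.0, size=J)               # initial ensemble E_0

for n in range(20):
    # (i) Prediction step: forward map for every particle.
    Gs = np.stack([G(m) for m in ens])
    # (ii) Intermediate step: sample means and covariances as in (5)-(6).
    G_bar = Gs.mean(axis=0)
    mu_bar = ens.mean()
    dG = Gs - G_bar
    P = dG.T @ dG / J                 # measurement sample covariance P_n
    Q = (ens - mu_bar) @ dG / J       # parameter-measurement covariance Q_n
    # (iii) Analysis step: Kalman gain and perturbed-data update (7).
    K = Q @ np.linalg.inv(P + Sigma)
    y_pert = y + sigma * rng.standard_normal((J, 3))   # y_n^(j) ~ N(y, Sigma)
    ens = ens + (y_pert - Gs) @ K

print(ens.mean())  # should settle near mu_true = 2.0
```

Since the parameter is scalar here, $Q_n$ reduces to a row vector; in the general case it is an $N_p \times N_m$ matrix and the same expressions apply.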
In the last step of the algorithm, the matrices $P_n$ and $Q_n$ are used to compute the Kalman gain $K_n := Q_n (P_n + \Sigma)^{-1}$. This modulates the extent of the correction: a low gain corresponds to conservative behavior, i.e., small changes in the particle positions, while a high gain produces a larger correction. Note that the experimental data are perturbed with artificial noise sampled from the same distribution assumed for the experimental noise $\eta$. This leads to an improved estimate over the unperturbed case.

A termination criterion for the algorithm is essential for the proper implementation of the method. The one presented in [19] is based on the discrepancy principle and consists in stopping the algorithm when the error between the experimental data and the measurements is comparable to the experimental noise, that is, when $\|y - \mathcal{G}(\overline{\mu}_n)\|^2_{\Sigma^{-1}} \leq \sigma \|\eta\|^2_{\Sigma^{-1}}$ for some $\sigma \geq 1$. An alternative approach is to set a threshold for the norm of the parameter update, i.e., to terminate the algorithm when $\|\overline{\mu}_{n+1} - \overline{\mu}_n\|_2 \leq \tau \|\overline{\mu}_{n+1}\|_2$ for some $\tau \ll 1$. The latter criterion is more robust to model errors and is therefore used in our numerical experiments.
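Both termination criteria translate into a couple of lines; the function names, tolerances, and ensemble-mean values below are placeholders chosen for illustration:

```python
import numpy as np

# Discrepancy principle: stop when the weighted misfit drops to the noise level.
def discrepancy_stop(y, G_mean, eta_norm_sq, Sigma_inv, sigma_factor=1.0):
    r = y - G_mean
    return float(r @ Sigma_inv @ r) <= sigma_factor * eta_norm_sq

# Relative-update criterion: stop when the ensemble mean barely moves.
def update_stop(mu_new, mu_old, tau=1e-3):
    return np.linalg.norm(mu_new - mu_old) <= tau * np.linalg.norm(mu_new)

# Placeholder values illustrating the second check.
mu_old = np.array([1.999])
mu_new = np.array([2.0])
print(update_stop(mu_new, mu_old, tau=1e-3))   # 0.001 <= 0.002 -> True
```

In practice the checks would be evaluated once per iteration of Algorithm 1, with the update criterion preferred when the forward model is only approximate.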