On multiscale quasi-interpolation of scattered
scalar- and manifold-valued functions
Nir Sharon, Rafael Sherbu Cohen, Holger Wendland
Abstract
We address the problem of approximating an unknown function from its discrete samples given at arbitrarily
scattered sites. This problem is essential in numerical sciences, where modern applications also highlight the
need for a solution to the case of functions with manifold values. In this paper, we introduce and analyze a
combination of kernel-based quasi-interpolation and multiscale approximations for both scalar- and manifold-
valued functions. While quasi-interpolation provides a powerful tool for approximation problems if the data
is defined on infinite grids, the situation is more complicated when it comes to scattered data. Here, higher-
order quasi-interpolation schemes either require derivative information or become numerically unstable. Hence,
this paper principally studies the improvement achieved by combining quasi-interpolation with a multiscale
technique. The main contributions of this paper are as follows. First, we introduce the multiscale quasi-
interpolation technique for scalar-valued functions. Second, we show how this technique can be carried over
using moving least-squares operators to the manifold-valued setting. Third, we give a mathematical proof that
converging quasi-interpolation will also lead to converging multiscale quasi-interpolation. Fourth, we provide
ample numerical evidence that multiscale quasi-interpolation has superior convergence to quasi-interpolation. In
addition, we will provide examples showing that the multiscale quasi-interpolation approach offers a powerful
tool for many data analysis tasks, such as denoising and anomaly detection. It is especially attractive for cases
of massive data points and high dimensionality.
1 Introduction
We consider the problem of approximating an unknown function from its discrete samples. This classical problem is
at the backbone of many scientific and engineering questions. For example, the samples could represent a physical
quantity, a part of a biological system, or a simulation process. As it turns out, the scattered data sites, especially
when dealing with a large number of data points of high dimensionality, make this reconstruction problem extremely
challenging. In this paper, we are interested in approximation methods that allow us to approximate scalar-valued
functions and functions that take their values in manifolds. We start by describing the former.
We define our scalar-valued approximation problem as follows. Let $X = \{x_j : j \in J\} \subseteq \mathbb{R}^d$ be data sites and $\{v_j\}_{j \in J} \subseteq \mathbb{R}$ be the corresponding data values. Here, $J$ is a possibly infinite index set and $\Omega \subseteq \mathbb{R}^d$ is the region of interest. We assume that the data values are samples, possibly in the presence of noise, of an unknown, sufficiently smooth function $f$, that is, $v_j = f(x_j) + \varepsilon_j$, where $\varepsilon_j$ is the noise term. Under these settings, a standard approach is to use a kernel $K : \Omega \times \Omega \to \mathbb{R}$ and its induced approximant
$$ s(x) = \sum_{j \in J} \alpha_j K(x, x_j), \qquad x \in \Omega. \tag{1} $$
Often, the kernel is translation invariant or even radial, meaning it has either the form $K(x, y) = \Phi(x - y)$ or $K(x, y) = \Phi(x - y) = \varphi(\|x - y\|_2)$ with $\Phi : \mathbb{R}^d \to \mathbb{R}$ and $\varphi : [0, \infty) \to \mathbb{R}$, respectively, where $\|\cdot\|_2$ is the Euclidean norm in $\mathbb{R}^d$.
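To make the setting concrete, the following Python sketch evaluates an approximant of the form (1) for one-dimensional data with a radial, compactly supported kernel; we use Wendland's $C^2$ function $\varphi(r) = (1-r)_+^4(4r+1)$ [60] scaled to a support radius $\delta$. The function names and the particular choice of $\varphi$ are illustrative and not prescribed by the discussion above.

```python
import numpy as np

def wendland_c2(r):
    """Wendland's compactly supported C^2 function, phi(r) = (1 - r)^4_+ (4r + 1)."""
    r = np.abs(r)
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def kernel_approximant(x, sites, alpha, delta):
    """Evaluate s(x) = sum_j alpha_j phi(|x - x_j| / delta), cf. (1), for 1D data."""
    x = np.atleast_1d(x)
    return wendland_c2(np.abs(x[:, None] - sites[None, :]) / delta) @ alpha
```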
Determining the coefficients $\alpha_j$ is then the main barrier in constructing the approximation. A natural way to determine the coefficients of $s$ of (1) is by interpolation, which requires $s$ to satisfy $s(x_j) = v_j$, $j \in J$. If $J$ is finite, this leads to a linear system $A\alpha = v$ with matrix $A = (K(x_i, x_j)) \in \mathbb{R}^{N \times N}$, where $N = |J|$. Such systems are often full and hence become numerically too expensive to solve for larger $N$. A possible remedy to this is to assume that $K$ is translation-invariant, i.e. of the form $K(x, y) = \Phi(x - y)$, and that $\Phi$ has compact support. Then, for each $1 \le i \le N$, the number of data sites that fit inside the support of $\Phi(x_i - \cdot)$ determines the number of non-zero entries in row $i$ of the matrix $A$. Consequently, if the support radius of $\Phi$ is chosen such that the number of non-zero entries per row is small, then we essentially have a sparse system, which is also known to be well-conditioned. For quasi-uniform data sets this is possible by choosing the support radius proportional to the so-called mesh-norm or fill distance of the data set. Unfortunately, it is also known that adding more and more points to the system while
keeping the number of non-zero entries fixed by scaling does not lead to a converging process. For convergence it is necessary to keep the support radius fixed, meaning that eventually the matrix $A$ fills up again and the advantage of sparsity and well-conditioning is lost.
A remedy to this problem has been suggested in [54, 20, 9], and the convergence has first been proven in [22,
40, 41, 63]. Recent publication comprise for example [31, 55, 64]. The idea is based on the assumption of a
sequence of increasingly denser, usually nested, data sets X1, X2,..., and a simple residual correction scheme. The
process starts when the target function is approximated on the coarsest grid X1. Next, the error of the first step
is computed on X2and then approximated using a smaller support radius of Φ and so on. This approach has not
only the advantages of sparse, well-conditioned interpolation matrices in each step and fast, stable evaluations; it
also allows us to capture different scales, if present, in the target function. We refer to this approach as multiscale.
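The residual correction just described can be summarized in a few lines of code. The following Python sketch is a generic illustration only: `approx` stands for any single-level approximation operator (an interpolant or the quasi-interpolant sketched below), and the initial support radius `delta0` and shrinking factor `mu` are hypothetical parameters, not values prescribed by the references above.

```python
def multiscale(levels, f, approx, delta0=0.5, mu=0.5):
    """Residual correction over increasingly dense point sets X_1, X_2, ...

    levels : list of arrays of data sites, one per scale
    f      : callable returning (possibly noisy) samples of the target function
    approx : callable (sites, values, delta) -> single-level approximant (a callable)
    """
    stages = []

    def partial(x):                            # sum of the corrections built so far
        return sum(s(x) for s in stages)

    delta = delta0
    for sites in levels:
        residual = f(sites) - partial(sites)   # error of the previous levels on X_k
        stages.append(approx(sites, residual, delta))
        delta *= mu                            # shrink the support radius
    return partial
```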
Another computationally alluring alternative to interpolation is quasi-interpolation, where the coefficients of $s$ of (1) are simply set to the sampled values, $\alpha_j = v_j$. The quasi-interpolation concept often also requires polynomial reproduction of a certain degree. It is used successfully in various areas and is well studied, particularly in spline spaces, see, e.g., [11, Chapter 12] and [7]. Quasi-interpolation on scattered data is also possible, see for example [6, 18, 58]. One popular multivariate technique is given by moving least-squares, see for example [42, 61]. However, particularly for higher approximation orders, the numerical stability depends heavily on the geometry of the data sites, and such orders are often not achievable in practice.
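For intuition, the simplest instance of such a scattered-data quasi-interpolant, reproducing constants only, normalizes compactly supported RBF weights and uses the samples themselves as coefficients. The sketch below is an illustrative Shepard-type construction in one dimension, not the specific operator analyzed later; all names are our own.

```python
import numpy as np

def quasi_interpolant(sites, values, delta):
    """Shepard-type quasi-interpolant: normalized, compactly supported RBF weights
    with the samples themselves as coefficients (reproduces constants)."""
    def phi(r):
        r = np.abs(r)
        return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

    def s(x):
        x = np.atleast_1d(x)
        w = phi((x[:, None] - sites[None, :]) / delta)   # w_j(x), zero outside the support
        # assumes every evaluation point lies within delta of at least one site
        return (w * values).sum(axis=1) / w.sum(axis=1)

    return s
```

A function of this form matches the `approx(sites, values, delta)` signature assumed in the multiscale sketch above.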
Here, we suggest using quasi-interpolation in a residual correction scheme, that is, in a multiscale fashion. Our quasi-interpolation is based on a compactly supported radial basis function (RBF) [60], so the resulting scheme provides an appealing computational technique for processing massive datasets in high dimensions. While similar approaches have mainly been studied numerically and in slightly different settings [17, 44, 8], we provide both a numerical treatment and an analytic consideration. Moreover, one of the main contributions of this paper is to carry the concepts over to manifold-valued functions.
Manifold-valued functions are of the form $F : \Omega \to \mathcal{M}$, where $\mathcal{M}$ is a Riemannian manifold. The simplicity and efficiency of quasi-interpolation, when equipped with a compactly supported $\Phi$, make this method particularly appealing for approximating manifold-valued functions, see, e.g., [24]. A manifold combines a global nonlinear structure with local, constrained, high-dimensional elements. However, classical computational methods for approximation cannot cope with manifold-valued functions due to the manifolds' non-linearity, and even fundamental tasks like integration, interpolation, and regression become challenging, see, e.g., [4, 36, 66].
The problem of approximating functions with manifold values has arisen in various research areas, ranging from signal processing [5] and modern statistics [50] to essential applications such as brain networks and autism classification [19], and structural dynamics with its application to aerodynamic problem solving [2, 43], to name a few. For example, reduced-order models for simulations [3, 27] drastically decrease the computation time of simulation processes by interpolating nonlinear structures at scattered locations while maintaining high levels of accuracy [52].
In this paper, we show how to use our multiscale quasi-interpolation approach for manifold-valued approximation based on the Riemannian center of mass. First, we provide a rigorous theoretical discussion, followed by a comprehensive numerical study that includes both the fine details of implementation and an illustration of the multiscale approximation over scattered manifold data and its application. The alluring computational nature of multiscale quasi-interpolation becomes even more essential when approximating manifold-valued functions, as the dimensionality and complexity of the manifold must be taken into account. In the numerical part, we provide comparison tests between the direct quasi-interpolation approximation and the multiscale approach and show its attractiveness for an application in manifold-data processing.
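As a rough illustration of the basic building block, the sketch below computes a weighted Riemannian center of mass (Karcher mean) on the unit sphere $S^2$ via the standard fixed-point iteration with exponential and logarithm maps. It is a hedged, generic sketch for one concrete manifold; the iteration count, tolerance, and initialization are our own illustrative choices and not the algorithm developed in Section 5.

```python
import numpy as np

def sphere_log(p, q):
    """Logarithm map on S^2: tangent vector at p pointing towards q."""
    c = np.clip(np.dot(p, q), -1.0, 1.0)
    u = q - c * p
    nu = np.linalg.norm(u)
    return np.zeros_like(p) if nu < 1e-14 else np.arccos(c) * u / nu

def sphere_exp(p, v):
    """Exponential map on S^2."""
    nv = np.linalg.norm(v)
    return p if nv < 1e-14 else np.cos(nv) * p + np.sin(nv) * v / nv

def weighted_karcher_mean(points, weights, iters=100, tol=1e-12):
    """Weighted Riemannian center of mass of unit vectors (illustrative sketch)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                             # assumes positive weights
    m = points[int(np.argmax(w))]               # initial guess: heaviest point
    for _ in range(iters):
        v = sum(wi * sphere_log(m, q) for wi, q in zip(w, points))
        m = sphere_exp(m, v)
        if np.linalg.norm(v) < tol:
            break
    return m
```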
The paper is organized as follows. Section 2 is dedicated to introducing essential notation, definitions, and background. In Section 3, we prove new error bounds for quasi-interpolation of scalar-valued functions. Then, Section 4 discusses the combination of quasi-interpolation and multiscale techniques in the scalar-valued setting. Here, we prove a very preliminary result, which shows that multiscale quasi-interpolation converges at roughly the same rate as quasi-interpolation itself, if the latter converges. In Section 5, we show how the multiscale quasi-interpolation method is constructed for manifold data and provide the algorithm and a theoretical discussion. We conclude the manuscript with the numerical results given in Section 6. We first compare the performance of multiscale quasi-interpolation to standard quasi-interpolation. Here, the numerical examples show two remarkable features of multiscale quasi-interpolation. On the one hand, they show that non-converging quasi-interpolation can become convergent if combined with the multiscale technique. On the other hand, they show that the speed of convergence of quasi-interpolation can be improved by combining quasi-interpolation with the multiscale approach. There is some theoretical evidence for the first case in [21], though the situation described there does not apply to our scattered data problems and hence needs further theoretical backing. The same is true
for the latter case. While it is easy to see that multiscale quasi-interpolation must converge if quasi-interpolation converges, see Section 4, a rigorous proof of the improved convergence speed is still missing. We then study other aspects that highlight the advantages of the multiscale approach. Finally, we present the multiscale approximation for manifold data, including its application for denoising a field of rotations that is contaminated with noise.
2 Preliminaries
We call an open, nonempty set $\Omega \subseteq \mathbb{R}^d$ a domain, noting, however, that the term domain often also implies connectedness. We will look at classes of differentiable functions.
Definition 2.1. Let $\Omega \subseteq \mathbb{R}^d$ be a bounded domain. Let $k \in \mathbb{N}_0$. The spaces of $k$-times continuously differentiable functions are defined as
$$ C^k(\Omega) := \{ u : \Omega \to \mathbb{R} : D^\alpha u \in C(\Omega), \ |\alpha| \le k \}, $$
$$ C^k(\overline{\Omega}) := \{ u \in C^k(\Omega) : D^\alpha u \text{ has a continuous extension to } \overline{\Omega}, \ |\alpha| \le k \}. $$
On the latter space we define the norm
$$ \|u\|_{C^k(\overline{\Omega})} := \max_{|\alpha| \le k} \, \sup_{x \in \Omega} |D^\alpha u(x)|, \qquad u \in C^k(\overline{\Omega}). \tag{2} $$
We will first collect the necessary material on quasi-interpolation. In the following, $\pi_m(\mathbb{R}^d)$ denotes the space of all $d$-variate polynomials of (total) degree less than or equal to $m \in \mathbb{N}_0$. For quasi-interpolation, we use the moving least-squares method.
Definition 2.2. Let $X = \{x_1, \ldots, x_N\} \subseteq \mathbb{R}^d$ be given. Then, the Moving Least-Squares (MLS) approximation $Q_X(f)$ of degree $m \in \mathbb{N}_0$ to a function $f \in C(\Omega)$ is defined as follows. We choose weight functions $w_i : \Omega \to \mathbb{R}$, $1 \le i \le N$. For each $x \in \Omega$ we set $Q_X(f)(x) := p(x)$, where $p \in \pi_m(\mathbb{R}^d)$ is the solution of
$$ \min\left\{ \sum_{i=1}^{N} |f(x_i) - p(x_i)|^2 \, w_i(x) \;:\; p \in \pi_m(\mathbb{R}^d) \right\}. \tag{3} $$
Next, we will assume that the weight functions $w_i : \Omega \to \mathbb{R}$ are compactly supported and introduce the index sets
$$ I(x) := \{ j \in \{1, \ldots, N\} : w_j(x) \neq 0 \}. $$
We present the quasi-interpolation operator and its polynomial reproduction property.
Definition 2.3. Let $Q_X$ be a sample-based approximation functional of the form
$$ Q_X(f) = \sum_{i=1}^{N} f(x_i) \, a_i, \tag{4} $$
where the $a_i$ are weight functions. We say that $Q_X$ reproduces polynomials up to degree $m$ if for all $p \in \pi_m(\mathbb{R}^d)$ we have $Q_X(p) = p$, i.e.
$$ p(x) = \sum_{i=1}^{N} p(x_i) \, a_i(x), \qquad x \in \Omega. \tag{5} $$
Existence and certain properties of the MLS approximant are summarized in the following theorem. Its proof can be found, for example, in [62]. In its formulation we use the concept of $\pi_m(\mathbb{R}^d)$-unisolvent sets. A set $X$ is called $\pi_m(\mathbb{R}^d)$-unisolvent if the only function $p \in \pi_m(\mathbb{R}^d)$ that vanishes on all points of $X$ is the zero function.
Theorem 2.4. Let $X = \{x_1, \ldots, x_N\} \subseteq \mathbb{R}^d$. Let $w_i \in C(\Omega)$, $1 \le i \le N$, be non-negative. Assume that for each $x \in \Omega$ the set $X(x) = \{x_i : i \in I(x)\}$ is $\pi_m(\mathbb{R}^d)$-unisolvent. Then, the MLS approximation $Q_X(f)$ of (4) is well-defined for every $f \in C(\Omega)$, and $Q_X$ reproduces polynomials up to degree $m$, see Definition 2.3. Moreover, there are unique functions $a_1, \ldots, a_N$ for (4), that can be written as
$$ a_i(x) = w_i(x) \sum_{\ell=1}^{Q} \lambda_\ell(x) \, p_\ell(x_i), \qquad 1 \le i \le N, \tag{6} $$
with a basis $\{p_1, \ldots, p_Q\}$ of $\pi_m(\mathbb{R}^d)$ and certain values $\lambda_1(x), \ldots, \lambda_Q(x)$, which are determined by the linear system
$$ \sum_{j=1}^{Q} \left( \sum_{i=1}^{N} w_i(x) \, p_j(x_i) \, p_\ell(x_i) \right) \lambda_j(x) = p_\ell(x), \qquad 1 \le \ell \le Q. \tag{7} $$
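As an illustration of how (6) and (7) are used in practice, the following Python sketch evaluates the degree-$m$ MLS quasi-interpolant at a single point for one-dimensional data, using the monomial basis $\{1, x, \ldots, x^m\}$ and a compactly supported Wendland-type weight scaled by a support radius $\delta$ (cf. (8) below). This is a minimal, unoptimized sketch under these assumptions, not the implementation used for the experiments in Section 6.

```python
import numpy as np

def wendland_c2(r):
    """Compactly supported weight generator, phi(r) = (1 - r)^4_+ (4r + 1)."""
    r = np.abs(r)
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def mls_eval(x, sites, values, m=1, delta=0.2):
    """Evaluate Q_X(f)(x) = sum_i f(x_i) a_i(x) with a_i from (6) and lambda from (7)."""
    w = wendland_c2((x - sites) / delta)             # w_i(x)
    P = np.vander(sites, m + 1, increasing=True)     # P[i, l] = p_l(x_i) = x_i^l
    G = (P * w[:, None]).T @ P                       # G[l, j] = sum_i w_i(x) p_j(x_i) p_l(x_i)
    rhs = np.array([x ** l for l in range(m + 1)])   # right-hand side p_l(x)
    lam = np.linalg.solve(G, rhs)                    # requires X(x) to be unisolvent
    a = w * (P @ lam)                                # a_i(x), cf. (6)
    return a @ values                                # Q_X(f)(x)
```

Note that (7) is only a $Q \times Q$ system, so the dominant cost per evaluation point is assembling $G$ from the few sites inside the support of the weight.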
We will assume that the weight functions of (3) are of the form
$$ w_i(x) = \Phi\!\left( \frac{x - x_i}{\delta_i} \right), \tag{8} $$
with a given function $\Phi : \mathbb{R}^d \to \mathbb{R}$. Typically, this function is either a radial function, i.e. a function of the form
$$ \Phi(x) = \varphi(\|x\|_2), \qquad \varphi : [0, \infty) \to \mathbb{R}, $$
or of tensor product type, i.e.
$$ \Phi(x) = \prod_{j=1}^{d} \varphi_j(\chi_j), \qquad \varphi_j : \mathbb{R} \to \mathbb{R}, \quad x = (\chi_1, \ldots, \chi_d). $$
For us this particular form is not important. We will, however, make the following assumption.
Assumption 2.5. For $X = \{x_1, \ldots, x_N\}$ the weight functions $w_i : \mathbb{R}^d \to \mathbb{R}$ are given by (8) with $\delta_i > 0$ and a function $\Phi \in C^r(\mathbb{R}^d)$ having compact support $\operatorname{supp} \Phi = \overline{B(0,1)}$ and being positive on $B(0,1) = \{x \in \mathbb{R}^d : \|x\|_2 < 1\}$.
If the data set $X$ is chosen quasi-uniform, i.e. if the fill distance $h_{X,\Omega}$ and the separation radius $q_X$, given by
$$ h_{X,\Omega} = \sup_{x \in \Omega} \, \min_{1 \le j \le N} \|x - x_j\|_2, \qquad q_X = \frac{1}{2} \min_{j \neq i} \|x_j - x_i\|_2, \tag{9} $$
satisfy $q_X \le h_{X,\Omega} \le c_q q_X$ with a small constant $c_q \ge 1$, then it is usual to use the same support radius $\delta_i = \delta > 0$ for all weight functions $w_i$. Moreover, this support radius can then be chosen proportional to $h_{X,\Omega}$.
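The two quantities in (9) are straightforward to estimate numerically. The sketch below does so for one-dimensional data on $\Omega = [0, 1]$; the fill distance is only approximated on a dense test grid, and the interval and grid size are illustrative assumptions.

```python
import numpy as np

def fill_distance(sites, n_test=2001):
    """Approximate h_{X,Omega} = sup_x min_j |x - x_j| on Omega = [0, 1]."""
    test = np.linspace(0.0, 1.0, n_test)
    return np.abs(test[:, None] - sites[None, :]).min(axis=1).max()

def separation_radius(sites):
    """q_X = (1/2) min_{i != j} |x_i - x_j|."""
    d = np.abs(sites[:, None] - sites[None, :])
    np.fill_diagonal(d, np.inf)
    return 0.5 * d.min()

# A data set is quasi-uniform when h_{X,Omega} <= c_q * q_X with a moderate constant c_q.
```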
Lemma 2.6. Let $X$ be a quasi-uniform data set. Let the weights satisfy Assumption 2.5, where the support radii are chosen as $\delta_i = c_\delta h_{X,\Omega}$ with a sufficiently large constant $c_\delta > 0$. Then, the sets $X(x)$ are $\pi_m(\mathbb{R}^d)$-unisolvent and the functions $a_i$ from Theorem 2.4 belong to $C^r(\Omega)$ and satisfy
$$ |D^\alpha a_i(x)| \le C h_{X,\Omega}^{-|\alpha|}, \qquad x \in \Omega, \quad 1 \le i \le N, $$
for $\alpha \in \mathbb{N}_0^d$ with $|\alpha| \le r$. Here, $C > 0$ is a constant independent of $X$, $x$ and $a_i$.
Proof. This is proven in [47] and [48].
3 Error estimates for quasi-interpolation
We will now derive error estimates of the quasi-interpolation process, if the target functions are from $C^k(\Omega)$. The proofs are very similar to the proofs for Sobolev spaces in [47, 48].
Remark 3.1. In the following proof we require that the target function $f$ is defined on the line connecting two arbitrary points $x, y \in \Omega$. This can be achieved by either assuming that $\Omega$ is convex or that $f$ is defined on all of $\mathbb{R}^d$. For more general domains $\Omega \subseteq \mathbb{R}^d$ it is possible to extend functions from $\Omega$ to all of $\mathbb{R}^d$, provided $\Omega$ has a sufficiently smooth boundary, see for example [65]. Hence, for practical purposes it is no significant restriction to stick to convexity in the following theorem.
Recall that $m \in \mathbb{N}_0$ denotes the degree of the polynomials which are reproduced, that $r \in \mathbb{N}_0$ is the smoothness of the weights, and that $k \in \mathbb{N}$ is the smoothness of the target function.
Theorem 3.2. Let $\Omega \subseteq \mathbb{R}^d$ be a bounded, convex domain. Under all the above assumptions there is a constant $C > 0$ such that
$$ \|f - Q_X(f)\|_{C^\ell(\Omega)} \le C h_{X,\Omega}^{\min\{m+1,k\} - \ell} \, \|f\|_{C^{\min\{m+1,k\}}(\Omega)} $$
for all $f \in C^k(\mathbb{R}^d)$ and all $0 \le \ell \le \min\{r, m+1, k\}$.
Proof. As $X$ is quasi-uniform, there is a constant $M > 0$ independent of $N$ such that $\# I(x) \le M$ for all $x \in \Omega$. This follows as usual from a geometric argument. We have $j \in I(x)$ if and only if $\|x - x_j\|_2 < \delta_j = c_\delta h_{X,\Omega} =: \delta$, i.e. if and only if $x_j \in B(x, \delta)$. This implies
$$ \bigcup_{j \in I(x)} B(x_j, q_X) \subseteq B(x, \delta + q_X), $$
and as the balls on the left-hand side are disjoint, we can conclude
$$ \# I(x) \, q_X^d \le (\delta + q_X)^d. $$
Quasi-uniformity $q_X \le h_{X,\Omega} \le c_q q_X$ then gives the upper bound
$$ \# I(x) \le \left( \frac{\delta}{q_X} + 1 \right)^d \le (c_\delta c_q + 1)^d =: M. $$
Next, we use the notation $X(x) = \{x_j : j \in I(x)\}$. By the polynomial reproduction of the moving least-squares process, we have for any $p \in \pi_m(\mathbb{R}^d)$, any $\hat{x} \in \Omega$ and any $\alpha \in \mathbb{N}_0^d$ with $|\alpha| \le \min\{r, k\}$,
$$
\begin{aligned}
|D^\alpha f(\hat{x}) - D^\alpha Q_X(f)(\hat{x})| &= |D^\alpha f(\hat{x}) - D^\alpha p(\hat{x}) + D^\alpha p(\hat{x}) - D^\alpha Q_X(f)(\hat{x})| \\
&\le |D^\alpha f(\hat{x}) - D^\alpha p(\hat{x})| + \sum_{j \in I(\hat{x})} |D^\alpha a_j(\hat{x})| \, |p(x_j) - f(x_j)| \\
&\le |D^\alpha f(\hat{x}) - D^\alpha p(\hat{x})| + \|f - p\|_{\ell^\infty(X(\hat{x}))} \Big( 1 + \sum_{j \in I(\hat{x})} C_\alpha h_{X,\Omega}^{-|\alpha|} \Big) \\
&\le |D^\alpha f(\hat{x}) - D^\alpha p(\hat{x})| + \|f - p\|_{\ell^\infty(X(\hat{x}))} (1 + C_\alpha M) \, h_{X,\Omega}^{-|\alpha|}.
\end{aligned}
$$
Next, we choose $p$ as the Taylor polynomial $T_{\min\{m,k-1\}} f$ of $f$ about $\hat{x}$, i.e.
$$ p(x) = T_{\min\{m,k-1\}} f(x) = \sum_{|\beta| \le \min\{m, k-1\}} \frac{D^\beta f(\hat{x})}{\beta!} (x - \hat{x})^\beta. $$
For $|\alpha| \le \min\{m, k-1\}$, it is well-known and shown by a straightforward calculation that
$$ D^\alpha T_{\min\{m,k-1\}} f = T_{\min\{m,k-1\} - |\alpha|} D^\alpha f. $$
This immediately shows for $|\alpha| \le \min\{m, k-1\}$ that $D^\alpha f(\hat{x}) = D^\alpha p(\hat{x})$ holds. Moreover, as $\Omega$ is convex, we can use the Lagrange remainder formula in the form
$$ f(x) - p(x) = \sum_{|\beta| = \min\{m+1, k\}} \frac{D^\beta f(\xi)}{\beta!} (x - \hat{x})^\beta, $$
with $\xi$ on the line connecting $x$ and $\hat{x}$. This shows, for $x \in X(\hat{x})$, the bound
$$ |f(x) - p(x)| \le C \delta^{\min\{m+1,k\}} \, \|f\|_{C^{\min\{m+1,k\}}(\Omega)}. $$
Combining this with the estimate above, we find for $|\alpha| \le \min\{r, m, k-1\}$,
$$ \|D^\alpha f - D^\alpha Q_X(f)\|_{C(\Omega)} \le C h_{X,\Omega}^{\min\{m+1,k\} - |\alpha|} \, \|f\|_{C^{\min\{m+1,k\}}(\Omega)}. \tag{10} $$
We finally need to extend this bound to the situation of $|\alpha| \le \min\{r, m+1, k\}$. Hence, we only have to show something in addition if $\min\{m+1, k\} \le r$. In this case, we have for the Taylor polynomial $p = T_{\min\{m,k-1\}} f$ and $|\alpha| = \min\{m+1, k\} = \min\{m, k-1\} + 1$ obviously $D^\alpha p(\hat{x}) = 0$. Together with the estimate above and $\delta = c_\delta h_{X,\Omega}$, this then shows
$$ |D^\alpha f(\hat{x}) - D^\alpha Q_X(f)(\hat{x})| \le |D^\alpha f(\hat{x})| + \|f - p\|_{\ell^\infty(X(\hat{x}))} (1 + C_\alpha M) \, h_{X,\Omega}^{-|\alpha|} \le C \|f\|_{C^{\min\{m+1,k\}}(\Omega)}, $$
which is the desired extension of (10).
Obviously, it makes sense to align the three parameters $m$, $k$, and $r$ as follows.
Corollary 3.3. If the smoothnesses are chosen as $k = r = m + 1$, then
$$ \|f - Q_X(f)\|_{C^\ell(\Omega)} \le C_{k,\ell} \, h_{X,\Omega}^{k - \ell} \, \|f\|_{C^k(\Omega)}, \tag{11} $$
for all $f \in C^k(\Omega)$ and all $0 \le \ell \le k$.
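For reference, the rate in (11) can be checked empirically: halving the fill distance should reduce the error by roughly a factor $2^{k-\ell}$, so the observed order can be read off from errors at two resolutions. The helper below and the sample numbers are purely illustrative, not results from Section 6.

```python
import numpy as np

def observed_order(err_coarse, err_fine, h_coarse, h_fine):
    """Empirical convergence order from errors measured at two fill distances."""
    return np.log(err_coarse / err_fine) / np.log(h_coarse / h_fine)

# e.g. hypothetical sup-errors 1.2e-2 at h and 3.1e-3 at h/2 give an order of about 1.95
print(observed_order(1.2e-2, 3.1e-3, 0.1, 0.05))
```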