where $\varphi:\mathbb{R}^N\to\mathbb{R}$, $D=\{z\in\mathbb{R}^N : |z|\le 1\}$ is the closed unit ball and we fix $t\in[0,T]$. Here $\nu(dz)$ is the Lévy measure of $W_L$, which, up to a positive multiplicative constant, is of the form $\nu(dz)=|z|^{-(N+2\alpha)}\,dz$ (see, e.g., [20, Theorem 30.1]). The link between the equations in (1) and (2) is provided by Theorem 7 (ii) below (see also the book [15] for related results), where it is shown that the time-dependent Markov transition semigroup $\mathbb{E}[\varphi(X^{s,x}_t)]$ associated with (1) satisfies (2) in the closed interval $[0,t]$ for every $\varphi\in C^3_b(\mathbb{R}^N)$.
Moreover, we are able to extend the validity of this connection in $[0,t)$ to every function $\varphi\in B_b(\mathbb{R}^N)$ through an original procedure based on regularization by noise and a mild, integral formulation of (2) (see Remark 1).
In the present work, we are precisely interested in these expected values, with particular attention to the case $\varphi(x)=\mathbb{1}_{\{|x|>R\}}$ (for some threshold $R>0$), where one has $\mathbb{E}[\varphi(X^{s,x}_t)]=\mathbb{P}(|X^{s,x}_t|>R)$. Hence we want to describe a method which allows us to compute probabilities related to the solution of the SDE (1). Trying to estimate them by numerically solving the integro-differential equation (2) is a typical example of the curse of dimensionality (CoD), and since we intend to deal with a high dimension (in the simulations we take $N=100$), this is an infeasible way to proceed. The canonical approach to tackle our problem is the Monte Carlo method: several paths of $X^{s,x}$ are simulated by the Euler–Maruyama scheme with a fine time step, and then the final points of these trajectories are averaged to approximate the desired expected values by virtue of the strong law of large numbers. However, if we were to follow this scheme (which is known to be free of the CoD), then we would have to restart the procedure every time we change the starting point $x$, the starting time $s$, the noise strength $\sigma$, or even the nonlinearity $B_0$, a practice that is very common in a wide range of applications, including weather forecasting and the calibration of financial models (see [1] and references therein). In order to overcome this setback, we aim to extend to our framework the ideas developed in the papers [9, 10] for the Gaussian case; namely, we search for an iterative scheme which relies on a single bulk of Monte Carlo simulations independent of the aforementioned parameters. Specifically, to approximate the value of the iterates $v^n_s(t,x)$, $n\in\mathbb{N}\cup\{0\}$, we just need to simulate once and for all, using the Euler–Maruyama scheme, a large number of sample paths of the subordinator $L$ and of the stochastic convolution
$$\widetilde{Z}^0_t=\int_0^t e^{(t-r)A}\,dW_{L_r},\qquad t\in[0,T],$$
which is the unique (up to indistinguishability) solution of the linear SDE
$$d\widetilde{Z}^0_t=A\widetilde{Z}^0_t\,dt+dW_{L_t},\qquad \widetilde{Z}^0_0=0.$$
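As an illustration, the one-off simulation of the subordinator and of the stochastic convolution can be sketched as follows. This is a minimal sketch under stated assumptions, not the implementation used in the paper: the drift matrix $A$, the stability index $\alpha\in(0,1)$ (so that $W_L$ is $2\alpha$-stable), and all discretization parameters are illustrative choices, and the positive stable increments are drawn with the standard Chambers–Mallows–Stuck sampler, whose scale convention may differ from the one adopted here by a multiplicative constant.

```python
import numpy as np

def stable_subordinator_increments(alpha, dt, n_steps, rng):
    """Increments of a standard alpha-stable subordinator, alpha in (0, 1),
    via the Chambers-Mallows-Stuck method for totally positively skewed
    stable variables (scale conventions vary across references)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size=n_steps)
    E = rng.exponential(1.0, size=n_steps)
    a = alpha * (U + np.pi / 2)
    S = (np.sin(a) / np.cos(U) ** (1.0 / alpha)) \
        * (np.cos(U - a) / E) ** ((1.0 - alpha) / alpha)
    # self-similarity: an increment over a step of length dt is dt^{1/alpha} S
    return dt ** (1.0 / alpha) * S

def stochastic_convolution_paths(A, alpha, T, n_steps, n_paths, rng):
    """Euler-Maruyama scheme for dZ_t = A Z_t dt + dW_{L_t}, Z_0 = 0.
    Conditionally on the subordinator L, the increment of W_L over one
    step is Gaussian with covariance (Delta L) * identity."""
    N = A.shape[0]
    dt = T / n_steps
    Z = np.zeros((n_paths, n_steps + 1, N))
    for p in range(n_paths):
        dL = stable_subordinator_increments(alpha, dt, n_steps, rng)
        for k in range(n_steps):
            noise = np.sqrt(dL[k]) * rng.standard_normal(N)
            Z[p, k + 1] = Z[p, k] + (Z[p, k] @ A.T) * dt + noise
    return Z
```

Once such a bulk of paths is stored, a crude Monte Carlo estimate of, e.g., $\mathbb{P}(|\widetilde{Z}^0_T|>R)$ is the empirical frequency `np.mean(np.linalg.norm(Z[:, -1], axis=1) > R)`, and the same simulations can be reused for every choice of the remaining parameters.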
The main novelty of the approach that we propose lies in the structure of the noise $W_L$, which is a $2\alpha$-stable, rotation-invariant Lévy process (cf. [20, Example 30.6]). In particular, the introduction of $L$ considerably complicates the framework compared to the Brownian one treated in [9, 10]. This fact leads us to develop an original procedure, essentially based on conditioning with respect to the $\sigma$-algebra generated by the subordinator, to obtain an expression for the iterates which is suitable for applications. Moreover, the theoretical foundation of the iterative method analyzed in this work, Theorem 3, is of remarkable interest in its own right. Indeed, it establishes a connection between the time-dependent Markov transition semigroup associated with (1) and a mild, integral formulation of (2) (see Equation (11)) that, to the best of our knowledge, is new when it comes to isotropic Lévy processes.
The paper is structured as follows. Section 2 describes the setting and recalls the main concepts that will be widely used in the rest of the paper. In addition, it introduces the integral formulation of the Kolmogorov equation (2) and shows its well-posedness. Next, in Section 3 (see Theorem 3) we provide the probabilistic interpretation of (2) in mild form, along with other interesting regularization-by-noise results for SDEs driven by subordinated Wiener processes. In Section 4 we define the iterative scheme and prove its convergence to the expected values that we are trying to approximate. Next, Section 5 is concerned with the computation of the first iterate $v^1_s(t,x)$; it is divided into two subsections referring to the deterministic and random time-shifts, respectively. Its results are used in Section 6 as the base case for the induction argument that allows us to calculate $v^n_s(t,x)$ (see Theorem 17). The last part (Section 7) is devoted to numerical experiments in dimension $N=100$ for two choices of the nonlinear vector field $B_0$, with particular attention to the improvements provided by the first iteration over the linear approximation corresponding to the Ornstein–Uhlenbeck (hereafter OU) processes. Finally, Appendix A contains the proof of Lemma 4.