
Noise Reinforced Lévy Processes:
Lévy-Itô Decomposition and Applications
Alejandro Rosales-Ortiz
Institute of Mathematics, University of Zürich
Abstract
A step reinforced random walk is a discrete-time process with memory such that at each time step, with fixed probability p ∈ (0,1), it repeats a previously performed step chosen uniformly at random, while with complementary probability 1 − p, it performs an independent step with fixed law. In the continuum, the main result of Bertoin in [7] states that the random walk constructed from the discrete-time skeleton of a Lévy process for a time partition of mesh-size 1/n converges, as n ↑ ∞ in the sense of finite-dimensional distributions, to a process ξ̂ referred to as a noise reinforced Lévy process. Our first main result states that a noise reinforced Lévy process has rcll paths and satisfies a noise reinforced Lévy-Itô decomposition in terms of the noise reinforced Poisson point process of its jumps. We introduce the joint distribution of a Lévy process and its reinforced version (ξ, ξ̂) and show that the pair formed by the skeleton of the Lévy process and its step reinforced version converges towards (ξ, ξ̂) as the mesh size tends to 0. As an application, we analyse the rate of growth of ξ̂ at the origin and identify its main features as an infinitely divisible process.
1 Introduction
The Lévy-Itô decomposition is one of the main tools for the study of Lévy processes. In short, any real Lévy process ξ has rcll sample paths and its jump process induces a Poisson random measure – called the jump measure N of ξ – whose intensity is described by its Lévy measure Λ. Moreover, it states that ξ can be written as the sum of three processes

ξ_t = ξ^{(1)}_t + ξ^{(2)}_t + ξ^{(3)}_t,   t ≥ 0,

of radically different nature. More precisely, the continuous part of ξ is given by ξ^{(1)} = (at + qB_t : t ≥ 0) for a Brownian motion B and reals a, q, while ξ^{(2)} is a compound Poisson process with jump-sizes greater than 1 and ξ^{(3)} is a purely discontinuous martingale with jump-sizes smaller than 1. Moreover, the processes ξ^{(2)}, ξ^{(3)} can be reconstructed from the jump measure N. It is well known that N is characterised by the two following properties: for any Borel set A with Λ(A) < ∞, the counting process of jumps ∆ξ_s ∈ A, that we denote by N_A, is a Poisson process with rate Λ(A), and for any disjoint Borel sets A_1, ..., A_k with Λ(A_i) < ∞, the corresponding Poisson processes N_{A_1}, ..., N_{A_k} are independent.
We refer to e.g. [5, 16, 23] for a complete account on the theory of Lévy processes.
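As a point of comparison for the reinforced constructions studied below, the "large jump" part ξ^{(2)} admits a very short simulation: the counting process of jumps falling in A = {|x| ≥ 1} is Poisson with rate Λ(A), and given their number, the jump sizes are i.i.d. with the normalised law Λ(· ∩ A)/Λ(A). The following minimal sketch (in Python; the exponential choice of Λ and the function names are ours, purely for illustration and not taken from the references) implements this classical recipe.

```python
import numpy as np

def big_jump_part(T, rate, jump_sampler, rng):
    """Simulate the 'large jump' component on [0, T]: the number of jumps with
    size in A = {|x| >= 1} is Poisson with mean rate * T, where rate = Lambda(A);
    jump times are then uniform on [0, T] and jump sizes are i.i.d. with the
    normalised law Lambda(. restricted to A) / Lambda(A)."""
    n_jumps = rng.poisson(rate * T)
    times = np.sort(rng.uniform(0.0, T, size=n_jumps))
    sizes = jump_sampler(n_jumps, rng)
    return times, sizes

def value_at(t, times, sizes):
    """Value at time t of the compound Poisson path (sum of the jumps up to t)."""
    return sizes[times <= t].sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Illustrative choice (not from the paper): Lambda(dx) = e^{-x} dx on (0, infinity),
    # so that Lambda([1, infinity)) = e^{-1} and jumps of size >= 1 are distributed as 1 + Exp(1).
    rate = np.exp(-1.0)
    sampler = lambda n, gen: 1.0 + gen.exponential(1.0, size=n)
    times, sizes = big_jump_part(10.0, rate, sampler, rng)
    print(len(times), value_at(10.0, times, sizes))
```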
In this work, we shall give an analogous description for noise reinforced Lévy processes (abbreviated NRLPs). This family of processes has been recently introduced by Bertoin in [7] and corresponds to weak limits of step reinforced random walks built from skeletons of Lévy processes. In order to be more precise, let us briefly recall the connection between these discrete objects and our continuous time setting. Fix a Lévy process ξ and denote, for each fixed n, by X^{(n)}_k := ξ_{k/n} − ξ_{(k−1)/n} the k-th increment of ξ for a partition of size 1/n of the real line. The process S^{(n)}_k := X^{(n)}_1 + ··· + X^{(n)}_k = ξ_{k/n} for k ≥ 1 is a random walk, also called the n-skeleton of ξ. Now, fix a real number p ∈ (0,1) that we call the reinforcement or memory parameter and let Ŝ^{(n)}_1 := X^{(n)}_1. Then, define recursively Ŝ^{(n)}_k for k ≥ 2 according to the following rule: for each k ≥ 2, set Ŝ^{(n)}_k := Ŝ^{(n)}_{k−1} + X̂^{(n)}_k where, with probability 1 − p, the step X̂^{(n)}_k is the increment X^{(n)}_k with the law of ξ_{1/n} – and hence independent from the previously performed steps – while with probability p, X̂^{(n)}_k is an increment chosen uniformly at random from the previous ones X̂^{(n)}_1, ..., X̂^{(n)}_{k−1}.
alejandro.rosalesortiz@math.uzh.ch
Research supported by the Swiss National Science Foundation (SNSF).
When the former occurs, the step is called an innovation, while in the latter case it is referred to as a reinforcement. The process (Ŝ^{(n)}_k) is called the step-reinforced version of (S^{(n)}_k). It was shown in [7] that, under appropriate assumptions on the memory parameter p, we have the following convergence in the sense of finite-dimensional distributions as the mesh-size tends to 0,

(Ŝ^{(n)}_{⌊nt⌋})_{t∈[0,1]}  →^{f.d.d.}  (ξ̂_t)_{t∈[0,1]},   (1.1)

towards a process ξ̂ identified in [7] and called a noise reinforced Lévy process. It should be noted that the process ξ̂ constructed in [7] is a priori not even rcll, and this will be one of our first concerns.
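For readers who prefer pseudocode, the reinforcement algorithm described above admits the following minimal simulation sketch (in Python; the function name and the choice of a Brownian skeleton are ours, purely for illustration; the constraint p < 1/2 used for a Brownian skeleton anticipates the admissibility condition recalled in Section 2.2).

```python
import numpy as np

def step_reinforced_walk(increments, p, rng):
    """Given the increments X_1, ..., X_n of an n-skeleton, build the
    step-reinforced walk: at each step, with probability p repeat a
    uniformly chosen previous step (reinforcement), otherwise perform
    the fresh independent increment (innovation)."""
    steps = [increments[0]]                         # the first step is always an innovation
    for k in range(1, len(increments)):
        if rng.uniform() < p:
            steps.append(steps[rng.integers(0, k)])  # repeat one of the k previous steps
        else:
            steps.append(increments[k])              # fresh, independent step
    return np.cumsum(steps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 1000, 0.3
    # n-skeleton of a Brownian motion on [0, 1]: i.i.d. N(0, 1/n) increments
    X = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
    S_hat = step_reinforced_walk(X, p, rng)          # approximates the reinforced limit at times k/n
    print(S_hat[-1])
```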
We are now in position to briefly state the main results of this work. First, we shall prove the existence of a rcll modification for ξ̂. In particular, this allows us to consider the jump process (∆ξ̂_s); a proper understanding of its nature will be crucial for this work. In this direction, we introduce a new family of random measures in R_+ × R of independent interest under the name noise reinforced Poisson point processes (abbreviated NRPPPs) and we study their basic properties. This leads us towards our first main result, which is a version of the Lévy-Itô decomposition in the reinforced setting. More precisely, we show that the jump measure of ξ̂ is a NRPPP and that ξ̂ can be written as

ξ̂_t = ξ̂^{(1)}_t + ξ̂^{(2)}_t + ξ̂^{(3)}_t,   t ≥ 0,

where now, ξ̂^{(1)} = (at + qB̂_t : t ≥ 0) for a continuous Gaussian process B̂, the process ξ̂^{(2)} is a reinforced compound Poisson process with jump-sizes greater than one, while ξ̂^{(3)} is a purely discontinuous semimartingale. The continuous Gaussian process B̂ is the so-called noise reinforced Brownian motion, a Gaussian process introduced in [8] with law singular with respect to that of B, arising as the universal scaling limit of noise reinforced random walks when the law of the typical step is in L²(P) – hence it plays the role of Brownian motion in the reinforced setting; see also [4] for related results. Needless to say, if the starting Lévy process ξ is a Brownian motion, the limit ξ̂ obtained in (1.1) is a noise reinforced Brownian motion. As in the non-reinforced case, ξ̂^{(2)} and ξ̂^{(3)} can be recovered from the jump measure N̂, but in contrast, they are not Markovian. The terminology used for the jump measure of ξ̂ is justified by the following remarkable property: for any Borel set A with Λ(A) < ∞, the counting process of jumps ∆ξ̂_s ∈ A, that we denote by N̂_A, is a reinforced Poisson process and, more precisely, it has the law of the noise reinforced version of N_A (hence, the terminology N̂_A is consistent). Moreover, for any disjoint Borel sets A_1, ..., A_k with Λ(A_i) < ∞, the corresponding N̂_{A_1}, ..., N̂_{A_k} are independent noise reinforced Poisson processes. Informally, the reinforcement induces memory on the jumps of ξ̂, and these are repeated at the jump times of an independent counting process. When working on the unit interval, this counting process is the so-called Yule-Simon process.
The second main result of this work consists in defining, pathwise, the noise reinforced version ξ̂ of the Lévy process ξ. We always denote such a pair by (ξ, ξ̂). This is mainly achieved by transforming the jump measure of ξ into a NRPPP, by a procedure that can be interpreted as the continuous time analogue of the reinforcement algorithm we described for random walks. More precisely, the steps X^{(n)}_k of the n-skeleton are replaced by the jumps ∆ξ_s of the Lévy process; each jump of ξ is shared with its reinforced version ξ̂ with probability 1 − p, while with probability p it is discarded and remains independent of ξ̂. We then proceed to justify our construction by showing that the skeleton of ξ and its reinforced version (S^{(n)}_{⌊n·⌋}, Ŝ^{(n)}_{⌊n·⌋}) converge weakly towards (ξ, ξ̂), strengthening (1.1) considerably.
Section 6 is devoted to applications: on the one hand, in Section 6.1 we study the rates of growth at the origin of ξ̂ and prove that well-known results established by Blumenthal and Getoor in [9] for Lévy processes still hold for NRLPs. On the other hand, in Section 6.2 we analyse NRLPs from the point of view of infinitely divisible processes in the sense of [21]. We shall give a proper description of ξ̂ in terms of the usual terminology of infinitely divisible processes, as well as an application, by making use of the so-called Isomorphism theorem for infinitely divisible processes.
Let us mention that in the discrete setting, reinforcement of processes and models has been a subject of active research for a long time; see for instance the survey by Pemantle [19] as well as e.g. [6, 3, 1, 18, 2, 11] and references therein for related work. However, reinforcement of continuous-time stochastic processes, which is the topic of this work, remains a rather unexplored subject.
The rest of the work is organised as follows: in Section 2 we recall the basic building blocks needed for the construction of NRLPs and recall the main results that will be needed. Notably, we give a brief overview of the features of the Yule-Simon process and present some important examples of NRLPs. In Section 3 we show that a NRLP has a rcll modification. In Section 4 we construct NRPPPs, study their main properties of interest, and in Section 4.3 we prove that the jump measure of a NRLP is a NRPPP – a result that we refer to as the "reinforced Lévy-Itô decomposition". In Section 5 we show that the pair formed by the n-skeleton of a Lévy process and its reinforced version converges in distribution, as the mesh size tends to 0, towards (ξ, ξ̂). To achieve this, we first prove in Section 5.1 that a NRLP can be reconstructed from its jump measure – a result that we refer to as the "reinforced Lévy-Itô synthesis". Making use of this result, in Section 5.2 we define the joint law (ξ, ξ̂) and in Section 5.3 we establish our convergence result. Finally, Section 6 is devoted to applications. Particular attention is given throughout this work to comparing, when possible and pertinent, our results for NRLPs to the classical ones for Lévy processes.
Contents
1 Introduction
2 Preliminaries
  2.1 Yule-Simon processes
  2.2 Noise reinforced Lévy processes
  2.3 Building blocks: noise reinforced Brownian motion and noise reinforced compound Poisson process
3 Trajectorial regularity
4 Reinforced Lévy-Itô decomposition
  4.1 The jumps of noise reinforced Poisson processes
  4.2 Construction of noise reinforced Poisson point processes by decoration
  4.3 Proof of Theorem 4.1 and compensator of the jump measure
5 Weak convergence of the pair of skeletons
  5.1 Proof of Theorem 5.1
  5.2 The joint law (ξ, ξ̂) of a Lévy process and its reinforced version
  5.3 Proof of Theorem 5.4
6 Applications
  6.1 Rates of growth at the origin
  6.2 Noise reinforced Lévy processes as infinitely divisible processes
7 Appendix
2 Preliminaries
2.1 Yule-Simon processes
In this section, we recall several results from [7] concerning Yule-Simon processes needed for defining NRLPs. These results will be used frequently in this work and are re-stated for ease of reading.

A Yule-Simon process on the interval [0,1] is a counting process, started from 0, with first jump time uniformly distributed in [0,1], and behaving afterwards as a (deterministically) time-changed standard Yule process. More precisely, for fixed p ∈ (0,1), if U is a uniform random variable in [0,1] and Z a standard Yule process,

Y(t) := 1_{U ≤ t} Z_{p(ln(t) − ln(U))},   t ∈ [0,1],   (2.1)

is a Yule-Simon process with parameter 1/p. Its law in D[0,1], the space of R-valued rcll functions on the unit interval endowed with the Skorokhod topology, will be denoted by Q. It readily follows from the definition that this is a time-inhomogeneous Markov process, with time-dependent birth rates given at time t by λ_0(t) = 1/(1 − t) and λ_k(t) = pk/t for k ∈ {1, 2, ...}. Remark as well that we have P(Y(t) ≥ 1) = t. In our work, only p ∈ (0,1) will be used, and it always corresponds to the reinforcement parameter. The Yule-Simon process with parameter 1/p is closely related to the Yule-Simon distribution with parameter 1/p, i.e. the probability measure supported on {1, 2, ...} with probability mass function given in terms of the Beta function B(x, y) by

p^{−1} B(k, 1/p + 1) = p^{−1} ∫_0^1 u^{1/p} (1 − u)^{k−1} du,   for k ≥ 1.   (2.2)

The relation with the Yule process is simply that Y(1) is distributed Yule-Simon with parameter 1/p.
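For concreteness, the representation (2.1) yields a direct way to simulate a Yule-Simon path. The following minimal sketch (in Python; the function names are ours, purely for illustration) generates the jump times of Y on [0,1]: the first jump occurs at a uniform time U and, in state k ≥ 1, the next jump time is obtained by multiplying the current one by exp(E/(pk)) with E a standard exponential variable, which corresponds exactly to the birth rate pk/t mentioned above. The small Monte Carlo check anticipates the mean formula (2.3) below.

```python
import numpy as np

def yule_simon_jump_times(p, rng):
    """Jump times in [0,1] of a Yule-Simon process with parameter 1/p, via (2.1):
    first jump at a uniform time U, then a standard Yule process run on the
    logarithmic time scale s = p*(ln t - ln U)."""
    times = [rng.uniform()]                 # first jump time U, uniform on [0, 1]
    k = 1
    while True:
        s = rng.exponential(1.0 / k)        # Yule waiting time in state k: Exp(k)
        t_next = times[-1] * np.exp(s / p)  # map back through t = U * exp(s / p)
        if t_next > 1.0:
            return np.array(times)
        times.append(t_next)
        k += 1

def yule_simon_value(t, jump_times):
    """Y(t) = number of jumps up to time t."""
    return int(np.searchsorted(jump_times, t, side="right"))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    p, t = 0.4, 0.7
    samples = [yule_simon_value(t, yule_simon_jump_times(p, rng)) for _ in range(20000)]
    # Monte Carlo check of E[Y(t)] = t / (1 - p), cf. (2.3) below
    print(np.mean(samples), t / (1 - p))
```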
In this work, we refer to p ∈ (0,1) as a reinforcement or memory parameter, for reasons that will be explained shortly. In the following lemma we state for further use the conditional self-similarity property of the Yule-Simon process, a key feature that will be used frequently.
Lemma 2.1 ([7, Corollary 2.3]). Let Y be a Yule-Simon process with parameter 1/p and fix t ∈ (0,1]. Then, the process (Y(rt))_{r∈[0,1]}, conditionally on {Y(t) ≥ 1}, has the same distribution Q as Y.

In particular, conditionally on {Y(t) ≥ 1}, Y(t) is distributed Yule-Simon with parameter 1/p and it follows that for every t ∈ [0,1], Y(t) has finite moments only of order r < 1/p. Moreover, by the previous lemma and the Markov property of the standard Yule process Z, we deduce that if Y is a Yule-Simon process with parameter 1/p with p ∈ (0,1) and k ≥ 1, we have

E[Y(t)] = (1 − p)^{−1} t   and   E[Y(t) | Y(s) = k] = k (t/s)^p   for any 0 < s ≤ t ≤ 1,   (2.3)

while if 1/p > 2,

E[Y(s)Y(t)] = 1/((1 − p)(1 − 2p)) · s^{1−p} t^p.   (2.4)

More details on these statements can be found in Section 2 of [7].
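For instance, the first identity in (2.3) can be checked directly from (2.1): recalling that a standard Yule process satisfies E[Z_s] = e^s, conditioning on U gives

E[Y(t)] = E[1_{U ≤ t} e^{p(ln(t) − ln(U))}] = ∫_0^t (t/u)^p du = t/(1 − p),

while the conditional identity follows similarly from the branching property E[Z_{s+h} | Z_s = k] = k e^h applied with h = p ln(t/s).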
2.2 Noise reinforced Lévy processes

Now, we turn our attention to the main ingredients involved in the construction of NRLPs. For the rest of the section, fix a real valued Lévy process ξ of characteristic triplet (a, q², Λ), where Λ is the Lévy measure, and recall that its characteristic exponent Ψ(λ) := log E[e^{iλξ_1}] is given by the Lévy-Khintchine formula

Ψ(λ) = iaλ − (q²/2) λ² + ∫_R (e^{iλx} − 1 − iλx 1_{|x| ≤ 1}) Λ(dx).   (2.5)
The constraints on the reinforcement parameter p are given in terms of the following two indices introduced by Blumenthal and Getoor: the Blumenthal-Getoor (upper) index β(Λ) of the Lévy measure Λ is defined as

β(Λ) := inf{ r > 0 : ∫_{[−1,1]} |x|^r Λ(dx) < ∞ },   (2.6)

while the Blumenthal-Getoor index β of the Lévy process ξ is defined by the relation

β := β(Λ) if q² = 0,   and   β := 2 if q² ≠ 0.   (2.7)
When ξ has no Gaussian component, we have β = β(Λ) and both notations will be used indifferently. We say that a memory parameter p ∈ (0,1) is admissible for the triplet (a, q², Λ) if pβ < 1. Now, fix p an admissible memory parameter for ξ. If (S^{(n)}_k) is the n-skeleton of the Lévy process ξ, the sequence of reinforced versions with parameter p,

(Ŝ^{(n)}_{⌊nt⌋})_{t∈[0,1]},   n ≥ 1,

converges in the sense of finite-dimensional distributions, as the mesh-size tends to 0, towards a process whose law was identified in [7] and called the noise reinforced Lévy process ξ̂ of characteristics (a, q², Λ, p). In the sequel, when considering a NRLP with parameter p, it will be implicitly assumed that p is admissible for the corresponding triplet. For instance, when working with a memory parameter p ≥ 1/2 it is implicitly assumed that q = 0. It was shown in [7, Corollary 2.11] that the finite-dimensional distributions of ξ̂ can be expressed in terms of the Yule-Simon process Y with parameter 1/p and the characteristic exponent Ψ as follows:

E[ exp{ i ∑_{i=1}^k λ_i ξ̂_{s_i} } ] = exp{ (1 − p) E[ Ψ( ∑_{i=1}^k λ_i Y(s_i) ) ] },   (2.8)

for 0 < s_1 < ··· < s_k ≤ 1. Now we turn our attention to defining NRLPs in R_+. Notice that the construction given in the unit interval in [7] cannot be directly extended to the real line since it relies on Poissonian sums of Yule-Simon processes, and these are only defined on the unit interval.
Proposition 2.2 (NRLPs in R_+). Let (a, q², Λ) be the triplet of a Lévy process of exponent Ψ and consider an admissible memory parameter p ∈ (0,1). There exists a process ξ̂ = (ξ̂_s)_{s∈R_+} whose finite-dimensional distributions satisfy that, for any 0 < s_1 < ··· < s_k ≤ t,

E[ exp{ i ∑_{i=1}^k λ_i ξ̂_{s_i} } ] = exp{ (1 − p) t E[ Ψ( ∑_{i=1}^k λ_i Y(s_i/t) ) ] },   (2.9)

where the right-hand side does not depend on the choice of t. The process ξ̂ is called a noise reinforced Lévy process with characteristics (a, q², Λ, p).
Proof. First, let us show that the right-hand side of (2.9) does not depend on t. To prove this, pick another arbitrary T > t and write r_i = s_i/t ∈ [0,1]. From conditioning on {Y(t/T) ≥ 1}, an event with probability t/T, by Lemma 2.1 we get

T E[ Ψ( ∑_{i=1}^k λ_i Y(s_i/T) ) ] = t (T/t) E[ Ψ( ∑_{i=1}^k λ_i Y(r_i · (t/T)) ) ]
  = t E[ Ψ( ∑_{i=1}^k λ_i Y(r_i · (t/T)) ) | Y(t/T) ≥ 1 ]
  = t E[ Ψ( ∑_{i=1}^k λ_i Y(s_i/t) ) ],   (2.10)

proving our claim, and where in the second equality we used that Ψ(0) = 0. Now, let us establish the existence of a process with finite-dimensional distributions characterised by (2.9). Remark that by Kolmogorov's consistency theorem, it suffices to show that for arbitrary 1 ≤ S < T, there exist processes X̂^S = (X̂^S_t)_{t∈[0,S]}, X̂^T := (X̂^T_t)_{t∈[0,T]} with finite-dimensional distributions characterised by the identity (2.9) for (s_i) in [0,S], t = S and (s_i) in [0,T], t = T respectively – and hence satisfying that (X̂^T_t)_{t∈[0,S]} =^{(law)} (X̂^S_t)_{t∈[0,S]}. Write ξ̂^S = (ξ̂^S_t)_{t∈[0,1]} for the reinforced version of the Lévy process (ξ_{tS})_{t∈[0,1]}, remark that the latter has characteristic exponent SΨ, and set (X̂^S_t)_{t∈[0,S]} := (ξ̂^S_{t/S})_{t∈[0,S]}. From the identity (2.8), we deduce that, for any 0 < s_1 < ··· < s_k in the interval [0,S], we have:

E[ exp{ i ∑_{i=1}^k λ_i X̂^S(s_i) } ] = exp{ (1 − p) S E[ Ψ( ∑_{i=1}^k λ_i Y(s_i/S) ) ] }.   (2.11)

In particular X̂^S restricted to the interval [0,1] has the same distribution as (ξ̂_t)_{t∈[0,1]}, by the first part of the proof and (2.8). If we consider the restriction of (X̂^T_t)_{t∈[0,T]} to the interval [0,S], we obtain similarly, by applying (2.10), that for any 0 < s_1 < ··· < s_k ≤ S,

E[ exp{ i ∑_{i=1}^k λ_i X̂^T(s_i) } ] = exp{ (1 − p) T E[ Ψ( ∑_{i=1}^k λ_i Y(s_i/T) ) ] }
  = exp{ (1 − p) S E[ Ψ( ∑_{i=1}^k λ_i Y(s_i/S) ) ] },

and it follows that X̂^T restricted to [0,S] has the same distribution as X̂^S. Since this holds for any 1 ≤ S < T, we deduce by Kolmogorov's consistency theorem the existence of a process satisfying, for any 0 < s_1 < ··· < s_k ≤ t, the identity (2.9). In particular, from taking the value t = 1, it follows that the restriction of this process to [0,1] has the same law as ξ̂ by (2.8).
For later use, notice from (2.9) that for any fixed t ∈ R_+, we have the following equality in law:

(ξ̂_{st})_{s∈[0,1]}  =^{(law)}  (\widehat{ξ_{·t}}_s)_{s∈[0,1]},   (2.12)

where the right-hand side stands for the noise-reinforced version of the Lévy process (ξ_{st})_{s∈[0,1]}. In particular, (ξ̂_{st})_{s∈[0,1]} is the NRLP associated to the exponent tΨ with the same reinforcement parameter.
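The identity (2.9) also lends itself to a quick numerical sanity check. The sketch below (in Python; the compound Poisson exponent Ψ(λ) = e^{iλ} − 1 and all function names are our own illustrative choices, not taken from [7]) compares a Monte Carlo estimate of the right-hand side of (2.9) for k = 1 with the value obtained by summing Ψ(λj) against the Yule-Simon mass function (2.2), using that P(Y(s/t) ≥ 1) = s/t and that, on this event, Y(s/t) is Yule-Simon distributed with parameter 1/p; the sampler relies on the classical fact that a standard Yule process at time σ is geometrically distributed with parameter e^{−σ}.

```python
import numpy as np
from math import lgamma, exp

def yule_simon_pmf(k, p):
    """Yule-Simon(1/p) mass function (2.2): p^{-1} B(k, 1/p + 1)."""
    return (1.0 / p) * exp(lgamma(k) + lgamma(1.0 / p + 1.0) - lgamma(k + 1.0 / p + 1.0))

def sample_Y(r, p, rng):
    """Sample Y(r) for a Yule-Simon process with parameter 1/p, using (2.1) and the
    fact that a standard Yule process at time s is Geometric(e^{-s}) on {1, 2, ...}."""
    u = rng.uniform()
    if u > r:
        return 0
    return rng.geometric((u / r) ** p)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    p, lam, s, t = 0.4, 1.3, 0.5, 2.0            # memory parameter, frequency, times s <= t
    psi = lambda x: np.exp(1j * x) - 1.0          # illustrative compound Poisson exponent
    # Monte Carlo estimate of the right-hand side of (2.9) with k = 1
    ys = np.array([sample_Y(s / t, p, rng) for _ in range(200000)])
    mc = np.exp((1 - p) * t * np.mean(psi(lam * ys)))
    # exact value via (2.2), using P(Y(s/t) >= 1) = s/t
    exact = np.exp((1 - p) * s * sum(psi(lam * j) * yule_simon_pmf(j, p) for j in range(1, 20000)))
    print(mc, exact)   # the two complex numbers should agree up to Monte Carlo error
```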