for mathematical simplicity, we assume that (B_t) and (W^H_t) are independent, and
this rules out leverage effects between price and volatility. The case of leverage effect
is considered in our companion paper [16].
Historically, the rough volatility property was discovered through an empirical approach, where the data sets of interest are time series of daily measurements of historical volatility over many years and for thousands of assets, see [11,35]. Daily volatility values were estimated by filtering high-frequency price observations, using various up-to-date inference methods for high-frequency data, all of them leading to analogous
results. Several natural empirical statistics were computed on these volatility time
series, in a model-agnostic way. Then it was shown in [35] that strikingly similar
patterns are observed when computing the same statistics in the simple Model (1)
(actually a version of (1) where one considers a piecewise constant approximation
of the volatility). For example, among the statistics advocating for rough volatility,
empirical means of the structure function
\[
\Delta \mapsto \big|\log(\sigma_{t+\Delta}) - \log(\sigma_t)\big|^{q}, \qquad q > 0, \tag{2}
\]
play an important role, for ∆ ranging from one day to several hundred days. For
every value of q, the empirical counterpart of (2) systematically behaves like ∆^{Aq},
where A is of order 10^{-1}, for the whole range of ∆. This scaling invariance is obviously reproduced if the volatility dynamics follow (1) with H of order 10^{-1}, thanks
to the scaling property of fractional Brownian motion. In addition, the fact that this
empirical scaling also holds for large ∆ essentially rules out alternative stationary model
candidates, whose increment moments no longer depend on ∆ when ∆ is large. It
also makes it unlikely that this empirical scaling of the log-volatility increments is an
artefact of the estimation error in the volatility process.
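As an illustration of this scaling check, the following sketch simulates a toy log-volatility path driven by fractional Brownian motion with H of order 10^{-1} and fits the exponent A_q of the empirical structure function (2). This is not code from the paper; all numerical values (H, the multiplicative factor nu, the path length and lag range) are illustrative assumptions.

```python
import numpy as np

def fgn_cholesky(n, H, rng):
    """Sample n steps of unit-step fractional Gaussian noise via Cholesky."""
    k = np.arange(n, dtype=float)
    # Autocovariance of fGn: 0.5 * (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H})
    gamma = 0.5 * ((k + 1)**(2*H) - 2*k**(2*H) + np.abs(k - 1)**(2*H))
    cov = gamma[np.abs(k[:, None] - k[None, :]).astype(int)]
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def structure_exponent(log_vol, q, lags):
    """Fit A_q in  mean_t |log sigma_{t+Delta} - log sigma_t|^q ~ c * Delta^{A_q}."""
    m = [np.mean(np.abs(log_vol[lag:] - log_vol[:-lag])**q) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(m), 1)
    return slope

rng = np.random.default_rng(0)
H, nu, n = 0.1, 0.3, 1000                          # illustrative values
log_vol = nu * np.cumsum(fgn_cholesky(n, H, rng))  # toy log-volatility path

for q in (0.5, 1.0, 2.0):
    A_q = structure_exponent(log_vol, q, np.arange(1, 50))
    print(f"q = {q}: fitted exponent {A_q:.3f} (theory predicts qH = {q * H:.2f})")
```

For a genuinely rough path (H ≈ 0.1), the fitted exponents should lie close to qH over the whole lag range, mirroring the ∆^{Aq} behaviour reported for real volatility time series.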
1.2. Rough volatility in the literature. At first glance, such a small value of the parameter H in Model (1) may be surprising. It is in stark contrast with the first generation
of fractional volatility models (FSV), where H > 1/2 in a stationary environment, see
[20]. The goal of FSV models was notably to reproduce long memory properties of
the volatility process, and we know that fractional Brownian motion increments exhibit long memory when H > 1/2. However, it turns out that Model (1) with a very small H remains consistent with the behaviour of financial data even on very long time scales, see [35]. In addition to the stylised facts obtained from historical volatility,
analyses of implied volatility surfaces also support the rough
volatility paradigm, see [5,8,30,50]. In other words, rough volatility models are, in
financial terms, compatible with both historical and risk-neutral measures. Furthermore, rough volatility models can be micro-founded: indeed, only a rough volatility allows financial markets to operate under no-statistical-arbitrage conditions at the level of high-frequency trading, see [22,48]. This has paved the way
over the last few years to several new research directions in quantitative finance.
Among other contributions, we mention risk management of complex derivatives, as considered for instance in [1,6,23,25,31,36,42,45], numerical issues, as addressed in [2,10,15,34,51,57], asymptotic expansions, as provided in [17,24,26–28,46], and theoretical considerations about the probabilistic structure of rough volatility models, as in [3,9,21,29,33,37,38].
Beyond the popularity of rough volatility models due to their remarkable ability
to mimic data, the domain is certainly mature enough to take a step back with a view