Granger Causality for Predictability in Dynamic Mode Decomposition
G. Revati, Syed Shadab, K. Sonam, S. R. Wagh, and N. M. Singh
Abstract: The dynamic mode decomposition (DMD) tech-
nique extracts the dominant modes characterizing the innate
dynamical behavior of the system within the measurement data.
For appropriate identification of dominant modes from the
measurement data, the DMD algorithm necessitates ensuring
the quality of the input measurement data sequences. On that
account, for validating the usability of the dataset for the DMD
algorithm, the paper proposes two conditions: Persistence of
excitation (PE) and the Granger Causality Test (GCT). The
virtual data sequences are designed with the Hankel matrix rep-
resentation such that the dimensions of the subspace spanning
the essential system modes are increased with the addition of
new state variables. The PE condition provides the lower bound
for the trajectory length, and the GCT provides the order
of the model. Satisfying the PE condition enables estimating
an approximate linear model, but the predictability with the
identified model is only assured with the temporal causation
among data searched with GCT. The proposed methodology
is validated with the application for coherency identification
(CI) in a multi-machine power system (MMPS), an essential
phenomenon in transient stability analysis. The significance of
PE condition and GCT is demonstrated through various case
studies implemented on a 22-bus six-generator system.
Index Terms: Coherency Identification, Dynamic Mode De-
composition (DMD), Granger causality, Hankel, Persistence of
Excitation (PE).
I. INTRODUCTION
With the growing emphasis on data-driven modeling, un-
derstanding the interactions and connections among the time
series drawn from observational data is a field of interest.
Causality is the intersection of philosophy and sciences
[1], deriving the generalizations and theories from specific
observations by analyzing the cause and effects among the
observational data. A primary approach for understanding
the information flow amongst the time series is to determine
the cross-correlation [2] among the two time series and
to discover the existence of a peak in the correlation at
some non-zero lag. The causal inferences drawn from the
correlation are misleading since the correlation reveals only
whether the two variables are statistically linked. The causal
relationship amongst two variables can be direct, or indirect
due to confounding effect [3] i.e., besides the variables under
study there is some additional unnoticed variable correlated
with the considered variables. Furthermore, the correlation
being a symmetric measure fails to provide any information
about the causality direction.
G. Revati, Syed Shadab, S. R. Wagh, and N. M. Singh are with Control
and Decision Research Centre (CDRC), EED, Veermata Jijabai Tech-
nological Institute, Mumbai 400019, India (e-mail: cdrc@ee.vjti.ac.in).
K. Sonam is with the Computer Science and Engineering Department,
University of South Carolina, USA.
As per the principle of time asymmetry of causation, clas-
sical physics holds that causes precede their effects.
Accounting for the direction of causation, dynamical model
identification is another approach. The concept of dynamical
model identification is fundamentally built on the fact
that a law drives the system and makes the same state
evolve in a similar manner [4], i.e., similar effects are
produced by the same causes, as stated by physical
determinism. The laws defining the system dynamics are
identified from the regression of observational data achieved
through the evaluation of correlation. For detecting and
quantifying the temporal causality amidst the time series,
a powerful statistical test known as Granger Causality (GC)
was first proposed in [5]. The widespread applications of the
GC in neuroscience [6], economy [7], and climate modelling
[8] are mentioned in the literature. The fundamental notion
behind GC is the enhancement in the prediction of one
variable with the introduction of past information of another
variable along with the past information of the considered
variable itself.
Conventionally, control applications have extensively opted
for system identification methods fitting the data to a
model parameterized a priori [9]. The growing complexity and
huge amount of available system data challenged the standard
strategies for learning the dynamical system. Alternatively,
the paradigm shift occurred towards the identification of a
dynamical system from the raw measurement data of the
system. Data-driven modeling approaches generally depend
on searching for an accurate combination of the known
trajectories in order to achieve a reliable prediction,
which is usually an ill-conditioned problem [10]. For deal-
ing with such problems, the Moore-Penrose pseudoinverse
[11] solving the least norm problems is preferred due to
computational simplicity. One such data-driven subspace
prediction strategy is the Dynamic Mode Decomposition
(DMD) [12] which decomposes the high dimensional data
into spatiotemporal coherent modes.
DMD is a dimensionality reduction technique [13] pio-
neered in the fluid dynamics community by Peter Schmid for
identifying the linear approximation from the data compris-
ing the dominant modes describing the dynamical behavior
of the system [14]. The accurate identification of the dominant
modes capturing the dynamics depends on the quality of the
measurement data exploited for the strategy. For capturing
the modes of the system, the cardinality of the measurement
sequences utilized in the DMD should be greater than or
equal to the number of underlying system modes. Hence the
dimensions of the subspace spanning the essential dynamical
modes are increased with the Hankel matrix [15], which
introduces the new state variables. (arXiv:2210.12737v1
[eess.SY] 23 Oct 2022) The temporal evolution of the system
lies in the column space, and the row space defines the
spatial structure of the system modes; hence, for the estima-
tion of an accurate approximate linear model, a sufficient
number of rows and columns of the input data matrices is
necessary. For identifying an accurate linear model
with predictability, the paper proposes two conditions:
Persistence of excitation (PE) and the Granger Causality Test
(GCT). The PE is a necessary condition that provides the
lower bound on the trajectory length, while the GCT,
being a sufficient condition, detects the
causation among the measurement sequences and finds the
appropriate order of the model, ensuring the predictability of
the identified linear model.
The suitability of the proposed approach is verified with
the application of coherency identification in a multimachine
power system (MMPS). Coherency is the property of gen-
erators to swing together, and coherency identification is
essential for the transient stability analysis in
MMPS. The relevance of the PE condition for capturing the
dominant dynamical modes of the system and the signifi-
cance of GCT to establish the vital role of causation to ensure
predictability is demonstrated with the various experimental
case studies implemented on the 22-bus six-generator power
system.
The remaining paper is structured as follows: The concepts
including Dynamic mode decomposition, the persistence of
excitation, vector autoregression, and Granger causality are
discussed briefly as preliminaries in Section II. The proposed
methodology explaining the details of the PE condition and
the Granger Causality Test along with the DMD algorithm is
presented in Section III. The results for the comprehensive
case study along with the test and error analysis are illus-
trated in Section IV. The paper is concluded with the future
work in Section V.
II. PRELIMINARIES
A. Dynamic Mode Decomposition (DMD)
DMD is designed to extract spatially coherent modes that
oscillate at a fixed frequency and decay or grow at a fixed
rate [16]. DMD is a data-driven technique that emerged from
the fluid dynamics community [17] identifying a dynamical
system from the observational data. DMD is strongly related
to the Koopman operator theory (KOT) which provides an
infinite-dimensional linear representation K of the nonlinear
system dynamics ϕ acting on the finite-dimensional manifold
M. DMD is an approach that seeks the A matrix such that
its spectrum approximates the spectrum of the Koopman
operator. The dominant eigenvalues and eigenvectors of this
A matrix are very informative [18] about the dynamical
attributes of the system, such as the frequency, decay, growth,
and flow modes.
Consider a set of pairs of observations x_j and y_j, j =
1, 2, ..., n, related temporally such that

y_j(z) = x_j(ϕ(z))    (1)

where z ∈ M, ϕ : M → M is a dynamical system, and
x_j, y_j : M → R. y_j is one step ahead in time of x_j, i.e.,
if x_j denotes the observation at time t, then y_j represents the
measurement at time t + ∆t. From these measurement time
series, two data matrices X and Y are constructed as

X = [x_1 x_2 ... x_n],  Y = [y_1 y_2 ... y_n]    (2)

with X, Y ∈ R^(m×n), where m is the dimension of the
manifold M. The objective of the DMD algorithm is to
evaluate the approximation A such that

Y = AX    (3)

The analytical solution of the above problem is given by

A = YX^†    (4)

As the input matrices involved in the computation of the A
matrix are rectangular, the Moore-Penrose pseudoinverse X^†
is used. The solution (4) originates from the least-squares
problem of minimizing the error

‖Y − AX‖_F    (5)
where ‖·‖_F is the Frobenius norm given as

‖Z‖_F = ( Σ_{j=1}^{p} Σ_{k=1}^{q} Z_{jk}^2 )^{1/2}    (6)
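As a small numerical sketch of (4)-(5), the least-squares solution can be computed with the Moore-Penrose pseudoinverse. This is a hypothetical illustration assuming NumPy; the data and the generating matrix are made up, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical snapshot matrices: m = 4 states, n = 50 snapshot pairs
# generated by a known linear map, so the fit can be checked.
A_true = np.array([[0.9, 0.1, 0.0, 0.0],
                   [-0.1, 0.9, 0.0, 0.0],
                   [0.0, 0.0, 0.5, 0.0],
                   [0.0, 0.0, 0.0, 0.3]])
X = rng.standard_normal((4, 50))
Y = A_true @ X

# Solution (4): A = Y X^+ minimizes the Frobenius error ||Y - AX||_F in (5).
A_hat = Y @ np.linalg.pinv(X)

print(np.allclose(A_hat, A_true, atol=1e-8))  # True: the map is recovered
```

Since X here has full row rank, the least-squares fit recovers the generating matrix exactly up to numerical precision.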
Practically, the computation of A is challenging due to
the very large size of the observations, i.e., m > n, resulting in
an under-determined system. To alleviate this difficulty, the
spatial dimensions of the input data are reduced through
Proper Orthogonal Decomposition (POD). The linear sub-
space spanned by the set of r orthogonal modes approximates
the space R^m sufficiently to achieve the dimensionality
reduction. The proper orthogonal modes are evaluated via
the Singular Value Decomposition (SVD) of X given as

X = USV^T    (7)

with U ∈ R^(m×r), whose columns are the eigenvectors of
XX^T; S ∈ R^(r×r), the diagonal matrix holding the singular
values of X in descending order; and V ∈ R^(n×r), whose
columns are the eigenvectors of X^T X. The r proper
orthogonal modes correspond to the dominant left singular
vectors (U_r) of the SVD associated with the dominant
singular values.
The SVD (7) aids in searching for the reduced-order subspace
containing the dominant system modes through the projec-
tion of A onto the POD modes, Ã = U^T AU = U^T YVS^(−1).
The DMD modes and eigenvalues are evaluated from the
eigendecomposition of Ã, i.e., ÃW = WΛ. The exact
DMD modes Φ are evaluated by transforming back to the
original space of higher dimensions, i.e., Φ = YVS^(−1)W.
The temporal evolution of the modes is identified from the
eigenvalues on the diagonal of the matrix Λ.
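The reduced-order DMD steps above can be sketched end to end. This is a minimal NumPy sketch on synthetic data: the underlying two-mode linear system, its eigenvalues (0.95 and 0.8), and the lifting matrix are all hypothetical, chosen only so the recovered DMD eigenvalues can be verified.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear system with known eigenvalues 0.95 and 0.8,
# lifted into a 6-dimensional observation space.
A_true = np.diag([0.95, 0.8])
P = rng.standard_normal((6, 2))
x0 = rng.standard_normal(2)
states = np.stack([np.linalg.matrix_power(A_true, k) @ x0
                   for k in range(31)], axis=1)
data = P @ states                          # 6 x 31 measurement matrix

X, Y = data[:, :-1], data[:, 1:]           # time-shifted snapshot pairs (2)

# Rank-r SVD of X (7), projected operator A_tilde = U^T Y V S^{-1},
# eigendecomposition A_tilde W = W Lambda, exact modes Phi = Y V S^{-1} W.
r = 2
U, s, Vh = np.linalg.svd(X, full_matrices=False)
Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vh[:r, :].T
A_tilde = Ur.T @ Y @ Vr @ np.linalg.inv(Sr)
eigvals, W = np.linalg.eig(A_tilde)
Phi = Y @ Vr @ np.linalg.inv(Sr) @ W       # exact DMD modes

print(np.sort(eigvals.real))               # approximately [0.8, 0.95]
```

Because the noiseless data lie in a two-dimensional invariant subspace, the DMD eigenvalues coincide with the eigenvalues of the generating system.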
B. Persistence of excitation (PE)
For a controllable linear time-invariant system, if a
component of the state is PE of a sufficiently high
order, then the sections of the trajectory span the space
characterizing the behavior of the system [19].
In order to ensure the consistency of the model estimated
during the identification experiments [20] [21], the data
sequences used for the subspace identification must be PE
of significant order. For efficiently identifying the system
modes from the deterministic input data sequences [22], the
virtual data sequences analogous to the multi-variable system
fulfilling the PE condition are designed with the Hankel block
matrix representation of the input data sequence.
Let x ∈ R^n be a state vector denoted as x_[k,k+S], where
k ∈ X is the time instant of the first sample and S ∈ N
is the total number of samples taken. The state vector in the
interval [k, k+S] ⊂ X is defined as

x_[k,k+S] = [x(k)^T  x(k+1)^T  ...  x(k+S)^T]^T    (8)
The state vector x_[k,k+S] organized in the Hankel matrix is
represented as

| x(k)       x(k+1)   ...  x(k+S−M+1) |
| x(k+1)     x(k+2)   ...  x(k+S−M+2) |
|   ...        ...    ...      ...     |
| x(k+M−1)   x(k+M)   ...  x(k+S)     |    (9)
where k ∈ X and S, M ∈ N. The total number of system
eigenvalues determines the value of M.
Definition 1: A signal trajectory x_[k,k+S] is PE of
order L if the block Hankel matrix in (9) has full row rank
nL.
Basically, the definition indicates that to satisfy the PE
condition, the section of the trajectory of length L must
be long enough to excite the controllable sys-
tem modes in the window of L and reproduce them.
Definition 1 gives the lower bound on the trajectory length,
m ≥ (n+1)L − 1, so that the number of rows is less
than or equal to the number of columns, i.e., there are more
temporal samples than spatial samples [23].
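The Hankel construction (9) and the rank test of Definition 1 can be sketched numerically. This is a hypothetical NumPy illustration; `block_hankel` and `is_pe` are illustrative names, not from the paper, and the scalar trajectory is random.

```python
import numpy as np

def block_hankel(x, L):
    """Stack a state trajectory x (n x T) into a block Hankel matrix
    with L block rows, as in (9): columns are overlapping windows."""
    n, T = x.shape
    cols = T - L + 1
    return np.vstack([x[:, i:i + cols] for i in range(L)])

def is_pe(x, L):
    """Definition 1: the trajectory is PE of order L iff the block
    Hankel matrix has full row rank nL."""
    return np.linalg.matrix_rank(block_hankel(x, L)) == x.shape[0] * L

rng = np.random.default_rng(2)

# A generic scalar trajectory (n = 1) of length T = 9 meets the lower
# bound T >= (n+1)L - 1 for L = 5 but not for L = 6.
x = rng.standard_normal((1, 9))
print(is_pe(x, 5))   # True: 5 x 5 Hankel matrix, full rank
print(is_pe(x, 6))   # False: the 6 x 4 Hankel matrix cannot reach rank 6
```

The second call fails purely for dimensional reasons: with L = 6 the Hankel matrix has more rows than columns, so the trajectory is too short regardless of its content.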
C. Vector Autoregression (VAR)
A multivariate time series x_1, x_2, ..., x_m, with each
measurement being an n-dimensional vector consisting of the
elements x_{1t}, x_{2t}, ..., x_{nt}, is realised as a vector stochastic
process X_1, X_2, .... This time series can be modeled with a
vector autoregressive (VAR) model of order p, represented as

X_t = Σ_{k=1}^{p} A_k X_{t−k} + ε_t    (10)

where A_k ∈ R^(n×n) is a regression coefficient matrix and ε_t is
the white noise corresponding to the residuals. VAR models
detect the simultaneous evolution patterns of multivariate
time series. The objective of the VAR model is to find the
coefficient matrix demonstrating the temporal correlation be-
tween the multivariate time series. The predictive VAR model
represents the value of x_t at any time t as a combination of its
past values. The predictable patterns in the data are captured
with the coefficient matrix whereas the unpredictable part is
accounted for with the residual terms.
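The coefficient matrices in (10) can be estimated by least squares, regressing each X_t on the stacked past values. A hypothetical NumPy sketch with a simulated VAR(1), where the generating coefficients are made up so the estimate can be checked:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a hypothetical stable 2-variable VAR(1): X_t = A1 X_{t-1} + eps_t.
A1 = np.array([[0.5, 0.3],
               [0.0, 0.4]])
T = 2000
X = np.zeros((2, T))
for t in range(1, T):
    X[:, t] = A1 @ X[:, t - 1] + 0.1 * rng.standard_normal(2)

# Least-squares estimate of the coefficient matrix in (10) with p = 1:
# regress X_t on X_{t-1} across all time steps via the pseudoinverse.
A1_hat = X[:, 1:] @ np.linalg.pinv(X[:, :-1])

print(np.round(A1_hat, 2))   # close to A1
```

With 2000 samples and small noise, the estimate lands within a few hundredths of the true coefficients; higher orders p are handled the same way by stacking p lagged blocks into the regressor.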
GC analysis draws causal inferences among various vari-
ables based on their VAR representation. To obtain a valid
analysis of GC, the coefficients of the VAR model (10)
should be square summable and stable [24], [2]. Square
summability implies that Σ_{k=1}^{p} ‖A_k‖^2 < ∞, i.e., even for
an infinite model order, the regression coefficients
do not blow up. Stability is related to the char-
acteristic polynomial of the coefficient matrices A_k,
ϕ(z) = I − Σ_{k=1}^{p} A_k z^k, where z ∈ C. If the characteristic
polynomial of the coefficient matrices is invertible on the unit
disc |z| ≤ 1 in the complex plane [2], then the coefficients
of the VAR model are stable.
D. Granger Causality (GC)
Granger causality (GC) is a framework established on
the temporal precedence implying that causes precede their
effects. Temporal precedence being the innate characteristic
of the time series, the GC helps to establish the causal
relationship among the time series inferring that the past
is causing the future. The physical interpretation of the
causality is that the causes are responsible for the unique
changes in the corresponding effects, i.e. the causal series
contains the unique information about the effect series which
is not available otherwise [5]. In fact, GC does not actually
indicate the true causality, rather it examines the influence
of one series on the forecasting ability of another series.
Considering two time series x_t and y_t, the GC finds the
causal relationship between x_t and y_t depending on whether
the past values of x_t assist in the prediction of y_t, conditional
on having already considered the effect of the past values of
y on the prediction of y_t.
Definition 2: Suppose H_{<t} represents the history of
all relevant information available up to (t−1) and Pr(x_t | H_{<t})
denotes the prediction of x_t given H_{<t}; the GC suggests
y to be causal for x if

var[x_t − Pr(x_t | H_{<t})] < var[x_t − Pr(x_t | H_{<t} \ y_{<t})]    (11)

where H_{<t} \ y_{<t} specifies that the values of y_{<t} are
excluded from H_{<t}. Equation (11) describes that the variance
of the prediction error of x is reduced with the inclusion of
the history of y [25], consequently inferring that the past
values of y enhance the prediction of x.
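Definition 2 can be illustrated by comparing the residual variances of a restricted autoregression (past of x only) and a full one (past of x and y). This is a hypothetical NumPy sketch on simulated series in which y drives x by construction; `residual_var` is an illustrative helper, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical pair of series in which y Granger-causes x:
# x_t depends on y_{t-1}, while y evolves on its own.
T = 4000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()
    x[t] = 0.4 * x[t - 1] + 0.5 * y[t - 1] + rng.standard_normal()

def residual_var(target, regressors):
    """Variance of the least-squares residual of target on regressors."""
    R = np.column_stack(regressors)
    beta, *_ = np.linalg.lstsq(R, target, rcond=None)
    return np.var(target - R @ beta)

# Restricted model: x_t on its own past; full model: x_t on past of x and y.
restricted = residual_var(x[1:], [x[:-1]])
full = residual_var(x[1:], [x[:-1], y[:-1]])

# Condition (11): including the history of y shrinks the prediction-error
# variance, so y is inferred to be Granger-causal for x.
print(full < restricted)   # True
```

In practice the drop in residual variance is judged against an F-distribution rather than by raw comparison, since adding regressors never increases the in-sample residual variance; the sketch only shows the variance inequality of (11).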
The primary argument of GC is based on the identification
of a unique linear model from the data. Let x and y be
two time series of length N, described by the univariate
vector autoregressive (VAR) model of order p < N, given