Temporal Spatial Decomposition and Fusion
Network for Time Series Forecasting*
Liwang Zhou
Zhejiang University
China
21731005@zju.edu.cn
Jing Gao
Anhui University
China
jingles980@gmail.com
Abstract—Feature engineering is required to obtain good results in time series forecasting, and decomposition is a crucial part of it. A single decomposition approach often cannot serve numerous forecasting tasks, since standard time series decomposition lacks flexibility and robustness. Traditional feature selection relies heavily on preexisting domain knowledge, has no generic methodology, and requires considerable labor. Moreover, most deep-learning-based time series prediction models suffer from interpretability issues, and their "black box" results undermine confidence. Addressing these issues motivates this paper. We propose TSDFNet, a neural network with a self-decomposition mechanism and an attentive feature fusion mechanism. It abandons feature engineering as a preprocessing convention and instead integrates it as an internal module of the deep model. The self-decomposition mechanism gives TSDFNet extensible and adaptive decomposition capabilities for any time series; users can choose their own basis functions to decompose a sequence along temporal and generalized spatial dimensions. The attentive feature fusion mechanism captures the importance of external variables and their causal relationships with the target variables; it automatically suppresses unimportant features while enhancing effective ones, so users do not have to struggle with feature selection. Moreover, TSDFNet makes it easy to look into the "black box" of the deep neural network through feature visualization and to analyze the prediction results. We demonstrate performance improvements over widely accepted existing models on more than a dozen datasets, and three experiments showcase the interpretability of TSDFNet.
Index Terms—time series, interpretability, long-term prediction, deep learning
I. INTRODUCTION
Time series forecasting plays a key role in numerous fields such as economics [1], finance [2], transportation [3], and meteorology [4]. It empowers people to foresee opportunities and serves as guidance for decision-making. It is therefore crucial to increase the generality of time series models and to lower modeling complexity while maintaining performance. In the field of time series forecasting, multi-variable and multi-step forecasting is one of the most challenging tasks, since errors may accumulate as the forecast horizon grows. At present, there is no universal method for multi-variable, multi-step time series prediction. Due to the complexity and diversity of real-world time series, each one usually calls for its own feature engineering and forecasting model, which in turn requires data analysts to have specialized background knowledge.
Feature engineering is usually used to preprocess data before modeling. Within feature engineering, time series decomposition is a classical method that decomposes a complex time series into several predictable sub-series; examples include STL [37] (seasonal-trend decomposition), EEMD [24] (ensemble empirical mode decomposition), and EWT [25] (empirical wavelet transform). Feature selection is another important step. For complex tasks, auxiliary variables are usually needed to assist the prediction of the target variables. Choosing additional features well is crucial to model performance, because introducing redundant features may degrade it. Selecting appropriate decomposition methods and important additional features is therefore a challenging problem for data analysts.
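To make the idea of decomposition concrete, classical additive decomposition can be sketched in a few lines of numpy. This is a simplified stand-in for STL-style methods, not the decomposition used by TSDFNet; the function name and the moving-average trend estimate are our own illustrative choices.

```python
import numpy as np

def decompose(series, period):
    """Classical additive decomposition: a trend (centered moving
    average), a seasonal component (periodic means of the detrended
    series), and a residual. series = trend + seasonal + residual."""
    n = len(series)
    # Moving average over one full cycle as the trend estimate.
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")
    detrended = series - trend
    # Seasonal component: average each phase of the cycle.
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, n // period + 1)[:n]
    residual = series - trend - seasonal
    return trend, seasonal, residual
```

By construction the three components sum back to the original series; more sophisticated methods such as STL differ mainly in how robustly the trend and seasonal terms are estimated.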
On the other hand, despite the many models that have been put forth, each has drawbacks of its own. The majority of deep-learning-based models are difficult to interpret and produce unconvincing predictions, while models such as ARIMA and XGBoost [26], which have sound mathematical foundations and offer interpretability, cannot compete with deep-learning-based models in terms of performance.
Therefore, it is necessary to break with traditional practice and devise a new way to handle these problems. In this study, we develop a novel neural network model, TSDFNet, based on a self-decomposition mechanism and an attentive feature fusion mechanism. Decomposition and feature selection are integrated as internal modules of the deep model to reduce complexity and increase adaptability. The model's strong feature expression capability can capture high-order statistical features of the data, making it applicable to datasets from a variety of domains.
In summary, our contributions are as follows:
We propose the Temporal Decomposition Network (TDN), which is extensible and adaptive. It decomposes time series over the temporal dimension and allows users to customize basis functions for specific tasks.
We propose the Spatial Decomposition Network (SDN), which creatively uses high-dimensional external features as decomposition basis functions to model the relationship between external variables and target variables.
arXiv:2210.03122v1 [cs.LG] 6 Oct 2022
We propose the Attentive Feature Fusion Network (AFFN), which performs automatic feature selection and can capture the importance and causality of features. Users can thus avoid the trouble of feature selection and use arbitrary basis functions in the self-decomposition network without worrying about performance loss caused by introducing invalid features.
TSDFNet obtains interpretable results on datasets from multiple fields and significantly outperforms many traditional models.
II. RELATED WORK
The field of time series prediction has a rich history, and many outstanding models have been developed. The best-known conventional methods include ARIMA [6] and exponential smoothing [7]. The ARIMA model owes its popularity mainly to its interpretability and usability: it turns nonstationary processes into stationary ones through differencing, and it can be further extended into VAR [8] to address multivariate time series forecasting. Another effective forecasting technique is exponential smoothing, which smooths a univariate time series by assigning the data weights that decrease exponentially over time.
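The exponentially decaying weights come from a simple recurrence: each smoothed value mixes the newest observation with the previous smoothed value. A minimal sketch (the function name and the choice of the first observation as the initial state are illustrative):

```python
import numpy as np

def exp_smooth(x, alpha):
    """Simple exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}.
    Unrolling the recurrence shows x_{t-k} receives weight
    alpha * (1 - alpha)**k, i.e. weights decay exponentially with age."""
    s = np.empty(len(x), dtype=float)
    s[0] = x[0]  # initialize the state with the first observation
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s
```

With alpha close to 1 the smoother tracks the series closely; with alpha close to 0 it averages over a long history.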
Since time series prediction is essentially a regression problem, a variety of regression models can also be utilized, including machine-learning techniques such as decision trees [10] and support vector regression (SVR) [9]. Additionally, ensemble methods, which combine multiple learning algorithms to achieve better predictive performance than any constituent algorithm alone, are effective tools for sequence prediction; examples include random forest [11] and adaptive boosting (AdaBoost) [12].
In recent years, deep learning has become popular, and neural networks have achieved success in many fields [29], [30], [31], using the backpropagation algorithm [32] to optimize network parameters. Long Short-Term Memory (LSTM) [13] and its derivatives show great power on sequential data: they overcome the vanishing-gradient defect of recurrent neural networks (RNNs) [14] and can better capture long-term dependence. The deep autoregressive network (DeepAR) [15] uses stacked LSTMs for iterative multi-step prediction, and deep state-space models (DSSM) [16] adopt a similar approach, utilizing LSTMs to generate the parameters of a predefined linear state-space model. Sequence-to-sequence (Seq2Seq) [17] models usually use a pair of LSTMs or GRUs [18] as encoder and decoder: the encoder maps the input data into a fixed-length semantic vector in the hidden space, and the decoder reads this context vector and predicts the target variable step by step. The temporal convolutional network (TCN) [19] can also be applied effectively to sequence prediction; built on causal convolution and residual connections, it is an alternative to the popular RNN family with faster speed and fewer parameters than RNN-based models. The attention mechanism [22] emerged as an improvement over encoder-decoder architectures [23], and it can easily be further extended into the self-attention mechanism at the core of Transformer models [20], [21].
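The core computation behind these attention-based models is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, which can be sketched directly in numpy (single-head, unbatched, for illustration only):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query attends over all keys,
    and the output is the attention-weighted average of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax, numerically stabilized by subtracting the max.
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights
```

Self-attention is the special case where Q, K, and V are all linear projections of the same sequence, which is what lets Transformer-style models relate every time step to every other.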
III. METHODOLOGY
The network's architecture is depicted in Figure 1. It has two main parts: the first is a self-decomposition network comprising TDN and SDN; the second is the attentive feature fusion network (AFFN).
Fig. 1: Overall structure of TSDFNet
A. Self-decomposition network
The self-decomposition network includes two decomposition modules. One is the temporal decomposition network (TDN), which adopts custom basis functions to decompose sequences along the time dimension. The other is the spatial decomposition network (SDN), which decomposes sequences along generalized spatial dimensions, using exogenous features as basis functions. The main objective is to break complex sequences down into ones that are simple and predictable.
TDN uses multiple sets of pre-trained basis functions with different parameters to capture signal features; these could be triangular bases, polynomial bases, wavelet bases, and so on. The architecture of TDN is shown in Figure 2. It contains N recursive decomposition units. The (n+1)-th unit accepts its respective input X_n and outputs two intermediate components, W_n and V_n. Each decomposition unit consists of two parts: a stacked fully connected network L_s maps the data into a hidden space to produce the semantic vector S_n, and two further sets of fully connected networks, L_p and L_q, predict basis expansion coefficients forward and backward, respectively. The process is:

S_n = L_s(X_n)    (1)
P_n = L_p(S_n)    (2)
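A minimal numpy sketch of the forward path of one such unit, under simplifying assumptions of our own: L_s and L_p are single linear layers (the paper stacks several), the backward branch L_q is omitted, and the basis is an illustrative polynomial one.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(d_in, d_out):
    """One fully connected layer with random weights (untrained;
    the paper uses stacked, trained networks here)."""
    W = rng.normal(size=(d_in, d_out)) * 0.1
    b = np.zeros(d_out)
    return lambda x: x @ W + b

def tdn_unit_forward(x, basis, d_hidden=16):
    """Forward path of one decomposition unit: S_n = L_s(X_n) (eq. 1),
    coefficients P_n = L_p(S_n) (eq. 2), then the component is the
    coefficient-weighted sum of the basis functions."""
    L_s = linear(len(x), d_hidden)
    L_p = linear(d_hidden, basis.shape[0])
    S = L_s(x)
    P = L_p(S)
    return P @ basis  # component expressed on the chosen basis

# Illustrative basis: the first four polynomials over the window.
T = 24
t = np.linspace(0, 1, T)
basis = np.stack([t**k for k in range(4)])  # shape (4, T)
component = tdn_unit_forward(rng.normal(size=T), basis)
```

The key design point this illustrates is that the network only predicts a small number of coefficients P_n; the expressiveness and interpretability come from the user-chosen basis functions.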