arXiv:2210.12853v1 [physics.ao-ph] 20 Oct 2022

Deep-Learning-Based Precipitation Nowcasting with
Ground Weather Station Data and Radar Data
Jihoon Ko, Kyuhan Lee, Hyunjin Hwang, and Kijung Shin
Kim Jaechul Graduate School of AI
Korea Advanced Institute of Science and Technology
Seoul, South Korea
{jihoonko, kyuhan.lee, hyunjinhwang, kijungs}@kaist.ac.kr
Equal contribution.
Abstract—Recently, many deep-learning techniques have been
applied to various weather-related prediction tasks, including
precipitation nowcasting (i.e., predicting precipitation levels and
locations in the near future). Most existing deep-learning-based
approaches for precipitation nowcasting, however, consider only
radar and/or satellite images as inputs, and meteorological
observations collected from ground weather stations, which are
sparsely located, are relatively unexplored. In this paper, we
propose ASOC, a novel attentive method for effectively exploiting
ground-based meteorological observations from multiple weather
stations. ASOC is designed to capture temporal dynamics of
the observations and also contextual relationships between them.
ASOC is easily combined with existing image-based precipitation
nowcasting models without changing their architectures. We
show that such a combination improves the average critical
success index (CSI) of predicting heavy (at least 10 mm/hr) and
light (at least 1 mm/hr) rainfall events at 1-6 hr lead times
by 5.7%, compared to the original image-based model, using
the radar images and ground-based observations around South
Korea collected from 2014 to 2020.
Index Terms—precipitation nowcasting, ground-based meteo-
rological observations, attention mechanism with sparse features
I. INTRODUCTION
Recently, deep learning techniques, especially computer
vision techniques, have been applied to forecasting various
weather-related events, and such approaches [1]–[6] often
outperform traditional methods in the field. A representative
example is precipitation nowcasting, which is short-term (e.g.,
at 0-6 hour lead times [7], [8]) location-specific forecasting of
precipitation. According to [9], current approaches equipped
with deep convolutional networks outperform HRRR [10],
which is one of the state-of-the-art numerical weather pre-
diction models, with lead times up to 12 hours.
For precipitation nowcasting, U-Net [11] and ConvLSTM
[1], which were originally designed for semantic segmentation
and spatio-temporal sequence forecasting, respectively, have been
used mainly as backbone network architectures. For example,
Agrawal et al. [4] and Lebedev et al. [6] adapted U-Net and
used radar-reflectivity images and satellite images as inputs.
Ko et al. [3] also adapted U-Net to demonstrate the effective-
ness of their proposed training strategies for deep-learning-
based precipitation nowcasting. Shi et al. [1] demonstrated
that ConvLSTM outperforms optical-flow-based methods and
fully-connected LSTM on precipitation nowcasting. In order
to improve the performance of precipitation nowcasting,
ConvLSTM was extended to learn additional location-variant
structures [2], and it was also extended with exponentially
dilated convolution blocks, which enhance expressive power
by capturing additional spatial information [9].
Most deep-learning-based approaches (e.g., [1]–[4], [6])
consider only radar images and satellite images as inputs,
and meteorological observations from ground weather stations
have been underutilized. Although radar and satellite images,
which are in a grid format, are naturally fed into deep
convolutional neural networks (e.g., U-Net and ConvLSTM),
ground-based meteorological observations are not naturally
represented in a grid format since ground weather stations
are sparsely located. Although interpolation techniques, such
as Inverse Distance Weighting [12] and Kriging [13], can be
used to resolve this issue, they are expensive in both time and
memory, especially when obtaining high-resolution data. Thus,
in order to utilize ground-based meteorological observations
together with radar and satellite images, deep-learning models
should be able to utilize input data of different formats
efficiently and effectively.
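To illustrate why such interpolation is costly at high resolution, here is a minimal pure-Python sketch of Inverse Distance Weighting [12]; the function name, power parameter, and toy stations are ours, and a full grid would require running this loop once per grid cell, i.e., a pass over all stations for every pixel:

```python
import math

def idw_interpolate(stations, query, power=2.0):
    """Inverse Distance Weighting: estimate a value at `query` = (x, y)
    from sparse station observations [((x, y), value), ...]."""
    num, den = 0.0, 0.0
    for (sx, sy), value in stations:
        d = math.hypot(query[0] - sx, query[1] - sy)
        if d == 0.0:  # query coincides with a station: return it exactly
            return value
        w = d ** -power  # closer stations receive larger weights
        num += w * value
        den += w
    return num / den

# Toy example: three sparsely located stations
stations = [((0.0, 0.0), 10.0), ((1.0, 0.0), 20.0), ((0.0, 1.0), 30.0)]
print(idw_interpolate(stations, (0.0, 0.0)))  # exact station hit -> 10.0
```

Interpolating an H x W grid this way costs O(HW * N) for N stations, which is the time/memory burden noted above.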
In this paper, to address the aforementioned challenge, we
propose Attentive Sparse Observation Combiner (ASOC), a
novel deep-learning model for precipitation nowcasting based
on meteorological observations collected from multiple ground
weather stations. In a nutshell, ASOC combines LSTM [14]
and Transformer [15] to capture temporal dynamics of the
ground-based observations and also contextual relationships
between them. Specifically, ASOC uses LSTM, which pro-
cesses observations in chronological order, to capture temporal
dynamics, and it uses Transformer-style attention blocks be-
tween LSTM cells to capture contextual relationships between
observations. Another advantage of ASOC is that it is easily
combined with existing image-based models, without any
change in their design. In our experiments, we use ASOC+,
where ASOC is combined with DeepRaNE [3], which is one
of the state-of-the-art image-based models.
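The contextual-mixing idea behind the attention blocks can be sketched as follows. This is a highly simplified, pure-Python illustration of one Transformer-style self-attention step over per-station hidden states; the actual ASOC model uses learned LSTM cells and Transformer blocks, and the function names, dimensions, and toy values here are ours:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(hidden):
    """One self-attention step over per-station hidden states.
    Here queries = keys = values = the hidden states themselves,
    so each station's new state is a weighted mix of all stations'
    states (capturing contextual relationships between them)."""
    d = len(hidden[0])
    out = []
    for q in hidden:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in hidden]
        weights = softmax(scores)  # sums to 1 over all stations
        out.append([sum(w * k[j] for w, k in zip(weights, hidden))
                    for j in range(d)])
    return out

# Toy: three stations with 2-dimensional hidden states
mixed = attend([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

In ASOC, a step like this sits between LSTM cells, so temporal dynamics (LSTM) and inter-station context (attention) are captured jointly.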
We evaluate our approaches using radar-reflectivity images
and ground-based observations (from 714 weather stations)
around South Korea collected for seven years (spec., from
2014 to 2020). We demonstrate that ASOC+ improves the
average critical success index (CSI) of predicting heavy (at least
10 mm/hr) and light (at least 1 mm/hr) rainfall events at 1-6 hr
lead times by 5.7%, compared to DeepRaNE. For reproducibility,
we made the source code used in the paper publicly available
at https://github.com/jihoonko/ASOC.
we made the source code used in the paper publicly available
at https://github.com/jihoonko/ASOC.
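Since CSI is the evaluation metric used throughout, a minimal sketch of how it is computed for a single class may help; the function name and toy data are ours:

```python
def csi(pred, truth):
    """Critical Success Index: hits / (hits + misses + false alarms),
    over paired binary event indicators (1 = event predicted/observed)."""
    hits = sum(bool(p) and bool(t) for p, t in zip(pred, truth))
    misses = sum((not p) and bool(t) for p, t in zip(pred, truth))
    false_alarms = sum(bool(p) and (not t) for p, t in zip(pred, truth))
    denom = hits + misses + false_alarms
    return hits / denom if denom else 0.0

# Toy: 2 hits, 1 miss, 1 false alarm, 1 correct negative
pred  = [1, 1, 0, 1, 0]
truth = [1, 0, 0, 1, 1]
print(csi(pred, truth))  # 2 / (2 + 1 + 1) = 0.5
```

Note that correct negatives do not enter the score, which is why CSI is well suited to rare events such as heavy rainfall.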
In Section II, we briefly review related studies. In Sec-
tion III, we introduce the notations used in this paper and
define the precipitation nowcasting problem. In Section IV, we
present ASOC and ASOC+. In Section V, we provide experi-
mental results. Lastly, in Section VI, we provide conclusions.
II. RELATED WORK
In the machine-learning literature, precipitation nowcasting
is often formulated as pixel-wise classification of precipitation
levels in the near future from input radar-reflectivity images,
and satellite images are often used additionally as inputs.
Among convolutional neural networks (CNNs), U-Net [11]
has been widely used for precipitation nowcasting [3], [4], [6],
[16]. U-Net was originally designed for an image segmentation
task, i.e., pixel-wise classification. For example, Lebedev et al.
[6] used U-Net for precipitation detection, which is formulated
as a pixel-wise binary-classification problem. Agrawal et al.
[4] divided precipitation levels into four classes and used U-
Net for pixel-wise multiclass classification. Based on a similar
multiclass classification formulation, Ko et al. [3] proposed
training strategies for precipitation nowcasting (spec., a pre-
training scheme and a loss function) and demonstrated their
effectiveness using a U-Net-based model.
Moreover, in order to aggregate both spatial and temporal
information, there have been several attempts to combine
recurrent neural networks (RNNs) (e.g., LSTM [14]) into
CNNs [1], [2], [5]. For example, Shi et al. [1] proposed
ConvLSTM, which has convolutional structures in the input-
to-state and state-to-state transitions in LSTM. Shi et al. [2]
extended ConvLSTM to TrajGRU, which can learn location-
variant connections between RNN layers. Sønderby et al. [5]
proposed MetNet, which uses ConvLSTM as its temporal
encoder and adapts axial attention structure for its spatial
encoder. Ravuri et al. [16] pointed out that deep-learning-based
approaches tend to produce blurry predictions, especially
at long lead times, and they used a conditional generative
adversarial network [17], which consists of a ConvGRU-based [18]
generator and spatial and temporal discriminators, to address
this limitation. Espeholt et al. [9] extended ConvLSTM
with exponentially dilated convolution blocks, which enhance
expressive power by capturing additional spatial information.
Several studies utilized meteorological observations from
multiple weather stations as inputs to predict weather-related
events. For example, Seo et al. [19] considered temperature
forecasting. They generated a graph, where each node corre-
sponds to a weather station, and inferred the data quality of
each station, during training, by applying the graph convolutional
network (GCN) to the generated graph. Wang et al.
[20] focused on short-term intense precipitation (SIP) nowcasting.
They generated a graph and its features, by identifying
and clustering convective cells from radar-reflectivity images,
and used them, together with ground-based observations, as
TABLE I
FREQUENTLY USED NOTATION.

Notation          Description
t                 time (unit: minutes)
R_x^(t) ∈ ℝ       radar reflectivity at time t in each region x (unit: dBZ)
R^(t)             radar-reflectivity image at time t in all regions
I                 set of regions where ground weather stations are located
O_x^(t) ∈ ℝ^d     ground-based observations at time t in each region x
O^(t)             ground-based observations at time t in all regions
C_x^(t)           ground-truth precipitation class at time t in each region x
Ĉ_x^(t)           predicted probability distribution over precipitation
                  classes at time t in each region x
the inputs of a random forest classifier. In contrast to our
deep-learning-based approach, they did not employ any deep-
learning techniques to process radar images and ground-based
observations together.
III. BASIC NOTATIONS & PROBLEM DEFINITION
In this section, we introduce basic notations and formulate
the precipitation nowcasting problem.
A. Basic Notations
The frequently-used symbols are listed in Table I. We use
R_x^(t) ∈ ℝ to indicate the radar reflectivity in dBZ at time
t in each region x, and we use R^(t) to indicate the whole
radar-reflectivity image at time t. We use I to denote the set
of regions where ground weather stations are located. Then,
O_x^(t) ∈ ℝ^d denotes the ground-based observations in each
region x ∈ I at time t, and O^(t) denotes the ground-based
observations at time t from all regions in I. Lastly, C_x^(t)
indicates the ground-truth precipitation class (see the following
subsection for precipitation classes) in each region x ∈ I at
time t, and Ĉ_x^(t) indicates the predicted probability distribution
over all precipitation classes for each region x at time t.
B. Problem Definition
The goal of precipitation nowcasting is to predict precipita-
tion levels and locations at very short lead times. In this paper,
we formulate the problem as a location-wise classification
problem, as in [3]. Specifically, we split precipitation levels
into three classes: (a) HEAVY for precipitation at least 10
mm/hr, (b) LIGHT for precipitation at least 1 mm/hr but
less than 10 mm/hr, and (c) OTHERS for precipitation less
than 1 mm/hr. We also frequently use a combined class
named RAIN (=HEAVY+LIGHT) for precipitation at least 1
mm/hr. As inputs, we use ground-based observations collected
from multiple weather stations and radar-reflectivity images
collected for an hour. We assume that both are collected
every 10 minutes. For example, if we perform prediction at
time t (in minutes), the inputs are (a) seven radar-reflectivity
images at times {t−60, t−50, ..., t}, i.e., R^(t−60), R^(t−50),
..., R^(t), and (b) seven snapshots of ground observations at
times {t−60, t−50, ..., t}, i.e., O^(t−60), O^(t−50), ..., O^(t).
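The class thresholds and the 10-minute input timing above can be expressed directly; this is our own illustrative sketch, not code from the paper:

```python
def precipitation_class(rate_mm_per_hr):
    """Map an hourly precipitation rate (mm/hr) to the three classes:
    HEAVY (>= 10), LIGHT (>= 1 and < 10), OTHERS (< 1)."""
    if rate_mm_per_hr >= 10.0:
        return "HEAVY"
    if rate_mm_per_hr >= 1.0:
        return "LIGHT"
    return "OTHERS"

def input_times(t):
    """Timestamps (in minutes) of the seven 10-minute input snapshots
    ending at prediction time t: {t-60, t-50, ..., t}."""
    return [t - 60 + 10 * i for i in range(7)]

print(precipitation_class(12.0))  # HEAVY
print(input_times(120))           # [60, 70, 80, 90, 100, 110, 120]
```

The combined RAIN class (= HEAVY + LIGHT) then corresponds to any rate of at least 1 mm/hr.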