2022 should be less uncertain than its forecast for 2023. But just how much less uncertain
depends on the time series properties of GDP, and is hard to grasp intuitively.
Motivated by this situation, the present paper constructs measures for the uncertainty
in fixed-event point forecasts. Together with the point forecasts themselves, we can then
construct forecast distributions for GDP growth and other economic variables. The
principle of forecast postprocessing – using point forecasts and past forecast errors
to construct forecast distributions – is popular in meteorology (see e.g. Gneiting and
Raftery 2005, Gneiting et al. 2005, Rasp and Lerch 2018 and Vannitsem et al. 2021),
economics (e.g. Knüppel 2014, Krüger and Nolte 2016 and Clark et al. 2020) and
other fields. State-of-the-
art point forecasts are often publicly available, so that using them as a basis for forecast
distributions is more practical than generating forecast distributions from scratch. Fur-
thermore, assessing forecast uncertainty based on past errors does not require knowledge
about how the point forecasts were generated. This is an important advantage in practice,
where the forecasting process may be judgmental, subject to institutional idiosyncrasies,
or simply unknown to the public.
The vast majority of the postprocessing literature considers a ‘fixed-horizon’ fore-
casting setup where the time between the forecast and the realization remains constant.
Examples of the fixed-horizon case include daily forecasts of temperature 12 hours ahead,
or quarterly forecasts of the inflation rate between the current and next quarter. In eco-
nomics, Clements (2018) constructs a measure of fixed-event forecast uncertainty that is
based on fixed-horizon forecast errors, and thus requires that an appropriate database
of fixed-horizon forecasts is available. This is the case in the US Survey of Professional
Forecasters analyzed by Clements, but not in other situations including the German GDP
example mentioned earlier. Existing approaches are thus not applicable to the fixed-event
case, which instead requires the different tools that we develop in this paper.
The main idea behind our proposed approach is simple: We model quantiles of the
forecast error distribution as a function of the forecast horizon. The latter is defined as
the time (measured in weeks) between the forecast and the end of the target year. For
example, forecasts made on July 1, 2022 for the years 2022 and 2023 correspond to
horizons 26 and 78, respectively.
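To make this horizon arithmetic concrete, the following minimal Python sketch (a
hypothetical helper, not taken from the paper) measures h as the number of weeks
between the forecast date and December 31 of the target year; under this reading of
"end of the target year", it reproduces the horizons 26 and 78 from the example above,
as well as the fractional horizons discussed below for forecasts made during the target
year's final week.

    from datetime import date

    def horizon_weeks(forecast_date: date, target_year: int) -> float:
        """Horizon h: weeks between the forecast date and the end of the target year."""
        year_end = date(target_year, 12, 31)
        return (year_end - forecast_date).days / 7

    print(horizon_weeks(date(2022, 7, 1), 2022))    # 26.14 -> horizon of about 26
    print(horizon_weeks(date(2022, 7, 1), 2023))    # 78.29 -> horizon of about 78
    print(horizon_weeks(date(2022, 12, 25), 2022))  # 0.857 = 6/7 (final week of target year)
    print(horizon_weeks(date(2022, 12, 31), 2022))  # 0.0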
To estimate the regression models, we use a dataset of past forecast errors at different
horizons. Pooling forecast errors across horizons allows us to estimate uncertainty at
any given horizon by considering uncertainty at neighboring horizons. This approach
is helpful in the fixed-event case, where only a small number of past errors is typically
available for a given horizon.
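As a rough illustration of the pooling idea, the sketch below fits quantile regressions of
the forecast error on a smooth function of the horizon, using synthetic data; the
square-root specification, the quantile levels, and all numbers are placeholder
assumptions rather than the paper's actual model.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in for a pooled error data set: forecast errors e
    # observed at (possibly non-integer) horizons h, with dispersion
    # growing in the horizon.
    rng = np.random.default_rng(1)
    h = rng.uniform(0, 104, size=1307)
    e = rng.normal(scale=0.3 + 0.1 * np.sqrt(h))
    df = pd.DataFrame({"h": h, "e": e})

    # Quantile regression of the error on a function of the horizon;
    # sqrt(h) is purely illustrative.
    fits = {q: smf.quantreg("e ~ np.sqrt(h)", df).fit(q=q) for q in (0.1, 0.9)}

    # Pooled estimate of an 80% error band at h = 5 weeks, even though
    # few (if any) past errors fall exactly at that horizon.
    grid = pd.DataFrame({"h": [5.0]})
    lo = float(np.asarray(fits[0.1].predict(grid))[0])
    hi = float(np.asarray(fits[0.9].predict(grid))[0])
    print(f"80% error band at h = 5: [{lo:.2f}, {hi:.2f}]")

Because all horizons contribute to the fit, the band at h = 5 is informed by errors
observed at neighboring horizons rather than only by the handful of errors recorded at
exactly five weeks.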
For example, the German data set we consider covers forecast-observation pairs ranging from 1991 to 2022, and includes precise information
(daily time stamps) on the forecast horizon h. Specifically, the data contains 525 unique
values for h, ranging from h = 0 to h = 104. Note that h need not be an integer; for
example, forecasts made on the seven days of the target year’s final week correspond to
horizons h ∈ {0, 1/7, ..., 6/7}. Given that the data covers n = 1,307 observations in total,
the average number of forecast errors corresponding to each of the 525 different horizons is
about 2.5. Using only forecast errors that correspond exactly to some horizon of interest
(h = 5 weeks, say) is hence not a promising strategy, and considering forecast errors from
neighboring horizons seems advisable. This aspect is not relevant when postprocessing