Predictive, finite-sample model choice for time series under stationarity and non-stationarity (Q143634)

From MaRDI portal
scientific article; zbMATH DE number 7113729
  • Predictive, finite-sample model choice for time series under stationarity and non-stationarity

Statements

14 November 2016
4 October 2019
math.ST
stat.ME
stat.TH
Predictive, finite-sample model choice for time series under stationarity and non-stationarity (English)
The paper is concerned with the bias-variance trade-off that arises when choosing between stationary and non-stationary models for prediction. The authors find that a simple stationary model can often outperform a more complicated non-stationary one in terms of finite-sample prediction. They illustrate this phenomenon first for a time-varying AR(2) process, comparing its prediction performance with that of a time-varying AR(1) process and with stationary AR(1) and AR(2) processes; for sample sizes that are not too large, the stationary models may well yield more accurate predictions in terms of empirical mean squared error.

The authors then develop a general methodology for selecting a prediction model from finite-sample data. They divide the observations into three sets: a training set \(M_0\), a validation set \(M_1\) and a final validation set \(M_2\). Based on the observations in \(M_0\), they determine the locally stationary model that predicts best (among certain other locally stationary models) into the set \(M_1\), in terms of minimised empirical mean squared prediction errors of linear \(h\)-step predictors, and likewise the best stationary model with this property. Whether to use the best locally stationary model or the best stationary model is then decided by comparing the performance of the two models on the second validation set \(M_2\). The authors prove rigorously, under certain assumptions, that with high probability the chosen model will perform better empirically when forecasting future, not yet observed values. Various simulations are presented and an \texttt{R} package is provided. The authors apply their method to three real data sets: London housing prices, temperature data, and volatility around the time of the 2016 EU referendum in the UK.

As a further theoretical result, they prove that the localised Yule-Walker estimator in locally stationary models is strongly and uniformly consistent.
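The selection procedure described above can be sketched in a few lines. The following is a loose illustration, not the authors' \texttt{R} package: it uses a simulated time-varying AR(1), a Yule-Walker AR(1) fit as the "stationary" candidate, an AR(1) fitted on only the most recent part of the training set as a crude stand-in for a locally stationary candidate, and 1-step linear prediction (the paper treats general locally stationary models and \(h\)-step predictors). All window sizes and the coefficient path are arbitrary choices for the demonstration.

```python
import numpy as np

def yule_walker_ar1(x):
    """Yule-Walker estimate of the AR(1) coefficient: gamma(1) / gamma(0)."""
    x = x - x.mean()
    g0 = np.dot(x, x) / len(x)
    g1 = np.dot(x[1:], x[:-1]) / len(x)
    return g1 / g0

def one_step_mspe(a, x, idx):
    """Empirical mean squared 1-step prediction error of x_hat[t] = a * x[t-1] over idx."""
    return float(np.mean([(x[t] - a * x[t - 1]) ** 2 for t in idx]))

rng = np.random.default_rng(0)
T = 600
# Simulate a time-varying AR(1) whose coefficient drifts from 0.2 to 0.8 (illustrative).
a_t = np.linspace(0.2, 0.8, T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_t[t] * x[t - 1] + rng.standard_normal()

# Split the observations: training set M0, validation set M1, final validation set M2.
M1 = np.arange(400, 500)
M2 = np.arange(500, 600)

# Stationary candidate: AR(1) fitted by Yule-Walker on all of M0.
a_stat = yule_walker_ar1(x[:400])
# "Locally stationary" candidate (crude proxy): AR(1) fitted on the last window of M0 only.
a_loc = yule_walker_ar1(x[300:400])

# Pick the candidate with the smaller empirical prediction error on M1 ...
if one_step_mspe(a_stat, x, M1) <= one_step_mspe(a_loc, x, M1):
    best = ("stationary", a_stat)
else:
    best = ("local", a_loc)

# ... and assess the chosen model's performance on the held-out set M2.
print(best[0], one_step_mspe(best[1], x, M2))
```

Because the coefficient drifts substantially over the sample, the locally fitted candidate tends to win here; with a shorter sample or milder drift, the stationary fit's lower variance can tip the comparison the other way, which is exactly the trade-off the paper quantifies.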
bias-variance trade-off
forecasting
prediction
Yule-Walker estimate
local stationarity versus stationarity
