Predictive, finite-sample model choice for time series under stationarity and non-stationarity (Q143634): Difference between revisions

From MaRDI portal
Changed label, description and/or aliases in en, and other parts
Merged Item from Q2326070
aliases / en / 0
 
Predictive, finite-sample model choice for time series under stationarity and non-stationarity
description / en
 
scientific article; zbMATH DE number 7113729
Property / publication date
 
4 October 2019
Timestamp: +2019-10-04T00:00:00Z
Timezone: +00:00
Calendar: Gregorian
Precision: 1 day
Before: 0
After: 0
Property / publication date: 4 October 2019 / rank
 
Normal rank
Property / author
 
Property / author: Tobias Kley / rank
 
Normal rank
Property / author
 
Property / author: Piotr Fryzlewicz / rank
 
Normal rank
Property / author
 
Property / author: Philip Preuss / rank
 
Normal rank
Property / DOI
 
Property / DOI: 10.1214/19-EJS1606 / rank
 
Normal rank
Property / title
 
Predictive, finite-sample model choice for time series under stationarity and non-stationarity (English)
Property / title: Predictive, finite-sample model choice for time series under stationarity and non-stationarity (English) / rank
 
Normal rank
Property / zbMATH Open document ID
 
Property / zbMATH Open document ID: 1432.62323 / rank
 
Normal rank
Property / published in
 
Property / published in: Electronic Journal of Statistics / rank
 
Normal rank
Property / full work available at URL
 
Property / full work available at URL: https://arxiv.org/abs/1611.04460 / rank
 
Normal rank
Property / full work available at URL
 
Property / full work available at URL: https://projecteuclid.org/euclid.ejs/1569895286 / rank
 
Normal rank
Property / review text
 
The paper is concerned with the bias-variance trade-off that arises when choosing between stationary and non-stationary models for prediction. The authors find that a simple stationary model can often predict better in finite samples than a more complicated non-stationary one. They first illustrate this phenomenon with a time-varying AR(2) process, comparing its prediction performance with that of a time-varying AR(1) process and of stationary AR(1) and AR(2) models; for moderate sample sizes, the stationary models may well deliver more accurate predictions in terms of empirical mean squared error. The authors then develop a general methodology for choosing a prediction model from a finite sample. The observations are divided into three sets: a training set \(M_0\), a validation set \(M_1\) and a final validation set \(M_2\). Based on the observations in \(M_0\), they determine the locally stationary model that predicts best into \(M_1\) (among a class of locally stationary candidates), in the sense of minimising the empirical mean squared prediction error of linear \(h\)-step predictors, and likewise the best stationary model. Whether the best locally stationary or the best stationary model is selected is then decided by comparing the performance of the two models on the second validation set \(M_2\). Under certain assumptions the authors show rigorously that, with high probability, the selected model will perform empirically better when forecasting future, not yet observed values. Various simulations are presented and an \texttt{R} package is provided. The method is applied to three real data sets: London housing prices, temperature data, and volatility around the time of the 2016 EU referendum in the UK. As a further theoretical result, the authors prove that the localised Yule-Walker estimator in locally stationary models is strongly and uniformly consistent.
Property / review text: The paper is concerned with the bias-variance trade-off that arises when choosing between stationary and non-stationary models for prediction. The authors find that a simple stationary model can often predict better in finite samples than a more complicated non-stationary one. They first illustrate this phenomenon with a time-varying AR(2) process, comparing its prediction performance with that of a time-varying AR(1) process and of stationary AR(1) and AR(2) models; for moderate sample sizes, the stationary models may well deliver more accurate predictions in terms of empirical mean squared error. The authors then develop a general methodology for choosing a prediction model from a finite sample. The observations are divided into three sets: a training set \(M_0\), a validation set \(M_1\) and a final validation set \(M_2\). Based on the observations in \(M_0\), they determine the locally stationary model that predicts best into \(M_1\) (among a class of locally stationary candidates), in the sense of minimising the empirical mean squared prediction error of linear \(h\)-step predictors, and likewise the best stationary model. Whether the best locally stationary or the best stationary model is selected is then decided by comparing the performance of the two models on the second validation set \(M_2\). Under certain assumptions the authors show rigorously that, with high probability, the selected model will perform empirically better when forecasting future, not yet observed values. Various simulations are presented and an \texttt{R} package is provided. The method is applied to three real data sets: London housing prices, temperature data, and volatility around the time of the 2016 EU referendum in the UK. As a further theoretical result, the authors prove that the localised Yule-Walker estimator in locally stationary models is strongly and uniformly consistent. / rank
 
Normal rank
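The following is a minimal, illustrative sketch (in Python with NumPy, not the authors' \texttt{R} package) of the selection rule described in the review text above: fit candidate stationary and "locally stationary" AR models on \(M_0\), pick the best of each class by its empirical \(h\)-step mean squared prediction error on \(M_1\), and let a final comparison on \(M_2\) decide between the two winners. The simulated time-varying AR(1) series, the candidate orders, the window widths, the segment sizes and the choice \(h = 1\) are illustrative assumptions, not values from the paper; the "localized" fit here is simply an ordinary Yule-Walker fit on the most recent observations of \(M_0\), a simplification of the paper's localised Yule-Walker estimator.

import numpy as np

def yule_walker(x, p):
    # Yule-Walker AR(p) coefficients from the biased sample autocovariances of x.
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    gamma = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, gamma[1:p + 1])

def h_step_forecast(history, phi, h):
    # Iterated linear h-step forecast from an AR model with coefficients phi.
    buf = list(history[-len(phi):])
    for _ in range(h):
        buf.append(np.dot(phi, buf[::-1][:len(phi)]))
    return buf[-1]

def mspe(x, phi, idx, h):
    # Empirical mean squared h-step prediction error over the time points idx.
    errs = [(x[t] - h_step_forecast(x[:t - h + 1], phi, h)) ** 2 for t in idx]
    return np.mean(errs)

rng = np.random.default_rng(0)

# Illustrative data: a time-varying AR(1) process with a slowly moving coefficient.
T = 600
x = np.zeros(T)
for t in range(1, T):
    x[t] = (0.2 + 0.5 * t / T) * x[t - 1] + rng.standard_normal()

h = 1
M0, M1, M2 = np.arange(0, 400), np.arange(400, 500), np.arange(500, 600)

# Best stationary AR model: Yule-Walker on all of M0, order chosen on M1.
stat_fits = {p: yule_walker(x[M0], p) for p in (1, 2, 3)}
best_stat = min(stat_fits, key=lambda p: mspe(x, stat_fits[p], M1, h))

# Best "locally stationary" AR model: Yule-Walker on only the last N
# observations of M0, with order and window width chosen on M1.
loc_fits = {(p, N): yule_walker(x[M0][-N:], p)
            for p in (1, 2, 3) for N in (50, 100, 200)}
best_loc = min(loc_fits, key=lambda k: mspe(x, loc_fits[k], M1, h))

# Final decision: compare the two winners on the second validation set M2.
stat_err = mspe(x, stat_fits[best_stat], M2, h)
loc_err = mspe(x, loc_fits[best_loc], M2, h)
choice = "stationary" if stat_err <= loc_err else "locally stationary"
print(f"stationary AR({best_stat}): MSPE {stat_err:.3f}   "
      f"localized AR({best_loc[0]}), window {best_loc[1]}: MSPE {loc_err:.3f}   "
      f"-> use the {choice} model")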
Property / Mathematics Subject Classification ID
 
Property / Mathematics Subject Classification ID: 62M20 / rank
 
Normal rank
Property / Mathematics Subject Classification ID
 
Property / Mathematics Subject Classification ID: 62M10 / rank
 
Normal rank
Property / Mathematics Subject Classification ID
 
Property / Mathematics Subject Classification ID: 62P20 / rank
 
Normal rank
Property / zbMATH DE Number
 
Property / zbMATH DE Number: 7113729 / rank
 
Normal rank
Property / zbMATH Keywords
 
bias-variance trade-off
Property / zbMATH Keywords: bias-variance trade-off / rank
 
Normal rank
Property / zbMATH Keywords
 
forecasting
Property / zbMATH Keywords: forecasting / rank
 
Normal rank
Property / zbMATH Keywords
 
prediction
Property / zbMATH Keywords: prediction / rank
 
Normal rank
Property / zbMATH Keywords
 
Yule-Walker estimate
Property / zbMATH Keywords: Yule-Walker estimate / rank
 
Normal rank
Property / zbMATH Keywords
 
local stationarity versus stationarity
Property / zbMATH Keywords: local stationarity versus stationarity / rank
 
Normal rank
Property / describes a project that uses
 
Property / describes a project that uses: R / rank
 
Normal rank

Revision as of 10:04, 26 April 2024

Language: English
Label: Predictive, finite-sample model choice for time series under stationarity and non-stationarity
Description: scientific article; zbMATH DE number 7113729
Also known as: Predictive, finite-sample model choice for time series under stationarity and non-stationarity

Statements

14 November 2016
4 October 2019
math.ST
stat.ME
stat.TH
