Adaptive deep learning for nonlinear time series models
From MaRDI portal
Publication: Q6632604
DOI: 10.3150/24-BEJ1726 · MaRDI QID: Q6632604
Authors: Daisuke Kurisu, Riku Fukami, Yuta Koike
Publication date: 5 November 2024
Published in: Bernoulli
Recommendations
- A uniform central limit theorem for neural network-based autoregressive processes with applications to change-point analysis
- Optimal nonparametric inference via deep neural network
- Nonparametric time series prediction through adaptive model selection
- Autoregressive approximations to nonstationary time series with inference and applications
- Bayesian Analysis of Nonlinear Autoregression Models Based on Neural Networks
MSC classification: Nonparametric inference (62Gxx) · Artificial intelligence (68Txx) · Inference from stochastic processes (62Mxx)
Cites Work
- The elements of statistical learning. Data mining, inference, and prediction
- Nearly unbiased variable selection under minimax concave penalty
- Weak convergence and empirical processes. With applications to statistics
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Analysis of multi-stage convex relaxation for sparse regularization
- Nonlinear time series. Nonparametric and parametric methods
- UNIFORM CONVERGENCE RATES FOR KERNEL ESTIMATION WITH DEPENDENT DATA
- Title not available
- Specification, estimation, and evaluation of smooth transition autoregressive models
- MULTIVARIATE LOCAL POLYNOMIAL REGRESSION FOR TIME SERIES: UNIFORM STRONG CONSISTENCY AND RATES
- Functional-Coefficient Autoregressive Models
- Title not available
- A distribution-free theory of nonparametric regression
- On Single-Index Coefficient Regression Models
- Towards a Unified Approach for Proving Geometric Ergodicity and Mixing Properties of Nonlinear Autoregressive Processes
- Nonparametric regression for locally stationary time series
- Time-varying nonlinear regression models: nonparametric estimation and model selection
- NONPARAMETRIC ESTIMATORS FOR TIME SERIES
- Title not available
- Yet Another Look at Harris’ Ergodic Theorem for Markov Chains
- Multivariate regression estimation: Local polynomial fitting for time series
- Self-normalized processes: exponential inequalities, moment bounds and iterated logarithm laws.
- Nonparametric regression with dependent errors
- Stability and the Lyapounov exponent of threshold AR-ARCH models
- Confidence bands in nonparametric time series regression
- Non-linear time series and Markov chains
- Modelling nonlinear random vibrations using an amplitude-dependent autoregressive time series model
- \(L_1\) geometric ergodicity of a multivariate nonlinear AR model with an ARCH term.
- Nonconvex Sparse Regularization for Deep Neural Networks and Its Optimality
- On geometric ergodicity of nonlinear autoregressive models
- Threshold variable selection using nonparametric methods
- Simultaneous nonparametric inference of time series
- Geometric ergodicity of nonlinear autoregressive models with changing conditional variances
- Nonparametric function estimation for time series by local average estimators
- Some results on random design regression with long memory errors and predictors
- On nonparametric estimation in nonlinear AR(1)-models
- On deep learning as a remedy for the curse of dimensionality in nonparametric regression
- Inequalities for a Pair of Processes Stopped at a Random Time
- Nonparametric time series regression
- Model selection in kernel ridge regression
- Generalization bounds for non-stationary mixing processes
- Nonparametric regression using deep neural networks with ReLU activation function
- Estimation error analysis of deep learning on the regression problem on the variable exponent Besov space
- On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces
- On the rate of convergence of fully connected deep neural network regression estimates
- Deep Neural Network Approximation Theory
- Title not available
- To tune or not to tune the number of trees in random forest
- Misspecified diffusion models with high-frequency observations and an application to neural networks
- On the rate of convergence of a deep recurrent neural network estimate in a regression problem with dependent data
- Drift estimation for a multi-dimensional diffusion process using deep neural networks