A generalization bound of deep neural networks for dependent data
From MaRDI portal
Publication: Q6540912
Recommendations
- Generalization bounds for non-stationary mixing processes
- Generalization bounds for time series prediction with non-stationary processes
- Stability bounds for stationary \(\varphi\)-mixing and \(\beta\)-mixing processes
- A tight upper bound on the generalization error of feedforward neural networks
- Chromatic PAC-Bayes bounds for non-IID data: applications to ranking and stationary \(\beta\)-mixing processes
Cites work
- Adaptive group Lasso neural network models for functions of few variables and time-dependent data
- Asymptotic theory with hierarchical autocorrelation: Ornstein-Uhlenbeck tree models
- Deep learning for finance: deep portfolios
- Direct likelihood-based inference for discretely observed stochastic compartmental models of infectious disease
- Generalization and Robustness of Batched Weighted Average Algorithm with V-Geometrically Ergodic Markov Data
- Generalization bounds for averaged classifiers
- Generalization bounds for non-stationary mixing processes
- Learning from non-iid data: fast rates for the one-vs-all multiclass plug-in classifiers
- Minimum complexity regression estimation with weakly dependent observations
- Nonlinear Regression with Dependent Observations
- Recovery guarantees for polynomial coefficients from weakly dependent data with outliers
- Searching for minimal optimal neural networks
- Stability bounds for stationary \(\varphi\)-mixing and \(\beta\)-mixing processes
- Strong mixing properties of discrete-valued time series with exogenous covariates
- The Generalization Ability of Online Algorithms for Dependent Data
- The generalization performance of ERM algorithm with strongly mixing observations