Aggregation of estimators and stochastic optimization
From MaRDI portal
Publication: Q2197367
Cites work
- Scientific article; zbMATH DE number 5957353 (no title available)
- Scientific article; zbMATH DE number 3790208 (no title available)
- Scientific article; zbMATH DE number 67633 (no title available)
- Scientific article; zbMATH DE number 1522808 (no title available)
- Scientific article; zbMATH DE number 3446442 (no title available)
- Scientific article; zbMATH DE number 847282 (no title available)
- Scientific article; zbMATH DE number 3336465 (no title available)
- Scientific article; zbMATH DE number 3083069 (no title available)
- DOI: 10.1162/153244304773936108
- A Stochastic Approximation Method
- A new algorithm for estimating the effective dimension-reduction subspace
- Adaptive Regression by Mixing
- Aggregated estimators and empirical complexity for least square regression
- Aggregating regression procedures to improve performance
- Aggregation and Sparsity Via ℓ1 Penalized Least Squares
- Aggregation by Exponential Weighting and Sharp Oracle Inequalities
- Aggregation for Gaussian regression
- Approximation and learning by greedy algorithms
- Boosting a weak learning algorithm by majority
- Boosting with early stopping: convergence and consistency
- Combining different procedures for adaptive regression
- Density estimation with stagewise optimization of the empirical risk
- Direct estimation of the index coefficient in a single-index model
- Functional aggregation for nonparametric regression
- Generalized mirror averaging and D-convex aggregation
- High-dimensional generalized linear models and the lasso
- Learning Theory and Kernel Machines
- Learning by mirror averaging
- Linear and convex aggregation of density estimators
- Local Rademacher complexities and oracle inequalities in risk minimization (2004 IMS Medallion Lecture, with discussions and rejoinder)
- Mixing strategies for density estimation
- Model selection in nonparametric regression
- Model selection via testing: an alternative to (penalized) maximum likelihood estimators
- On the Bayes-risk consistency of regularized boosting methods
- Prediction, Learning, and Games
- Primal-dual subgradient methods for convex problems
- Randomized prediction of individual sequences
- Recursive aggregation of estimators by the mirror descent algorithm with averaging
- Some theory for generalized boosting algorithms
- Sparse Density Estimation with ℓ1 Penalties
- Sparse boosting
- Sparsity oracle inequalities for the Lasso
- Statistical learning theory and stochastic optimization. École d'Été de Probabilités de Saint-Flour XXXI -- 2001
- Suboptimality of Penalized Empirical Risk Minimization in Classification
- Theory of statistical inference and information. Transl. from the Slovak by the author
- Variational Analysis
Cited in (5)
- Optimal rates and adaptation in the single-index model using aggregation
- Aggregation of regularized solutions from multiple observation models
- Recursive aggregation of estimators by the mirror descent algorithm with averaging
- An introduction to nonparametric adaptive estimation
- Aggregating estimates by convex optimization