Aggregation of estimators and stochastic optimization
zbMATH Open: 1455.62080 · MaRDI QID: Q2197367
Authors: Alexandre B. Tsybakov
Publication date: 31 August 2020
Published in: Journal de la Société Française de Statistique & Revue de Statistique Appliquée
Full work available at URL: http://www.numdam.org/item/JSFS_2008__149_1_3_0
Cites Work
- Title not available
- High-dimensional generalized linear models and the lasso
- Variational Analysis
- Title not available
- Learning Theory and Kernel Machines
- Prediction, Learning, and Games
- Title not available
- Learning by mirror averaging
- Sparsity oracle inequalities for the Lasso
- A Stochastic Approximation Method
- Boosting with early stopping: convergence and consistency
- Primal-dual subgradient methods for convex problems
- Title not available
- Model selection in nonparametric regression
- Combining different procedures for adaptive regression
- Mixing strategies for density estimation.
- Functional aggregation for nonparametric regression.
- Direct estimation of the index coefficient in a single-index model
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Aggregation for Gaussian regression
- Approximation and learning by greedy algorithms
- Model selection via testing: an alternative to (penalized) maximum likelihood estimators.
- Adaptive Regression by Mixing
- Title not available
- Linear and convex aggregation of density estimators
- Aggregation and Sparsity Via ℓ1 Penalized Least Squares
- Sparse Density Estimation with ℓ1 Penalties
- Theory of statistical inference and information. Transl. from the Slovak by the author
- Aggregating regression procedures to improve performance
- Boosting a weak learning algorithm by majority
- Statistical learning theory and stochastic optimization. École d'Été de Probabilités de Saint-Flour XXXI – 2001.
- Title not available
- Title not available
- On the Bayes-risk consistency of regularized boosting methods.
- Recursive aggregation of estimators by the mirror descent algorithm with averaging
- Title not available
- Title not available
- Aggregation by Exponential Weighting and Sharp Oracle Inequalities
- Aggregated estimators and empirical complexity for least square regression
- Generalized mirror averaging and \(D\)-convex aggregation
- Title not available
- DOI: 10.1162/153244304773936108
- Suboptimality of Penalized Empirical Risk Minimization in Classification
- Title not available
- Density estimation with stagewise optimization of the empirical risk
- Randomized prediction of individual sequences
Cited In (5)
- Title not available
- Optimal rates and adaptation in the single-index model using aggregation
- Aggregation of regularized solutions from multiple observation models
- Recursive aggregation of estimators by the mirror descent algorithm with averaging
- Aggregating estimates by convex optimization