Robust high dimensional learning for Lipschitz and convex losses
Publication:5149262
Authors: Geoffrey Chinot, Guillaume Lecué, Matthieu Lerasle
Publication date: 8 February 2021
Full work available at URL: https://arxiv.org/abs/1905.04281
Keywords: total variation; Lasso; SLOPE; group Lasso; robust learning; Lipschitz and convex loss functions; Rademacher complexity bounds; sparsity bounds
Cites Work
- Title not available
- Statistics for high-dimensional data. Methods, theory and applications.
- Simultaneous analysis of Lasso and Dantzig selector
- Sparsity and Smoothness Via the Fused Lasso
- A fast unified algorithm for solving group-lasso penalized learning problems
- Estimation and testing under sparsity. École d'Été de Probabilités de Saint-Flour XLV -- 2015
- The Group Lasso for Logistic Regression
- Structured sparsity through convex optimization
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- Title not available
- Title not available
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- Robust linear least squares regression
- The space complexity of approximating the frequency moments
- Smooth discrimination analysis
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Local Rademacher complexities
- SLOPE-adaptive variable selection via convex optimization
- An Iterative Regularization Method for Total Variation-Based Image Restoration
- Convexity, Classification, and Risk Bounds
- Random generation of combinatorial structures from a uniform distribution
- Interactions between compressed sensing random matrices and high dimensional geometry
- Upper and lower bounds for stochastic processes. Modern methods and classical problems
- Title not available
- Optimal aggregation of classifiers in statistical learning.
- A new method for estimation and model selection: \(\rho\)-estimation
- Living on the edge: phase transitions in convex programs with random data
- Atomic Norm Denoising With Applications to Line Spectral Estimation
- Empirical minimization
- Tikhonov Regularization and Total Least Squares
- Stabilité et instabilité du risque minimax pour des variables indépendantes équidistribuées
- Gaussian averages of interpolated bodies and applications to approximate reconstruction
- Sub-Gaussian mean estimators
- Robust low-rank matrix estimation
- Title not available
- Estimating structured high-dimensional covariance and precision matrices: optimal rates and adaptive estimation
- Regularization and the small-ball method. I: Sparse recovery
- Slope meets Lasso: improved oracle bounds and optimality
- Towards the study of least squares estimators with convex penalty
- Regularization and the small-ball method. II: Complexity dependent error rates
- On multiplier processes under weak moment assumptions
- ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
- On sparsity inducing regularization methods for machine learning
Cited In (5)
- Risk-Based Robust Statistical Learning by Stochastic Difference-of-Convex Value-Function Optimization
- Iteratively reweighted \(\ell_1\)-penalized robust regression
- Learning with risks based on M-location
- Sparse additive support vector machines in bounded variation space
- Robust supervised learning with coordinate gradient descent