Robust high dimensional learning for Lipschitz and convex losses
From MaRDI portal
Publication:5149262
Cites work
- scientific article; zbMATH DE number 6388313 (no title available)
- scientific article; zbMATH DE number 3790208 (no title available)
- scientific article; zbMATH DE number 1332320 (no title available)
- scientific article; zbMATH DE number 845714 (no title available)
- scientific article; zbMATH DE number 6253925 (no title available)
- A fast unified algorithm for solving group-lasso penalize learning problems
- A new method for estimation and model selection: \(\rho\)-estimation
- An Iterative Regularization Method for Total Variation-Based Image Restoration
- Atomic Norm Denoising With Applications to Line Spectral Estimation
- Convexity, Classification, and Risk Bounds
- ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
- Empirical minimization
- Estimating structured high-dimensional covariance and precision matrices: optimal rates and adaptive estimation
- Estimation and testing under sparsity. École d'Été de Probabilités de Saint-Flour XLV -- 2015
- Gaussian averages of interpolated bodies and applications to approximate reconstruction
- Interactions between compressed sensing random matrices and high dimensional geometry
- Living on the edge: phase transitions in convex programs with random data
- Local Rademacher complexities
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- On multiplier processes under weak moment assumptions
- On sparsity inducing regularization methods for machine learning
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Optimal aggregation of classifiers in statistical learning
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII -- 2008
- Random generation of combinatorial structures from a uniform distribution
- Regularization and the small-ball method. I: Sparse recovery
- Regularization and the small-ball method. II: Complexity dependent error rates
- Robust linear least squares regression
- Robust low-rank matrix estimation
- SLOPE-adaptive variable selection via convex optimization
- Simultaneous analysis of Lasso and Dantzig selector
- Slope meets Lasso: improved oracle bounds and optimality
- Smooth discrimination analysis
- Sparsity and Smoothness Via the Fused Lasso
- Stabilité et instabilité du risque minimax pour des variables indépendantes équidistribuées [Stability and instability of the minimax risk for independent, identically distributed variables]
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Statistics for high-dimensional data. Methods, theory and applications
- Structured sparsity through convex optimization
- Sub-Gaussian mean estimators
- The Group Lasso for Logistic Regression
- The space complexity of approximating the frequency moments
- Tikhonov Regularization and Total Least Squares
- Towards the study of least squares estimators with convex penalty
- Upper and lower bounds for stochastic processes. Modern methods and classical problems
Cited in (6)
- Robust supervised learning with coordinate gradient descent
- Robust high dimensional learning for Lipschitz and convex losses
- Risk-Based Robust Statistical Learning by Stochastic Difference-of-Convex Value-Function Optimization
- Iteratively reweighted \(\ell_1\)-penalized robust regression
- Sparse additive support vector machines in bounded variation space
- Learning with risks based on M-location