From inexact optimization to learning via gradient concentration
DOI: 10.1007/s10589-022-00408-5 · OpenAlex: W3166001443 · MaRDI QID: Q2111477 · FDO: Q2111477
Authors: Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco
Publication date: 16 January 2023
Published in: Computational Optimization and Applications
Full work available at URL: https://arxiv.org/abs/2106.05397
Recommendations
- Deep learning: a statistical viewpoint
- scientific article; zbMATH DE number 1786133
- Non-asymptotic convergence analysis of inexact gradient methods for machine learning without strong convexity
- Regularization techniques and suboptimal solutions to optimization problems in learning from data
- Optimization problems for machine learning: a survey
Cites Work
- Nonparametric stochastic approximation with large step-sizes
- Support Vector Machines
- High-Dimensional Statistics
- High-Dimensional Probability
- Concentration inequalities. A nonasymptotic theory of independence
- DOI: 10.1162/153244302760200704
- On early stopping in gradient descent learning
- Understanding Machine Learning
- Statistical guarantees for the EM algorithm: from population to sample-based analysis
- DOI: 10.1162/153244303321897690
- Title not available
- Gradient Convergence in Gradient Methods with Errors
- An Iteration Formula for Fredholm Integral Equations of the First Kind
- Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates
- Title not available
- On regularization algorithms in learning theory
- Convex optimization: algorithms and complexity
- Optimal rates for regularization of statistical inverse learning problems
- Robust Estimation via Robust Gradient Estimation
- Convergence rates of kernel conjugate gradient for random design regression
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Early stopping for statistical inverse problems via truncated SVD estimation
- Optimal Rates for Multi-pass Stochastic Gradient Methods
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
- Title not available
- Iterative regularization for learning with convex loss functions
- Title not available
- Title not available
- A Vector-Contraction Inequality for Rademacher Complexities
- Title not available
- Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
- Early Stopping for Kernel Boosting Algorithms: A General Analysis With Localized Complexities
Cited In (12)
- Title not available
- The importance of convexity in learning with squared loss
- Special issue for SIMAI 2020-2021: large-scale optimization and applications
- Title not available
- Gradient-Based Discrete-Time Concurrent Learning for Standalone Function Approximation
- Multilevel Fine-Tuning: Closing Generalization Gaps in Approximation of Solution Maps under a Limited Budget for Training Data
- Implicit regularization with strongly convex bias: Stability and acceleration
- Concentration estimates for learning with \(\ell^1\)-regularizer and data dependent hypothesis spaces
- Towards an automatic uncertainty compiler
- From inexact optimization to learning via gradient concentration
- A Note on Lewicki-Sejnowski Gradient for Learning Overcomplete Representations
- Approximation and learning of convex superpositions