Machine learning algorithms of relaxation subgradient method with space extension
From MaRDI portal
Publication: Q2117655
DOI: 10.1007/978-3-030-77876-7_32
zbMATH Open: 1489.90120
OpenAlex: W3170092489
MaRDI QID: Q2117655
FDO: Q2117655
Elena S. Kagan, Vladimir V. Meshechkin, Lev A. Kazakovtsev, Vladimir N. Krutikov
Publication date: 22 March 2022
Full work available at URL: https://doi.org/10.1007/978-3-030-77876-7_32
Cites Work
- The conjugate gradient method in extremal problems
- New variants of bundle methods
- Online Learning and Online Convex Optimization
- Nondifferentiable optimization and polynomial problems
- Minimization of unsmooth functionals
- Approximate level method for nonsmooth convex minimization
- Universal method for stochastic composite optimization problems
- Method of conjugate subgradients with constrained memory
- On the properties of the method of minimization for convex functions with relaxation on the distance to extremum
Cited In (3)
Recommendations
- Sublinear optimization for machine learning
- Alternating Direction Method of Multipliers for Machine Learning
- A quasi-Newton approach to nonsmooth convex optimization problems in machine learning
- Optimization problems for machine learning: a survey
- Non-convex Optimization for Machine Learning
- A Douglas-Rachford method for sparse extreme learning machine