Dual subgradient algorithms for large-scale nonsmooth learning problems
DOI: 10.1007/s10107-013-0725-1
zbMath: 1305.65149
arXiv: 1302.2349
OpenAlex: W2156374634
MaRDI QID: Q484132
Arkadi Nemirovski, Anatoli B. Juditsky, Bruce C. Cox
Publication date: 18 December 2014
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1302.2349
convex optimization; numerical examples; matrix completion; mirror descent algorithm; first-order algorithms; dual subgradient algorithms; large-scale nonsmooth learning problems; Nesterov's optimal algorithm
Numerical mathematical programming methods (65K05); Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Minimax problems in mathematical programming (90C47); Pattern recognition, speech recognition (68T10); Matrix completion problems (15A83)
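The keywords above mention the mirror descent algorithm for nonsmooth convex problems. As a purely illustrative sketch (not the dual subgradient scheme of this paper), a standard entropic mirror descent step over the probability simplex might look like the following; the objective, step size, and function names here are hypothetical choices for the example:

```python
import math

def entropic_mirror_descent(subgrad, x0, eta, iters):
    # Mirror descent on the probability simplex with the entropy mirror map:
    # multiplicative update x <- x * exp(-eta * g), then renormalize.
    x = list(x0)
    for _ in range(iters):
        g = subgrad(x)
        w = [xi * math.exp(-eta * gi) for xi, gi in zip(x, g)]
        s = sum(w)
        x = [wi / s for wi in w]
    return x

# Toy instance: minimize the linear objective f(x) = <c, x> over the simplex;
# a (sub)gradient of a linear function is the constant vector c, and the
# minimizer is the vertex of the simplex at the smallest entry of c.
c = [0.9, 0.2, 0.7]
x = entropic_mirror_descent(lambda x: c, [1/3, 1/3, 1/3], eta=0.5, iters=200)
```

After 200 iterations the iterate concentrates almost all its mass on the coordinate with the smallest cost (index 1 here), as the exponential weights decay fastest on the larger entries of `c`.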
Cites Work
- Primal-dual subgradient methods for convex problems
- Smooth minimization of non-smooth functions
- High-dimensional covariance matrix estimation in approximate factor models
- Dualization of signal recovery problems
- Dual extrapolation and its applications to solving variational inequalities and related problems
- Conditional gradient algorithms with open loop step size rules
- Introductory lectures on convex optimization. A basic course.
- Non-Euclidean restricted memory level method for large-scale convex optimization
- New variants of bundle methods
- Convergence Analysis of a Proximal-Like Minimization Algorithm Using Bregman Functions
- DOI: 10.1162/15324430260185628
- Accuracy Certificates for Computational Problems with Convex Structure
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- On first-order algorithms for l1/nuclear norm minimization
- Proximité et dualité dans un espace hilbertien