Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning

From MaRDI portal

Publication:5254990

DOI: 10.1137/140957639
zbMath: 1320.90047
arXiv: 1402.4419
OpenAlex: W2120717492
MaRDI QID: Q5254990

Julien Mairal

Publication date: 11 June 2015

Published in: SIAM Journal on Optimization

Full work available at URL: https://arxiv.org/abs/1402.4419
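The incremental majorization-minimization idea named in the title (the MISO-type scheme, also echoed by the "Bregman Finito/MISO" related item below) can be sketched as follows: each component function keeps a majorizing surrogate anchored at the point where it was last visited; one surrogate is refreshed per iteration, and the sum of surrogates is minimized in closed form. This is a minimal illustration, not the paper's implementation: the quadratic surrogates, least-squares objective, problem sizes, and iteration count are all illustrative assumptions.

```python
import numpy as np

# Illustrative finite-sum least-squares problem (assumed data, not from the paper):
# minimize F(theta) = (1/n) * sum_i f_i(theta),  f_i(theta) = 0.5*(a_i.theta - b_i)^2
rng = np.random.default_rng(0)
n, d = 20, 3
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(theta, i):
    """Gradient of the i-th component f_i."""
    return A[i] * (A[i] @ theta - b[i])

# L upper-bounds every component's gradient-Lipschitz constant, so the
# quadratic g_i(t) = f_i(k_i) + grad f_i(k_i).(t - k_i) + (L/2)||t - k_i||^2
# majorizes f_i at its anchor k_i.
L = np.max(np.sum(A * A, axis=1))

# Minimizing (1/n) * sum_i g_i gives theta = mean_i z_i with
# z_i = k_i - grad_i(k_i)/L, so we store the z_i and keep their average.
theta = np.zeros(d)
z = np.array([theta - grad_i(theta, i) / L for i in range(n)])
theta = z.mean(axis=0)

for _ in range(10_000):           # iteration count chosen for illustration
    i = rng.integers(n)           # refresh one randomly chosen surrogate
    z_new = theta - grad_i(theta, i) / L
    theta = theta + (z_new - z[i]) / n   # update the running average of z
    z[i] = z_new

theta_star = np.linalg.lstsq(A, b, rcond=None)[0]
```

After enough passes, `theta` approaches the least-squares solution `theta_star`; the per-iteration cost stays O(d), which is what makes the incremental scheme attractive at large n.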




Related Items (49)

Majorization-minimization generalized Krylov subspace methods for \({\ell _p}\)-\({\ell _q}\) optimization applied to image restoration
The log-exponential smoothing technique and Nesterov's accelerated gradient method for generalized Sylvester problems
Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems
A generalized proximal linearized algorithm for DC functions with application to the optimal size of the firm problem
Composite Difference-Max Programs for Modern Statistical Estimation Problems
Proximal average approximated incremental gradient descent for composite penalty regularized empirical risk minimization
Accelerating incremental gradient optimization with curvature information
An aggressive reduction on the complexity of optimization for non-strongly convex objectives
Convergence rates of accelerated proximal gradient algorithms under independent noise
Linear convergence of cyclic SAGA
Generalized forward-backward splitting with penalization for monotone inclusion problems
Efficiency of higher-order algorithms for minimizing composite functions
Random-reshuffled SARAH does not need full gradient computations
Modulus-based iterative methods for constrained \({\ell _p}\)-\({\ell _q}\) minimization
Recent Theoretical Advances in Non-Convex Optimization
On the linear convergence of the approximate proximal splitting method for non-smooth convex optimization
Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
A Coordinate-Descent Primal-Dual Algorithm with Large Step Size and Possibly Nonseparable Functions
Improved SVRG for finite sum structure optimization with application to binary classification
Unnamed Item
Stochastic variance reduced gradient methods using a trust-region-like scheme
Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
Nonconvex nonsmooth optimization via convex-nonconvex majorization-minimization
Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
Generalized stochastic Frank-Wolfe algorithm with stochastic ``substitute'' gradient for structured convex optimization
Unnamed Item
Unnamed Item
Linear convergence of inexact descent method and inexact proximal gradient algorithms for lower-order regularization problems
Stochastic proximal quasi-Newton methods for non-convex composite optimization
Optimizing cluster structures with inner product induced norm based dissimilarity measures: theoretical development and convergence analysis
Stochastic sub-sampled Newton method with variance reduction
Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
Stream-suitable optimization algorithms for some soft-margin support vector machine variants
An outer-inner linearization method for non-convex and nondifferentiable composite regularization problems
Stochastic DCA for minimizing a large sum of DC functions with application to multi-class logistic regression
Coordinate descent with arbitrary sampling I: algorithms and complexity
An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration
Riemannian Stochastic Variance Reduced Gradient Algorithm with Retraction and Vector Transport
Adaptive Sampling for Incremental Optimization Using Stochastic Gradient Descent
Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
Unnamed Item
A Bregman Forward-Backward Linesearch Algorithm for Nonconvex Composite Optimization: Superlinear Convergence to Nonisolated Local Minima
Unnamed Item
Incremental Quasi-Subgradient Method for Minimizing Sum of Geodesic Quasi-Convex Functions on Riemannian Manifolds with Applications
Bregman Finito/MISO for Nonconvex Regularized Finite Sum Minimization without Lipschitz Gradient Continuity
Stochastic Difference-of-Convex-Functions Algorithms for Nonconvex Programming
A hybrid stochastic optimization framework for composite nonconvex optimization


Uses Software


Cites Work


This page was built for publication: Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning