Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
From MaRDI portal
Publication: 5254990
DOI: 10.1137/140957639 · zbMath: 1320.90047 · arXiv: 1402.4419 · OpenAlex: W2120717492 · MaRDI QID: Q5254990
Publication date: 11 June 2015
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1402.4419
Mathematics Subject Classification:
- Convex programming (90C25)
- Large-scale problems in mathematical programming (90C06)
- Nonconvex programming, global optimization (90C26)
- Learning and adaptive systems in artificial intelligence (68T05)
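The record carries no abstract, but the paper's subject — incremental majorization-minimization for finite sums (the MISO scheme) — can be illustrated briefly. Below is a minimal sketch, assuming each component function is L-smooth and majorized by a quadratic surrogate anchored at its last visited point; the name `miso_quadratic` and its parameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def miso_quadratic(grad_i, n, dim, L, n_epochs=50, rng=None):
    """Incremental MM sketch with quadratic surrogates (MISO-style).

    Each component f_i is majorized at an anchor point kappa_i by the
    L-smooth quadratic surrogate
        g_i(theta) = f_i(kappa_i) + <grad f_i(kappa_i), theta - kappa_i>
                     + (L/2) ||theta - kappa_i||^2.
    Minimizing the average of the n surrogates reduces to averaging the
    per-component vectors z_i = kappa_i - grad f_i(kappa_i) / L, so one
    iteration refreshes a single z_i and updates the running mean.
    """
    rng = np.random.default_rng(rng)
    z = np.zeros((n, dim))       # z_i for each component surrogate
    theta = z.mean(axis=0)       # minimizer of the average surrogate
    for _ in range(n_epochs * n):
        i = rng.integers(n)                    # pick one component
        z_new = theta - grad_i(i, theta) / L   # refresh its surrogate
        theta += (z_new - z[i]) / n            # incremental mean update
        z[i] = z_new
    return theta
```

At a fixed point, theta equals the mean of theta - grad f_i(theta)/L over all i, so the average gradient vanishes; for a least-squares finite sum with full-rank design, the iterates settle at the least-squares solution.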
Related Items (49)
- Majorization-minimization generalized Krylov subspace methods for \({\ell _p}\)-\({\ell _q}\) optimization applied to image restoration
- The log-exponential smoothing technique and Nesterov's accelerated gradient method for generalized Sylvester problems
- Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems
- A generalized proximal linearized algorithm for DC functions with application to the optimal size of the firm problem
- Composite Difference-Max Programs for Modern Statistical Estimation Problems
- Proximal average approximated incremental gradient descent for composite penalty regularized empirical risk minimization
- Accelerating incremental gradient optimization with curvature information
- An aggressive reduction on the complexity of optimization for non-strongly convex objectives
- Convergence rates of accelerated proximal gradient algorithms under independent noise
- Linear convergence of cyclic SAGA
- Generalized forward-backward splitting with penalization for monotone inclusion problems
- Efficiency of higher-order algorithms for minimizing composite functions
- Random-reshuffled SARAH does not need full gradient computations
- Modulus-based iterative methods for constrained \(\ell_p\)-\(\ell_q\) minimization
- Recent Theoretical Advances in Non-Convex Optimization
- On the linear convergence of the approximate proximal splitting method for non-smooth convex optimization
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
- Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
- A Coordinate-Descent Primal-Dual Algorithm with Large Step Size and Possibly Nonseparable Functions
- Improved SVRG for finite sum structure optimization with application to binary classification
- Unnamed Item
- Stochastic variance reduced gradient methods using a trust-region-like scheme
- Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
- Nonconvex nonsmooth optimization via convex-nonconvex majorization-minimization
- Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
- IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
- Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute" gradient for structured convex optimization
- Unnamed Item
- Unnamed Item
- Linear convergence of inexact descent method and inexact proximal gradient algorithms for lower-order regularization problems
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- Optimizing cluster structures with inner product induced norm based dissimilarity measures: theoretical development and convergence analysis
- Stochastic sub-sampled Newton method with variance reduction
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
- Stream-suitable optimization algorithms for some soft-margin support vector machine variants
- An outer-inner linearization method for non-convex and nondifferentiable composite regularization problems
- Stochastic DCA for minimizing a large sum of DC functions with application to multi-class logistic regression
- Coordinate descent with arbitrary sampling I: algorithms and complexity
- An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration
- Riemannian Stochastic Variance Reduced Gradient Algorithm with Retraction and Vector Transport
- Adaptive Sampling for Incremental Optimization Using Stochastic Gradient Descent
- Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
- Unnamed Item
- A Bregman Forward-Backward Linesearch Algorithm for Nonconvex Composite Optimization: Superlinear Convergence to Nonisolated Local Minima
- Unnamed Item
- Incremental Quasi-Subgradient Method for Minimizing Sum of Geodesic Quasi-Convex Functions on Riemannian Manifolds with Applications
- Bregman Finito/MISO for Nonconvex Regularized Finite Sum Minimization without Lipschitz Gradient Continuity
- Stochastic Difference-of-Convex-Functions Algorithms for Nonconvex Programming
- A hybrid stochastic optimization framework for composite nonconvex optimization
Uses Software
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks
- Gradient methods for minimizing composite functions
- An optimal method for stochastic composite optimization
- Minimizing finite sums with the stochastic average gradient
- Enhancing sparsity by reweighted \(\ell _{1}\) minimization
- Monotonicity of quadratic-approximation algorithms
- Convex analysis and nonlinear optimization. Theory and examples.
- Introductory lectures on convex optimization. A basic course.
- DC programming: overview.
- A Unified Convergence Analysis of Block Successive Minimization Methods for Nonsmooth Optimization
- Proximal Splitting Methods in Signal Processing
- Optimization with Sparsity-Inducing Penalties
- An optimal algorithm for stochastic strongly-convex optimization
- Fixed-Point Continuation for \(\ell_1\)-Minimization: Methodology and Convergence
- Graphical Models, Exponential Families, and Variational Inference
- Robust Stochastic Approximation Approach to Stochastic Programming
- Sparse Reconstruction by Separable Approximation
- Recovering Sparse Signals With a Certain Family of Nonconvex Penalties and DC Programming
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Accuracy Guarantees for \(\ell_1\)-Recovery
- On the Convergence of Block Coordinate Descent Type Methods
- Model Selection and Estimation in Regression with Grouped Variables
- A Convergent Incremental Gradient Method with a Constant Step Size
- Signal Recovery by Proximal Forward-Backward Splitting
- Logistic regression, AdaBoost and Bregman distances