Gradient methods for minimizing composite functions
Publication: 359630
DOI: 10.1007/s10107-012-0629-5 · zbMath: 1287.90067 · OpenAlex: W2030161963 · MaRDI QID: Q359630
Publication date: 12 August 2013
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-012-0629-5
MSC classification: Numerical mathematical programming methods (65K05); Convex programming (90C25); Nonlinear programming (90C30); Fractional programming (90C32)
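The paper studies minimization of composite objectives of the form phi(x) = f(x) + Psi(x), with f smooth convex and Psi convex but possibly nonsmooth, via gradient-type methods built on the composite gradient mapping. The following is only a minimal illustrative sketch of the basic (unaccelerated) proximal/composite gradient step that such methods build on, not the paper's full scheme (which also covers line search and accelerated variants); the lasso-type instance, step size 1/L, and random data below are assumptions for illustration.

```python
# Minimal sketch of a composite (proximal) gradient step for
#   min_x  f(x) + Psi(x),  f(x) = 0.5*||Ax - b||^2,  Psi(x) = lam*||x||_1.
# Illustrative only; not the exact method of the paper.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, n_iter=500):
    """Plain proximal gradient with fixed step 1/L, where L = ||A||_2^2."""
    L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                    # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)   # composite gradient step
    return x

# Illustrative usage with synthetic data (assumed, not from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true
x_hat = proximal_gradient(A, b, lam=0.1)
```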
Related Items
A Trust-region Method for Nonsmooth Nonconvex Optimization, Non-convex regularization and accelerated gradient algorithm for sparse portfolio selection, Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method, Analysis of the Frank-Wolfe method for convex composite optimization involving a logarithmically-homogeneous barrier, Accelerated differential inclusion for convex optimization, An extrapolated iteratively reweighted \(\ell_1\) method with complexity analysis, Accelerated First-Order Methods for Convex Optimization with Locally Lipschitz Continuous Gradient, A diagonal finite element-projection-proximal gradient algorithm for elliptic optimal control problem, Penalized wavelet nonparametric univariate logistic regression for irregular spaced data, Strong Convergence of Trajectories via Inertial Dynamics Combining Hessian-Driven Damping and Tikhonov Regularization for General Convex Minimizations, A speed restart scheme for a dynamics with Hessian-driven damping, Conditions for linear convergence of the gradient method for non-convex optimization, Branch-and-Model: a derivative-free global optimization algorithm, Linearly-convergent FISTA variant for composite optimization with duality, An accelerated tensorial double proximal gradient method for total variation regularization problem, A unified single-loop alternating gradient projection algorithm for nonconvex-concave and convex-nonconcave minimax problems, Accelerated smoothing hard thresholding algorithms for \(\ell_0\) regularized nonsmooth convex regression problem, Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization, Direct nonlinear acceleration, Regularized Newton Method with Global \({\boldsymbol{\mathcal{O}(1/{k}^2)}}\) Convergence, FISTA is an automatic geometrically optimized algorithm for strongly convex functions, Decentralized Gradient Descent Maximization Method for Composite Nonconvex Strongly-Concave Minimax Problems, Optimal Algorithms for Stochastic Complementary Composite Minimization, N-mode minimal tensor extrapolation methods, A class of modified accelerated proximal gradient methods for nonsmooth and nonconvex minimization problems, On a scaled symmetric Dai-Liao-type scheme for constrained system of nonlinear equations with applications, Robust High-Dimensional Regression with Coefficient Thresholding and Its Application to Imaging Data Analysis, Efficiency of higher-order algorithms for minimizing composite functions, Proximal quasi-Newton method for composite optimization over the Stiefel manifold, A refined inertial DC algorithm for DC programming, An extrapolated proximal iteratively reweighted method for nonconvex composite optimization problems, A reduced half thresholding algorithm, Faster first-order primal-dual methods for linear programming using restarts and sharpness, Principled analyses and design of first-order methods with inexact proximal operators, Accelerating inexact successive quadratic approximation for regularized optimization through manifold identification, Convergence of an asynchronous block-coordinate forward-backward algorithm for convex composite optimization, Some adaptive first-order methods for variational inequalities with relatively strongly monotone operators and generalized smoothness, First-order methods for convex optimization, Data-Driven Mirror Descent with Input-Convex Neural Networks, Minimizing oracle-structured composite functions, A variable metric and 
Nesterov extrapolated proximal DCA with backtracking for a composite DC program, A Newton-type proximal gradient method for nonlinear multi-objective optimization problems, Optimal Transport Approximation of 2-Dimensional Measures, An inexact primal-dual smoothing framework for large-scale non-bilinear saddle point problems, Proximal gradient method with extrapolation and line search for a class of non-convex and non-smooth problems, Homogenization with the quasistatic Tresca friction law: qualitative and quantitative results, An \(\ell_{2,0}\)-norm constrained matrix optimization via extended discrete first-order algorithms, Supervised homogeneity fusion: a combinatorial approach, Adaptive proximal SGD based on new estimating sequences for sparser ERM, Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization, Complex-Valued Imaging with Total Variation Regularization: An Application to Full-Waveform Inversion in Visco-acoustic Media, Learnable Descent Algorithm for Nonsmooth Nonconvex Image Reconstruction, A primal-dual flow for affine constrained convex optimization, Inexact model: a framework for optimization and variational inequalities, Universal intermediate gradient method for convex problems with inexact oracle, High-Order Optimization Methods for Fully Composite Problems, Scaled, Inexact, and Adaptive Generalized FISTA for Strongly Convex Optimization, Additive Schwarz methods for convex optimization with backtracking, Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization, Alternating direction method of multipliers with variable metric indefinite proximal terms for convex optimization, Best subset selection via a modern optimization lens, A unified convergence rate analysis of the accelerated smoothed gap reduction algorithm, A self-calibrated direct approach to precision matrix estimation and linear discriminant analysis in high dimensions, Accelerated methods with fastly vanishing subgradients for structured non-smooth minimization, Accelerated proximal algorithms with a correction term for monotone inclusions, A high-dimensional M-estimator framework for bi-level variable selection, GSDAR: a fast Newton algorithm for \(\ell_0\) regularized generalized linear models with statistical guarantee, Matrix completion via max-norm constrained optimization, OSGA: a fast subgradient algorithm with optimal complexity, Inexact coordinate descent: complexity and preconditioning, Optimized first-order methods for smooth convex minimization, A dual method for minimizing a nonsmooth objective over one smooth inequality constraint, Nonnegative data interpolation by spherical splines, New results on subgradient methods for strongly convex optimization problems with a unified analysis, Accelerating \(\ell^1\)-\(\ell^2\) deblurring using wavelet expansions of operators, Fast and scalable Lasso via stochastic Frank-Wolfe methods with a convergence guarantee, Convergence analysis of primal-dual based methods for total variation minimization with finite element approximation, On group-wise \(\ell_p\) regularization: theory and efficient algorithms, Adaptive restart of the optimized gradient method for convex optimization, Exact worst-case convergence rates of the proximal gradient method for composite convex minimization, A unified approach to error bounds for structured convex optimization problems, Dual approaches to the minimization of strongly convex functionals with a simple structure under affine constraints, An 
inertial forward-backward algorithm for monotone inclusions, iPiasco: inertial proximal algorithm for strongly convex optimization, Fast first-order methods for composite convex optimization with backtracking, Optimal subgradient algorithms for large-scale convex optimization in simple domains, Inexact proximal stochastic gradient method for convex composite optimization, Metric selection in fast dual forward-backward splitting, Hierarchical sparse modeling: a choice of two group Lasso formulations, A modified strictly contractive peaceman-Rachford splitting method for multi-block separable convex programming, Accelerated first-order methods for hyperbolic programming, Conditional gradient type methods for composite nonlinear and stochastic optimization, Optimal computational and statistical rates of convergence for sparse nonconvex learning problems, Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization, Proximal alternating penalty algorithms for nonsmooth constrained convex optimization, Conditional gradient algorithms for norm-regularized smooth convex optimization, Universal gradient methods for convex optimization problems, On the complexity analysis of randomized block-coordinate descent methods, A Barzilai-Borwein type method for minimizing composite functions, On variance reduction for stochastic smooth convex optimization with multiplicative noise, An alternating direction method of multipliers with the BFGS update for structured convex quadratic optimization, On the quality of first-order approximation of functions with Hölder continuous gradient, A proximal difference-of-convex algorithm with extrapolation, Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates, Universal method for stochastic composite optimization problems, A pseudo-heuristic parameter selection rule for \(l^1\)-regularized minimization problems, Pathwise coordinate optimization for sparse learning: algorithm and theory, On the convergence analysis of the optimized gradient method, Consistent learning by composite proximal thresholding, Decomposable norm minimization with proximal-gradient homotopy algorithm, Accelerating the DC algorithm for smooth functions, DC formulations and algorithms for sparse optimization problems, Nesterov's smoothing technique and minimizing differences of convex functions for hierarchical clustering, A projection method on measures sets, The landscape of empirical risk for nonconvex losses, I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error, Solving structured nonsmooth convex optimization with complexity \(\mathcal {O}(\varepsilon ^{-1/2})\), Accelerated primal-dual proximal block coordinate updating methods for constrained convex optimization, Adaptive smoothing algorithms for nonsmooth composite convex minimization, Inexact proximal \(\epsilon\)-subgradient methods for composite convex optimization problems, Second-order orthant-based methods with enriched Hessian information for sparse \(\ell _1\)-optimization, Local and global convergence of a general inertial proximal splitting scheme for minimizing composite functions, Proximal Methods for Sparse Optimal Scoring and Discriminant Analysis, Point process estimation with Mirror Prox algorithms, Linear convergence of inexact descent method and inexact proximal gradient algorithms for lower-order regularization problems, Inertial proximal gradient methods with Bregman 
regularization for a class of nonconvex optimization problems, An optimal randomized incremental gradient method, Complexity bounds for primal-dual methods minimizing the model of objective function, Globalized inexact proximal Newton-type methods for nonconvex composite functions, Nearly optimal first-order methods for convex optimization under gradient norm measure: an adaptive regularization approach, Stochastic intermediate gradient method for convex problems with stochastic inexact oracle, Reconstruction of 3D X-ray CT images from reduced sampling by a scaled gradient projection algorithm, The condition number of a function relative to a set, Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems, Self adaptive inertial extragradient algorithms for solving bilevel pseudomonotone variational inequality problems, Fast and safe: accelerated gradient methods with optimality certificates and underestimate sequences, Accelerated Bregman proximal gradient methods for relatively smooth convex optimization, A FISTA-type accelerated gradient algorithm for solving smooth nonconvex composite optimization problems, Multi-block Bregman proximal alternating linearized minimization and its application to orthogonal nonnegative matrix factorization, Stochastic proximal splitting algorithm for composite minimization, Iteration complexity of generalized complementarity problems, A piecewise conservative method for unconstrained convex optimization, Mining events with declassified diplomatic documents, High-dimensional robust approximated \(M\)-estimators for mean regression with asymmetric data, Generalized Nesterov's accelerated proximal gradient algorithms with convergence rate of order \(o(1/k^2)\), Some modified fast iterative shrinkage thresholding algorithms with a new adaptive non-monotone stepsize strategy for nonsmooth and convex minimization problems, Inertial proximal incremental aggregated gradient method with linear convergence guarantees, Limited-memory common-directions method for large-scale optimization: convergence, parallelization, and distributed optimization, From differential equation solvers to accelerated first-order methods for convex optimization, A control-theoretic perspective on optimal high-order optimization, Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses, A relaxed parameter condition for the primal-dual hybrid gradient method for saddle-point problem, Sparse regression at scale: branch-and-bound rooted in first-order optimization, On stochastic accelerated gradient with convergence rate, PPA-like contraction methods for convex optimization: a framework using variational inequality approach, Accelerated gradient methods for nonconvex nonlinear and stochastic programming, Parallel coordinate descent methods for big data optimization, Finding zeros of Hölder metrically subregular mappings via globally convergent Levenberg–Marquardt methods, Inexact basic tensor methods for some classes of convex optimization problems, Gradient methods with memory, On Full Jacobian Decomposition of the Augmented Lagrangian Method for Separable Convex Programming, Hard Thresholding Regularised Logistic Regression: Theory and Algorithms, Identifying Heterogeneous Effect Using Latent Supervised Clustering With Adaptive Fusion, An Efficient Algorithm for Minimizing Multi Non-Smooth Component Functions, Unnamed Item, Unnamed Item, Analysis and Design of Optimization Algorithms via Integral 
Quadratic Constraints, Parallel Random Coordinate Descent Method for Composite Minimization: Convergence Analysis and Error Bounds, A Strictly Contractive Peaceman-Rachford Splitting Method with Logarithmic-Quadratic Proximal Regularization for Convex Programming, Accelerated Optimization in the PDE Framework: Formulations for the Manifold of Diffeomorphisms, Iterative positive thresholding algorithm for non-negative sparse optimization, Stochastic Multilevel Composition Optimization Algorithms with Level-Independent Convergence Rates, A Subspace Acceleration Method for Minimization Involving a Group Sparsity-Inducing Regularizer, A Level-Set Method for Convex Optimization with a Feasible Solution Path, A New Homotopy Proximal Variable-Metric Framework for Composite Convex Minimization, Accelerated Extra-Gradient Descent: A Novel Accelerated First-Order Method, Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems, Primal–dual accelerated gradient methods with small-dimensional relaxation oracle, A First-Order Optimization Algorithm for Statistical Learning with Hierarchical Sparsity Structure, A Scalable Algorithm for Sparse Portfolio Selection, Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization, Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent, First-Order Methods for Problems with $O$(1) Functional Constraints Can Have Almost the Same Convergence Rate as for Unconstrained Problems, Forward-Backward Envelope for the Sum of Two Nonconvex Functions: Further Properties and Nonmonotone Linesearch Algorithms, A New Boosted Proximal Point Algorithm for Minimizing Nonsmooth DC Functions, Randomized Block Proximal Damped Newton Method for Composite Self-Concordant Minimization, A Randomized Nonmonotone Block Proximal Gradient Method for a Class of Structured Nonlinear Programming, Unnamed Item, Composite Convex Minimization Involving Self-concordant-Like Cost Functions, Additive Schwarz Methods for Convex Optimization as Gradient Methods, Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning, Variational Gram Functions: Convex Analysis and Optimization, A Forward-Backward Splitting Method for Monotone Inclusions Without Cocoercivity, Another Look at the Fast Iterative Shrinkage/Thresholding Algorithm (FISTA), Relatively Smooth Convex Optimization by First-Order Methods, and Applications, A Multilevel Proximal Gradient Algorithm for a Class of Composite Optimization Problems, Convergence Rates of Proximal Gradient Methods via the Convex Conjugate, Stochastic Model-Based Minimization of Weakly Convex Functions, A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization, On the convergence rate of the augmented Lagrangian-based parallel splitting method, Complexity of first-order inexact Lagrangian and penalty methods for conic convex programming, An accelerated primal-dual iterative scheme for the L 2 -TV regularized model of linear inverse problems, Contracting Proximal Methods for Smooth Convex Optimization, An alternating direction method of multipliers with a worst-case $O(1/n^2)$ convergence rate, Accelerated Optimization in the PDE Framework Formulations for the Active Contour Case, On the complexity of parallel coordinate descent, Inexact primal–dual gradient projection methods for nonlinear optimization on convex set, Accelerated Regularized Newton Methods for Minimizing Composite Convex Functions, Efficient Search of 
First-Order Nash Equilibria in Nonconvex-Concave Smooth Min-Max Problems, Unnamed Item, Efficient Learning with a Family of Nonconvex Regularizers by Redistributing Nonconvexity, Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice, Fast convergence of generalized forward-backward algorithms for structured monotone inclusions, An introduction to continuous optimization for imaging, DISTRIBUTED PROXIMAL-GRADIENT METHOD FOR CONVEX OPTIMIZATION WITH INEQUALITY CONSTRAINTS, Perturbation resilience of proximal gradient algorithm for composite objectives, An acceleration procedure for optimal first-order methods, Unnamed Item, Accelerated Residual Methods for the Iterative Solution of Systems of Equations, Finite element approximation of source term identification with TV-regularization, Nesterov perturbations and projection methods applied to IMRT, Sharpness, Restart, and Acceleration, Analogues of Switching Subgradient Schemes for Relatively Lipschitz-Continuous Convex Programming Problems, Block-wise Alternating Direction Method of Multipliers for Multiple-block Convex Programming and Beyond, The augmented Lagrangian method with full Jacobian decomposition and logarithmic-quadratic proximal regularization for multiple-block separable convex programming, Generalized Conjugate Gradient Methods for ℓ1 Regularized Convex Quadratic Programming with Finite Convergence, A Single-Phase, Proximal Path-Following Framework, On the Nonergodic Convergence Rate of an Inexact Augmented Lagrangian Framework for Composite Convex Programming, Imaging with highly incomplete and corrupted data, Unnamed Item, Accelerated First-Order Primal-Dual Proximal Methods for Linearly Constrained Composite Convex Programming, Composite Optimization by Nonconvex Majorization-Minimization, An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration, Backtracking Strategies for Accelerated Descent Methods with Smooth Composite Objectives, Solving Large-Scale Optimization Problems with a Convergence Rate Independent of Grid Size, MultiLevel Composite Stochastic Optimization via Nested Variance Reduction, Variational Image Regularization with Euler's Elastica Using a Discrete Gradient Scheme, A scalable estimator of sets of integral operators, Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization, Generalized Conditional Gradient for Sparse Estimation, Gradient Method for Optimization on Riemannian Manifolds with Lower Bounded Curvature, Complexity of a Quadratic Penalty Accelerated Inexact Proximal Point Method for Solving Linearly Constrained Nonconvex Composite Programs, Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent, Randomized Projection Methods for Convex Feasibility: Conditioning and Convergence Rates, Unnamed Item, Unnamed Item, A Bregman Forward-Backward Linesearch Algorithm for Nonconvex Composite Optimization: Superlinear Convergence to Nonisolated Local Minima, Unnamed Item, The method of codifferential descent for convex and global piecewise affine optimization, An accelerated communication-efficient primal-dual optimization framework for structured machine learning, A dual approach for optimal algorithms in distributed optimization over networks, Unnamed Item, Online Learning of a Weighted Selective Naive Bayes Classifier with Non-convex Optimization, An accelerated majorization-minimization algorithm with convergence guarantee for non-Lipschitz wavelet synthesis model *, On the Generation of Sampling 
Schemes for Magnetic Resonance Imaging, A proximal partially parallel splitting method for separable convex programs, Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization, On proximal subgradient splitting method for minimizing the sum of two nonsmooth convex functions, Majorization-minimization generalized Krylov subspace methods for \({\ell _p}\)-\({\ell _q}\) optimization applied to image restoration, Forward-backward quasi-Newton methods for nonsmooth optimization problems, The Cyclic Block Conditional Gradient Method for Convex Optimization Problems, Performance of first- and second-order methods for \(\ell_1\)-regularized least squares problems, Smoothing projected Barzilai-Borwein method for constrained non-Lipschitz optimization, An accelerated coordinate gradient descent algorithm for non-separable composite optimization, Oracle complexity separation in convex optimization, Fast convergence of dynamical ADMM via time scaling of damped inertial dynamics, Accelerating Block-Decomposition First-Order Methods for Solving Composite Saddle-Point and Two-Player Nash Equilibrium Problems, An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization, Inertial Proximal ADMM for Linearly Constrained Separable Convex Optimization, A Total Fractional-Order Variation Model for Image Restoration with Nonhomogeneous Boundary Conditions and Its Numerical Solution, Proximal methods avoid active strict saddles of weakly convex functions, MAGMA: Multilevel Accelerated Gradient Mirror Descent Algorithm for Large-Scale Convex Composite Minimization, A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization, Sampling Kaczmarz-Motzkin method for linear feasibility problems: generalization and acceleration, Accelerated inexact composite gradient methods for nonconvex spectral optimization problems, A Proximal Strictly Contractive Peaceman--Rachford Splitting Method for Convex Programming with Applications to Imaging, Inertial accelerated primal-dual methods for linear equality constrained convex optimization problems, Fast inertial dynamic algorithm with smoothing method for nonsmooth convex optimization, A smoothing stochastic gradient method for composite optimization, Proximal algorithm for minimization problems in \(l_0\)-regularization for nonlinear inverse problems, On the iteration complexity of some projection methods for monotone linear variational inequalities, A note on approximate accelerated forward-backward methods with absolute and relative errors, and possibly strongly convex objectives, Convergence analysis of positive-indefinite proximal ADMM with a Glowinski's relaxation factor, Accelerated sampling Kaczmarz Motzkin algorithm for the linear feasibility problem, Combining fast inertial dynamics for convex optimization with Tikhonov regularization, Accelerated additive Schwarz methods for convex optimization with adaptive restart, Scattered data interpolation with nonnegative preservation using bivariate splines and its application, A proximal algorithm with backtracked extrapolation for a class of structured fractional programming, A Note on Application of Nesterov’s Method in Solving Lasso-Type Problems, First-order frameworks for continuous Newton-like dynamics governed by maximally monotone operators, Linear convergence of first order methods for non-strongly convex optimization, Fast gradient methods for uniformly convex and weakly smooth problems, An optimal 
subgradient algorithm with subspace search for costly convex optimization problems, Inexact successive quadratic approximation for regularized optimization, General inertial proximal gradient method for a class of nonconvex nonsmooth optimization problems, Iteratively reweighted \(\ell _1\) algorithms with extrapolation, Empirical risk minimization: probabilistic complexity and stepsize strategy, An inexact interior-point Lagrangian decomposition algorithm with inexact oracles, Accelerated randomized mirror descent algorithms for composite non-strongly convex optimization, Block-wise ADMM with a relaxation factor for multiple-block convex programming, Accelerated gradient boosting, Affine-invariant contracting-point methods for convex optimization, Convergence rates of the heavy-ball method under the Łojasiewicz property, Perturbed Fenchel duality and first-order methods, A primal and dual active set algorithm for truncated \(L_1\) regularized logistic regression, Activity Identification and Local Linear Convergence of Forward--Backward-type Methods, Accelerated Uzawa methods for convex optimization, A simple nearly optimal restart scheme for speeding up first-order methods, On FISTA with a relative error rule, A smoothing proximal gradient algorithm with extrapolation for the relaxation of \({\ell_0}\) regularization problem, Majorization-minimization-based Levenberg-Marquardt method for constrained nonlinear least squares, Efficient first-order methods for convex minimization: a constructive approach, Fast convergence of inertial gradient dynamics with multiscale aspects, The impact of noise on evaluation complexity: the deterministic trust-region case, Accelerated and unaccelerated stochastic gradient descent in model generality, The PPA-based numerical algorithm with the \(O(1/t)\) convergence rate for variant variational inequalities, Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems, Iteration complexity of inexact augmented Lagrangian methods for constrained convex programming, Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria, Implementable tensor methods in unconstrained convex optimization, Decomposition in derivative-free optimization, Improved convergence rates and trajectory convergence for primal-dual dynamical systems with vanishing damping, Accelerated proximal point method for maximally monotone operators, On the linear convergence rates of exchange and continuous methods for total variation minimization, Approximating the total variation with finite differences or finite elements, Accelerated gradient sliding for minimizing a sum of functions, Automatic repair of convex optimization problems, Sparse trace norm regularization, Convergence analysis of the generalized alternating direction method of multipliers with logarithmic-quadratic proximal regularization, Fast gradient descent for convex minimization problems with an oracle producing a \(( \delta, L)\)-model of function at the requested point, Error bounds for non-polyhedral convex optimization and applications to linear convergence of FDM and PGM, Sorted concave penalized regression, Complexity Certifications of First-Order Inexact Lagrangian Methods for General Convex Programming: Application to Real-Time MPC, A large covariance matrix estimator under intermediate spikiness regimes, On Convergence Rates of Linearized Proximal Algorithms for Convex Composite Optimization with Applications, Restarting the accelerated 
coordinate descent method with a rough strong convexity estimate, Iteration complexity analysis of dual first-order methods for conic convex programming, Accelerated iterative hard thresholding algorithm for \(l_0\) regularized regression problem, ``Active-set complexity of proximal gradient: how long does it take to find the sparsity pattern?, Accelerated first-order methods for large-scale convex optimization: nearly optimal complexity under strong convexity, A family of subgradient-based methods for convex optimization problems in a unifying framework, On the convergence of the forward–backward splitting method with linesearches, Universal method of searching for equilibria and stochastic equilibria in transportation networks, Optimal subgradient methods: computational properties for large-scale linear inverse problems, Block-simultaneous direction method of multipliers: a proximal primal-dual splitting algorithm for nonconvex problems with multiple constraints, Generalized uniformly optimal methods for nonlinear programming, Self-concordant inclusions: a unified framework for path-following generalized Newton-type algorithms, A Subgradient Method for Free Material Design, Computational and statistical analyses for robust non-convex sparse regularized regression problem, Survival analysis of DNA mutation motifs with penalized proportional hazards, Convergence analysis of the relaxed proximal point algorithm, Generalized self-concordant functions: a recipe for Newton-type methods, Efficiency of minimizing compositions of convex functions and smooth maps, A relax inexact accelerated proximal gradient method for the constrained minimization problem of maximum eigenvalue functions, Generalized affine scaling algorithms for linear programming problems, Mirror Prox algorithm for multi-term composite minimization and semi-separable problems, An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization
Cites Work
- Smooth minimization of non-smooth functions
- Accelerating the cubic regularization of Newton's method on convex problems
- Introductory lectures on convex optimization. A basic course.
- Just relax: convex programming methods for identifying sparse signals in noise
- Linear Inversion of Band-Limited Reflection Seismograms
- A generalized proximal point algorithm for certain non-convex minimization problems
- Atomic Decomposition by Basis Pursuit
- Rounding of convex sets and efficient gradient methods for linear programming problems