Introductory lectures on convex optimization. A basic course.
Publication: 1417731
zbMath: 1086.90045
MaRDI QID: Q1417731
Publication date: 5 January 2004
Published in: Applied Optimization
Mathematics Subject Classification: Convex programming (90C25); Introductory exposition (textbooks, tutorial papers, etc.) pertaining to operations research and mathematical programming (90-01)
Related Items (only showing first 100 items)
An adaptive accelerated first-order method for convex optimization ⋮ Best subset selection via a modern optimization lens ⋮ Convex optimization on Banach spaces ⋮ An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions ⋮ Lower bounds on individual sequence regret ⋮ On data preconditioning for regularized loss minimization ⋮ Stopping rules for optimization algorithms based on stochastic approximation ⋮ A relaxed-projection splitting algorithm for variational inequalities in Hilbert spaces ⋮ A fast dual proximal-gradient method for separable convex optimization with linear coupled constraints ⋮ On estimation of the diagonal elements of a sparse precision matrix ⋮ OSGA: a fast subgradient algorithm with optimal complexity ⋮ Duality for mixed-integer convex minimization ⋮ Inexact coordinate descent: complexity and preconditioning ⋮ Optimized first-order methods for smooth convex minimization ⋮ Gradient sliding for composite optimization ⋮ On the ergodic convergence rates of a first-order primal-dual algorithm ⋮ Convergence rates with inexact non-expansive operators ⋮ A family of second-order methods for convex \(\ell _1\)-regularized optimization ⋮ Necessary and sufficient Karush-Kuhn-Tucker conditions for multiobjective Markov chains optimality ⋮ On the global convergence rate of the gradient descent method for functions with Hölder continuous gradients ⋮ New results on subgradient methods for strongly convex optimization problems with a unified analysis ⋮ Fast convex optimization via inertial dynamics with Hessian driven damping ⋮ Generalized mirror descents in congestion games ⋮ Robust reduced-rank modeling via rank regression ⋮ Block coordinate proximal gradient methods with variable Bregman functions for nonsmooth separable optimization ⋮ An optimal variant of Kelley's cutting-plane method ⋮ Approximating the little Grothendieck problem over the orthogonal and unitary groups ⋮ Practical inexact proximal quasi-Newton method with global complexity analysis ⋮ An inertial Tseng's type proximal algorithm for nonsmooth and nonconvex optimization problems ⋮ Combining Lagrangian decomposition and excessive gap smoothing technique for solving large-scale separable convex optimization problems ⋮ Gradient methods for minimizing composite functions ⋮ Techniques for exploring the suboptimal set ⋮ Fast alternating linearization methods for minimizing the sum of two convex functions ⋮ Approximation accuracy, gradient methods, and error bound for structured convex optimization ⋮ Optimal detection of sparse principal components in high dimension ⋮ Existence, uniqueness, and convergence of the regularized primal-dual central path ⋮ The CoMirror algorithm for solving nonsmooth constrained convex problems ⋮ Analysis of stochastic dual dynamic programming method ⋮ First-order methods of smooth convex optimization with inexact oracle ⋮ Fast first-order methods for composite convex optimization with backtracking ⋮ On a global complexity bound of the Levenberg-Marquardt method ⋮ Global complexity bound analysis of the Levenberg-Marquardt method for nonsmooth equations and its application to the nonlinear complementarity problem ⋮ Approximate level method for nonsmooth convex minimization ⋮ Phase transitions for greedy sparse approximation algorithms ⋮ Testing the nullspace property using semidefinite programming ⋮ Barrier subgradient method ⋮ An optimal method for stochastic composite optimization ⋮ An implementable proximal point algorithmic framework for nuclear norm minimization ⋮ Lipschitz gradients for global optimization in a one-point-based partitioning scheme ⋮ Implementation of an optimal first-order method for strongly convex total variation regularization ⋮ Interior point methods 25 years later ⋮ About the Lipschitz property of the metric projection in the Hilbert space ⋮ Robust least square semidefinite programming with applications ⋮ A sparsity preserving stochastic gradient methods for sparse regression ⋮ On the linear convergence of a proximal gradient method for a class of nonsmooth convex minimization problems ⋮ Algorithmic construction of optimal designs on compact sets for concave and differentiable criteria ⋮ A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints ⋮ The 2-coordinate descent method for solving double-sided simplex constrained minimization problems ⋮ A first order method for finding minimal norm-like solutions of convex optimization problems ⋮ Iterative hard thresholding methods for \(l_0\) regularized convex cone programming ⋮ Convexity of the cost functional in an optimal control problem for a class of positive switched systems ⋮ Optimum design accounting for the global nonlinear behavior of the model ⋮ On lower complexity bounds for large-scale smooth convex optimization ⋮ qpOASES: a parametric active-set algorithm for quadratic programming ⋮ Optimal computational and statistical rates of convergence for sparse nonconvex learning problems ⋮ Least quantile regression via modern optimization ⋮ Greedy expansions in convex optimization ⋮ Dual subgradient algorithms for large-scale nonsmooth learning problems ⋮ Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization ⋮ Nearest stable system using successive convex approximations ⋮ Universal gradient methods for convex optimization problems ⋮ On the complexity analysis of randomized block-coordinate descent methods ⋮ On the convergence of the iterates of the ``fast iterative shrinkage/thresholding algorithm'' ⋮ A note on augmented Lagrangian-based parallel splitting method ⋮ A parallel quadratic programming method for dynamic optimization problems ⋮ Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality ⋮ Smooth strongly convex interpolation and exact worst-case performance of first-order methods ⋮ The exact information-based complexity of smooth convex minimization ⋮ On the convergence analysis of the optimized gradient method ⋮ An approach for analyzing the global rate of convergence of quasi-Newton and truncated-Newton methods ⋮ Decomposable norm minimization with proximal-gradient homotopy algorithm ⋮ A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization ⋮ Minimizing finite sums with the stochastic average gradient ⋮ Stability and performance verification of optimization-based controllers ⋮ A survey on learning approaches for undirected graphical models. Application to scene object recognition ⋮ Adaptive inexact fast augmented Lagrangian methods for constrained convex optimization ⋮ Dictionary descent in optimization ⋮ Inexact proximal Newton methods for self-concordant functions ⋮ The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth ⋮ Adaptive smoothing algorithms for nonsmooth composite convex minimization ⋮ Iteration complexity analysis of block coordinate descent methods ⋮ Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models ⋮ Image restoration using total variation with overlapping group sparsity ⋮ Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results ⋮ Robust boosting with truncated loss functions ⋮ Primal-dual subgradient methods for convex problems ⋮ Analysis of a nonsmooth optimization approach to robust estimation ⋮ An inexact dual fast gradient-projection method for separable convex optimization with linear coupled constraints ⋮ Accelerated gradient methods for nonconvex nonlinear and stochastic programming ⋮ Parallel coordinate descent methods for big data optimization