Lectures on convex optimization

From MaRDI portal
Publication: 723525

DOI: 10.1007/978-3-319-91578-4
zbMath: 1427.90003
OpenAlex: W2901838001
MaRDI QID: Q723525

Yu. E. Nesterov

Publication date: 23 July 2018

Published in: Springer Optimization and Its Applications

Full work available at URL: https://doi.org/10.1007/978-3-319-91578-4



Related Items

An incremental descent method for multi-objective optimization
New computational methods for classification problems in the existence of outliers based on conic quadratic optimization
Constraint qualifications in nonsmooth optimization: Classification and inter-relations
Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization
A Splitting Scheme for Flip-Free Distortion Energies
Subgradient ellipsoid method for nonsmooth convex problems
Optimal error bounds for non-expansive fixed-point iterations in normed spaces
New Bregman proximal type algorithms for solving DC optimization problems
Error estimates of a theta-scheme for second-order mean field games
A nonlinear conjugate gradient method using inexact first-order information
SCORE: approximating curvature information under self-concordant regularization
Global convergence of the gradient method for functions definable in o-minimal structures
Nonconvex optimization with inertial proximal stochastic variance reduction gradient
An Apocalypse-Free First-Order Low-Rank Optimization Algorithm with at Most One Rank Reduction Attempt per Iteration
An Efficient and Robust Scalar Auxiliary Variable Based Algorithm for Discrete Gradient Systems Arising from Optimizations
Stochastic momentum methods for non-convex learning without bounded assumptions
Optimal Self-Concordant Barriers for Quantum Relative Entropies
Robust sampled-data controller design for uncertain nonlinear systems via Euler discretization
Zeroth-order algorithms for nonconvex-strongly-concave minimax problems with improved complexities
Some accelerated alternating proximal gradient algorithms for a class of nonconvex nonsmooth problems
An improved inertial projection method for solving convex constrained monotone nonlinear equations with applications
Neural-network-based constrained optimal coordination for heterogeneous uncertain nonlinear multi-agent systems
Universal Conditional Gradient Sliding for Convex Optimization
Convergence of the Momentum Method for Semialgebraic Functions with Locally Lipschitz Gradients
Sign stochastic gradient descents without bounded gradient assumption for the finite sum minimization
An adaptive sampling augmented Lagrangian method for stochastic optimization with deterministic constraints
Walrasian equilibria from an optimization perspective: A guide to the literature
Minimax Problems with Coupled Linear Constraints: Computational Complexity and Duality
Accelerated gradient methods with absolute and relative noise in the gradient
Smoothing unadjusted Langevin algorithms for nonsmooth composite potential functions
Gradient-Type Methods for Optimization Problems with Polyak-Łojasiewicz Condition: Early Stopping and Adaptivity to Inexactness Parameter
Conditions for linear convergence of the gradient method for non-convex optimization
Generalized damped Newton algorithms in nonsmooth optimization via second-order subdifferentials
Optimal control of ensembles of dynamical systems
Performance enhancements for a generic conic interior point algorithm
A generalized inertial proximal alternating linearized minimization method for nonconvex nonsmooth problems
On the Generalized Langevin Equation for Simulated Annealing
A robust control approach to asymptotic optimality of the heavy ball method for optimization of quadratic functions
Implicit Regularity and Linear Convergence Rates for the Generalized Trust-Region Subproblem
Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization
Regularized Newton Method with Global O(1/k^2) Convergence
Affine Relaxations of the Best Response Algorithm: Global Convergence in Ratio-Bounded Games
The Frank-Wolfe algorithm: a short introduction
Branch-and-bound performance estimation programming: a unified methodology for constructing optimal optimization methods
Optimal step length for the maximal decrease of a self-concordant function by the Newton method
Optimal data splitting in distributed optimization for machine learning
Globally convergent coderivative-based generalized Newton methods in nonsmooth optimization
Byzantine-robust loopless stochastic variance-reduced gradient
Time-Varying Semidefinite Programming: Path Following a Burer–Monteiro Factorization
Super-Universal Regularized Newton Method
Tutorial on Amortized Optimization
A proximal subgradient algorithm with extrapolation for structured nonconvex nonsmooth problems
Stochastic incremental mirror descent algorithms with Nesterov smoothing
Decentralized saddle-point problems with different constants of strong convexity and strong concavity
Preconditioning meets biased compression for efficient distributed optimization
On maximum a posteriori estimation with Plug & Play priors and stochastic gradient descent
Proximal quasi-Newton method for composite optimization over the Stiefel manifold
Complexity of optimizing over the integers
An inexact projected gradient method with rounding and lifting by nonlinear programming for solving rank-one semidefinite relaxation of polynomial optimization
Methodology and first-order algorithms for solving nonsmooth and non-strongly convex bilevel optimization problems
Accelerating inexact successive quadratic approximation for regularized optimization through manifold identification
Quadratic error bound of the smoothed gap and the restarted averaged primal-dual hybrid gradient
First-order methods for convex optimization
General convex relaxations of implicit functions and inverse functions
PCA Sparsified
An Asymptotic Analysis of Random Partition Based Minibatch Momentum Methods for Linear Regression Models
Worst-case evaluation complexity of a derivative-free quadratic regularization method
Differentially private inference via noisy optimization
Stable geodesic nets in convex hypersurfaces
Descent Properties of an Anderson Accelerated Gradient Method with Restarting
An Alternating Direction Method of Multipliers for Inverse Lithography Problem
Detecting identification failure in moment condition models
Estimation and inference by stochastic optimization
Adaptive proximal SGD based on new estimating sequences for sparser ERM
Exact convergence analysis for Metropolis–Hastings independence samplers in Wasserstein distances
Unifying framework for accelerated randomized methods in convex optimization
Gradient descent on infinitely wide neural networks: global convergence and generalization
Gradient regularization of Newton method with Bregman distances
A Riemannian Proximal Newton Method
Worst-Case Convergence Analysis of Inexact Gradient and Newton Methods Through Semidefinite Programming Performance Estimation
Active Set Complexity of the Away-Step Frank-Wolfe Algorithm
A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
Contracting Proximal Methods for Smooth Convex Optimization
Fair Packing and Covering on a Relative Scale
On the Simplicity and Conditioning of Low Rank Semidefinite Programs
Primal-Dual Interior-Point Methods for Domain-Driven Formulations
Gradient Projection and Conditional Gradient Methods for Constrained Nonconvex Minimization
Greedy Quasi-Newton Methods with Explicit Superlinear Convergence
Recovering a potential in damped wave equation from Dirichlet-to-Neumann operator
Generalized Momentum-Based Methods: A Hamiltonian Perspective
Dual Space Preconditioning for Gradient Descent
Adaptive Sequential Sample Average Approximation for Solving Two-Stage Stochastic Linear Programs
COCO: a platform for comparing continuous optimizers in a black-box setting
On Modification of an Adaptive Stochastic Mirror Descent Algorithm for Convex Optimization Problems with Functional Constraints
Inexact model: a framework for optimization and variational inequalities
Projectively Self-Concordant Barriers
High-Order Optimization Methods for Fully Composite Problems
Adaptive Catalyst for Smooth Convex Optimization
Recent theoretical advances in decentralized distributed convex optimization
Recent Theoretical Advances in Non-Convex Optimization
Distributed adaptive Newton methods with global superlinear convergence
Block Bregman Majorization Minimization with Extrapolation
Solving a continuous multifacility location problem by DC algorithms
A convex optimization approach to dynamic programming in continuous state and action spaces
An Accelerated Level-Set Method for Inverse Scattering Problems
Inexact basic tensor methods for some classes of convex optimization problems
Efficient numerical methods to solve sparse linear equations with application to PageRank
Gradient methods with memory
Local convergence of tensor methods
Solution manifold and its statistical applications
Oracle complexity separation in convex optimization
On Numerical Estimates of Errors in Solving Convex Optimization Problems
Rates of superlinear convergence for classical quasi-Newton methods
On lower iteration complexity bounds for the convex concave saddle point problems
Status determination by interior-point methods for convex optimization problems in domain-driven form
A frequency-domain analysis of inexact gradient methods
Speed scaling scheduling of multiprocessor jobs with energy constraint and makespan criterion
Proportional-integral projected gradient method for conic optimization
A stochastic oil price model for optimal hedging and risk management
Convergence of Halpern's Iteration Method with Applications in Optimization
Error bound conditions and convergence of optimization methods on smooth and proximally smooth manifolds
Discrete Choice Prox-Functions on the Simplex
Accelerated meta-algorithm for convex optimization problems
Constrained, Global Optimization of Unknown Functions with Lipschitz Continuous Gradients
Global Linear Convergence of Evolution Strategies on More than Smooth Strongly Convex Functions
Numerical methods for the resource allocation problem in a computer network
A Scalable Algorithm for Sparse Portfolio Selection
Superfast second-order methods for unconstrained convex optimization
Accelerated additive Schwarz methods for convex optimization with adaptive restart
Long-step path-following algorithm for quantum information theory: some numerical aspects and applications
Fast gradient methods for uniformly convex and weakly smooth problems
Unified linear convergence of first-order primal-dual algorithms for saddle point problems
Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation
Recursive reasoning-based training-time adversarial machine learning
Affine-invariant contracting-point methods for convex optimization
Generalized self-concordant analysis of Frank-Wolfe algorithms
A globally convergent proximal Newton-type method in nonsmooth convex optimization
Discrete processes and their continuous limits
Inexact accelerated high-order proximal-point methods
Stochastic first-order methods for convex and nonconvex functional constrained optimization
An online convex optimization-based framework for convex bilevel optimization
Majorization-minimization-based Levenberg-Marquardt method for constrained nonlinear least squares
Accelerated stochastic variance reduction for a class of convex optimization problems
Bregman three-operator splitting methods
Proximal nested primal-dual gradient algorithms for distributed constraint-coupled composite optimization
Accelerated and unaccelerated stochastic gradient descent in model generality
Accelerated methods for saddle-point problem
The Approximate Duality Gap Technique: A Unified Theory of First-Order Methods
Rapid evaluation of the spectral signal detection threshold and Stieltjes transform
Alternating minimization methods for strongly convex optimization
Regularization by architecture: a deep prior approach for inverse problems
Dimension-free Wasserstein contraction of nonlinear filters
Adjoint-based exact Hessian computation
A generalized worst-case complexity analysis for non-monotone line searches
Resource allocation for contingency planning: an inexact proximal bundle method for stochastic optimization
Asymptotic analysis of a structure-preserving integrator for damped Hamiltonian systems
Algorithms for nonnegative matrix factorization with the Kullback-Leibler divergence
Bounds for the tracking error of first-order online optimization methods
Minimizing uniformly convex functions by cubic regularization of Newton method
A logarithmic descent direction algorithm for the quadratic knapsack problem
Computational semi-discrete optimal transport with general storage fees
Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
A block inertial Bregman proximal algorithm for nonsmooth nonconvex problems with application to symmetric nonnegative matrix tri-factorization
Near-Optimal Hyperfast Second-Order Method for Convex Optimization
Tensor Methods for Minimizing Convex Functions with Hölder Continuous Higher-Order Derivatives
Recovery of pressure and wave speed for photoacoustic imaging under a condition of relative uncertainty
A Gauss-Seidel type inertial proximal alternating linearized minimization for a class of nonconvex optimization problems
Optimal step length for the Newton method: case of self-concordant functions
Adaptive optimization with periodic dither signals
Bregman primal-dual first-order method and application to sparse semidefinite programming
A piecewise conservative method for unconstrained convex optimization
MultiLevel Composite Stochastic Optimization via Nested Variance Reduction
A predictor-corrector affine scaling method to train optimized extreme learning machine
On stochastic mirror descent with interacting particles: convergence properties and variance reduction
Gaussian discrepancy: a probabilistic relaxation of vector balancing
Fisher information regularization schemes for Wasserstein gradient flows
Dualize, split, randomize: toward fast nonsmooth optimization algorithms
Curiosities and counterexamples in smooth convex optimization
A control-theoretic perspective on optimal high-order optimization
Horosphere slab separation theorems in manifolds without conjugate points
Manifold reconstruction and denoising from scattered data in high dimension
Learning over No-Preferred and Preferred Sequence of Items for Robust Recommendation
Optimization-based convex relaxations for nonconvex parametric systems of ordinary differential equations
Stochastic dual dynamic programming for multistage stochastic mixed-integer nonlinear optimization
Inexact High-Order Proximal-Point Methods with Auxiliary Search Procedure
A Proximal Bundle Variant with Optimal Iteration-Complexity for a Large Range of Prox Stepsizes
On the convergence analysis of aggregated heavy-ball method
Convex Synthesis of Accelerated Gradient Algorithms
Network manipulation algorithm based on inexact alternating minimization
Discriminative clustering with representation learning with any ratio of labeled to unlabeled data
Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction
Accelerated proximal envelopes: application to componentwise methods
Adaptive Gauss-Newton method for solving systems of nonlinear equations
Convex optimization with inexact gradients in Hilbert space and applications to elliptic inverse problems
On the computational efficiency of catalyst accelerated coordinate descent
The generalized trust region subproblem: solution complexity and convex hull results