Analysis and design of optimization algorithms via integral quadratic constraints
DOI: 10.1137/15M1009597 · zbMATH Open: 1329.90103 · arXiv: 1408.3595 · MaRDI QID: Q3465237 · FDO: Q3465237
Authors: Laurent Lessard, Benjamin Recht, Andrew K. Packard
Publication date: 21 January 2016
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1408.3595
Recommendations
- Analysis of optimization algorithms via integral quadratic constraints: nonstrongly convex problems
- Robust and structure exploiting optimisation algorithms: an integral quadratic constraint approach
- Synthesis of accelerated gradient algorithms for optimization and saddle point problems using Lyapunov functions and LMIs
- Convex Synthesis of Accelerated Gradient Algorithms
- Analysis of optimization algorithms via sum-of-squares
Keywords: first-order methods; convex optimization; semidefinite programming; control theory; integral quadratic constraints; Nesterov's method; proximal gradient methods; heavy-ball method
MSC classification: Convex programming (90C25); Nonlinear programming (90C30); Semidefinite programming (90C22); Nonlinear systems in control theory (93C10); Stability of control systems (93D99)
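For context on the titular technique: the paper models a first-order method as a linear dynamical system in feedback with the gradient, imposes an integral quadratic constraint (IQC) that the gradient of an m-strongly convex, L-smooth function satisfies, and certifies a linear convergence rate rho by checking feasibility of a small linear matrix inequality (LMI). The sketch below illustrates this recipe for plain gradient descent; it is a minimal illustration, not the authors' code, and it assumes CVXPY with the bundled SCS solver. The helper name `certifies_rate` is ours.

```python
# Minimal sketch (our illustration, not the authors' code): certify a
# convergence rate rho for gradient descent x_{k+1} = x_k - alpha*grad f(x_k)
# on m-strongly convex, L-smooth f, via the LMI of Lessard-Recht-Packard.
import numpy as np
import cvxpy as cp

def certifies_rate(rho, m, L, alpha):
    """True if the LMI is feasible, certifying ||x_k - x*|| <= c * rho^k."""
    A, B = 1.0, -alpha          # gradient descent as a linear system (C = 1)
    # Pointwise sector IQC for u = grad f(y):  [y; u]^T M [y; u] >= 0.
    M = np.array([[-2.0 * m * L, m + L],
                  [m + L,        -2.0]])
    # For scalar P, the block [A'PA - rho^2 P, A'PB; B'PA, B'PB] equals P * K.
    K = np.array([[A * A - rho**2, A * B],
                  [A * B,          B * B]])
    P = cp.Variable(nonneg=True)    # Lyapunov weight: V(x) = P * ||x - x*||^2
    lam = cp.Variable(nonneg=True)  # IQC multiplier
    # Dissipation inequality: V contracts by rho^2 whenever the IQC holds.
    prob = cp.Problem(cp.Minimize(0), [P * K + lam * M << 0, P >= 1])
    prob.solve(solver=cp.SCS)
    return prob.status in ("optimal", "optimal_inaccurate")

# With alpha = 1/L, the smallest certifiable rate approaches 1 - m/L.
m, L = 1.0, 10.0
print(certifies_rate(0.91, m, L, alpha=1 / L))  # True  (just above 1 - m/L)
print(certifies_rate(0.85, m, L, alpha=1 / L))  # False (below the true rate)
```

Bisecting on rho over this feasibility test recovers the kind of rate bounds the paper reports; the same LMI, with the state-space matrices of Nesterov's method or the heavy-ball method and richer (e.g. Zames-Falb) multipliers, covers the other algorithms named in the keywords above.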
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Templates for convex cone problems with applications to sparse signal recovery
- Title not available
- Linear Matrix Inequalities in System and Control Theory
- Title not available
- Introductory lectures on convex optimization. A basic course.
- Gradient methods for minimizing composite functions
- Robust Stochastic Approximation Approach to Stochastic Programming
- Title not available
- Dissipative dynamical systems. I: General theory
- Dissipative dynamical systems. II: Linear systems with quadratic supply rates
- Nonlinear systems.
- Graph implementations for nonsmooth convex programs
- Title not available
- Lyapunov functions for the problem of Lur'e in automatic control
- First-order methods of smooth convex optimization with inexact oracle
- Efficiency of coordinate descent methods on huge-scale optimization problems
- System analysis via integral quadratic constraints
- Stability Conditions for Systems with Monotone and Slope-Restricted Nonlinearities
- Title not available
- Absolute stability of nonlinear systems of automatic control
- Stability Analysis With Dissipation Inequalities and Integral Quadratic Constraints
- Title not available
- Zames-Falb Multipliers for Quadratic Programming
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Dualities in convex algebraic geometry
- The complex structured singular value
- Performance of first-order methods for smooth convex minimization: a novel approach
- Nonconvex optimization problem: The infinite-horizon linear-quadratic control problem with quadratic constraints
- Method of centers for minimizing generalized eigenvalues
- Title not available
- Title not available
- Semidefinite programming relaxations and algebraic optimization in control
- Transient cool-down of a porous medium in pulsating flow
- The long-step method of analytic centers for fractional problems
Cited In (only showing first 100 items)
- Title not available
- Robust and structure exploiting optimisation algorithms: an integral quadratic constraint approach
- Passivity-based analysis of the ADMM algorithm for constraint-coupled optimization
- Convergence of first-order methods via the convex conjugate
- An adaptive Polyak heavy-ball method
- On the convergence analysis of aggregated heavy-ball method
- Explicit stabilised gradient descent for faster strongly convex optimisation
- Stability analysis by dynamic dissipation inequalities: on merging frequency-domain techniques with time-domain conditions
- On polarization-based schemes for the FFT-based computational homogenization of inelastic materials
- A review of nonlinear FFT-based computational homogenization methods
- Optimal deterministic algorithm generation
- Complexity analysis for optimization methods
- Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints
- Title not available
- Title not available
- On lower and upper bounds in smooth and strongly convex optimization
- Zames-Falb multipliers for absolute stability: from O'Shea's contribution to convex searches
- A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization
- Another look at the fast iterative shrinkage/thresholding algorithm (FISTA)
- Generalizing the optimized gradient method for smooth convex minimization
- Operator splitting performance estimation: tight contraction factors and optimal parameter selection
- A regularization interpretation of the proximal point method for weakly convex functions
- Title not available
- Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- Exact worst-case performance of first-order methods for composite convex optimization
- Robust accelerated gradient methods for smooth strongly convex functions
- Design of distributed and robust optimization algorithms. A systems theoretic approach
- Bounds for the tracking error of first-order online optimization methods
- Convergence rates of the heavy ball method for quasi-strongly convex optimization
- An optimal first order method based on optimal quadratic averaging
- Efficient first-order methods for convex minimization: a constructive approach
- A frequency-domain analysis of inexact gradient methods
- Analytical convergence regions of accelerated gradient descent in nonconvex optimization under regularity condition
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- Title not available
- Synthesis of accelerated gradient algorithms for optimization and saddle point problems using Lyapunov functions and LMIs
- Analysis of optimization algorithms via integral quadratic constraints: nonstrongly convex problems
- Worst-case convergence analysis of inexact gradient and Newton methods through semidefinite programming performance estimation
- Adaptive restart of the optimized gradient method for convex optimization
- Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
- Analysis of optimization algorithms via sum-of-squares
- A control-theoretic perspective on optimal high-order optimization
- Convergence rates for an inertial algorithm of gradient type associated to a smooth non-convex minimization
- Convergence rates of proximal gradient methods via the convex conjugate
- Title not available
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Learning-based adaptive control with an accelerated iterative adaptive law
- From differential equation solvers to accelerated first-order methods for convex optimization
- Projected dynamical systems on irregular, non-Euclidean domains for nonlinear optimization
- The connections between Lyapunov functions for some optimization algorithms and differential equations
- Proximal Methods for Sparse Optimal Scoring and Discriminant Analysis
- Bearing-only distributed localization: a unified barycentric approach
- Understanding the acceleration phenomenon via high-resolution differential equations
- Specialized fast algorithms for IQC feasibility and optimization problems.
- A dynamical view of nonlinear conjugate gradient methods with applications to FFT-based computational micromechanics
- An optimal gradient method for smooth strongly convex minimization
- Iterative pre-conditioning for expediting the distributed gradient-descent method: the case of linear least-squares problem
- An introduction to continuous optimization for imaging
- Perturbed Fenchel duality and first-order methods
- The common-directions method for regularized empirical risk minimization
- On the convergence analysis of the optimized gradient method
- Regularized nonlinear acceleration
- Contractivity of Runge-Kutta methods for convex gradient systems
- Analysis of biased stochastic gradient descent using sequential semidefinite programs
- A forward-backward algorithm with different inertial terms for structured non-convex minimization problems
- Convex Synthesis of Accelerated Gradient Algorithms
- Lippmann-Schwinger solvers for the computational homogenization of materials with pores
- On the Barzilai-Borwein basic scheme in FFT-based computational homogenization
- Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization
- On the asymptotic linear convergence speed of Anderson acceleration, Nesterov acceleration, and nonlinear GMRES
- Higher-order power methods with momentum for solving the limiting probability distribution vector of higher-order Markov chains
- Analysis of a generalised expectation-maximisation algorithm for Gaussian mixture models: a control systems perspective
- Heavy-ball-based optimal thresholding algorithms for sparse linear inverse problems
- Heavy-ball-based hard thresholding algorithms for sparse signal recovery
- A simple PID-based strategy for particle swarm optimization algorithm
- Nonlinear optimization filters for stochastic time-varying convex optimization
- A general system of differential equations to model first-order adaptive algorithms
- Uniting Nesterov and heavy ball methods for uniform global asymptotic stability of the set of minimizers
- Understanding a Class of Decentralized and Federated Optimization Algorithms: A Multirate Feedback Control Perspective
- Multiscale analysis of accelerated gradient methods
- Fast gradient method for low-rank matrix estimation
- A distributed accelerated optimization algorithm over time‐varying directed graphs with uncoordinated step‐sizes
- Conic linear optimization for computer-assisted proofs. Abstracts from the workshop held April 10--16, 2022
- An accelerated stochastic mirror descent method
- PEPIT: computer-assisted worst-case analyses of first-order optimization methods in python
- Search direction correction with normalized gradient makes first-order methods faster
- On the necessity and sufficiency of discrete-time O'Shea-Zames-Falb multipliers
- The parameterized accelerated iteration method for solving the matrix equation \(AXB=C\)
- Robustness analysis of uncertain discrete-time systems with dissipation inequalities and integral quadratic constraints
- A zeroing neural dynamics based acceleration optimization approach for optimizers in deep neural networks
- Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods
- Convergence of gradient algorithms for nonconvex \(C^{1+ \alpha}\) cost functions
- On data-driven stabilization of systems with nonlinearities satisfying quadratic constraints
- Computation of invariant sets for discrete‐time uncertain systems
- A fixed step distributed proximal gradient push‐pull algorithm based on integral quadratic constraint
- A multivariate adaptive gradient algorithm with reduced tuning efforts
- Connections between Georgiou and Smith's robust stability type theorems and the nonlinear small-gain theorems
- Accelerated optimization landscape of linear-quadratic regulator
- Fast symplectic integrator for Nesterov-type acceleration method