Convex optimization: algorithms and complexity
Publication: Q2809807
zbMATH Open: 1365.90196 · arXiv: 1405.4980 · MaRDI QID: Q2809807
Author: Sébastien Bubeck
Publication date: 30 May 2016
Published in: Foundations and Trends in Machine Learning
Full work available at URL: https://arxiv.org/abs/1405.4980
Mathematics Subject Classification:
- Convex programming (90C25)
- Analysis of algorithms and problem complexity (68Q25)
- Abstract computational complexity for mathematical programming problems (90C60)
Cited in (first 100 items shown):
- Scale-free online learning
- Persuasion in networks: public signals and cores
- Accelerating incremental gradient optimization with curvature information
- Convex optimization with an interpolation-based projection and its application to deep learning
- Replicator dynamics: old and new
- Random batch methods (RBM) for interacting particle systems
- Bregman three-operator splitting methods
- Smoothed Variable Sample-Size Accelerated Proximal Methods for Nonsmooth Stochastic Convex Programs
- A multiplicative weights update algorithm for packing and covering semi-infinite linear programs
- Metamodel construction for sensitivity analysis
- Complexity analysis for optimization methods
- Stochastic matrix-free equilibration
- First-order methods for convex optimization
- Asymptotic theory in network models with covariates and a growing number of node parameters
- Robust and sparse regression in generalized linear model by stochastic optimization
- Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution
- A consensus-based global optimization method for high dimensional machine learning problems
- Polynomial-time algorithms for submodular Laplacian systems
- Convergence rates for optimised adaptive importance samplers
- Regularisation of neural networks by enforcing Lipschitz continuity
- Stochastic mirror descent method for linear ill-posed problems in Banach spaces
- A distributed flexible delay-tolerant proximal gradient algorithm
- A regularization interpretation of the proximal point method for weakly convex functions
- A new look at the Hardy-Littlewood-Pólya inequality of majorization
- Asynchronous schemes for stochastic and misspecified potential games and nonconvex optimization
- Accelerated gradient boosting
- Min-Max-Min Optimization with Smooth and Strongly Convex Objectives
- Convergence rates for deterministic and stochastic subgradient methods without Lipschitz continuity
- Inverse reinforcement learning in contextual MDPs
- A stochastic gradient algorithm with momentum terms for optimal control problems governed by a convection-diffusion equation with random diffusivity
- Natural gradient for combined loss using wavelets
- Bounds for the tracking error of first-order online optimization methods
- A fully polynomial time approximation scheme for the smallest diameter of imprecise points
- Linear convergence of first order methods for non-strongly convex optimization
- Mirror descent algorithms for minimizing interacting free energy
- Proximal gradient methods with adaptive subspace sampling
- Inexact primal-dual gradient projection methods for nonlinear optimization on convex set
- Strong convexity of sandwiched entropies and related optimization problems
- Efficient numerical methods for entropy-linear programming problems
- Bandit online optimization over the permutahedron
- Bias of homotopic gradient descent for the hinge loss
- Log-concave sampling: Metropolis-Hastings algorithms are fast
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- Perturbed iterate analysis for asynchronous stochastic optimization
- From inexact optimization to learning via gradient concentration
- A stochastic subgradient method for distributionally robust non-convex and non-smooth learning
- Robust statistical learning with Lipschitz and convex loss functions
- Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
- Spectral method and regularized MLE are both optimal for top-\(K\) ranking
- Infinite-dimensional gradient-based descent for alpha-divergence minimisation
- Low-Rank Matrix Estimation from Rank-One Projections by Unlifted Convex Optimization
- The Evolution of Methods of Convex Optimization
- Robust classification via MOM minimization
- COCO: a platform for comparing continuous optimizers in a black-box setting
- Adaptive block coordinate DIRECT algorithm
- Polynomial-Time Algorithms for Linear and Convex Optimization on Jump Systems
- Convergence rates of gradient methods for convex optimization in the space of measures
- On numerical estimates of errors in solving convex optimization problems
- Complexity of convex optimization using geometry-based measures and a reference point
- Solving convex min-min problems with smoothness and strong convexity in one group of variables and low dimension in the other
- Understanding the acceleration phenomenon via high-resolution differential equations
- Optimal convergence rates for convex distributed optimization in networks
- Complexity of optimizing over the integers
- Approachability, regret and calibration: implications and equivalences
- Duality gap estimates for weak Chebyshev greedy algorithms in Banach spaces
- Recent Theoretical Advances in Non-Convex Optimization
- A Random-Batch Monte Carlo Method for Many-Body Systems with Singular Kernels
- Solving inverse problems using data-driven models
- Continuation methods for approximate large scale object sequencing
- Improved linear embeddings via Lagrange duality
- Dimensionality reduction of SDPs through sketching
- Analysis of biased stochastic gradient descent using sequential semidefinite programs
- Optimization by Gradient Boosting
- Data-driven inverse optimization with imperfect information
- Unifying mirror descent and dual averaging
- An Accelerated Level-Set Method for Inverse Scattering Problems
- Global Linear Convergence of Evolution Strategies on More than Smooth Strongly Convex Functions
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- Random Batch Methods for Classical and Quantum Interacting Particle Systems and Statistical Samplings
- Nudging the particle filter
- Graph-dependent implicit regularisation for distributed stochastic subgradient descent
- Alternating direction method of multipliers for machine learning
- Adaptive Catalyst for Smooth Convex Optimization
- Accelerated proximal envelopes: application to componentwise methods
- On the computational efficiency of catalyst accelerated coordinate descent
- Efficient online linear optimization with approximation algorithms
- Convergence of distributed gradient-tracking-based optimization algorithms with random graphs
- Variable demand and multi-commodity flow in Markovian network equilibrium
- Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
- New Hadamard-type inequalities for \(E\)-convex functions involving generalized fractional integrals
- Data-Driven Mirror Descent with Input-Convex Neural Networks
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
- Numerical methods for the resource allocation problem in a computer network
- Intuitionistic-fuzzy goals in zero-sum multi criteria matrix games
- Accelerated methods for weakly-quasi-convex optimization problems
- On the random batch method for second order interacting particle systems