Convex optimization: algorithms and complexity
Publication: 2809807
zbMATH Open: 1365.90196 · arXiv: 1405.4980 · MaRDI QID: Q2809807
Author: Sébastien Bubeck
Publication date: 30 May 2016
Published in: Foundations and Trends in Machine Learning
Full work available at URL: https://arxiv.org/abs/1405.4980
Mathematics Subject Classification:
- Convex programming (90C25)
- Analysis of algorithms and problem complexity (68Q25)
- Abstract computational complexity for mathematical programming problems (90C60)
Cited in (first 100 items shown):
- Random Batch Methods for Classical and Quantum Interacting Particle Systems and Statistical Samplings
- Nudging the particle filter
- Graph-dependent implicit regularisation for distributed stochastic subgradient descent
- Alternating direction method of multipliers for machine learning
- Adaptive Catalyst for Smooth Convex Optimization
- Accelerated proximal envelopes: application to componentwise methods
- On the computational efficiency of catalyst accelerated coordinate descent
- Efficient online linear optimization with approximation algorithms
- Convergence of distributed gradient-tracking-based optimization algorithms with random graphs
- Variable demand and multi-commodity flow in Markovian network equilibrium
- Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
- New Hadamard-type inequalities for \(E\)-convex functions involving generalized fractional integrals
- Data-Driven Mirror Descent with Input-Convex Neural Networks
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
- Numerical methods for the resource allocation problem in a computer network
- Intuitionistic-fuzzy goals in zero-sum multi criteria matrix games
- Accelerated methods for weakly-quasi-convex optimization problems
- On the random batch method for second order interacting particle systems
- The entropic barrier: exponential families, log-concave geometry, and self-concordance
- Mass-spring-damper networks for distributed optimization in non-Euclidean spaces
- Stochastic saddle-point optimization for the Wasserstein barycenter problem
- Accelerated gradient methods with absolute and relative noise in the gradient
- Large-scale convex optimization. Algorithms & analyses via monotone operators
- Divergences on symmetric cones and medians
- Nonconvex Low-Rank Tensor Completion from Noisy Data
- A dual approach for optimal algorithms in distributed optimization over networks
- How can we identify the sparsity structure pattern of high-dimensional data: an elementary statistical analysis to interpretable machine learning
- Vaidya's method for convex stochastic optimization problems in small dimension
- The computational asymptotics of Gaussian variational inference and the Laplace approximation
- Efficient numerical methods to solve sparse linear equations with application to PageRank
- Convergence of the random batch method for interacting particles with disparate species and weights
- Optimization based methods for partially observed chaotic systems
- Behavior of accelerated gradient methods near critical points of nonconvex functions
- Convergence guarantees for a class of non-convex and non-smooth optimization problems
- Accelerated methods for saddle-point problem
- Differentially private inference via noisy optimization
- First-order and stochastic optimization methods for machine learning
- Average stability is invariant to data preconditioning. Implications to exp-concave empirical risk minimization
- Heteroskedastic PCA: algorithm, optimality, and applications
- Convergence results of a nested decentralized gradient method for non-strongly convex problems
- Generalized Nesterov's accelerated proximal gradient algorithms with convergence rate of order \(o(1/k^2)\)
- A multiplicative weight updates algorithm for packing and covering semi-infinite linear programs
- The common-directions method for regularized empirical risk minimization
- Approximate inference for constructing astronomical catalogs from images
- A finite time analysis of temporal difference learning with linear function approximation
- Exploiting problem structure in optimization under uncertainty via online convex optimization
- Fast Core Pricing for Rich Advertising Auctions
- Efficient, certifiably optimal clustering with applications to latent variable graphical models
- Elliptic quasi-variational inequalities under a smallness assumption: uniqueness, differential stability and optimal control
- Scale-free online learning
- Persuasion in networks: public signals and cores
- Accelerating incremental gradient optimization with curvature information
- Convex optimization with an interpolation-based projection and its application to deep learning
- Replicator dynamics: old and new
- Random batch methods (RBM) for interacting particle systems
- Bregman three-operator splitting methods
- Smoothed Variable Sample-Size Accelerated Proximal Methods for Nonsmooth Stochastic Convex Programs
- A multiplicative weights update algorithm for packing and covering semi-infinite linear programs
- Metamodel construction for sensitivity analysis
- Complexity analysis for optimization methods
- Stochastic matrix-free equilibration
- First-order methods for convex optimization
- Asymptotic theory in network models with covariates and a growing number of node parameters
- Robust and sparse regression in generalized linear model by stochastic optimization
- Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution
- A consensus-based global optimization method for high dimensional machine learning problems
- Polynomial-time algorithms for submodular Laplacian systems
- Convergence rates for optimised adaptive importance samplers
- Regularisation of neural networks by enforcing Lipschitz continuity
- Stochastic mirror descent method for linear ill-posed problems in Banach spaces
- A distributed flexible delay-tolerant proximal gradient algorithm
- A regularization interpretation of the proximal point method for weakly convex functions
- A new look at the Hardy-Littlewood-Pólya inequality of majorization
- Asynchronous schemes for stochastic and misspecified potential games and nonconvex optimization
- Accelerated gradient boosting
- Min-Max-Min Optimization with Smooth and Strongly Convex Objectives
- Convergence rates for deterministic and stochastic subgradient methods without Lipschitz continuity
- Inverse reinforcement learning in contextual MDPs
- A stochastic gradient algorithm with momentum terms for optimal control problems governed by a convection-diffusion equation with random diffusivity
- Natural gradient for combined loss using wavelets
- Bounds for the tracking error of first-order online optimization methods
- A fully polynomial time approximation scheme for the smallest diameter of imprecise points
- Linear convergence of first order methods for non-strongly convex optimization
- Mirror descent algorithms for minimizing interacting free energy
- Proximal gradient methods with adaptive subspace sampling
- Inexact primal-dual gradient projection methods for nonlinear optimization on convex set
- Strong convexity of sandwiched entropies and related optimization problems
- Efficient numerical methods for entropy-linear programming problems
- Bandit online optimization over the permutahedron
- Bias of homotopic gradient descent for the hinge loss
- Log-concave sampling: Metropolis-Hastings algorithms are fast
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- Perturbed iterate analysis for asynchronous stochastic optimization
- From inexact optimization to learning via gradient concentration
- A stochastic subgradient method for distributionally robust non-convex and non-smooth learning
- Robust statistical learning with Lipschitz and convex loss functions
- Exact worst-case convergence rates of the proximal gradient method for composite convex minimization