Convex optimization: algorithms and complexity
From MaRDI portal
Cited in
(only the first 100 citing items are shown)
- Resolving the mixing time of the Langevin algorithm to its stationary distribution for log-concave sampling
- Elliptic quasi-variational inequalities under a smallness assumption: uniqueness, differential stability and optimal control
- Optimal convergence rate for mirror descent methods with special time-varying step sizes rules
- A unified analysis of stochastic gradient‐free Frank–Wolfe methods
- Importance sampling-based gradient method for dimension reduction in Poisson log-normal model
- An Accelerated Level-Set Method for Inverse Scattering Problems
- Optimal transport map estimation in general function spaces
- Scale-free online learning
- Global Linear Convergence of Evolution Strategies on More than Smooth Strongly Convex Functions
- Unifying mirror descent and dual averaging
- Stabilizing sharpness-aware minimization through a simple renormalization strategy
- Query lower bounds for log-concave sampling
- No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- Metric extrapolation in the Wasserstein space
- Accelerating incremental gradient optimization with curvature information
- Convex optimization with an interpolation-based projection and its application to deep learning
- Nudging the particle filter
- Composite optimization with coupling constraints via dual proximal gradient method with applications to asynchronous networks
- Random Batch Methods for Classical and Quantum Interacting Particle Systems and Statistical Samplings
- Persuasion in networks: public signals and cores
- Insights into kernel PCA with application to multivariate extremes
- Distributed Learning with Sparse Communications by Identification
- Symmetric midpoint-type inequalities for (α, m)-convex functions with applications
- Replicator dynamics: old and new
- Random batch methods (RBM) for interacting particle systems
- Tutorial on Amortized Optimization
- Time‐varying β‐model for dynamic directed networks
- Graph-dependent implicit regularisation for distributed stochastic subgradient descent
- Acceleration methods for fixed-point iterations
- CV@R-penalised portfolio optimisation with biased stochastic mirror descent
- Convergence rate of projected subgradient method with time-varying step-sizes
- Robust Regression with Covariate Filtering: Heavy Tails and Adversarial Contamination
- Asynchronous fully-decentralized SGD in the cluster-based model
- Bregman three-operator splitting methods
- Hessian averaging in stochastic Newton methods achieves superlinear convergence
- A guide to stochastic optimisation for large-scale inverse problems
- A stochastic-gradient-based interior-point algorithm for solving smooth bound-constrained optimization problems
- How to trap a gradient flow
- A local nearly linearly convergent first-order method for nonsmooth functions with quadratic growth
- Smoothed Variable Sample-Size Accelerated Proximal Methods for Nonsmooth Stochastic Convex Programs
- A multiplicative weights update algorithm for packing and covering semi-infinite linear programs
- Alternating direction method of multipliers for machine learning
- Fisher information lower bounds for sampling
- On the complexity of finding stationary points of smooth functions in one dimension
- Stochastic matrix-free equilibration
- Accelerated proximal envelopes: application to componentwise methods
- On the computational efficiency of catalyst accelerated coordinate descent
- A random batch method for efficient ensemble forecasts of multiscale turbulent systems
- The rate of convergence of Bregman proximal methods: local geometry versus regularity versus sharpness
- Nonconvex stochastic Bregman proximal gradient method with application to deep learning
- A decentralized Nesterov gradient method for stochastic optimization over unbalanced directed networks
- Metamodel construction for sensitivity analysis
- On the nonconvexity of push-forward constraints and its consequences in machine learning
- Complexity analysis for optimization methods
- Efficient projection-free online convex optimization using stochastic gradients
- Robustifying Markowitz
- Learning Stationary Nash Equilibrium Policies in \(n\)-Player Stochastic Games with Independent Chains
- Error analysis of numerical methods for optimization problems
- Adaptive Catalyst for Smooth Convex Optimization
- scientific article, zbMATH DE number 7306906 (no title available)
- Efficient search of first-order Nash equilibria in nonconvex-concave smooth min-max problems
- Efficient online linear optimization with approximation algorithms
- Robust and sparse regression in generalized linear model by stochastic optimization
- First-order methods for convex optimization
- Asymptotic theory in network models with covariates and a growing number of node parameters
- Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution
- Hypodifferentials of nonsmooth convex functions and their applications to nonsmooth convex optimization
- Variable demand and multi-commodity flow in Markovian network equilibrium
- Convergence of distributed gradient-tracking-based optimization algorithms with random graphs
- scientific article, zbMATH DE number 7415097 (no title available)
- On the privacy of noisy stochastic gradient descent for convex optimization
- Semi-discrete optimal transport: hardness, regularization and numerical solution
- Polynomial-time algorithms for submodular Laplacian systems
- Stochastic projective splitting
- A consensus-based global optimization method for high dimensional machine learning problems
- On maximum a posteriori estimation with Plug \& Play priors and stochastic gradient descent
- Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
- DIMIX: Diminishing Mixing for Sloppy Agents
- Convergence rates for optimised adaptive importance samplers
- On the convergence of exact distributed generalisation and acceleration algorithm for convex optimisation
- Regularisation of neural networks by enforcing Lipschitz continuity
- New Hadamard-type inequalities for E-convex functions involving generalized fractional integrals
- Non-ergodic linear convergence property of the delayed gradient descent under the strongly convexity and the Polyak-Łojasiewicz condition
- A new generalization of q-Hermite-Hadamard type integral inequalities for p, (p-s) and modified (p-s)-convex functions
- A regularization interpretation of the proximal point method for weakly convex functions
- Adaptive constraint satisfaction for Markov decision process congestion games: application to transportation networks
- Stochastic mirror descent method for linear ill-posed problems in Banach spaces
- A distributed flexible delay-tolerant proximal gradient algorithm
- A new look at the Hardy-Littlewood-Pólya inequality of majorization
- Data-Driven Mirror Descent with Input-Convex Neural Networks
- Grundlagen der Mathematischen Optimierung [Foundations of Mathematical Optimization]
- Numerical methods for the resource allocation problem in a computer network
- Stochastic mirror descent for convex optimization with consensus constraints
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
- Intuitionistic-fuzzy goals in zero-sum multi criteria matrix games
- Asynchronous schemes for stochastic and misspecified potential games and nonconvex optimization
- Nesterov's Method for Convex Optimization
- Likelihood landscape and maximum likelihood estimation for the discrete orbit recovery model
- RedEx: beyond fixed representation methods via convex optimization
This page was built for publication: Convex optimization: algorithms and complexity
(MaRDI item Q2809807)