scientific article

From MaRDI portal

zbMath: 0501.90062
MaRDI QID: Q3967358

Authors: Arkadi Nemirovski, D. B. Yudin

Publication date: 1983


Title: Problem complexity and method efficiency in optimization



Related Items

Universal Conditional Gradient Sliding for Convex Optimization
Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods
Approximate Newton Policy Gradient Algorithms
Efficient second-order optimization with predictions in differential games
Information complexity of mixed-integer convex optimization
A truncated three-term conjugate gradient method with complexity guarantees with applications to nonconvex regression problem
Accelerated gradient methods with absolute and relative noise in the gradient
Practical perspectives on symplectic accelerated optimization
Non-asymptotic analysis and inference for an outlyingness induced winsorized mean
Block Policy Mirror Descent
Learning with risks based on M-location
A modified PRP-type conjugate gradient algorithm with complexity analysis and its application to image restoration problems
Stochastic mirror descent method for linear ill-posed problems in Banach spaces
Stochastic composition optimization of functions without Lipschitz continuous gradient
Faster randomized block sparse Kaczmarz by averaging
Smooth over-parameterized solvers for non-smooth structured optimization
Convergence rates of gradient methods for convex optimization in the space of measures
A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression
Accelerated variance-reduced methods for saddle-point problems
Optimal Methods for Convex Risk-Averse Distributed Optimization
The Frank-Wolfe algorithm: a short introduction
Optimal algorithms for differentially private stochastic monotone variational inequalities and saddle-point problems
No-regret algorithms in on-line learning, games and convex optimization
A unified stochastic approximation framework for learning in games
Runtime Analysis of a Co-Evolutionary Algorithm
Limitations of neural network training due to numerical instability of backpropagation
Optimal Algorithms for Stochastic Complementary Composite Minimization
Conformal mirror descent with logarithmic divergences
Stochastic incremental mirror descent algorithms with Nesterov smoothing
Decentralized saddle-point problems with different constants of strong convexity and strong concavity
Robustifying Markowitz
Learning Stationary Nash Equilibrium Policies in \(n\)-Player Stochastic Games with Independent Chains
Nearly Dimension-Independent Sparse Linear Bandit over Small Action Spaces via Best Subset Selection
Block mirror stochastic gradient method for stochastic optimization
Continuous time learning algorithms in optimization and game theory
Provably efficient reinforcement learning in decentralized general-sum Markov games
Complexity of optimizing over the integers
Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence
Convergence of Random Reshuffling under the Kurdyka–Łojasiewicz Inequality
Nonsmooth optimization by Lie bracket approximations into random directions
Learning Polytopes with Fixed Facet Directions
Dual gradient method for ill-posed problems using multiple repeated measurement data
A stochastic non-monotone DR-submodular maximization problem over a convex set
First-order methods for convex optimization
Data-Driven Mirror Descent with Input-Convex Neural Networks
Estimation under group actions: recovering orbits from invariants
Robust supervised learning with coordinate gradient descent
Local convexity of the TAP free energy and AMP convergence for \(\mathbb{Z}_2\)-synchronization
Entropic Trust Region for Densest Crystallographic Symmetry Group Packings
Robust high-dimensional tuning free multiple testing
Lagrangian and Hamiltonian dynamics for probabilities on the statistical bundle
Detecting identification failure in moment condition models
Mean estimation in high dimension
Learning Lyapunov functions for hybrid systems
On the optimal solution of large eigenpair problems
A survey of information-based complexity
Riemannian game dynamics
Marginally parameterized spatio-temporal models and stepwise maximum likelihood estimation
Stochastic mirror descent dynamics and their convergence in monotone variational inequalities
A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks
OSGA: a fast subgradient algorithm with optimal complexity
A dual method for minimizing a nonsmooth objective over one smooth inequality constraint
On the ergodic convergence rates of a first-order primal-dual algorithm
On the global convergence rate of the gradient descent method for functions with Hölder continuous gradients
Communication complexity of convex optimization
A weighted mirror descent algorithm for nonsmooth convex optimization problem
A simplified view of first order methods for optimization
Sampling from a log-concave distribution with projected Langevin Monte Carlo
Sparse linear models and \(l_1\)-regularized 2SLS with high-dimensional endogenous regressors and instruments
On the worst case performance of the steepest descent algorithm for quadratic functions
Average complexity of divide-and-conquer algorithms
Optimal search algorithm for a minimum of a discrete periodic bimodal function
Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
Existence and computation of short-run equilibria in economic geography
A fast dual proximal gradient algorithm for convex minimization and applications
Convergence analysis of primal-dual based methods for total variation minimization with finite element approximation
On the information-adaptive variants of the ADMM: an iteration complexity perspective
Optimal deterministic algorithm generation
Optimal search algorithm for extrema of a discrete periodic bimodal function
Stochastic mirror descent method for distributed multi-agent optimization
Randomization for continuous problems
Global optimization in clustering using hyperbolic cross points
Sparse non-Gaussian component analysis by semidefinite programming
On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
Accelerated schemes for a class of variational inequalities
Accelerated training of max-margin Markov networks with kernels
The CoMirror algorithm for solving nonsmooth constrained convex problems
Multiobjective \(L_1/H_\infty\) controller design for systems with frequency and time domain constraints
Discussion on: “Multiobjective \(L_1/H_\infty\) controller design for systems with frequency and time domain constraints”
Empirical risk minimization for heavy-tailed losses
First-order methods of smooth convex optimization with inexact oracle
Optimal subgradient algorithms for large-scale convex optimization in simple domains
Stochastic heavy ball
On the computational efficiency of subgradient methods: a case study with Lagrangian bounds
Iterative methods of stochastic approximation for solving non-regular nonlinear operator equations
Descent gradient methods for nonsmooth minimization problems in ill-posed problems
Scale-free online learning
Inexact SA method for constrained stochastic convex SDP and application in Chinese stock market
Distributed constrained optimization via continuous-time mirror design
A sparsity preserving stochastic gradient methods for sparse regression
Accelerated first-order methods for hyperbolic programming
Conditional gradient type methods for composite nonlinear and stochastic optimization
Learning in games with continuous action sets and unknown payoff functions
A conjugate gradient algorithm under Yuan-Wei-Lu line search technique for large-scale minimization optimization models
A first order method for finding minimal norm-like solutions of convex optimization problems
On lower complexity bounds for large-scale smooth convex optimization
On optimality of Krylov's information when solving linear operator equations
Dual subgradient algorithms for large-scale nonsmooth learning problems
A strongly polynomial-time algorithm for the strict homogeneous linear-inequality feasibility problem
Proximal alternating penalty algorithms for nonsmooth constrained convex optimization
Nesterov's smoothing and excessive gap methods for an optimization problem in VLSI placement
A generalized online mirror descent with applications to classification and regression
Conditional gradient algorithms for norm-regularized smooth convex optimization
Universal gradient methods for convex optimization problems
On variance reduction for stochastic smooth convex optimization with multiplicative noise
Further study on the convergence rate of alternating direction method of multipliers with logarithmic-quadratic proximal regularization
Lower bounds for randomized direct search with isotropic sampling
Inductively inferring valid logical models of continuous-state dynamical systems
Saddle point mirror descent algorithm for the robust PageRank problem
Smooth strongly convex interpolation and exact worst-case performance of first-order methods
Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions
The exact information-based complexity of smooth convex minimization
Information-based complexity of linear operator equations
An approach for analyzing the global rate of convergence of quasi-Newton and truncated-Newton methods
Near-optimal stochastic approximation for online principal component estimation
Minimizing finite sums with the stochastic average gradient
Learning by mirror averaging
Solving structured nonsmooth convex optimization with complexity \(\mathcal {O}(\varepsilon ^{-1/2})\)
A continuous-time approach to online optimization
Projection algorithms for linear programming
Lower bounds for the complexity of Monte Carlo function approximation
On the computational complexity of integral equations
Persistence in high-dimensional linear predictor-selection and the virtue of overparametrization
A two-dimensional bisection envelope algorithm for fixed points
Benefit sharing in holding situations
On the optimality of Krylov information
A class of ADMM-based algorithms for three-block separable convex programming
An optimal randomized incremental gradient method
A simple algorithm for a class of nonsmooth convex-concave saddle-point problems
Geometric median and robust estimation in Banach spaces
Stochastic intermediate gradient method for convex problems with stochastic inexact oracle
An optimal algorithm for global optimization and adaptive covering
A new perspective on robust \(M\)-estimation: finite sample theory and applications to dependence-adjusted multiple testing
Method of centers for minimizing generalized eigenvalues
The complexity of dynamic programming
Mirror descent and nonlinear projected subgradient methods for convex optimization
On average case errors in numerical analysis
Asymptotic error for the global maximum of functions in \(s\) dimensions
On the communication complexity of Lipschitzian optimization for the coordinated model of computation
Exploiting special structure in semidefinite programming: a survey of theory and applications
Make \(\ell_1\) regularization effective in training sparse CNN
Complexity analysis of logarithmic barrier decomposition methods for semi-infinite linear programming
Functional aggregation for nonparametric regression
Accelerated gradient methods for nonconvex nonlinear and stochastic programming
Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method
Survey Descent: A Multipoint Generalization of Gradient Descent for Nonsmooth Optimization
Finding zeros of Hölder metrically subregular mappings via globally convergent Levenberg–Marquardt methods
Projection-free accelerated method for convex optimization
Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals
Stability and generalization of graph convolutional networks in eigen-domains
Stochastic Optimization for Dynamic Pricing
Generalization error rates in kernel regression: the crossover from the noiseless to noisy regime
On the Properties of Convex Functions over Open Sets
Zeroth-Order Stochastic Compositional Algorithms for Risk-Aware Learning
Improving “Fast Iterative Shrinkage-Thresholding Algorithm”: Faster, Smarter, and Greedier
Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
Discrete Choice Prox-Functions on the Simplex
Accelerated Extra-Gradient Descent: A Novel Accelerated First-Order Method
An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
Constrained, Global Optimization of Unknown Functions with Lipschitz Continuous Gradients
Primal–dual accelerated gradient methods with small-dimensional relaxation oracle
Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization
The Stochastic Auxiliary Problem Principle in Banach Spaces: Measurability and Convergence
Lower bounds for non-convex stochastic optimization
Unifying mirror descent and dual averaging
Non-asymptotic superlinear convergence of standard quasi-Newton methods
Distributed mirror descent algorithm over unbalanced digraphs based on gradient weighting technique
Improved algorithms for bandit with graph feedback via regret decomposition
Gradient-free methods for non-smooth convex stochastic optimization with heavy-tailed noise on convex compact
A hierarchy of spectral relaxations for polynomial optimization
Unified representation of the classical ellipsoid method
Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization
A Stochastic Approximation Algorithm for Stochastic Semidefinite Programming
On the rates of convergence of parallelized averaged stochastic gradient algorithms
First-Order Methods for Nonconvex Quadratic Minimization
MultiComposite Nonconvex Optimization for Training Deep Neural Networks
Technical Note—Nonstationary Stochastic Optimization Under \(L_{p,q}\)-Variation Measures
Online First-Order Framework for Robust Convex Optimization
Analysis of Online Composite Mirror Descent Algorithm
Contracting Proximal Methods for Smooth Convex Optimization
Interior-Point-Based Online Stochastic Bin Packing
Stochastic Conditional Gradient++: (Non)Convex Minimization and Continuous Submodular Maximization
Efficient Search of First-Order Nash Equilibria in Nonconvex-Concave Smooth Min-Max Problems
A survey of computational complexity results in systems and control
Centerpoints: A Link between Optimization and Convex Geometry
A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
The omnipresence of Lagrange
Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
Iteration-complexity of first-order augmented Lagrangian methods for convex programming
The Grace of Quadratic Norms: Some Examples
Scalable estimation strategies based on stochastic approximations: classical results and new insights
Subdeterminants and Concave Integer Quadratic Programming
An entropic Landweber method for linear ill-posed problems
On the Convergence Time of a Natural Dynamics for Linear Programming
Essentials of numerical nonsmooth optimization
Simpler and Better Algorithms for Minimum-Norm Load Balancing
Discussion on: “Why is resorting to fate wise? A critical look at randomized algorithms in systems and control”
Tensor Methods for Minimizing Convex Functions with Hölder Continuous Higher-Order Derivatives
Unique End of Potential Line
Modern regularization methods for inverse problems
Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems
Hessian Barrier Algorithms for Linearly Constrained Optimization Problems
Making the Last Iterate of SGD Information Theoretically Optimal
Adaptive FISTA for Nonconvex Optimization
Accelerate stochastic subgradient method by leveraging local growth condition
Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
Spectral Computed Tomography with Linearization and Preconditioning
Robust Optimization for Electricity Generation
Subsampling Algorithms for Semidefinite Programming
Solving variational inequalities with Stochastic Mirror-Prox algorithm
Unified Acceleration of High-Order Algorithms under General Hölder Continuity
Image Restoration with Mixed or Unknown Noises
Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction
The Supporting Halfspace–Quadratic Programming Strategy for the Dual of the Best Approximation Problem
On the Convergence of Mirror Descent beyond Stochastic Convex Programming
Asymptotic Results of Stochastic Decomposition for Two-Stage Stochastic Quadratic Programming
How Many Steps Still Left to \(x^*\)?
Generalized Momentum-Based Methods: A Hamiltonian Perspective
Adaptive Hamiltonian Variational Integrators and Applications to Symplectic Accelerated Optimization
A dual approach for optimal algorithms in distributed optimization over networks
On Modification of an Adaptive Stochastic Mirror Descent Algorithm for Convex Optimization Problems with Functional Constraints
Essentials of numerical nonsmooth optimization
A Variational Formulation of Accelerated Optimization on Riemannian Manifolds
Universal intermediate gradient method for convex problems with inexact oracle
Penalty and Augmented Lagrangian Methods for Constrained DC Programming
Higher-Order Methods for Convex-Concave Min-Max Optimization and Monotone Variational Inequalities
A Global Dual Error Bound and Its Application to the Analysis of Linearly Constrained Nonconvex Optimization
Proximal Gradient Methods for Machine Learning and Imaging
Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization
Why does Monte Carlo fail to work properly in high-dimensional optimization problems?
On the Complexity of Random Satisfiability Problems with Planted Solutions
Proximal Splitting Methods in Signal Processing
Algorithmic analysis of a basic evolutionary algorithm for continuous optimization
Lower Bounds for Parallel and Randomized Convex Optimization
Stochastic intermediate gradient method for convex optimization problems
On Full Jacobian Decomposition of the Augmented Lagrangian Method for Separable Convex Programming
Cutting Plane Methods Based on the Analytic Barrier for Minimization of a Convex Function Subject to Box-Constraints
MAGMA: Multilevel Accelerated Gradient Mirror Descent Algorithm for Large-Scale Convex Composite Minimization
GMRES-Accelerated ADMM for Quadratic Objectives
A Proximal Strictly Contractive Peaceman–Rachford Splitting Method for Convex Programming with Applications to Imaging
Hedge algorithm and dual averaging schemes
Inertial Game Dynamics and Applications to Constrained Optimization
Random gradient-free minimization of convex functions
Asymptotic optimality in stochastic optimization
Accelerated Methods for NonConvex Optimization
Statistical Query Algorithms for Mean Vector Estimation and Stochastic Convex Optimization
An optimal subgradient algorithm for large-scale bound-constrained convex optimization
Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization
On linear and super-linear convergence of natural policy gradient algorithm
A multiplicative weight updates algorithm for packing and covering semi-infinite linear programs
An optimal subgradient algorithm with subspace search for costly convex optimization problems
Fast bundle-level methods for unconstrained and ball-constrained convex optimization
Convergence of the exponentiated gradient method with Armijo line search
Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
Optimal non-asymptotic analysis of the Ruppert-Polyak averaging stochastic algorithm
On the steplength selection in gradient methods for unconstrained optimization
On the complexity of quasiconvex integer minimization problem
Smoothed quantile regression with large-scale inference
Perturbed Fenchel duality and first-order methods
Policy mirror descent for reinforcement learning: linear convergence, new sampling complexity, and generalized problem classes
Sample average approximations of strongly convex stochastic programs in Hilbert spaces
Revisiting the approximate Carathéodory problem via the Frank-Wolfe algorithm
Variance reduction for root-finding problems
On the Convergence of Gradient-Like Flows with Noisy Gradient Input
Relatively Smooth Convex Optimization by First-Order Methods, and Applications
Combinatorial optimization. Abstracts from the workshop held November 7–13, 2021 (hybrid meeting)
Complexity guarantees for an implicit smoothing-enabled method for stochastic MPECs
Penalty methods with stochastic approximation for stochastic nonlinear programming
Accelerated Uzawa methods for convex optimization
Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
A simple nearly optimal restart scheme for speeding up first-order methods
An inexact Riemannian proximal gradient method
Confidence level solutions for stochastic programming
An incremental mirror descent subgradient algorithm with random sweeping and proximal step
Re-examination of Bregman functions and new properties of their divergences
Stochastic Model-Based Minimization of Weakly Convex Functions
An accelerated primal-dual iterative scheme for the \(L^2\)-TV regularized model of linear inverse problems
Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
The Approximate Duality Gap Technique: A Unified Theory of First-Order Methods
A universal modification of the linear coupling method
String-averaging incremental stochastic subgradient algorithms
High dimensional numerical problems
RSG: Beating Subgradient Method without Smoothness and Strong Convexity
Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
A probabilistic analytic center cutting plane method for feasibility of uncertain LMIs
Complexity and algorithms for nonlinear optimization problems
Stochastic Subgradient Estimation Training for Support Vector Machines
Robust modifications of U-statistics and applications to covariance estimation problems
Near optimal solutions to least-squares problems with stochastic uncertainty
Learning Theory of Randomized Sparse Kaczmarz Method
Random Gradient Extrapolation for Distributed and Stochastic Optimization
Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants
Image Labeling Based on Graphical Models Using Wasserstein Messages and Geometric Assignment
Distributed statistical estimation and rates of convergence in normal approximation
Bundle-level type methods uniformly optimal for smooth and nonsmooth convex optimization
A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds
Generalized alternating direction method of multipliers: new theoretical insights and applications
Stochastic subgradient method converges on tame functions
User-friendly covariance estimation for heavy-tailed distributions
A Decomposition Algorithm for Nested Resource Allocation Problems
Analogues of Switching Subgradient Schemes for Relatively Lipschitz-Continuous Convex Programming Problems
Communication-efficient algorithms for decentralized and stochastic optimization
A remark on accelerated block coordinate descent for computing the proximity operators of a sum of convex functions
The augmented Lagrangian method with full Jacobian decomposition and logarithmic-quadratic proximal regularization for multiple-block separable convex programming
Conditional Gradient Sliding for Convex Optimization
Bregman proximal mappings and Bregman-Moreau envelopes under relative prox-regularity
Lower error bounds for the stochastic gradient descent optimization algorithm: sharp convergence rates for slowly and fast decaying learning rates
A comparison of numerical methods for solving multibody dynamics problems with frictional contact modeled via differential variational inequalities
Accelerated first-order methods for large-scale convex optimization: nearly optimal complexity under strong convexity
Optimal subgradient methods: computational properties for large-scale linear inverse problems
Generalized uniformly optimal methods for nonlinear programming
Robust and Scalable Bayes via a Median of Subset Posterior Measures
On linear convergence of non-Euclidean gradient methods without strong convexity and Lipschitz gradient continuity
The breakdown point of the median of means tournament
Mean estimation and regression under heavy-tailed distributions: A survey
Oracle complexity of second-order methods for smooth convex optimization
A decomposition method for MINLPs with Lipschitz continuous nonlinearities
Efficiency of minimizing compositions of convex functions and smooth maps
Distributed stochastic subgradient projection algorithms based on weight-balancing over time-varying directed graphs
Near-optimal mean estimators with respect to general norms
Random minibatch subgradient algorithms for convex problems with functional constraints
Analysis of singular value thresholding algorithm for matrix completion
On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators
Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
Quasi-monotone subgradient methods for nonsmooth convex minimization
An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization
Adaptive restart for accelerated gradient schemes
Robust sub-Gaussian estimation of a mean vector in nearly linear time
Complexity of nonlinear two-point boundary-value problems
A unified convergence rate analysis of the accelerated smoothed gap reduction algorithm
Mathematical foundations of machine learning. Abstracts from the workshop held March 21–27, 2021 (hybrid meeting)
A recursive algorithm for the infinity-norm fixed point problem
The cost of not knowing enough: mixed-integer optimization with implicit Lipschitz nonlinearities
Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference
Oracle complexity separation in convex optimization
Concentration of the collision estimator
Accelerated gradient sliding for structured convex optimization
Interior quasi-subgradient method with non-Euclidean distances for constrained quasi-convex optimization problems in Hilbert spaces
Optimal complexity and certification of Bregman first-order methods
Riemannian proximal gradient methods
Sparse optimization on measures with over-parameterized gradient descent
On lower iteration complexity bounds for the convex concave saddle point problems
A modification of the inscribed ellipsoid method
A consumer-theoretic characterization of Fisher market equilibria
The robust nearest shrunken centroids classifier for high-dimensional heavy-tailed data
Optimal robust mean and location estimation via convex programs with respect to any pseudo-norms
Generalized mirror prox algorithm for monotone variational inequalities: Universality and inexact oracle
Improved exploitation of higher order smoothness in derivative-free optimization
Stochastic saddle-point optimization for the Wasserstein barycenter problem
A unitary distributed subgradient method for multi-agent optimization with different coupling sources
Robust statistical learning with Lipschitz and convex loss functions
New variants of bundle methods
The multiproximal linearization method for convex composite problems
A polynomial algorithm for minimizing discrete convic functions in fixed dimension
Linear convergence of cyclic SAGA
The projection technique for two open problems of unconstrained optimization problems
Unique end of potential line
Robust machine learning by median-of-means: theory and practice
Mean estimation with sub-Gaussian rates in polynomial time
Bridging the gap between constant step size stochastic gradient descent and Markov chains
Robust covariance estimation under \(L_4\)-\(L_2\) norm equivalence
Aggregation of estimators and stochastic optimization
Robust classification via MOM minimization
Fine tuning Nesterov's steepest descent algorithm for differentiable convex programming
Mirror descent algorithms for minimizing interacting free energy
Lower bounds for finding stationary points I
Efficient first-order methods for convex minimization: a constructive approach
\(H_{\infty}\) identification of “soft” uncertainty models
Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
Lower bounds for finding stationary points II: first-order methods
An adaptive primal-dual framework for nonsmooth convex minimization
Why random reshuffling beats stochastic gradient descent
Global optimization with space-filling curves
A gradient descent perspective on Sinkhorn
Composite convex optimization with global and local inexact oracles
On the convergence time of a natural dynamics for linear programming
Sub-Gaussian estimators of the mean of a random matrix with heavy-tailed entries
Bundle methods for sum-functions with “easy” components: applications to multicommodity network design
Performance of first-order methods for smooth convex minimization: a novel approach
Rounding on the standard simplex: regular grids for global optimization
A minmax regret linear regression model under uncertainty in the dependent variable
Solvable integration problems and optimal sample size selection
Learning from MOM's principles: Le Cam's approach
Convergence of stochastic proximal gradient algorithm
Point process estimation with Mirror Prox algorithms
Acceleration techniques for level bundle methods in weakly smooth convex constrained optimization
Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
Natural gradient for combined loss using wavelets
A generalized worst-case complexity analysis for non-monotone line searches
Some worst-case datasets of deterministic first-order methods for solving binary logistic regression
An accelerated directional derivative method for smooth stochastic convex optimization
Bounds for the tracking error of first-order online optimization methods
Stochastic approximation: from statistical origin to big-data, multidisciplinary applications
Is there an analog of Nesterov acceleration for gradient-based MCMC?
Nearly optimal robust mean estimation via empirical characteristic function
A MOM-based ensemble method for robustness, subsampling and hyperparameter tuning
Iteratively reweighted \(\ell_1\)-penalized robust regression
Accelerated Bregman proximal gradient methods for relatively smooth convex optimization
Fastest rates for stochastic mirror descent methods
Concentration bounds for temporal difference learning with linear function approximation: the case of batch data and uniform sampling
On the oracle complexity of smooth strongly convex minimization
Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
Robust \(k\)-means clustering for distributions with two moments
Hybrid control for tracking environmental level sets by nonholonomic robots in maze-like environments
A stochastic approximation method for approximating the efficient frontier of chance-constrained nonlinear programs
Randomized block Krylov methods for approximating extreme eigenvalues
Inverse reinforcement learning in contextual MDPs
K-bMOM: A robust Lloyd-type clustering algorithm based on bootstrap median-of-means
Iterative ensemble Kalman methods: a unified perspective with some new variants
A multi-scale method for distributed convex optimization with constraints
Finite sample properties of parametric MMD estimation: robustness to misspecification and dependence
Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning
Robust and efficient mean estimation: an approach based on the properties of self-normalized sums
On stochastic mirror descent with interacting particles: convergence properties and variance reduction
On Monte-Carlo methods in convex stochastic optimization
Limited-memory common-directions method for large-scale optimization: convergence, parallelization, and distributed optimization
Understanding the acceleration phenomenon via high-resolution differential equations
Curiosities and counterexamples in smooth convex optimization
A stochastic Nesterov's smoothing accelerated method for general nonsmooth constrained stochastic composite convex optimization
On the convergence analysis of aggregated heavy-ball method
Noisy zeroth-order optimization for non-smooth saddle point problems
A sieve stochastic gradient descent estimator for online nonparametric regression in Sobolev ellipsoids
New challenges in covariance estimation: multiple structures and coarse quantization
Distribution-free robust linear regression
Convex optimization with inexact gradients in Hilbert space and applications to elliptic inverse problems
Machine learning algorithms of relaxation subgradient method with space extension
A hybrid stochastic optimization framework for composite nonconvex optimization