Publication: Dual averaging methods for regularized stochastic learning and online optimization (Q2896156)

zbMath: 1242.62011
MaRDI QID: Q2896156

Author: Lin Xiao

Publication date: 13 July 2012

Full work available at URL: http://www.jmlr.org/papers/v11/xiao10a.html
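For context, this record indexes Lin Xiao's JMLR 2010 paper on regularized dual averaging (RDA) for stochastic learning and online optimization. As a rough illustration of the closed-form l1-RDA update analyzed there, below is a minimal Python sketch; the function name l1_rda, the grad_fn callback, and the default parameter values are illustrative assumptions, not code from the paper.

```python
import numpy as np

def l1_rda(grad_fn, x0, lam=0.01, gamma=1.0, n_iters=1000):
    # Minimal sketch of l1-regularized dual averaging (RDA), following
    # the closed-form update in Xiao (JMLR 2010). grad_fn(x, t) should
    # return a (sub)gradient of the loss at x for step t; lam is the l1
    # weight; beta_t = gamma * sqrt(t) scales the auxiliary strongly
    # convex term. Names and defaults here are illustrative assumptions.
    x = np.array(x0, dtype=float)
    g_bar = np.zeros_like(x)               # running average of subgradients
    for t in range(1, n_iters + 1):
        g = grad_fn(x, t)
        g_bar = ((t - 1) * g_bar + g) / t  # \bar{g}_t
        # Entrywise soft-thresholding solves the RDA minimization step:
        # coordinates with |g_bar_i| <= lam are set exactly to zero.
        shrunk = np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)
        x = -(np.sqrt(t) / gamma) * shrunk
    return x
```

With lam = 0 this reduces to plain dual averaging with beta_t = gamma * sqrt(t); a larger lam drives more coordinates of the iterate exactly to zero, the sparsity-inducing behavior the paper emphasizes.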


68T05: Learning and adaptive systems in artificial intelligence

90C15: Stochastic programming

62C99: Statistical decision theory


Related Items

An incremental mirror descent subgradient algorithm with random sweeping and proximal step
Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
A Tight Bound of Hard Thresholding
Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
Accelerated dual-averaging primal–dual method for composite convex minimization
Adaptive sequential machine learning
A Single Timescale Stochastic Approximation Method for Nested Stochastic Optimization
Accelerate stochastic subgradient method by leveraging local growth condition
Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
On the Convergence of Mirror Descent beyond Stochastic Convex Programming
A general framework of online updating variable selection for generalized linear models with streaming datasets
An indefinite proximal subgradient-based algorithm for nonsmooth composite optimization
Distributed one-pass online AUC maximization
Online Estimation for Functional Data
No-regret algorithms in on-line learning, games and convex optimization
No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
Regularized quasi-monotone method for stochastic optimization
Simple and fast algorithm for binary integer and online linear programming
Online learning over a decentralized network through ADMM
Stochastic forward-backward splitting for monotone inclusions
A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks
A family of second-order methods for convex \(\ell _1\)-regularized optimization
A sparsity preserving stochastic gradient methods for sparse regression
A generalized online mirror descent with applications to classification and regression
Minimizing finite sums with the stochastic average gradient
Sample size selection in optimization methods for machine learning
Stochastic primal dual fixed point method for composite optimization
Make \(\ell_1\) regularization effective in training sparse CNN
Feature-aware regularization for sparse online learning
A stochastic variational framework for fitting and diagnosing generalized linear mixed models
Stochastic mirror descent dynamics and their convergence in monotone variational inequalities
Stochastic mirror descent method for distributed multi-agent optimization
Group online adaptive learning
Scale-free online learning
Learning in games with continuous action sets and unknown payoff functions
On variance reduction for stochastic smooth convex optimization with multiplicative noise
Gradient-free method for nonsmooth distributed optimization
Convergence of stochastic proximal gradient algorithm
Randomized smoothing variance reduction method for large-scale non-smooth convex optimization
Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization
One-stage tree: end-to-end tree builder and pruner
Large-scale multivariate sparse regression with applications to UK Biobank
Statistical inference for model parameters in stochastic gradient descent
Algorithms for stochastic optimization with function or expectation constraints
Convergence of distributed gradient-tracking-based optimization algorithms with random graphs
Incrementally updated gradient methods for constrained and regularized optimization
Robust and sparse regression in generalized linear model by stochastic optimization
Proximal average approximated incremental gradient descent for composite penalty regularized empirical risk minimization
A random block-coordinate Douglas-Rachford splitting method with low computational complexity for binary logistic regression
Asymptotic optimality in stochastic optimization
Scale-Free Algorithms for Online Linear Optimization
Distributed subgradient method for multi-agent optimization with quantized communication


Uses Software