scientific article
From MaRDI portal
Publication:2896156
zbMath: 1242.62011 · MaRDI QID: Q2896156
Publication date: 13 July 2012
Full work available at URL: http://www.jmlr.org/papers/v11/xiao10a.html
Title: Dual averaging methods for regularized stochastic learning and online optimization
Classification: Learning and adaptive systems in artificial intelligence (68T05) · Stochastic programming (90C15) · Statistical decision theory (62C99)
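For context, the publication's core contribution is the regularized dual averaging (RDA) method. Below is a minimal Python sketch of the closed-form ℓ1-RDA update, assuming the squared-norm prox term h(w) = (1/2)||w||^2 with parameter β_t = γ√t as in the paper; the toy data stream, loss, and all variable names are illustrative assumptions, not the paper's experiments.

import numpy as np

def l1_rda_step(g_bar, t, lam, gamma):
    # Closed-form minimizer of <g_bar, w> + lam*||w||_1 + (gamma/(2*sqrt(t)))*||w||_2^2.
    # Any coordinate whose averaged (sub)gradient magnitude is <= lam is set
    # exactly to zero, which is how RDA produces sparse iterates online.
    shrunk = np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)
    return -(np.sqrt(t) / gamma) * shrunk

# Toy online loop (hypothetical logistic-loss stream, for illustration only).
rng = np.random.default_rng(0)
d, lam, gamma = 20, 0.05, 5.0
w = np.zeros(d)
g_bar = np.zeros(d)
for t in range(1, 1001):
    x = rng.normal(size=d)
    y = 1.0 if x[0] > 0 else -1.0             # label depends on one feature
    g = -y * x / (1.0 + np.exp(y * (w @ x)))  # logistic-loss gradient at w
    g_bar += (g - g_bar) / t                  # running average of all gradients
    w = l1_rda_step(g_bar, t, lam, gamma)     # RDA update from the average

Unlike plain stochastic gradient descent, which rarely yields exact zeros, this update keeps every coordinate of w at exactly zero while its averaged gradient stays below the threshold lam.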
Related Items (56)
A general framework of online updating variable selection for generalized linear models with streaming datasets ⋮ Stochastic forward-backward splitting for monotone inclusions ⋮ Stochastic mirror descent dynamics and their convergence in monotone variational inequalities ⋮ A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks ⋮ Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent ⋮ A family of second-order methods for convex \(\ell _1\)-regularized optimization ⋮ Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization ⋮ One-stage tree: end-to-end tree builder and pruner ⋮ Proximal average approximated incremental gradient descent for composite penalty regularized empirical risk minimization ⋮ Asymptotic optimality in stochastic optimization ⋮ Large-scale multivariate sparse regression with applications to UK Biobank ⋮ Stochastic mirror descent method for distributed multi-agent optimization ⋮ Statistical inference for model parameters in stochastic gradient descent ⋮ Algorithms for stochastic optimization with function or expectation constraints ⋮ A random block-coordinate Douglas-Rachford splitting method with low computational complexity for binary logistic regression ⋮ An indefinite proximal subgradient-based algorithm for nonsmooth composite optimization ⋮ Distributed one-pass online AUC maximization ⋮ Feature-aware regularization for sparse online learning ⋮ Online Estimation for Functional Data ⋮ No-regret algorithms in on-line learning, games and convex optimization ⋮ No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization ⋮ A stochastic variational framework for fitting and diagnosing generalized linear mixed models ⋮ Regularized quasi-monotone method for stochastic optimization ⋮ Simple and fast algorithm for binary integer and online linear programming ⋮ Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning ⋮ Group online adaptive learning ⋮ Scale-free online learning ⋮ An incremental mirror descent subgradient algorithm with random sweeping and proximal step ⋮ Distributed subgradient method for multi-agent optimization with quantized communication ⋮ A sparsity preserving stochastic gradient methods for sparse regression ⋮ Learning in games with continuous action sets and unknown payoff functions ⋮ Accelerated dual-averaging primal–dual method for composite convex minimization ⋮ A generalized online mirror descent with applications to classification and regression ⋮ On variance reduction for stochastic smooth convex optimization with multiplicative noise ⋮ Convergence of distributed gradient-tracking-based optimization algorithms with random graphs ⋮ Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods ⋮ Minimizing finite sums with the stochastic average gradient ⋮ A Tight Bound of Hard Thresholding ⋮ Incrementally updated gradient methods for constrained and regularized optimization ⋮ Gradient-free method for nonsmooth distributed optimization ⋮ Convergence of stochastic proximal gradient algorithm ⋮ Sample size selection in optimization methods for machine learning ⋮ Randomized smoothing variance reduction method for large-scale non-smooth convex optimization ⋮ Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization ⋮ Adaptive sequential machine learning ⋮ Robust and sparse regression in generalized linear model by stochastic optimization ⋮ A Single Timescale Stochastic Approximation Method for Nested Stochastic Optimization ⋮ Scale-Free Algorithms for Online Linear Optimization ⋮ Accelerate stochastic subgradient method by leveraging local growth condition ⋮ Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization ⋮ Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression ⋮ Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent ⋮ Stochastic primal dual fixed point method for composite optimization ⋮ Make \(\ell_1\) regularization effective in training sparse CNN ⋮ On the Convergence of Mirror Descent beyond Stochastic Convex Programming ⋮ Online learning over a decentralized network through ADMM
This page was built for publication: Dual averaging methods for regularized stochastic learning and online optimization