scientific article

From MaRDI portal

zbMath: 1242.62011
MaRDI QID: Q2896156

Author: Lin Xiao

Publication date: 13 July 2012

Full work available at URL: http://www.jmlr.org/papers/v11/xiao10a.html

Title: Dual Averaging Methods for Regularized Stochastic Learning and Online Optimization



Related Items (59)

A general framework of online updating variable selection for generalized linear models with streaming datasets
Stochastic forward-backward splitting for monotone inclusions
Stochastic mirror descent dynamics and their convergence in monotone variational inequalities
A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks
Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
A family of second-order methods for convex \(\ell _1\)-regularized optimization
Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization
One-stage tree: end-to-end tree builder and pruner
Proximal average approximated incremental gradient descent for composite penalty regularized empirical risk minimization
Unnamed Item
Unnamed Item
Asymptotic optimality in stochastic optimization
Large-scale multivariate sparse regression with applications to UK Biobank
Stochastic mirror descent method for distributed multi-agent optimization
Statistical inference for model parameters in stochastic gradient descent
Algorithms for stochastic optimization with function or expectation constraints
A random block-coordinate Douglas-Rachford splitting method with low computational complexity for binary logistic regression
An indefinite proximal subgradient-based algorithm for nonsmooth composite optimization
Distributed one-pass online AUC maximization
Feature-aware regularization for sparse online learning
Online Estimation for Functional Data
No-regret algorithms in on-line learning, games and convex optimization
No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
A stochastic variational framework for fitting and diagnosing generalized linear mixed models
Regularized quasi-monotone method for stochastic optimization
Simple and fast algorithm for binary integer and online linear programming
Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
Group online adaptive learning
Scale-free online learning
An incremental mirror descent subgradient algorithm with random sweeping and proximal step
Distributed subgradient method for multi-agent optimization with quantized communication
A sparsity preserving stochastic gradient methods for sparse regression
Learning in games with continuous action sets and unknown payoff functions
Accelerated dual-averaging primal–dual method for composite convex minimization
A generalized online mirror descent with applications to classification and regression
On variance reduction for stochastic smooth convex optimization with multiplicative noise
Convergence of distributed gradient-tracking-based optimization algorithms with random graphs
Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
Minimizing finite sums with the stochastic average gradient
A Tight Bound of Hard Thresholding
Incrementally updated gradient methods for constrained and regularized optimization
Gradient-free method for nonsmooth distributed optimization
Convergence of stochastic proximal gradient algorithm
Sample size selection in optimization methods for machine learning
Randomized smoothing variance reduction method for large-scale non-smooth convex optimization
Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
Adaptive sequential machine learning
Robust and sparse regression in generalized linear model by stochastic optimization
A Single Timescale Stochastic Approximation Method for Nested Stochastic Optimization
Scale-Free Algorithms for Online Linear Optimization
Accelerate stochastic subgradient method by leveraging local growth condition
Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
Stochastic primal dual fixed point method for composite optimization
Make \(\ell_1\) regularization effective in training sparse CNN
Unnamed Item
On the Convergence of Mirror Descent beyond Stochastic Convex Programming
Online learning over a decentralized network through ADMM





This page was built for publication: Dual Averaging Methods for Regularized Stochastic Learning and Online Optimization