The following pages link to (Q2896156):
Displaying 50 items.
- Online learning over a decentralized network through ADMM (Q259131)
- Stochastic forward-backward splitting for monotone inclusions (Q289110)
- A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks (Q301668)
- A family of second-order methods for convex \(\ell _1\)-regularized optimization (Q312690)
- A sparsity preserving stochastic gradient methods for sparse regression (Q457215)
- A generalized online mirror descent with applications to classification and regression (Q493737)
- Minimizing finite sums with the stochastic average gradient (Q517295)
- Sample size selection in optimization methods for machine learning (Q715253)
- Stochastic primal dual fixed point method for composite optimization (Q777039)
- Make \(\ell_1\) regularization effective in training sparse CNN (Q782914)
- Feature-aware regularization for sparse online learning (Q893629)
- A stochastic variational framework for fitting and diagnosing generalized linear mixed models (Q899068)
- Stochastic mirror descent dynamics and their convergence in monotone variational inequalities (Q1626529)
- Stochastic mirror descent method for distributed multi-agent optimization (Q1670526)
- Group online adaptive learning (Q1698874)
- Scale-free online learning (Q1704560)
- Learning in games with continuous action sets and unknown payoff functions (Q1717237)
- On variance reduction for stochastic smooth convex optimization with multiplicative noise (Q1739038)
- Gradient-free method for nonsmooth distributed optimization (Q2018475)
- Convergence of stochastic proximal gradient algorithm (Q2019902)
- Randomized smoothing variance reduction method for large-scale non-smooth convex optimization (Q2033403)
- Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization (Q2154832)
- One-stage tree: end-to-end tree builder and pruner (Q2163237)
- Large-scale multivariate sparse regression with applications to UK Biobank (Q2170442)
- Statistical inference for model parameters in stochastic gradient descent (Q2176618)
- Algorithms for stochastic optimization with function or expectation constraints (Q2181600)
- Convergence of distributed gradient-tracking-based optimization algorithms with random graphs (Q2235622)
- Incrementally updated gradient methods for constrained and regularized optimization (Q2251572)
- Robust and sparse regression in generalized linear model by stochastic optimization (Q2303494)
- Proximal average approximated incremental gradient descent for composite penalty regularized empirical risk minimization (Q2398094)
- A random block-coordinate Douglas-Rachford splitting method with low computational complexity for binary logistic regression (Q2419533)
- Asymptotic optimality in stochastic optimization (Q2656586)
- Scale-Free Algorithms for Online Linear Optimization (Q2835636)
- Distributed subgradient method for multi-agent optimization with quantized communication (Q2978003)
- A Tight Bound of Hard Thresholding (Q4558539)
- An incremental mirror descent subgradient algorithm with random sweeping and proximal step (Q4613984)
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization (Q4636997)
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression (Q4637017)
- Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent (Q4638051)
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods (Q4641660)
- (Q4969042)
- Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent (Q4969072)
- (Q4969260)
- Accelerated dual-averaging primal–dual method for composite convex minimization (Q5135253)
- (Q5148934)
- Adaptive sequential machine learning (Q5215364)
- A Single Timescale Stochastic Approximation Method for Nested Stochastic Optimization (Q5220424)
- Accelerate stochastic subgradient method by leveraging local growth condition (Q5236746)
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning (Q5254990)
- On the Convergence of Mirror Descent beyond Stochastic Convex Programming (Q5853716)