On‐line learning for very large data sets
From MaRDI portal
Publication: 5467278
DOI: 10.1002/asmb.538
zbMath: 1091.68063
OpenAlex: W2101159990
MaRDI QID: Q5467278
Publication date: 24 May 2006
Published in: Applied Stochastic Models in Business and Industry
Full work available at URL: https://doi.org/10.1002/asmb.538
Related Items
Stochastic gradient descent for semilinear elliptic equations with uncertainties
Stochastic forward-backward splitting for monotone inclusions
Streaming constrained binary logistic regression with online standardized data
A stochastic variational framework for fitting and diagnosing generalized linear mixed models
Periodic step-size adaptation in second-order gradient descent for single-pass on-line structured learning
Why random reshuffling beats stochastic gradient descent
Large-Scale Machine Learning with Stochastic Gradient Descent
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
Optimization Methods for Large-Scale Machine Learning
ACCELERATING GENERALIZED ITERATIVE SCALING BASED ON STAGGERED AITKEN METHOD FOR ON-LINE CONDITIONAL RANDOM FIELDS
IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
Scalable estimation strategies based on stochastic approximations: classical results and new insights
A stochastic trust region method for unconstrained optimization problems
Convergence Rate of Incremental Gradient and Incremental Newton Methods
Sampled limited memory methods for massive linear inverse problems
A globally convergent incremental Newton method
On the Convergence Rate of Incremental Aggregated Gradient Algorithms
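The recorded paper concerns on-line learning for very large data sets, which the related items above connect to stochastic gradient descent: the model is updated one example at a time during a single pass over the stream. The following is a minimal, self-contained sketch of that idea on a synthetic least-squares stream; all names, the synthetic target y = 2*x1 - 3*x2, and the step-size schedule are illustrative assumptions, not taken from the paper.

```python
# Illustrative on-line (stochastic gradient) learning sketch:
# one pass over a data stream, one update per example.
# NOTE: the model, schedule, and constants below are assumptions
# for demonstration, not reproduced from the recorded paper.
import random

def sgd_least_squares(stream, dim, lr0=1.0, t0=10.0):
    """Single-pass SGD for least squares: w <- w - lr_t * (w.x - y) * x."""
    w = [0.0] * dim
    for t, (x, y) in enumerate(stream, start=1):
        lr = lr0 / (t0 + t)  # decaying step size ~ 1/t for convergence
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def synthetic_stream(n):
    """Yield (x, y) pairs with y = 2*x1 - 3*x2 plus small Gaussian noise."""
    for _ in range(n):
        x = [random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)]
        yield x, 2.0 * x[0] - 3.0 * x[1] + random.gauss(0.0, 0.01)

random.seed(0)
w = sgd_least_squares(synthetic_stream(50000), dim=2)
print(w)  # should approach [2.0, -3.0]
```

Because each example is touched once and then discarded, memory use is independent of the stream length, which is the regime the paper's title refers to.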