Pages that link to "Item:Q715253"
From MaRDI portal
The following pages link to Sample size selection in optimization methods for machine learning (Q715253):
Displaying 50 items.
- A Stochastic Quasi-Newton Method for Large-Scale Optimization (Q121136) (← links)
- An inexact successive quadratic approximation method for L-1 regularized optimization (Q301652) (← links)
- A family of second-order methods for convex \(\ell _1\)-regularized optimization (Q312690) (← links)
- Estimating the algorithmic variance of randomized ensembles via the bootstrap (Q666594) (← links)
- Spectral projected gradient method for stochastic optimization (Q670658) (← links)
- Convergence of the reweighted \(\ell_1\) minimization algorithm for \(\ell_2-\ell_p\) minimization (Q742293) (← links)
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models (Q1646566) (← links)
- Adaptive stochastic approximation algorithm (Q1689446) (← links)
- On variance reduction for stochastic smooth convex optimization with multiplicative noise (Q1739038) (← links)
- Sub-sampled Newton methods (Q1739039) (← links)
- Second-order orthant-based methods with enriched Hessian information for sparse \(\ell _1\)-optimization (Q2013139) (← links)
- Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches (Q2022225) (← links)
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization (Q2082285) (← links)
- Linesearch Newton-CG methods for convex optimization with noise (Q2084588) (← links)
- Accelerating mini-batch SARAH by step size rules (Q2127094) (← links)
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization (Q2143221) (← links)
- Ritz-like values in steplength selections for stochastic gradient methods (Q2156893) (← links)
- Subsampled nonmonotone spectral gradient methods (Q2178981) (← links)
- Inexact restoration with subsampled trust-region methods for finite-sum minimization (Q2191786) (← links)
- A subspace-accelerated split Bregman method for sparse data recovery with joint \(\ell_1\)-type regularizers (Q2208931) (← links)
- A count sketch maximal weighted residual Kaczmarz method for solving highly overdetermined linear systems (Q2245100) (← links)
- A deep learning semiparametric regression for adjusting complex confounding structures (Q2247451) (← links)
- Accelerating deep neural network training with inconsistent stochastic gradient descent (Q2292210) (← links)
- Nonmonotone line search methods with variable sample size (Q2340358) (← links)
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization (Q2355319) (← links)
- Statistically equivalent surrogate material models: impact of random imperfections on the elasto-plastic response (Q2679290) (← links)
- Risk-averse design of tall buildings for uncertain wind conditions (Q2679299) (← links)
- A second-order method for convex \(\ell_1\)-regularized optimization with active-set prediction (Q2815550) (← links)
- Deep Learning for Trivial Inverse Problems (Q3296181) (← links)
- Parallel Optimization Techniques for Machine Learning (Q3300501) (← links)
- Algorithms for Kullback--Leibler Approximation of Probability Measures in Infinite Dimensions (Q3454461) (← links)
- Descent direction method with line search for unconstrained optimization in noisy environment (Q3458837) (← links)
- Adaptive Sampling Strategies for Stochastic Optimization (Q4562248) (← links)
- On Sampling Rates in Simulation-Based Recursions (Q4600839) (← links)
- Stable architectures for deep neural networks (Q4607800) (← links)
- Batched Stochastic Gradient Descent with Weighted Sampling (Q4609808) (← links)
- Variance-Based Extragradient Methods with Line Search for Stochastic Variational Inequalities (Q4620417) (← links)
- Randomized Approach to Nonlinear Inversion Combining Random and Optimized Simultaneous Sources and Detectors (Q4631407) (← links)
- Optimization Methods for Large-Scale Machine Learning (Q4641709) (← links)
- (Q4642146) (← links)
- A robust multi-batch L-BFGS method for machine learning (Q4972551) (← links)
- Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise (Q4997171) (← links)
- Adaptive Deep Learning for High-Dimensional Hamilton--Jacobi--Bellman Equations (Q4997364) (← links)
- A fully stochastic second-order trust region method (Q5043844) (← links)
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization (Q5076721) (← links)
- Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models (Q5079553) (← links)
- A nonmonotone line search method for stochastic optimization problems (Q5086883) (← links)
- Trust-region algorithms for training responses: machine learning methods using indefinite Hessian approximations (Q5113710) (← links)
- Gradient-Based Adaptive Stochastic Search for Simulation Optimization Over Continuous Space (Q5131717) (← links)
- Asynchronous Schemes for Stochastic and Misspecified Potential Games and Nonconvex Optimization (Q5144794) (← links)