Max-affine regression via first-order methods
Publication: 6583522
DOI: 10.1137/23M1594662
MaRDI QID: Q6583522
Authors: Seonho Kim, Kiryung Lee
Publication date: 6 August 2024
Published in: SIAM Journal on Mathematics of Data Science
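The portal record does not reproduce the paper's content. As a minimal illustrative sketch only (not the authors' algorithm or guarantees), max-affine regression fits a model of the form y ≈ max_j (⟨a_j, x⟩ + b_j), and a simple first-order method runs SGD on the squared loss, where the gradient flows only through the active (argmax) affine piece at each sample. All names and parameter choices below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic max-affine data: y = max_j (<a_j, x> + b_j) + noise
d, k, n = 5, 3, 2000
A_true = rng.normal(size=(k, d))
b_true = rng.normal(size=k)
X = rng.normal(size=(n, d))
y = (X @ A_true.T + b_true).max(axis=1) + 0.01 * rng.normal(size=n)

# First-order fit: SGD on the squared loss. The max is piecewise
# affine, so the (sub)gradient touches only the argmax piece.
A = rng.normal(size=(k, d))
b = np.zeros(k)
lr = 0.05
for epoch in range(50):
    for i in rng.permutation(n):
        scores = A @ X[i] + b
        j = scores.argmax()          # active affine piece for this sample
        r = scores[j] - y[i]         # residual of the active piece
        A[j] -= lr * r * X[i]
        b[j] -= lr * r

pred = (X @ A.T + b).max(axis=1)
mse = np.mean((pred - y) ** 2)
```

The per-sample update is cheap (one inner product and one rank-one correction), which is what makes first-order methods attractive for this model class.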
Recommendations
- Composite difference-MAX programs for modern statistical estimation problems
- A sparsity preserving stochastic gradient methods for sparse regression
- Spectrahedral Regression
- On the regularizing property of stochastic gradient descent
- On stochastic accelerated gradient with convergence rate of regression learning
Cites Work
- Title not available
- High-dimensional probability. An introduction with applications in data science
- Deep learning
- Robust Stochastic Approximation Approach to Stochastic Programming
- 10.1162/15324430260185628
- Generalized Gradients and Applications
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm
- Covering Numbers for Convex Functions
- Reinforcement learning. An introduction
- Consistency of multidimensional convex regression
- Convex piecewise-linear fitting
- Fitting piecewise linear continuous functions
- Title not available
- Multivariate convex regression with adaptive partitioning
- Statistical mechanics of learning
- Optimization methods for large-scale machine learning
- A nonconvex approach for phase retrieval: reshaped Wirtinger flow and incremental algorithms
- Phase Retrieval via Reweighted Amplitude Flow
- Phase retrieval via randomized Kaczmarz: theoretical guarantees
- Advances and Open Problems in Federated Learning
- Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup
- Max-Affine Regression: Parameter Estimation for Gaussian Designs