Max-affine regression via first-order methods
From MaRDI portal
Publication:6583522
Recommendations
- Composite difference-MAX programs for modern statistical estimation problems
- A sparsity preserving stochastic gradient methods for sparse regression
- Spectrahedral Regression
- On the regularizing property of stochastic gradient descent
- On stochastic accelerated gradient with convergence rate of regression learning
Cites work
- scientific article, zbMATH DE number 3737398 (no title available)
- scientific article, zbMATH DE number 2107836 (no title available)
- doi:10.1162/15324430260185628
- A nonconvex approach for phase retrieval: reshaped Wirtinger flow and incremental algorithms
- Advances and Open Problems in Federated Learning
- Consistency of multidimensional convex regression
- Convex piecewise-linear fitting
- Covering Numbers for Convex Functions
- Deep learning
- Dynamics of stochastic gradient descent for two-layer neural networks in the teacher–student setup
- Fitting piecewise linear continuous functions
- Generalized Gradients and Applications
- High-dimensional probability. An introduction with applications in data science
- Max-Affine Regression: Parameter Estimation for Gaussian Designs
- Multivariate convex regression with adaptive partitioning
- Optimization methods for large-scale machine learning
- Phase Retrieval via Reweighted Amplitude Flow
- Phase retrieval via randomized Kaczmarz: theoretical guarantees
- Reinforcement learning. An introduction
- Robust Stochastic Approximation Approach to Stochastic Programming
- Statistical mechanics of learning
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm
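As a rough illustration of the technique named in the title — fitting a max-affine function \(\max_k(a_k^\top x + b_k)\) with a first-order (stochastic subgradient) method — here is a minimal, self-contained sketch. All model sizes, step sizes, and variable names are illustrative assumptions, not taken from the publication itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground-truth max-affine model: y = max_k (a_k . x + b_k) + noise
K, d, n = 3, 2, 2000
A_true = rng.normal(size=(K, d))
b_true = rng.normal(size=K)
X = rng.normal(size=(n, d))
y = np.max(X @ A_true.T + b_true, axis=1) + 0.01 * rng.normal(size=n)

# Randomly initialised parameter estimates
A = rng.normal(size=(K, d))
b = rng.normal(size=K)
mse0 = np.mean((np.max(X @ A.T + b, axis=1) - y) ** 2)  # error at init

lr = 0.1
for epoch in range(50):
    for i in rng.permutation(n):
        x, yi = X[i], y[i]
        scores = A @ x + b
        k = int(np.argmax(scores))  # index of the active affine piece
        resid = scores[k] - yi
        # The squared loss (max_k(a_k.x + b_k) - y)^2 has a subgradient
        # supported only on the maximising piece k, so only that piece
        # is updated at each step.
        A[k] -= lr * resid * x
        b[k] -= lr * resid
    lr *= 0.95  # diminishing step size

mse = np.mean((np.max(X @ A.T + b, axis=1) - y) ** 2)
```

Because the loss is piecewise smooth rather than smooth, each stochastic step follows a subgradient that touches only the currently active affine piece; a diminishing step size is a common choice for such subgradient schemes.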