The regularized least squares algorithm and the problem of learning halfspaces
DOI: 10.1016/j.ipl.2011.01.011 · zbMATH Open: 1260.68189 · OpenAlex: W1987010876 · MaRDI QID: Q1944907
Authors: Hà Quang Minh
Publication date: 28 March 2013
Published in: Information Processing Letters
Full work available at URL: https://doi.org/10.1016/j.ipl.2011.01.011
Keywords: computational complexity; kernel; analysis of algorithms; computational learning theory; halfspaces; regularized least squares algorithm
MSC classification: Analysis of algorithms and problem complexity (68Q25); Computational learning theory (68Q32); Computational difficulty of problems (lower bounds, completeness, difficulty of approximation, etc.) (68Q17)
Cites Work
- Theory of Reproducing Kernels
- Title not available
- On early stopping in gradient descent learning
- On the mathematical foundations of learning
- Title not available
- Optimal aggregation of classifiers in statistical learning
- Title not available
- Some properties of Gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory
- Learning theory estimates via integral operators and their approximations
- Efficient noise-tolerant learning from statistical queries
- Agnostically Learning Halfspaces
- Optimum bounds for the distributions of martingales in Banach spaces
- Learning intersections and thresholds of halfspaces
- An upper bound on the sample complexity of PAC-learning halfspaces with respect to the uniform distribution
- Halfspace learning, linear programming, and nonmalicious distributions
Cited In (3)