The following pages link to Lorenzo Rosasco (Q289108):
Displayed 50 items.
- Stochastic forward-backward splitting for monotone inclusions (Q289110)
- Unsupervised learning of invariant representations (Q290560)
- Multi-output learning via spectral filtering (Q439000)
- Proximal methods for the latent group lasso penalty (Q457209)
- Item:Q289108 (redirect page)
- Model selection for regularized least-squares algorithm in learning theory (Q812379)
- Construction and Monte Carlo estimation of wavelet frames generated by a reproducing kernel (Q829893)
- On regularization algorithms in learning theory (Q870339)
- Elastic-net regularization in learning theory (Q1023403)
- Generalization properties of doubly stochastic learning algorithms (Q1635837)
- Iterative regularization via dual diagonal descent (Q1703168)
- Modified Fejér sequences and applications (Q1790672)
- Adaptive kernel methods using the balancing principle (Q1959089)
- Convergence of stochastic proximal gradient algorithm (Q2019902)
- An elementary analysis of ridge regression with random design (Q2080945)
- Understanding neural networks with reproducing kernel Banach spaces (Q2105111)
- From inexact optimization to learning via gradient concentration (Q2111477)
- Neurally plausible mechanisms for learning selective and invariant representations (Q2202934)
- Learning sets with separating kernels (Q2252512)
- Mathematics of the neural response (Q2269906)
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces (Q2300763)
- Multiscale geometric methods for data sets. I: Multiscale SVD, noise and curvature. (Q2402490)
- On early stopping in gradient descent learning (Q2642922)
- Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry (Q2687067)
- A stochastic inertial forward–backward splitting algorithm for multivariate monotone inclusions (Q2810123)
- Iterative Regularization for Learning with Convex Loss Functions (Q2810891)
- (Q2896059)
- Kernels for Vector-Valued Functions: A Review (Q2903301)
- Some Recent Advances in Multiscale Geometric Analysis of Point Clouds (Q2913189)
- Nonparametric sparsity and regularization (Q2933860)
- (Q2933941)
- (Q3093228)
- (Q3093282)
- Consistency of learning algorithms using Attouch–Wets convergence (Q3225085)
- Discretization error analysis for Tikhonov regularization (Q3379456)
- Multi-scale vector quantization with reconstruction trees (Q3383820)
- Spectral Algorithms for Supervised Learning (Q3510946)
- On invariance and selectivity in representation learning (Q4603721)
- Optimal Rates for Multi-pass Stochastic Gradient Methods (Q4637012)
- Are Loss Functions All the Same? (Q4832479)
- (Q4969161)
- Reproducing kernel Hilbert spaces on manifolds: Sobolev and diffusion spaces (Q4995041)
- Regularization: From Inverse Problems to Large-Scale Machine Learning (Q5028166)
- Thresholding gradient methods in Hilbert spaces: support identification and linear convergence (Q5109200)
- Faster Kriging: Facing High-Dimensional Simulators (Q5130493)
- On Learnability, Complexity and Stability (Q5264089)
- A First-Order Stochastic Primal-Dual Algorithm with Correction Step (Q5357019)
- Accelerated Iterative Regularization via Dual Diagonal Descent (Q5853571)
- Constructing Fast Approximate Eigenspaces With Application to the Fast Graph Fourier Transforms (Q5868755)
- Implicit regularization with strongly convex bias: Stability and acceleration (Q5873931)