Distributed learning with regularized least squares
From MaRDI portal
Publication:4637006
Recommendations
- Distributed regression learning with coefficient regularization
- Distributed learning with partial coefficients regularization
- Distributed regularized least squares with flexible Gaussian kernels
- Distributed learning and distribution regression of coefficient regularization
- Distributed kernel-based gradient descent algorithms
Cites work
- scientific article; zbMATH DE number 962825 (no title available)
- DOI: 10.1162/15324430260185619 (no title available)
- A distribution-free theory of nonparametric regression
- Adaptive kernel methods using the balancing principle
- An empirical feature-based learning algorithm producing sparse approximations
- An extension of Mercer theorem to matrix-valued measurable kernels
- An introduction to support vector machines and other kernel-based learning methods
- Capacity of reproducing kernel spaces in learning theory
- Communication-efficient algorithms for statistical optimization
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Consistency analysis of an empirical minimum error entropy algorithm
- Convergence rates of kernel conjugate gradient for random design regression
- Cross-validation based adaptation for regularization operators in learning theory
- Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates
- Early stopping and non-parametric regression: an optimal data-dependent stopping rule
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- Introduction to the peptide binding problem of computational immunology: new results
- Iterative regularization for learning with convex loss functions
- Learning Bounds for Kernel Regression Using Effective Data Dimensionality
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Learning theory of randomized Kaczmarz algorithm
- Learning with sample dependent hypothesis spaces
- Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
- Model selection for regularized least-squares algorithm in learning theory
- On early stopping in gradient descent learning
- On regularization algorithms in learning theory
- Optimal distributed online prediction using mini-batches
- Optimal learning rates for localized SVMs
- Optimal rates for the regularized least-squares algorithm
- Optimum bounds for the distributions of martingales in Banach spaces
- Regularization in kernel learning
- Regularization networks and support vector machines
- Regularization schemes for minimum error entropy principle
- Revisiting the Nyström method for improved large-scale machine learning
- Support Vector Machines
- Support vector machine soft margin classifiers: error analysis
- The covering number in learning theory
- Thresholded spectral algorithms for sparse approximations
Cited in (87 documents)
- Distributed learning with partial coefficients regularization
- Regularized Nyström Subsampling in Covariate Shift Domain Adaptation Problems
- Universality of deep convolutional neural networks
- Distributed penalized modal regression for massive data
- Communication-efficient distributed estimation for high-dimensional large-scale linear regression
- Distributed robust regression with correntropy losses and regularization kernel networks
- Nyström subsampling method for coefficient-based regularized regression
- scientific article; zbMATH DE number 7415114 (no title available)
- Convergence of online mirror descent
- Learning theory of distributed spectral algorithms
- Distributed learning with indefinite kernels
- Kernel regression, minimax rates and effective dimensionality: beyond the regular case
- Efficient kernel canonical correlation analysis using Nyström approximation
- On nonparametric randomized sketches for kernels with further smoothness
- On the improved rates of convergence for Matérn-type kernel ridge regression with application to calibration of computer models
- Optimal prediction for high-dimensional functional quantile regression in reproducing kernel Hilbert spaces
- Partially functional linear regression with quadratic regularization
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Parallelizing spectrally regularized kernel algorithms
- Random sampling and approximation of signals with bounded derivatives
- Iterative kernel regression with preconditioning
- Analysis of regularized least squares ranking with centered reproducing kernel
- Spectral algorithms for functional linear regression
- Radial basis function approximation with distributively stored data on spheres
- Semi-supervised learning with summary statistics
- Spectral algorithms for learning with dependent observations
- Capacity dependent analysis for functional online learning algorithms
- Distributed kernel-based gradient descent algorithms
- Distributed least squares prediction for functional linear regression
- scientific article; zbMATH DE number 7306853 (no title available)
- Discussion of: ‘A review of distributed statistical inference’
- Distributed regression learning with coefficient regularization
- Distributed learning with multi-penalty regularization
- scientific article; zbMATH DE number 7370612 (no title available)
- A distributed training method for L1 regularized kernel machines based on filtering mechanism
- Convergence analysis of distributed multi-penalty regularized pairwise learning
- Distributed estimation of functional linear regression with functional responses
- Distributed minimum error entropy algorithms
- Distributed regularized least squares with flexible Gaussian kernels
- Bias corrected regularization kernel method in ranking
- Distributed semi-supervised learning with kernel ridge regression
- Analysis of regularized least-squares in reproducing kernel Kreĭn spaces
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Distributed kernel ridge regression with communications
- Distributed linear regression by averaging
- Learning sparse conditional distribution: an efficient kernel-based approach
- Sketching with Spherical Designs for Noisy Data Fitting on Spheres
- Distributed learning via filtered hyperinterpolation on manifolds
- scientific article; zbMATH DE number 7415083 (no title available)
- Deep neural networks for rotation-invariance approximation and learning
- Analysis of regularized Nyström subsampling for regression functions of low smoothness
- Averaging versus voting: a comparative study of strategies for distributed classification
- Kernel-based online gradient descent using distributed approach
- Learning rate of distribution regression with dependent samples
- Debiased magnitude-preserving ranking: learning rate and bias characterization
- Distributed semi-supervised regression learning with coefficient regularization
- Pairwise learning problems with regularization networks and Nyström subsampling approach
- scientific article; zbMATH DE number 7625155 (no title available)
- Distributed Generalized Cross-Validation for Divide-and-Conquer Kernel Ridge Regression and Its Asymptotic Optimality
- Adaptive distributed inference for multi-source massive heterogeneous data
- Adaptive parameter selection for kernel ridge regression
- Distributed SGD in overparametrized linear regression
- Learning with centered reproducing kernels
- Optimal rates for coefficient-based regularized regression
- Distributed adaptive Huber regression
- Distributed learning for sketched kernel regression
- Boosted kernel ridge regression: optimal learning rates and early stopping
- Distributed Sparse Total Least-Squares Over Networks
- Distributed learning and distribution regression of coefficient regularization
- Least squares regression under weak moment conditions
- Decentralized RLS With Data-Adaptive Censoring for Regressions Over Large-Scale Networks
- Optimal learning rates for distribution regression
- A review of distributed statistical inference
- Near Optimal Coded Data Shuffling for Distributed Learning
- Decentralized learning over a network with Nyström approximation using SGD
- Distributed spectral pairwise ranking algorithms
- Distributed filtered hyperinterpolation for noisy data on the sphere
- scientific article; zbMATH DE number 7370577 (no title available)
- Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications
- Learning theory of distributed regression with bias corrected regularization kernel network
- Deep distributed convolutional neural networks: universality
- Optimal learning with Gaussians and correntropy loss
- Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates
- Robust distributed multicategory angle-based classification for massive data
- Statistical inference and distributed implementation for linear multicategory SVM
- Distributed learning for random vector functional-link networks
- WONDER: weighted one-shot distributed ridge regression in high dimensions