Distributed learning with regularized least squares
From MaRDI portal
zbMATH Open: 1435.68273
arXiv: 1608.03339
MaRDI QID: Q4637006
Authors: Shao-Bo Lin, Xin Guo, Ding-Xuan Zhou
Publication date: 17 April 2018
Full work available at URL: https://arxiv.org/abs/1608.03339
Recommendations
- Distributed regression learning with coefficient regularization
- Distributed learning with partial coefficients regularization
- Distributed regularized least squares with flexible Gaussian kernels
- Distributed learning and distribution regression of coefficient regularization
- Distributed kernel-based gradient descent algorithms
MSC classification:
- Nonparametric estimation (62G05)
- Asymptotic properties of nonparametric inference (62G20)
- Learning and adaptive systems in artificial intelligence (68T05)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
Cites Work
- Regularization networks and support vector machines
- DOI: 10.1162/15324430260185619
- An introduction to support vector machines and other kernel-based learning methods
- Support Vector Machines
- On early stopping in gradient descent learning
- Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
- A distribution-free theory of nonparametric regression
- Learning Bounds for Kernel Regression Using Effective Data Dimensionality
- Optimal rates for the regularized least-squares algorithm
- Support vector machine soft margin classifiers: error analysis
- Title not available
- Consistency analysis of an empirical minimum error entropy algorithm
- Revisiting the Nyström method for improved large-scale machine learning
- The covering number in learning theory
- Learning with sample dependent hypothesis spaces
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Capacity of reproducing kernel spaces in learning theory
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Adaptive kernel methods using the balancing principle
- Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates
- Early stopping and non-parametric regression: an optimal data-dependent stopping rule
- Cross-validation based adaptation for regularization operators in learning theory
- Model selection for regularized least-squares algorithm in learning theory
- On regularization algorithms in learning theory
- Regularization schemes for minimum error entropy principle
- Introduction to the peptide binding problem of computational immunology: new results
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- Optimum bounds for the distributions of martingales in Banach spaces
- Regularization in kernel learning
- An empirical feature-based learning algorithm producing sparse approximations
- Optimal distributed online prediction using mini-batches
- Convergence rates of kernel conjugate gradient for random design regression
- Optimal learning rates for localized SVMs
- Learning theory of randomized Kaczmarz algorithm
- Communication-efficient algorithms for statistical optimization
- Iterative regularization for learning with convex loss functions
- An extension of Mercer theorem to matrix-valued measurable kernels
- Thresholded spectral algorithms for sparse approximations
Cited In (87)
- Universality of deep convolutional neural networks
- Distributed penalized modal regression for massive data
- Communication-efficient distributed estimation for high-dimensional large-scale linear regression
- Title not available
- Learning theory of distributed spectral algorithms
- Convergence of online mirror descent
- Distributed learning with indefinite kernels
- Kernel regression, minimax rates and effective dimensionality: beyond the regular case
- On nonparametric randomized sketches for kernels with further smoothness
- On the improved rates of convergence for Matérn-type kernel ridge regression with application to calibration of computer models
- Partially functional linear regression with quadratic regularization
- Parallelizing spectrally regularized kernel algorithms
- Optimal prediction for high-dimensional functional quantile regression in reproducing kernel Hilbert spaces
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Random sampling and approximation of signals with bounded derivatives
- Semi-supervised learning with summary statistics
- Distributed least squares prediction for functional linear regression
- Distributed kernel-based gradient descent algorithms
- Title not available
- Distributed learning with multi-penalty regularization
- Distributed regression learning with coefficient regularization
- Convergence analysis of distributed multi-penalty regularized pairwise learning
- A distributed training method for L1 regularized kernel machines based on filtering mechanism
- Distributed minimum error entropy algorithms
- Distributed semi-supervised learning with kernel ridge regression
- Bias corrected regularization kernel method in ranking
- Distributed regularized least squares with flexible Gaussian kernels
- Analysis of regularized least-squares in reproducing kernel Kreĭn spaces
- Distributed kernel ridge regression with communications
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Distributed linear regression by averaging
- Learning sparse conditional distribution: an efficient kernel-based approach
- Distributed learning via filtered hyperinterpolation on manifolds
- Deep neural networks for rotation-invariance approximation and learning
- Analysis of regularized Nyström subsampling for regression functions of low smoothness
- Averaging versus voting: a comparative study of strategies for distributed classification
- Kernel-based online gradient descent using distributed approach
- Learning rate of distribution regression with dependent samples
- Distributed semi-supervised regression learning with coefficient regularization
- Debiased magnitude-preserving ranking: learning rate and bias characterization
- Distributed Generalized Cross-Validation for Divide-and-Conquer Kernel Ridge Regression and Its Asymptotic Optimality
- Optimal rates for coefficient-based regularized regression
- Distributed learning for sketched kernel regression
- Distributed adaptive Huber regression
- Boosted kernel ridge regression: optimal learning rates and early stopping
- Distributed Sparse Total Least-Squares Over Networks
- Distributed learning and distribution regression of coefficient regularization
- A review of distributed statistical inference
- Optimal learning rates for distribution regression
- Distributed spectral pairwise ranking algorithms
- Title not available
- Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications
- Optimal learning with Gaussians and correntropy loss
- Learning theory of distributed regression with bias corrected regularization kernel network
- Deep distributed convolutional neural networks: universality
- Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates
- WONDER: weighted one-shot distributed ridge regression in high dimensions
- Distributed learning for random vector functional-link networks
- Distributed learning with partial coefficients regularization
- Distributed robust regression with correntropy losses and regularization kernel networks
- Nyström subsampling method for coefficient-based regularized regression
- Efficient kernel canonical correlation analysis using Nyström approximation
- Iterative kernel regression with preconditioning
- Analysis of regularized least squares ranking with centered reproducing kernel
- Spectral algorithms for functional linear regression
- Radial basis function approximation with distributively stored data on spheres
- Spectral algorithms for learning with dependent observations
- Capacity dependent analysis for functional online learning algorithms
- Discussion of: ‘A review of distributed statistical inference’
- Title not available
- Distributed estimation of functional linear regression with functional responses
- Sketching with Spherical Designs for Noisy Data Fitting on Spheres
- Title not available
- Pairwise learning problems with regularization networks and Nyström subsampling approach
- Title not available
- Adaptive distributed inference for multi-source massive heterogeneous data
- Adaptive parameter selection for kernel ridge regression
- Distributed SGD in overparametrized linear regression
- Learning with centered reproducing kernels
- Least squares regression under weak moment conditions
- Decentralized RLS With Data-Adaptive Censoring for Regressions Over Large-Scale Networks
- Near Optimal Coded Data Shuffling for Distributed Learning
- Decentralized learning over a network with Nyström approximation using SGD
- Distributed filtered hyperinterpolation for noisy data on the sphere
- Robust distributed multicategory angle-based classification for massive data
- Statistical inference and distributed implementation for linear multicategory SVM
- Regularized Nyström Subsampling in Covariate Shift Domain Adaptation Problems