scientific article; zbMATH DE number 6860797
zbMath: 1435.68273; arXiv: 1608.03339; MaRDI QID: Q4637006
Authors: Ding-Xuan Zhou, Xin Guo, Shao-Bo Lin
Publication date: 17 April 2018
Full work available at URL: https://arxiv.org/abs/1608.03339
Title: Distributed learning with regularized least squares
Mathematics Subject Classification
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Asymptotic properties of nonparametric inference (62G20)
- Nonparametric estimation (62G05)
- Learning and adaptive systems in artificial intelligence (68T05)
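The record's subject is one-shot distributed (divide-and-conquer) regularized least squares: the sample is partitioned across local machines, each machine fits a kernel ridge regression estimator on its block, and the final predictor averages the local estimators. Below is a minimal NumPy sketch of that scheme, assuming a Gaussian kernel and synthetic data; every name and parameter here is illustrative, not taken from the article.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel matrix between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit(X, y, lam, sigma=1.0):
    # Local regularized least squares: solve (K + lam * m * I) alpha = y
    # on one machine's subsample of size m.
    m = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * m * np.eye(m), y)
    return X, alpha

def krr_predict(model, X_test, sigma=1.0):
    X_train, alpha = model
    return gaussian_kernel(X_test, X_train, sigma) @ alpha

def distributed_krr(X, y, n_machines, lam, sigma=1.0):
    # Divide-and-conquer: fit KRR on each disjoint block, then
    # average the local predictors (one round of communication).
    models = [krr_fit(Xj, yj, lam, sigma)
              for Xj, yj in zip(np.array_split(X, n_machines),
                                np.array_split(y, n_machines))]
    return lambda X_test: np.mean(
        [krr_predict(m, X_test, sigma) for m in models], axis=0)

# Toy usage: noisy sine regression split across 4 machines.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (400, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(400)
f_bar = distributed_krr(X, y, n_machines=4, lam=1e-3, sigma=0.2)
X_test = np.linspace(0, 1, 5)[:, None]
print(f_bar(X_test))
```

The `lam * m` scaling of the ridge term comes from writing the local objective as an empirical average, (1/m) Σ (f(x_i) − y_i)² + lam ‖f‖², which is the convention under which learning rates for this averaging scheme are typically stated.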
Related Items
- A review of distributed statistical inference
- Discussion of: ‘A review of distributed statistical inference’
- WONDER: Weighted one-shot distributed ridge regression in high dimensions
- Distributed spectral pairwise ranking algorithms
- Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications
- Distributed regression learning with coefficient regularization
- Deep distributed convolutional neural networks: Universality
- Distributed learning via filtered hyperinterpolation on manifolds
- Distributed learning with partial coefficients regularization
- Learning rate of distribution regression with dependent samples
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Distributed semi-supervised regression learning with coefficient regularization
- Kernel-based online gradient descent using distributed approach
- Averaging versus voting: a comparative study of strategies for distributed classification
- Spectral algorithms for learning with dependent observations
- Capacity dependent analysis for functional online learning algorithms
- Distributed penalized modal regression for massive data
- Distributed learning for sketched kernel regression
- Decentralized learning over a network with Nyström approximation using SGD
- Robust distributed multicategory angle-based classification for massive data
- Regularized Nyström Subsampling in Covariate Shift Domain Adaptation Problems
- Efficient kernel canonical correlation analysis using Nyström approximation
- Communication-efficient distributed estimation for high-dimensional large-scale linear regression
- Distributed estimation of functional linear regression with functional responses
- Sketching with Spherical Designs for Noisy Data Fitting on Spheres
- Bias corrected regularization kernel method in ranking
- Distributed learning and distribution regression of coefficient regularization
- Kernel regression, minimax rates and effective dimensionality: Beyond the regular case
- Optimal prediction for high-dimensional functional quantile regression in reproducing kernel Hilbert spaces
- On the Improved Rates of Convergence for Matérn-Type Kernel Ridge Regression with Application to Calibration of Computer Models
- Distributed kernel-based gradient descent algorithms
- Partially functional linear regression with quadratic regularization
- Convergence of online mirror descent
- Optimal learning rates for distribution regression
- Distributed regularized least squares with flexible Gaussian kernels
- Distributed linear regression by averaging
- Learning sparse conditional distribution: an efficient kernel-based approach
- Nyström subsampling method for coefficient-based regularized regression
- Universality of deep convolutional neural networks
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Analysis of regularized least-squares in reproducing kernel Kreĭn spaces
- Convergence analysis of distributed multi-penalty regularized pairwise learning
- Distributed Generalized Cross-Validation for Divide-and-Conquer Kernel Ridge Regression and Its Asymptotic Optimality
- Random sampling and approximation of signals with bounded derivatives
- Deep neural networks for rotation-invariance approximation and learning
- Semi-supervised learning with summary statistics
- Analysis of regularized Nyström subsampling for regression functions of low smoothness
- Distributed learning with indefinite kernels
- On nonparametric randomized sketches for kernels with further smoothness
- Debiased magnitude-preserving ranking: learning rate and bias characterization
- Optimal rates for coefficient-based regularized regression
- Distributed Filtered Hyperinterpolation for Noisy Data on the Sphere
- Optimal learning with Gaussians and correntropy loss
- Distributed least squares prediction for functional linear regression
Cites Work
- Consistency analysis of an empirical minimum error entropy algorithm
- An empirical feature-based learning algorithm producing sparse approximations
- Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
- Introduction to the peptide binding problem of computational immunology: new results
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Model selection for regularized least-squares algorithm in learning theory
- Regularization in kernel learning
- On regularization algorithms in learning theory
- A distribution-free theory of nonparametric regression
- The covering number in learning theory
- Optimum bounds for the distributions of martingales in Banach spaces
- An extension of Mercer theorem to matrix-valued measurable kernels
- Adaptive kernel methods using the balancing principle
- Regularization networks and support vector machines
- Optimal rates for the regularized least-squares algorithm
- Learning with sample dependent hypothesis spaces
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- On early stopping in gradient descent learning
- Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates
- Iterative Regularization for Learning with Convex Loss Functions
- Convergence rates of Kernel Conjugate Gradient for random design regression
- DOI: 10.1162/15324430260185619
- Support Vector Machines
- Capacity of reproducing kernel spaces in learning theory
- Cross-validation based adaptation for regularization operators in learning theory
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- Regularization schemes for minimum error entropy principle
- Thresholded spectral algorithms for sparse approximations
- Optimal Distributed Online Prediction using Mini-Batches
- Learning Bounds for Kernel Regression Using Effective Data Dimensionality