Learning theory of distributed spectral algorithms

From MaRDI portal

Publication: 5348011

DOI: 10.1088/1361-6420/aa72b2
zbMath: 1372.65162
OpenAlex: W2613940844
MaRDI QID: Q5348011

Ding-Xuan Zhou, Zheng-Chu Guo, Shao-Bo Lin

Publication date: 11 August 2017

Published in: Inverse Problems

Full work available at URL: https://doi.org/10.1088/1361-6420/aa72b2




Related Items (52)

Distributed spectral pairwise ranking algorithms
Distributed regression learning with coefficient regularization
Deep distributed convolutional neural networks: Universality
Distributed learning via filtered hyperinterpolation on manifolds
Unnamed Item
Unnamed Item
Gradient descent for robust kernel-based regression
Distributed learning with partial coefficients regularization
Manifold regularization based on Nyström type subsampling
Distributed kernel gradient descent algorithm for minimum error entropy principle
Multi-task learning via linear functional strategy
Distributed semi-supervised regression learning with coefficient regularization
Averaging versus voting: a comparative study of strategies for distributed classification
Distributed learning with multi-penalty regularization
Preface for Inverse Problems special issue on learning and inverse problems
Spectral algorithms for learning with dependent observations
Capacity dependent analysis for functional online learning algorithms
Distributed learning for sketched kernel regression
Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems
Convex regularization in statistical inverse learning problems
Domain Generalization by Functional Regression
Inverse learning in Hilbert scales
Estimates on learning rates for multi-penalty distribution regression
Coefficient-based regularized distribution regression
Online regularized learning algorithm for functional data
Communication-efficient estimation of high-dimensional quantile regression
Distributed learning and distribution regression of coefficient regularization
Kernel regression, minimax rates and effective dimensionality: Beyond the regular case
On the Improved Rates of Convergence for Matérn-Type Kernel Ridge Regression with Application to Calibration of Computer Models
Coefficient-based regularization network with variance loss for error
Robust kernel-based distribution regression
Unnamed Item
Partially functional linear regression with quadratic regularization
Distributed SGD in overparametrized linear regression
Learning with centered reproducing kernels
Balancing principle in supervised learning for a general regularization scheme
Convergence of online mirror descent
Optimal learning rates for distribution regression
Distributed estimation of principal eigenspaces
Analysis of regularized least squares for functional linear regression model
Universality of deep convolutional neural networks
Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
Semi-supervised learning with summary statistics
Distributed learning with indefinite kernels
Unnamed Item
Optimal rates for coefficient-based regularized regression
Distributed Filtered Hyperinterpolation for Noisy Data on the Sphere
Unnamed Item
Unnamed Item
Unnamed Item
Distributed least squares prediction for functional linear regression*





This page was built for publication: Learning theory of distributed spectral algorithms