Estimates on learning rates for multi-penalty distribution regression
Publication: Q6138930
DOI: 10.1016/J.ACHA.2023.101609
arXiv: 2006.09017
MaRDI QID: Q6138930
Publication date: 16 January 2024
Published in: Applied and Computational Harmonic Analysis
Abstract: This paper is concerned with functional learning via two-stage sampled distribution regression. We study a multi-penalty regularization algorithm for distribution regression in the framework of learning theory. The algorithm aims at regressing to real-valued outputs from probability measures. The theoretical analysis of distribution regression is far from mature and quite challenging, since only second-stage samples are observable in practical settings. In the algorithm, to transform information from samples, we embed the distributions into a reproducing kernel Hilbert space associated with a Mercer kernel via the mean embedding technique. The main contribution of the paper is to present a novel multi-penalty regularization algorithm that captures more features of distribution regression and to derive optimal learning rates for the algorithm. The work also derives learning rates for distribution regression in the nonstandard setting, which has not been explored in the existing literature. Moreover, we propose a distribution regression-based distributed learning algorithm to address the challenge of large-scale data. Optimal learning rates are derived for the distributed learning algorithm as well. By providing new algorithms and establishing their learning rates, we improve on existing work in several respects.
Full work available at URL: https://arxiv.org/abs/2006.09017
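The two-stage scheme described in the abstract can be illustrated in code: each input is a "bag" of samples drawn from an unknown distribution (the second-stage samples), each bag is mapped to its empirical mean embedding in an RKHS, and regression is performed on the embeddings. The sketch below, which is only a minimal single-penalty (kernel ridge) variant and not the paper's multi-penalty algorithm, uses a Gaussian kernel; the function names, the kernel bandwidth `gamma`, and the regularization parameter `lam` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    # Pairwise Gaussian kernel k(a, b) = exp(-gamma * ||a - b||^2)
    # between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def embedding_gram(bags, gamma=1.0):
    # K[i, j] = mean over s in bag_i, t in bag_j of k(s, t):
    # the inner product of the empirical mean embeddings of the bags.
    n = len(bags)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = gaussian_kernel(bags[i], bags[j], gamma).mean()
    return K

def fit_predict(bags, y, test_bags, lam=1e-3, gamma=1.0):
    # Kernel ridge regression on the mean embeddings (single penalty);
    # the paper's multi-penalty scheme adds further regularization terms.
    n = len(bags)
    K = embedding_gram(bags, gamma)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    # Cross-Gram between test-bag embeddings and training-bag embeddings.
    Kt = np.array([[gaussian_kernel(tb, b, gamma).mean() for b in bags]
                   for tb in test_bags])
    return Kt @ alpha
```

Only the bags are observed, never the underlying distributions, which is exactly the two-stage sampling difficulty the paper's analysis has to account for.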
Keywords: learning theory; integral operator; multi-penalty regularization; distributed learning; learning rate; distribution regression
Cites Work
- Sparsity in multiple kernel learning
- Optimal rates for the regularized least-squares algorithm
- Shannon sampling and function reconstruction from point values
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- Multi-penalty regularization with a component-wise penalization
- Multi-penalty regularization in learning theory
- Title not available
- Unifying Divergence Minimization and Statistical Inference Via Convex Duality
- Multi-parameter Tikhonov regularization with the \(\ell^0\) sparsity constraint
- Optimal learning rates for distribution regression
- Distributed learning with multi-penalty regularization
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Learning theory of distributed spectral algorithms
- Learning theory for distribution regression
- Title not available
- Distributed learning with indefinite kernels