Local convergence rates of the nonparametric least squares estimator with applications to transfer learning
Publication: 6565304
DOI: 10.3150/23-BEJ1655
MaRDI QID: Q6565304
FDO: Q6565304
Johannes Schmidt-Hieber, Petr Zamolodtchikov
Publication date: 2 July 2024
Published in: Bernoulli
Keywords: nonparametric regression; mean squared error; transfer learning; covariate shift; nonparametric least squares; minimax estimation; domain adaptation; local rates
Cites Work
- Title not available
- Title not available
- Weak convergence and empirical processes. With applications to statistics
- Maximum likelihood estimation of a log-concave density and its distribution function: basic properties and uniform consistency
- Optimal rates of convergence for nonparametric estimators
- Locally adaptive regression splines
- High-Dimensional Statistics
- Introduction to nonparametric estimation
- Convergence rates of least squares regression estimators with heavy-tailed errors
- Adaptive transfer learning
- Transfer learning for nonparametric classification: minimax rate and adaptive classifier
- The asymptotic behavior of monotone regression estimates
- Improving predictive inference under covariate shift by weighting the log-likelihood function
- A distribution-free theory of nonparametric regression
- Maximum Likelihood Estimates of Monotone Parameters
- Bounding the Smallest Singular Value of a Random Matrix Without Concentration
- Learning Theory and Kernel Machines
- Rates of convergence for minimum contrast estimators
- Estimation of a convex function: characterizations and asymptotic theory
- Estimating a regression function
- On the Estimation of Parameters Restricted by Inequalities
- On consistency in monotonic regression
- Three notes on perfect linear sets
- Smoothing Lipschitz functions
- A regularity class for the roots of nonnegative functions
- A Bayesian/information theoretic model of learning to learn via multiple task sampling
- Regularization and the small-ball method. I: Sparse recovery
- On deep learning as a remedy for the curse of dimensionality in nonparametric regression
- Isotonic regression in general dimensions
- Singular measures and the key of \(G\)
- Distribution-free properties of isotonic regression
- Adaptation to lowest density regions with application to support recovery
- Nonparametric shape-restricted regression
- Regularization and the small-ball method. II: Complexity dependent error rates
- Nonparametric regression using deep neural networks with ReLU activation function
- On the rate of convergence of fully connected deep neural network regression estimates
- Set structured global empirical risk minimizers are rate optimal in general dimensions
- Marginal singularity and the benefits of labels in covariate-shift
- On the robustness of minimum norm interpolators and regularized empirical risk minimizers
- On least squares estimation under heteroscedastic and heavy-tailed errors