scientific article; zbMATH DE number 2115052
Publication: 4826695
zbMath: 1083.68100
MaRDI QID: Q4826695
Publication date: 11 November 2004
Full work available at URL: http://www.ams.org/notices/200305/200305-toc.html
Title: unavailable in the zbMATH Open Web Interface due to conflicting licenses.
Classification (MSC):
Classification and discrimination; cluster analysis (statistical aspects) (62H30)
Computational learning theory (68Q32)
Learning and adaptive systems in artificial intelligence (68T05)
Related Items (60)
Discrete least-squares approximations over optimized downward closed polynomial spaces in arbitrary dimension
Another look at statistical learning theory and regularization
Generalized vec trick for fast learning of pairwise kernel models
A control theory approach to the analysis and synthesis of the experimentally observed motion primitives
Efficiency of classification methods based on empirical risk minimization
Shannon sampling and function reconstruction from point values
Unnamed Item
Application of integral operator for regularized least-square regression
Terminated Ramp--Support Vector machines: A nonparametric data dependent kernel
Theoretical issues in deep networks
Rough sets: some extensions
Nonlinear bivariate comovements of asset prices: methodology, tests and applications
Reproducing kernels: harmonic analysis and some of their applications
Random sampling of bandlimited functions
Functional reproducing kernel Hilbert spaces for non-point-evaluation functional data
Comparing fixed and variable-width Gaussian networks
Least square regression with indefinite kernels and coefficient regularization
Semi-discrete Tikhonov regularization in RKHS with large randomly distributed noise
Spherical random sampling of localized functions on 𝕊ⁿ⁻¹
Random sampling of signals concentrated on compact set in localized reproducing kernel subspace of \(L^p (\mathbb{R}^n)\)
Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points
New Hilbert space tools for analysis of graph Laplacians and Markov processes
Random sampling in reproducing kernel subspaces of \(L^p(\mathbb{R}^n)\)
Projected estimators for robust semi-supervised classification
The regularized least squares algorithm and the problem of learning halfspaces
A consistent algorithm to solve Lasso, elastic-net and Tikhonov regularization
Consistency analysis of spectral regularization algorithms
Efficient cross-validation for kernelized least-squares regression with sparse basis expansions
Fast learning of relational kernels
Training regression ensembles by sequential target correction and resampling
Information flow in logic for distributed systems: extending graded consequence
A multiscale support vector regression method on spheres with data compression
Application of Hurwitz-Radon matrices in curve interpolation and almost-smoothing
Feature Selection for Ridge Regression with Provable Guarantees
Generalization ability of fractional polynomial models
Approximate dynamic programming for stochastic \(N\)-stage optimization with application to optimal consumption under uncertainty
On adaptive estimators in statistical learning theory
A tutorial on kernel methods for categorization
Learning sets with separating kernels
Multi-parameter regularization and its numerical realization
Functional optimization by variable-basis approximation schemes
Time-based detection of changes to multivariate patterns
Replacing points by compacta in neural network approximation
Learning with generalization capability by kernel methods of bounded complexity
Mathematics of the neural response
The weight-decay technique in learning from data: an optimization point of view
Interactive information systems: toward perception based computing
Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
Sampling and Stability
Sparse regularization for semi-supervised classification
Some properties of Gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory
Learning with Kernels and Logical Representations
Robust semi-supervised least squares classification by implicit constraints
Kernel Principal Component Analysis: Applications, Implementation and Comparison
Analysis of support vector machines regression
Deep vs. shallow networks: An approximation theory perspective
Robustness by reweighting for kernel estimators: an overview
Rough Sets: From Rudiments to Challenges
Shannon sampling. II: Connections to learning theory
Data-Driven Optimization: A Reproducing Kernel Hilbert Space Approach