A one-sample test for normality with kernel methods
Abstract: We propose a new one-sample test for normality in a Reproducing Kernel Hilbert Space (RKHS). Namely, we test the null hypothesis that the distribution belongs to a given family of Gaussian distributions. Hence our procedure may be applied either to test data for normality or to test parameters (mean and covariance) if the data are assumed Gaussian. Our test is based on the same principle as the MMD (Maximum Mean Discrepancy), which is usually used for two-sample tests such as homogeneity or independence testing. Our method makes use of a special kind of parametric bootstrap (typical of goodness-of-fit tests) which is computationally more efficient than the standard parametric bootstrap. Moreover, an upper bound for the Type-II error highlights the dependence on influential quantities. Experiments illustrate the practical improvement allowed by our test in high-dimensional settings, where common normality tests are known to fail. We also consider an application to covariance rank selection through a sequential procedure.
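The idea in the abstract can be illustrated with a minimal sketch: estimate the squared MMD between the sample and a Gaussian fitted by plug-in mean and covariance, then calibrate the statistic with a plain parametric bootstrap. This is an assumption-laden toy version, not the paper's procedure: it uses a generic RBF kernel, a naive bootstrap rather than the faster weighted variant the authors propose, and arbitrary choices for the bandwidth `sigma` and regularization.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD with an RBF kernel."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    # Squared RKHS distance between the two kernel mean embeddings
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

def normality_test_mmd(X, n_boot=200, sigma=1.0, seed=0):
    """Toy MMD test of H0: X ~ N(mu, Sigma), mu and Sigma estimated from X.

    Uses a naive parametric bootstrap (NOT the efficient weighted bootstrap
    from the paper); returns the test statistic and a bootstrap p-value.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-8 * np.eye(d)  # small ridge for stability
    # Compare the sample to a draw from the fitted Gaussian
    Y = rng.multivariate_normal(mu, cov, size=n)
    stat = rbf_mmd2(X, Y, sigma)
    # Bootstrap the null: refit and recompute the statistic on Gaussian data
    null = []
    for _ in range(n_boot):
        Xb = rng.multivariate_normal(mu, cov, size=n)
        mub = Xb.mean(axis=0)
        covb = np.cov(Xb, rowvar=False) + 1e-8 * np.eye(d)
        Yb = rng.multivariate_normal(mub, covb, size=n)
        null.append(rbf_mmd2(Xb, Yb, sigma))
    p_value = (1 + sum(s >= stat for s in null)) / (1 + n_boot)
    return stat, p_value
```

Because the fitted parameters enter the null distribution, the bootstrap must refit on each replicate; the paper's contribution includes a cheaper calibration scheme that avoids repeating this full loop.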
Recommendations
- Asymptotics and practical aspects of testing normality with kernel methods
- Testing normality using kernel methods
- Algorithmic Learning Theory
- Multivariate tests of independence based on a new class of measures of independence in reproducing kernel Hilbert space
- Equivalence of distance-based and RKHS-based statistics in hypothesis testing
Cites work
- A class of invariant consistent tests for multivariate normality
- A kernel two-sample test
- A new test for multivariate normality
- A one-sample test for normality with kernel methods
- A two sample test in high dimensional data
- Bootstrap based goodness-of-fit-tests
- Equivalence of distance-based and RKHS-based statistics in hypothesis testing
- Goodness-of-fit testing based on a weighted bootstrap: a fast large-sample alternative to the parametric bootstrap
- Hilbert space embeddings and metrics on probability measures
- Kernel Fisher Discriminants for Outlier Detection
- Kernel discriminant analysis and clustering with parsimonious Gaussian process models
- Measures of multivariate skewness and kurtosis with applications
- Multivariate tests-of-fit and uniform confidence bands using a weighted bootstrap
- Non-asymptotic adaptive prediction in functional linear models
- Selecting the number of components in principal component analysis using cross-validation approximations
- Selecting the number of principal components: estimation of the true rank of a noisy matrix
- Sparse Non-Gaussian Component Analysis
- Sparse estimation of a covariance matrix
- Sparse non-Gaussian component analysis by semidefinite programming
- Tests of rank
- Testing Statistical Hypotheses
- The law of large numbers and the central limit theorem in Banach spaces
- The random projection method in goodness of fit for functional data
- Theory of Reproducing Kernels
- Thresholding projection estimators in functional linear models
Cited in (11)
- On combining the zero bias transform and the empirical characteristic function to test normality
- Fourier approach to goodness-of-fit tests for Gaussian random processes
- Asymptotic normality of a consistent estimator of maximum mean discrepancy in Hilbert space
- A one-sample test for normality with kernel methods
- scientific article (zbMATH DE number 7370518)
- Asymptotically Optimal One- and Two-Sample Testing With Kernels
- Dimension-agnostic inference using cross U-statistics
- Tests for multivariate normality -- a critical review with emphasis on weighted L^2-statistics
- On one homogeneity test based on the kernel-type estimators of the distribution density
- A review of goodness-of-fit tests for models involving functional data
- Asymptotics and practical aspects of testing normality with kernel methods
MaRDI item: Q2419660