An empirical power comparison of univariate goodness-of-fit tests for normality


Publication: 3589960

DOI: 10.1080/00949650902740824 · zbMath: 1195.62056 · OpenAlex: W1986263287 · MaRDI QID: Q3589960

Aníbal Costa, Xavier Romão, Raimundo Delgado

Publication date: 17 September 2010

Published in: Journal of Statistical Computation and Simulation

Full work available at URL: https://doi.org/10.1080/00949650902740824
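As an illustration of the kind of study the title describes, the sketch below estimates the empirical power of a few standard normality tests by Monte Carlo simulation against a skewed alternative. This is not the authors' protocol: the tests, the lognormal alternative, the sample size, the replication count, and the significance level are all illustrative assumptions, using only tests available in scipy.stats that return a p-value directly.

```python
# Illustrative sketch (not the paper's protocol): Monte Carlo estimate of the
# empirical power of a few standard normality tests against a skewed alternative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(12345)

N_REPS = 2000      # number of Monte Carlo replications (illustrative choice)
SAMPLE_SIZE = 50   # sample size per replication (illustrative choice)
ALPHA = 0.05       # nominal significance level

# Normality tests in scipy.stats whose result includes a p-value (index 1).
tests = {
    "Shapiro-Wilk": lambda x: stats.shapiro(x)[1],
    "Jarque-Bera": lambda x: stats.jarque_bera(x)[1],
    "D'Agostino-Pearson": lambda x: stats.normaltest(x)[1],
}

# Alternative distribution: lognormal (skewed), so the rejection rate
# estimates power rather than the nominal size of each test.
rejections = {name: 0 for name in tests}
for _ in range(N_REPS):
    sample = rng.lognormal(mean=0.0, sigma=0.5, size=SAMPLE_SIZE)
    for name, test_fn in tests.items():
        if test_fn(sample) < ALPHA:
            rejections[name] += 1

for name, count in rejections.items():
    print(f"{name}: empirical power = {count / N_REPS:.3f}")
```

Running the same loop with samples drawn from a standard normal distribution instead of the lognormal alternative gives the empirical size of each test, which should stay close to the nominal level ALPHA.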




Related Items

A comprehensive empirical power comparison of univariate goodness-of-fit tests for the Laplace distribution
A powerful and interpretable alternative to the Jarque–Bera test of normality based on 2nd-power skewness and kurtosis, using the Rao's score test on the APD family
Recognizing distributions rather than goodness-of-fit testing
Test of Normality Against Generalized Exponential Power Alternatives
EM-based algorithms for autoregressive models with t-distributed innovations
Mosaic normality test
Modified Lilliefors goodness-of-fit test for normality
The performance of univariate goodness-of-fit tests for normality based on the empirical characteristic function in large samples
Normality tests for dependent data: large-sample and bootstrap approaches
Testing normality via a distributional fixed point property in the Stein characterization
On combining the zero bias transform and the empirical characteristic function to test normality
Penalized power properties of the normality tests in the presence of outliers
A comparison of normality testing methods by empirical power and distribution of P-values
Are You All Normal? It Depends!
Graphical comparison of normality tests for unimodal distribution data
On the automatic selection of the tuning parameter appearing in certain families of goodness-of-fit tests
Recognizing distributions using method of potential functions
Nonparametric statistical analysis for multiple comparison of machine learning regression algorithms
Statistical power of goodness-of-fit tests based on the empirical distribution function for type-I right-censored data
An estimation of Phi divergence and its application in testing normality
Asymptotic power of tests of normality under local alternatives
Testing normality based on new entropy estimators
A new test of multivariate normality by a double estimation in a characterizing PDE
A system dynamics-based simulation model of production line with cross-trained workers
Discriminating between distributions using feed-forward neural networks
Detection of non-Gaussianity
Modified entropy estimators for testing normality
Fitting polynomial trend to time series by the method of Buys-Ballot estimators
New fat-tail normality test based on conditional second moments with applications to finance
On automatic kernel density estimate-based tests for goodness-of-fit
Statistical inference based on weighted divergence measures with simulations and applications
A Correlation Test for Normality Based on the Lévy Characterization

