Higher accuracy for Bayesian and frequentist inference: large sample theory for small sample likelihood
From MaRDI portal
Publication:449760
Abstract: Recent likelihood theory produces \(p\)-values that have remarkable accuracy and wide applicability. The calculations use familiar tools such as maximum likelihood estimates (MLEs), observed information and parameter rescaling. The usual evaluation of such \(p\)-values is by simulation, and such simulations do verify that the global distribution of the \(p\)-values is uniform(0, 1) to high accuracy in repeated sampling. The derivation of the \(p\)-values, however, asserts a stronger statement: that they have a uniform(0, 1) distribution conditionally, given identified precision information provided by the data. We take a simple regression example that involves exact precision information and use large sample techniques to extract highly accurate information as to the statistical position of the data point with respect to the parameter; specifically, we examine various \(p\)-values and Bayesian posterior survivor \(s\)-values for validity. With observed data we numerically evaluate the various \(p\)-values and \(s\)-values, and we also record the related general formulas. We then assess these numerical values for accuracy using Markov chain Monte Carlo (McMC) methods. We also propose some third-order likelihood-based procedures for obtaining means and variances of Bayesian posterior distributions, again followed by McMC assessment. Finally, we propose some adaptive McMC methods to improve the simulation acceptance rates. All these methods are based on asymptotic analysis deriving from the effect of additional data, and they use simple calculations based on familiar maximizing values and the related observed information. The example illustrates the general formulas and the ease of calculation, while the McMC assessments demonstrate the numerical validity of the \(p\)-values as the percentage position of a data point.
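The repeated-sampling check described above can be sketched in a few lines. This is a minimal illustration, not the paper's regression example: it assumes a normal-mean model with known variance, computes the exact one-sided \(p\)-value for each simulated sample, and measures the distance between the empirical distribution of the \(p\)-values and uniform(0, 1).

```python
# Minimal sketch (not from the paper): verifying by simulation that
# p-values from a simple normal-mean model with known sigma are
# uniform(0, 1) in repeated sampling.
import math
import random

def p_value(sample, mu0, sigma):
    """Exact one-sided p-value for H0: mu = mu0, sigma known."""
    n = len(sample)
    z = math.sqrt(n) * (sum(sample) / n - mu0) / sigma
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

random.seed(1)
mu0, sigma, n, reps = 0.0, 1.0, 10, 20000
pvals = sorted(p_value([random.gauss(mu0, sigma) for _ in range(n)],
                       mu0, sigma)
               for _ in range(reps))

# Kolmogorov-Smirnov-style distance between the empirical CDF of the
# p-values and the uniform(0, 1) CDF; small values support uniformity.
ks = max(abs((i + 1) / reps - p) for i, p in enumerate(pvals))
print(round(ks, 3))
```

Because the test here is exact, the distance shrinks at the usual \(1/\sqrt{\text{reps}}\) rate; the interest in the paper is that third-order approximate \(p\)-values behave comparably well.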
The example, however, is very simple and transparent, and thus gives little indication that, in a wide generality of models, the formulas accurately separate information for almost any parameter of interest and then give accurate \(p\)-value determinations from that information. As illustration, an enigmatic problem in the literature is discussed and simulations are recorded; various examples in the literature are cited.
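The adaptive McMC idea mentioned in the abstract can be illustrated with a small sketch. This is not the paper's procedure: it is a generic random-walk Metropolis sampler, with a hypothetical target of a standard normal posterior, whose proposal scale is tuned toward a target acceptance rate.

```python
# Minimal sketch (not the paper's method): random-walk Metropolis with
# a proposal scale that adapts toward a target acceptance rate, the
# kind of tuning adaptive McMC methods use to improve acceptance rates.
import math
import random

def adaptive_metropolis(log_post, x0, steps=5000, target=0.44):
    """Sample from exp(log_post), adapting the proposal scale."""
    x, scale, accepts = x0, 1.0, 0
    draws = []
    for t in range(1, steps + 1):
        prop = x + random.gauss(0.0, scale)
        if math.log(random.random()) < log_post(prop) - log_post(x):
            x, accepts = prop, accepts + 1
        # Diminishing adaptation: enlarge the scale when accepting too
        # often, shrink it when accepting too rarely.
        scale *= math.exp((accepts / t - target) / math.sqrt(t))
        draws.append(x)
    return draws, accepts / steps

random.seed(2)
# Hypothetical target: standard normal log-posterior, up to a constant.
draws, rate = adaptive_metropolis(lambda x: -0.5 * x * x, x0=0.0)
print(round(rate, 2))
```

The \(\sqrt{t}\) damping makes the adjustments shrink over time, so the chain's acceptance rate settles near the target while the sampler remains valid in the limit.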
Recommendations
- Accurate parametric inference for small samples
- Regression Analysis, Nonlinear or Nonnormal: Simple and Accurate p Values From Likelihood Analysis
- Stability and uniqueness of \(p\)-values for likelihood-based inference
- Three enigmatic examples and inference from likelihood
- Asymptotics and the theory of inference
Cites work
- scientific article; zbMATH DE number 3667770
- scientific article; zbMATH DE number 3713025
- scientific article; zbMATH DE number 2062414
- scientific article; zbMATH DE number 749293
- scientific article; zbMATH DE number 2117879
- scientific article; zbMATH DE number 774827
- scientific article; zbMATH DE number 800290
- scientific article; zbMATH DE number 3426640
- scientific article; zbMATH DE number 3189754
- scientific article; zbMATH DE number 3190795
- A simple general formula for tail probabilities for frequentist and Bayesian inference
- An essay towards solving a problem in the doctrine of chances. By the late Rev. Mr. Bayes, F. R. S. communicated by Mr. Price, in a letter to John Canton, A. M. F. R. S.
- An invariant form for the prior probability in estimation problems
- Ancillaries and conditional inference (with comments and rejoinder)
- Approximations of marginal tail probabilities for a class of smooth functions with applications to Bayesian and conditional inference
- Computation of distribution functions from likelihood information near observed data
- Higher-Order Asymptotic Approximation: Laplace, Saddlepoint, and Related Methods
- Improved Likelihood Inference for Discrete Data
- Interval estimation for a binomial proportion (with comments and a rejoinder)
- Likelihood centered asymptotic model exponential and location model versions
- Modified signed log likelihood ratio
- Monte Carlo sampling methods using Markov chains and their applications
- Regression Analysis, Nonlinear or Nonnormal: Simple and Accurate p Values From Likelihood Analysis
- Saddle point approximation for the distribution of the sum of independent random variables
- Saddlepoint Approximations in Statistics
- Some Problems Connected with Statistical Inference
- Strong matching of frequentist and Bayesian parametric inference
- The Behrens-Fisher problem revisited: a Bayes-frequentist synthesis
- The roles of conditioning in inference (with comments and rejoinder)
Cited in (8)
- On default priors and approximate location models
- Is Bayes posterior just quick and dirty confidence?
- Objective Bayes models for compatibility assessment and bias estimation
- Three enigmatic examples and inference from likelihood
- Higher order accurate procedures to compare two normal populations
- Confidence regions for comparison of two normal samples
- Improvement over Bayes prediction in small samples in the presence of model uncertainty
- Accurate parametric inference for small samples
This page was built for publication: Higher accuracy for Bayesian and frequentist inference: large sample theory for small sample likelihood