A GPU-based Gibbs sampler for a unidimensional IRT model (Q1727811)

From MaRDI portal
Property / cites work
    Bayesian estimation of item response curves
    Corrigendum: On the time derivatives of equilibrated response functions
    Statistical theory for logistic mental test models with a prior distribution of ability
    Item Response Theory
    Q4850582
    Q4272782
    Markov chains for exploring posterior distributions. (With discussion)
    Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images
    Q5580837
    High performance Gibbs sampling for IRT models using row-wise decomposition
    Q4855364


Language: English
Label: A GPU-based Gibbs sampler for a unidimensional IRT model
Description: scientific article

    Statements

    A GPU-based Gibbs sampler for a unidimensional IRT model (English)
    Publication date: 21 February 2019
    Summary: Item response theory (IRT) is a popular approach for addressing large-scale statistical problems in psychometrics as well as in other fields. The fully Bayesian approach to estimating IRT models is usually memory-intensive and computationally expensive due to the large number of iterations, which limits the use of the procedure in many applications. In an effort to overcome this restriction, previous studies focused on utilizing the message passing interface (MPI) on a distributed-memory Linux cluster to achieve certain speedups. However, given the high data dependencies within a single Markov chain for IRT models, the communication overhead grows rapidly as the number of cluster nodes increases, making it difficult to further improve performance under such a parallel framework. This study aims to tackle the problem using massively parallel, many-core graphics processing units (GPUs), which are practical, cost-effective, and convenient in actual applications. Performance comparisons among serial CPU, MPI, and compute unified device architecture (CUDA) programs demonstrate that the CUDA GPU approach has many advantages over the CPU-based approach and is therefore preferred.
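
As a rough illustration of the kind of kernel such a sampler parallelizes, the sketch below draws the augmented latent responses of a two-parameter normal-ogive IRT model, one (person, item) cell per thread, using Albert-style data augmentation. This is a minimal sketch, not the authors' implementation: the array names (Y, theta, a, b, Z), the kernel name, and the assumption that RNG states have already been seeded with curand_init are illustrative.

#include <curand_kernel.h>

// One Gibbs step: draw the latent response Z[i][j] ~ N(a[j]*theta[i] - b[j], 1),
// truncated to (0, inf) when Y[i][j] = 1 and to (-inf, 0] when Y[i][j] = 0.
// Each thread handles one (person, item) cell, so all N*J draws run in parallel.
__global__ void sample_latent_responses(const int   *Y,      // N*J observed binary responses (row-major)
                                        const float *theta,  // N person abilities
                                        const float *a,      // J item discriminations
                                        const float *b,      // J item difficulties
                                        float       *Z,      // N*J augmented latent responses (output)
                                        curandState *states, // N*J RNG states, seeded beforehand with curand_init
                                        int N, int J)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= N * J) return;

    int i = idx / J;                          // person index
    int j = idx % J;                          // item index

    curandState local = states[idx];          // copy RNG state to registers
    float mean = a[j] * theta[i] - b[j];      // linear predictor of the normal-ogive model
    float u    = curand_uniform(&local);      // uniform draw in (0, 1]

    // Inverse-CDF sampling from the truncated normal:
    // p0 = P(Z <= 0 | mean); map u into the admissible tail and invert the normal CDF.
    // (A production sampler would clamp p away from 0 and 1 to avoid infinite draws.)
    float p0 = normcdff(-mean);
    float p  = (Y[idx] == 1) ? p0 + u * (1.0f - p0)   // upper tail: Z > 0
                             : u * p0;                 // lower tail: Z <= 0
    Z[idx] = mean + normcdfinvf(p);

    states[idx] = local;                      // persist RNG state for the next iteration
}

In a full sampler, the remaining conditional draws of each iteration (abilities theta given Z, and item parameters a and b given Z) would be issued as separate kernels or batched linear-algebra updates, which is where the GPU's advantage over the MPI cluster approach described in the summary comes from.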