autoFC (Q125264): Difference between revisions

From MaRDI portal
Importer: Added link to MaRDI item.
 
(12 intermediate revisions by 2 users not shown)
Property / last update
7 June 2021 (2021-06-07T00:00:00Z, Gregorian calendar, precision: 1 day)
 
Property / last update: 7 June 2021 / rank
Normal rank
 
Property / author
 
Property / author: Tianjun Sun / rank
Normal rank
 
Property / author
 
Property / author: Bo Zhang / rank
Normal rank
 
Property / copyright license
 
Property / copyright license: GNU General Public License, version 3.0 / rank
Normal rank
 
Property / cites work
 
Property / cites work: Does forcing reduce faking? A meta-analytic review of forced-choice personality measures in high-stakes situations. / rank
Normal rank
 
Property / cites work
 
Property / cites work: Though Forced, Still Valid: Psychometric Equivalence of Forced-Choice and Single-Statement Measures / rank
Normal rank
 
Property / cites work
 
Property / cites work: Item Desirability Matching in Forced-choice Test Construction / rank
Normal rank
 
Property / cites work
 
Property / cites work: Item Response Modeling of Forced-Choice Questionnaires / rank
Normal rank
 
Property / imports
 
Property / imports: irrCAC / rank
Normal rank
 
Property / software version identifier
 
0.2.0.1001
Property / software version identifier: 0.2.0.1001 / rank
 
Normal rank
Property / software version identifier: 0.2.0.1001 / qualifier
 
publication date: 17 February 2024 (2024-02-17T00:00:00Z, Gregorian calendar, precision: 1 day)
Property / last update
 
17 February 2024 (2024-02-17T00:00:00Z, Gregorian calendar, precision: 1 day)
Property / last update: 17 February 2024 / rank
 
Normal rank
Property / description
 
The forced-choice (FC) response format has gained increasing popularity and interest for its resistance to faking when well designed (Cao & Drasgow, 2019 <doi:10.1037/apl0000414>). To establish well-designed FC scales, each item within a block should typically measure a different trait and have a similar level of social desirability (Zhang et al., 2020 <doi:10.1177/1094428119836486>). Recent research also suggests the importance of high inter-item agreement on social desirability between items within a block (Pavlov et al., 2021 <doi:10.31234/osf.io/hmnrc>). In addition, FC developers may need to maximize factor loading differences (Brown & Maydeu-Olivares, 2011 <doi:10.1177/0013164410375112>) or minimize item location differences (Cao & Drasgow, 2019 <doi:10.1037/apl0000414>), depending on the scoring model. The decision of which items should be assigned to the same block, termed item pairing, is thus critical to the quality of an FC test. This pairing process is essentially an optimization process that is currently carried out manually. However, because multiple objectives often need to be met simultaneously, manual pairing becomes impractical or even infeasible once the number of latent traits and/or the number of items per trait is relatively large. To address these problems, autoFC was developed as a practical tool for facilitating the automatic construction of FC tests (Li et al., 2022 <doi:10.1177/01466216211051726>), exempting users from the burden of manual item pairing and reducing the computational costs and biases induced by simple ranking methods. Given the characteristics of each item (and item responses), FC measures can be constructed either automatically, based on user-defined pairing criteria and weights, or from an exact specification of each block (i.e., a blueprint; see Li et al., 2024 <doi:10.1177/10944281241229784>). Users can also generate simulated responses based on the Thurstonian Item Response Theory model (Brown & Maydeu-Olivares, 2011 <doi:10.1177/0013164410375112>) and predict trait scores of simulated or actual respondents based on an estimated model.
Property / description: The forced-choice (FC) response format has gained increasing popularity and interest for its resistance to faking when well designed (Cao & Drasgow, 2019 <doi:10.1037/apl0000414>). To establish well-designed FC scales, each item within a block should typically measure a different trait and have a similar level of social desirability (Zhang et al., 2020 <doi:10.1177/1094428119836486>). Recent research also suggests the importance of high inter-item agreement on social desirability between items within a block (Pavlov et al., 2021 <doi:10.31234/osf.io/hmnrc>). In addition, FC developers may need to maximize factor loading differences (Brown & Maydeu-Olivares, 2011 <doi:10.1177/0013164410375112>) or minimize item location differences (Cao & Drasgow, 2019 <doi:10.1037/apl0000414>), depending on the scoring model. The decision of which items should be assigned to the same block, termed item pairing, is thus critical to the quality of an FC test. This pairing process is essentially an optimization process that is currently carried out manually. However, because multiple objectives often need to be met simultaneously, manual pairing becomes impractical or even infeasible once the number of latent traits and/or the number of items per trait is relatively large. To address these problems, autoFC was developed as a practical tool for facilitating the automatic construction of FC tests (Li et al., 2022 <doi:10.1177/01466216211051726>), exempting users from the burden of manual item pairing and reducing the computational costs and biases induced by simple ranking methods. Given the characteristics of each item (and item responses), FC measures can be constructed either automatically, based on user-defined pairing criteria and weights, or from an exact specification of each block (i.e., a blueprint; see Li et al., 2024 <doi:10.1177/10944281241229784>). Users can also generate simulated responses based on the Thurstonian Item Response Theory model (Brown & Maydeu-Olivares, 2011 <doi:10.1177/0013164410375112>) and predict trait scores of simulated or actual respondents based on an estimated model. / rank
 
Normal rank
Property / author
 
Property / author: Mengtong Li / rank
 
Normal rank
Property / author
 
Property / author: Tianjun Sun / rank
 
Normal rank
Property / author
 
Property / author: Bo Zhang / rank
 
Normal rank
Property / copyright license
 
Property / copyright license: GNU General Public License, version 3.0 / rank
 
Normal rank
Property / imports
 
Property / imports: dplyr / rank
 
Normal rank
Property / imports
 
Property / imports: irrCAC / rank
 
Normal rank
Property / imports
 
Property / imports: lavaan / rank
 
Normal rank
Property / imports
 
Property / imports: MASS / rank
 
Normal rank
Property / imports
 
Property / imports: SimDesign / rank
 
Normal rank
Property / imports
 
Property / imports: thurstonianIRT / rank
 
Normal rank
Property / imports
 
Property / imports: MplusAutomation / rank
 
Normal rank
Property / imports
 
Property / imports: glue / rank
 
Normal rank
Property / imports
 
Property / imports: tidyr / rank
 
Normal rank
Property / cites work
 
Property / cites work: Does forcing reduce faking? A meta-analytic review of forced-choice personality measures in high-stakes situations. / rank
 
Normal rank
Property / cites work
 
Property / cites work: Though Forced, Still Valid: Psychometric Equivalence of Forced-Choice and Single-Statement Measures / rank
 
Normal rank
Property / cites work
 
Property / cites work: Item Desirability Matching in Forced-choice Test Construction / rank
 
Normal rank
Property / cites work
 
Property / cites work: Item Response Modeling of Forced-Choice Questionnaires / rank
 
Normal rank
Property / cites work
 
Property / cites work: autoFC: An R Package for Automatic Item Pairing in Forced-Choice Test Construction / rank
 
Normal rank
Property / cites work
 
Property / cites work: Mixed-Keying or Desirability-Matching in the Construction of Forced-Choice Measures? An Empirical Investigation and Practical Recommendations / rank
 
Normal rank
Property / depends on software
 
Property / depends on software: R / rank
 
Normal rank
Property / depends on software: R / qualifier
 
Property / MaRDI profile type
 
Property / MaRDI profile type: MaRDI software profile / rank
 
Normal rank
 

Latest revision as of 18:56, 12 March 2024

Automatic Construction of Forced-Choice Tests
Language: English
Label: autoFC
Description: Automatic Construction of Forced-Choice Tests
Also known as: (none)

    Statements

    software version identifier: 0.1.2 (publication date: 7 June 2021)
    software version identifier: 0.2.0.1001 (publication date: 17 February 2024)
    last update: 17 February 2024
    The forced-choice (FC) response format has gained increasing popularity and interest for its resistance to faking when well designed (Cao & Drasgow, 2019 <doi:10.1037/apl0000414>). To establish well-designed FC scales, each item within a block should typically measure a different trait and have a similar level of social desirability (Zhang et al., 2020 <doi:10.1177/1094428119836486>). Recent research also suggests the importance of high inter-item agreement on social desirability between items within a block (Pavlov et al., 2021 <doi:10.31234/osf.io/hmnrc>). In addition, FC developers may need to maximize factor loading differences (Brown & Maydeu-Olivares, 2011 <doi:10.1177/0013164410375112>) or minimize item location differences (Cao & Drasgow, 2019 <doi:10.1037/apl0000414>), depending on the scoring model. The decision of which items should be assigned to the same block, termed item pairing, is thus critical to the quality of an FC test. This pairing process is essentially an optimization process that is currently carried out manually. However, because multiple objectives often need to be met simultaneously, manual pairing becomes impractical or even infeasible once the number of latent traits and/or the number of items per trait is relatively large. To address these problems, autoFC was developed as a practical tool for facilitating the automatic construction of FC tests (Li et al., 2022 <doi:10.1177/01466216211051726>), exempting users from the burden of manual item pairing and reducing the computational costs and biases induced by simple ranking methods. Given the characteristics of each item (and item responses), FC measures can be constructed either automatically, based on user-defined pairing criteria and weights, or from an exact specification of each block (i.e., a blueprint; see Li et al., 2024 <doi:10.1177/10944281241229784>). Users can also generate simulated responses based on the Thurstonian Item Response Theory model (Brown & Maydeu-Olivares, 2011 <doi:10.1177/0013164410375112>) and predict trait scores of simulated or actual respondents based on an estimated model.
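The description above frames FC block construction as a multi-objective optimization problem: mix traits within a block, match social desirability, and, depending on the scoring model, balance loadings or locations. The base-R sketch below is only a toy illustration of that idea, not the autoFC package's actual code or interface; all item characteristics, penalty weights, and the annealing schedule are invented for the example. The package itself exposes this workflow through its own construction functions and a blueprint option, as described above.

set.seed(1)

## Toy item pool: 5 traits, 6 items per trait, each item with a social
## desirability rating (all values made up for this sketch).
n_traits <- 5
items_per_trait <- 6
items <- data.frame(
  id           = seq_len(n_traits * items_per_trait),
  trait        = rep(seq_len(n_traits), each = items_per_trait),
  desirability = round(rnorm(n_traits * items_per_trait, mean = 3.5, sd = 0.8), 2)
)

block_size <- 3
n_blocks   <- nrow(items) / block_size

## Cost of a full assignment: penalize blocks that repeat a trait and blocks
## whose items differ strongly in social desirability (arbitrary toy weights).
cost <- function(assignment) {
  total <- 0
  for (b in seq_len(n_blocks)) {
    blk   <- items[assignment == b, ]
    total <- total +
      5 * (block_size - length(unique(blk$trait))) +
      (max(blk$desirability) - min(blk$desirability))
  }
  total
}

## Random initial assignment of items to blocks, then simulated annealing:
## swap two items and keep the swap if it helps, or occasionally if it hurts.
assignment <- sample(rep(seq_len(n_blocks), each = block_size))
current    <- cost(assignment)
temp       <- 1
for (step in seq_len(5000)) {
  candidate      <- assignment
  idx            <- sample(nrow(items), 2)
  candidate[idx] <- candidate[rev(idx)]
  new_cost       <- cost(candidate)
  if (new_cost < current || runif(1) < exp((current - new_cost) / temp)) {
    assignment <- candidate
    current    <- new_cost
  }
  temp <- temp * 0.999
}

cat("final cost:", current, "\n")
print(split(items$id, assignment))  # item IDs grouped into optimized blocks

With 5 traits and 6 items per trait, the toy run groups the 30 items into 10 triplet blocks in which traits rarely repeat and desirability values are close. In the real package the pairing criteria and their weights are user-defined (or replaced by an exact blueprint), which is what makes the automated approach practical for large item pools; see the package documentation on CRAN for the actual functions.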

    Identifiers
