Abstract
One of the main objectives of psychometrics is to determine how many common factors underlie observed responses. When there is a theoretical basis for specifying the latent structure, assessing model fit and making model comparisons allow researchers to evaluate latent dimensionality. In the absence of such a theoretical framework, dimensionality can be examined using data-driven approaches such as Horn’s parallel analysis or Exploratory Graph Analysis (EGA). These strategies, among others, are often used as a preliminary step before fitting an (unconstrained) exploratory factor model.
Although psychometricians have spent many years developing and refining methods for evaluating latent dimensionality, applying these methods in other fields is not always straightforward. A clear example is experimental psychology, where interest in the psychometric properties of experimental tasks has grown in recent years. In this context, researchers design and administer multiple tasks (e.g., Stroop, Flanker, Simon) that aim to measure the same underlying cognitive process (e.g., behavioral inhibition). To analyze these data, several authors have suggested fitting a linear mixed model in which the random effects (i.e., the model’s latent variables) represent each participant’s experimental effect (e.g., Stroop effect, Flanker effect, Simon effect). This approach allows researchers to quantify individual differences while preserving the experimental structure; however, it is limited to capturing correlations between experimental tasks and does not exploit the full potential of psychometric latent-variable decomposition.
Recently, Mehrvarz and Rouder (2024) introduced a hierarchical psychometric model designed to estimate the common factors underlying experimental effects from multiple tasks. While this represents a significant advance, determining the number of latent common factors can be challenging, particularly when the latent structure is unclear. Fortunately, because experimental effects are themselves latent variables, these common factors can be viewed as “second-order factors” in psychometric terms, and established methods exist for determining their number. Building directly on the approach proposed by Jiménez et al. (2023), the present study extends their procedures to the experimental domain. Specifically, our workflow for assessing latent dimensionality involves: (1) fitting a linear mixed model, (2) extracting each participant’s experimental effects for each task, (3) estimating the correlations among these effects, and (4) applying dimensionality detection techniques to the resulting correlation matrix to identify the number of common factors.
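The four-step workflow above can be sketched in a few lines. The sketch below is a simplified illustration, not the authors’ implementation: it uses simulated data with one common factor, estimates each participant’s per-task effect as a simple mean trial contrast (where the full workflow would fit a linear mixed model and extract random-effect estimates), and applies an eigenvalue-based version of Horn’s parallel analysis with a 95th-percentile random-data threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_tasks, n_trials = 200, 6, 100

# Ground truth: a single common factor drives the true effects in all tasks
factor = rng.normal(size=n_sub)
loadings = np.full(n_tasks, 0.7)
true_effects = factor[:, None] * loadings + rng.normal(scale=0.5, size=(n_sub, n_tasks))

# Steps 1-2 (simplified): estimate each participant's effect per task as the
# mean contrast over noisy trials; the full workflow would instead fit a
# linear mixed model and extract the random-effect estimates
trial_noise = rng.normal(scale=2.0, size=(n_sub, n_tasks, n_trials))
est_effects = (true_effects[:, :, None] + trial_noise).mean(axis=2)

# Step 3: correlation matrix of the estimated effects
R = np.corrcoef(est_effects, rowvar=False)

# Step 4: Horn's parallel analysis -- retain dimensions whose eigenvalues
# exceed the 95th percentile of eigenvalues from uncorrelated random data
obs_eig = np.sort(np.linalg.eigvalsh(R))[::-1]
null_eig = np.array([
    np.sort(np.linalg.eigvalsh(
        np.corrcoef(rng.normal(size=(n_sub, n_tasks)), rowvar=False)))[::-1]
    for _ in range(500)
])
threshold = np.percentile(null_eig, 95, axis=0)
n_factors = int(np.sum(obs_eig > threshold))
print(n_factors)  # → 1 (the single simulated factor is recovered)
```

In applied settings the same recipe applies unchanged from step 3 onward: once a participants-by-tasks matrix of estimated effects is available, its correlation matrix can be passed to parallel analysis or EGA.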
We conducted a simulation study to assess the performance of this strategy, comparing parallel analysis (with different factor extraction methods) and EGA (with different unidimensionality detection methods and clustering algorithms). The simulations varied the number of latent dimensions, sample size, factor loadings, and the reliability of the experimental effects. Results indicate that, as reliability and factor loadings increase, both EGA with the Louvain algorithm and parallel analysis with principal components extraction achieve near-perfect accuracy in detecting the correct number of latent dimensions. This method therefore provides a straightforward and effective alternative for determining how many latent dimensions underlie experimental tasks.
| Oral presentation | From Random Effects to Common Factors: Latent Dimensionality Assessment in Experimental Psychology |
|---|---|
| Author | Ricardo Rey Sáez |
| Affiliation | Universidad Autónoma de Madrid |
| Keywords | Dimensionality, Experimental Psychology, Reliability, Psychometrics |