The European Congress of Methodology is organized biennially by the European Association of Methodology (EAM), a society established in 2004, which currently brings together a large number of researchers from all over the world to exchange ideas on developing new methods and on applying new methodologies in empirical research.
As public trust in standardized testing declines, AI-driven methods such as machine learning and natural language processing are increasingly being applied to optimize traditional measurement approaches. While these innovations offer important gains in efficiency, cost, and scalability, there is a risk that, without also addressing broader concerns of trust, equity, and relevance, psychometrics may become increasingly disconnected from evolving scientific standards, societal needs, and ethical principles.
Psychometrics has been instrumental in establishing psychology and education as scientific disciplines, sharpening clinical diagnosis, advancing prevention, promoting educational equity, and exposing systemic inequities. Yet, as we navigate the complexities of an increasingly diverse and technology-driven 21st century, it is necessary to ask whether our current assessments, still largely grounded in 20th-century measurement theories and assumptions, are adequately equipped to meet the evolving needs of today’s test users.
This presentation offers a critical yet constructive reflection on how fragmented assessment systems, outdated assumptions, and rigid adherence to technical standards detached from the lived realities of those being assessed can unintentionally limit our collective impact and overlook opportunities to better serve society. By revisiting classic debates, I invite us to question long-held measurement mantras and consider how the field can evolve to better serve a rapidly changing world. Through concrete examples, I advocate for assessment systems that are responsive to real-world contexts, address the diverse needs of test users, and thoughtfully balance implementation trade-offs by considering opportunity costs.
Ultimately, aligning psychometrics with the demands of the 21st century will position us to leverage AI-driven methods not only to optimize traditional measurement, but also to become more scientifically interdisciplinary, socially responsive, and ethically grounded in advancing societal progress.
This talk explores the core principles and practical applications of AI. We begin by defining AI as the discipline that imbues machines with human-like intelligence, encompassing reasoning, learning, and creativity. Key characteristics include the ability to perceive, interact, solve problems, act autonomously, and adapt to environments. We will cover the diverse problems AI addresses, such as classification, regression, prediction, clustering, optimization, and Natural Language Processing (NLP), alongside content generation. The presentation traces AI's evolution from symbolic AI to Machine Learning, Deep Learning, and the transformative rise of Generative AI. We will delve into Large Language Models (LLMs) such as GPT and Gemini, and the current technologies based on Agentic AI. The global impact of AI is undeniable, with its interdisciplinary nature driving widespread applications across various sectors, significantly improving efficiency and enabling new capabilities worldwide. The presentation will conclude by analysing the profound influence of this technology on education and research. AI's intrinsic capabilities in learning, reasoning, communication, and creativity are directly applicable, assisting with academic text analysis, content creation, and report generation. In this scenario, AI is becoming an indispensable assistant for students, researchers, and educators alike, with autonomous AI Agents poised to further revolutionize these fields.
In this State of the Art Address, I revisit and extend the conceptual boundaries of two core mixed methods transformation techniques: qualitizing and quantitizing. In so doing, I spotlight the expanded methodological and philosophical dimensions that elevate their application in contemporary mixed methods research. The first third of the presentation is dedicated to qualitizing, defined as the transformation of quantitative data into qualitative form that can be analyzed qualitatively. I will outline how qualitizing has evolved to include five major elements: (1) it can yield numerous representations (e.g., narratives, profiles), (2) it can stem from either quantitative or qualitative data, (3) it may involve either qualitative or quantitative analyses, (4) it can be applied as a single or multiple analyses, and (5) it can produce a fully integrated analysis. Special emphasis will be placed on narrative profile formation—such as modal, average, holistic, comparative, and normative profiles—which allows for rich, contextualized interpretations of numerical data.
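As a toy illustration of the profile-formation idea described above, numeric ratings can be rendered as an "average profile" narrative. The scale dimensions, verbal labels, and cut-offs below are my own assumptions for the sketch, not part of the author's framework:

```python
# A minimal sketch of "qualitizing": turning numeric survey scores into a
# narrative "average profile". The dimensions, labels, and cut-offs are
# illustrative assumptions only.
from statistics import mean

# Hypothetical 1-5 Likert scores for three respondents
scores = {
    "autonomy": [4, 5, 4],
    "competence": [2, 1, 2],
    "relatedness": [3, 3, 4],
}

def label(avg: float) -> str:
    """Map a 1-5 average onto a verbal descriptor (assumed cut-offs)."""
    if avg >= 4.0:
        return "high"
    if avg >= 2.5:
        return "moderate"
    return "low"

def average_profile(data: dict) -> str:
    """Build a one-sentence narrative profile from variable means."""
    parts = [f"{label(mean(v))} {k}" for k, v in data.items()]
    return "The average respondent reports " + ", ".join(parts) + "."

print(average_profile(scores))
# → "The average respondent reports high autonomy, low competence, moderate relatedness."
```

Modal or normative profiles would follow the same pattern, substituting the mode or a norm-referenced comparison for the mean before the verbal mapping.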
In the second third of the address, I will introduce the DIME-Driven Model of Quantitizing, which encompasses four core classes of Level 1 quantitizing:
• Descriptive-Based Quantitizing transforms qualitative data into quantitative metrics to summarize patterns using measures such as mean, standard deviation, percentiles, and skewness.
• Inferential-Based Quantitizing involves converting qualitative data into formats suitable for statistical inference, including tests such as analysis of variance (ANOVA), regression, and structural equation modeling.
• Measurement-Based Quantitizing refers to the transformation of qualitative insights into quantifiable constructs for instrument development and validation, often using techniques such as Rasch modeling and Item Response Theory (IRT).
• Exploratory-Based Quantitizing converts qualitative data into numerical formats to explore underlying patterns, relationships, or structures through methods such as factor analysis, cluster analysis, and correspondence analysis.
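The first of these classes can be sketched concretely. In this illustrative example (the codes and transcripts are invented, and the descriptive summary stands in for the fuller set of metrics named above), qualitative interview codes are converted to per-participant counts and then summarized numerically:

```python
# Sketch of Descriptive-Based Quantitizing: qualitative codes assigned to
# interview transcripts are converted to per-participant frequencies, which
# are then summarized with descriptive statistics.
from statistics import mean, stdev

# Hypothetical coded transcripts (participant -> list of applied codes)
coded_transcripts = {
    "P1": ["barrier", "support", "barrier"],
    "P2": ["support"],
    "P3": ["barrier", "barrier", "barrier", "support"],
}

# Quantitize: frequency of the "barrier" code per participant
barrier_counts = [codes.count("barrier") for codes in coded_transcripts.values()]

print(barrier_counts)                    # [2, 0, 3]
print(round(mean(barrier_counts), 2))    # 1.67
print(round(stdev(barrier_counts), 2))   # 1.53
```

The same count matrix could then feed the inferential, measurement-based, or exploratory classes (e.g., as input to an ANOVA or a factor analysis).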
In this presentation, I will also introduce for the first time a novel concept, which I call Transformatizing. Transformatizing refers to the integrated process of applying both qualitizing and quantitizing techniques within a single analytical framework to fully harness and interweave the strengths of qualitative and quantitative data transformations. It represents a dynamic, bidirectional approach wherein data are fluidly transformed across paradigms to achieve comprehensive, meta-integrative insights. Major components of transformatizing are QuanQualitizing and QualQuantitizing—both of which will be defined.
To concretize these ideas, I will present a real example from the published literature that illustrates both QuanQualitizing and QualQuantitizing in action.
Throughout the session, I will illustrate these expanded definitions with practical examples from diverse research contexts. Attendees will leave with a clearer understanding of how thoughtfully transforming data across traditions/paradigms not only enriches methodological rigor, but also facilitates deeper, more meaningful meta-inferences. I invite colleagues to consider how these advanced transformation techniques can further democratize evidence, foster integration, and propel mixed methods research into new frontiers.
Meta-analytic structural equation modeling (MASEM), originally referred to as model-based meta-analysis, involves testing structural equation models on meta-analytic data. The technique is being applied in a broad range of fields, including education, psychology, environmental research, information security, medicine, and ecology. In this talk I will outline various methods that can be used to apply MASEM. I will explain how different methods may lead to different (possibly incorrect) conclusions, and consider the pros and cons of the methods currently available. As MASEM is a relatively new technique, there are many opportunities to extend existing approaches, enabling researchers to make better use of available data. Examples of necessary developments include the analysis of dependent effect sizes, handling effect size heterogeneity, synthesizing raw data, analyzing mean structures and evaluating model fit. I will therefore conclude my talk by presenting a research agenda for MASEM.
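To make the basic logic concrete, a common two-stage variant of MASEM can be sketched in miniature (this toy version uses simple sample-size weighting in Stage 1 and is not the speaker's specific method; the study matrices and sample sizes are fabricated):

```python
# Toy two-stage MASEM sketch: Stage 1 pools study correlation matrices with
# sample-size weights; Stage 2 derives standardized path coefficients for a
# simple model X1, X2 -> Y from the pooled matrix. Data are fabricated.
import numpy as np

# Each study reports correlations among X1, X2, Y (variable order: X1, X2, Y)
studies = [
    (np.array([[1.0, 0.3, 0.5],
               [0.3, 1.0, 0.4],
               [0.5, 0.4, 1.0]]), 100),
    (np.array([[1.0, 0.2, 0.4],
               [0.2, 1.0, 0.3],
               [0.4, 0.3, 1.0]]), 200),
    (np.array([[1.0, 0.4, 0.6],
               [0.4, 1.0, 0.5],
               [0.6, 0.5, 1.0]]), 150),
]

# Stage 1: sample-size-weighted pooled correlation matrix
total_n = sum(n for _, n in studies)
R = sum(n * r for r, n in studies) / total_n

# Stage 2: standardized path coefficients beta = R_xx^{-1} r_xy
R_xx = R[:2, :2]   # correlations among predictors
r_xy = R[:2, 2]    # predictor-outcome correlations
beta = np.linalg.solve(R_xx, r_xy)
print(np.round(beta, 3))
```

Full MASEM implementations additionally weight by the sampling covariance of the correlations, model between-study heterogeneity, and provide fit statistics, which is precisely where the methodological choices discussed in the talk diverge.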
To do.
In survey research, especially under unsupervised online conditions, careless responding—also referred to as insufficient effort responding—remains a significant threat to data quality. When respondents fail to engage meaningfully with questionnaire content, the resulting bias can weaken psychometric properties, distort correlations, and lead to erroneous conclusions. Recent estimates place the prevalence of careless responding between 10% and 40%, depending on survey design, context, and detection methods (Kam & Meyer, 2015; Oppenheimer et al., 2009; Ward et al., 2017). This keynote will synthesize current evidence on best practices for preventing, detecting, and managing careless responding.
Prevention strategies should reflect the dual nature of careless responding. Empirical evidence, including recent longitudinal studies from our own team, indicates that response attentiveness can vary across time and context, and may be influenced by both individual traits and situational demands (e.g., Tomas et al., 2024; Hasselhorn et al., 2023). Some individuals are consistently attentive or inattentive (trait-like), while others shift depending on situational context—such as fatigue, time pressure, or lack of interest. To address both patterns, researchers should combine context-sensitive strategies (e.g., optimizing survey length or timing) with broader, person-focused approaches like motivational instructions or commitment pledges, which can reduce carelessness even among those predisposed to inattention.
Detection strategies should be multifaceted. While attention check items offer a direct, in-survey method to flag inattentiveness, their effectiveness may decline over time as participants become familiar with them (Kam & Chan, 2018). Post-hoc statistical indices such as longstring response patterns, psychometric synonyms/antonyms, and Mahalanobis distance can be useful (Yentes, 2023), although researchers are encouraged to adopt model-based techniques, such as constrained factor and IRT mixture models (e.g., Kam & Cheung, 2023; Ulitzsch et al., 2022) and multilevel latent class analyses (Hasselhorn et al., 2023), which allow for the classification of random, patterned, and attentive respondents—without the need for additional survey items.
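Two of the post-hoc indices named above can be sketched in a few lines (the response matrix is invented; in practice these indices are computed on full questionnaire data and compared against validated cut-offs):

```python
# Sketch of two post-hoc careless-responding indices: the longstring index
# (longest run of identical consecutive answers) and Mahalanobis distance
# from the sample centroid. Data are invented for illustration.
import numpy as np

def longstring(responses):
    """Return the length of the longest run of identical consecutive responses."""
    longest = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

X = np.array([
    [3, 3, 3, 3, 3, 3],   # suspicious straight-liner
    [2, 4, 1, 5, 2, 4],
    [3, 2, 4, 3, 2, 5],
    [1, 5, 2, 4, 3, 2],
])

print([longstring(row) for row in X.tolist()])   # [6, 1, 1, 1]

# Mahalanobis distance (squared) of each respondent from the centroid
mu = X.mean(axis=0)
cov = np.cov(X, rowvar=False)
inv = np.linalg.pinv(cov)   # pseudo-inverse guards against a singular covariance
d2 = [(row - mu) @ inv @ (row - mu) for row in X]
print(np.round(d2, 2))
```

High longstring values flag patterned responding, while large Mahalanobis distances flag multivariate outliers; neither alone is conclusive, which is why the model-based classification techniques cited above are preferable when feasible.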
Managing careless responding requires more than simply discarding data. Once careless responding (CR) has been detected, researchers must make thoughtful decisions about how to handle it. Model-based approaches, such as constrained factor mixture models, can help disentangle trait-relevant from trait-irrelevant response patterns at the group level. However, Edwards (2019) recommends alternative strategies, such as statistically controlling for CR indices or using them as moderator variables in substantive models. These approaches acknowledge that CR can systematically influence results and should be modeled—not merely eliminated—to preserve data quality and enhance replicability.
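The covariate-and-moderator strategy attributed to Edwards (2019) above can be sketched as a moderated regression (the data are simulated and the coefficients are arbitrary; this is an illustration of the modeling idea, not a reanalysis):

```python
# Sketch of entering a careless-responding (CR) index into a substantive
# model as both a covariate and a moderator. Data are simulated; the "true"
# coefficients below are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)        # substantive predictor
cr = rng.uniform(size=n)      # CR index (e.g., a rescaled longstring score)
# Simulated truth: the x-y relation weakens as carelessness rises
y = 0.5 * x - 0.4 * cr - 0.3 * x * cr + rng.normal(scale=0.5, size=n)

# Moderated regression: y ~ 1 + x + cr + x:cr
X = np.column_stack([np.ones(n), x, cr, x * cr])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coefs, 2))   # ≈ [intercept, b_x, b_cr, b_interaction]
```

A significant interaction term would indicate that CR systematically distorts the substantive relation, supporting the argument that CR should be modeled rather than merely eliminated.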
There is hope—but only if we treat careless responding as a central concern rather than a peripheral nuisance. By integrating prevention, detection, and thoughtful data management strategies, researchers can substantially improve data quality in health and social sciences. Thus, researchers should adopt rigorous and transparent practices in dealing with careless responding in survey research.
Meta-analysis is the statistical methodology for synthesizing findings across multiple studies. However, publication bias is arguably one of the most serious threats to the validity of a meta-analysis. One major consequence of publication bias is overestimation of the meta-analytic effect size. To address this, various methods have been developed to correct for publication bias in a meta-analysis and to test for its presence.
This presentation will start by providing a short overview of evidence for the presence of publication bias in the literature. I will then introduce several methods to test and correct for publication bias in a meta-analysis. Both traditional methods (e.g., fail-safe N and the trim-and-fill method) and currently recommended methods (e.g., selection model approaches and regression-based methods) will be discussed. Finally, I will highlight recent advances in the field and outline directions for future research.
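One regression-based test can be sketched briefly. This is a minimal Egger-style small-study-effects test (the effect sizes and standard errors are fabricated; real analyses would use a dedicated meta-analysis package):

```python
# Minimal sketch of a regression-based publication-bias test (Egger-style):
# regress the standardized effect z = d/se on precision 1/se; an intercept
# far from zero suggests small-study effects. Data are fabricated.
import numpy as np
from scipy import stats

d = np.array([0.61, 0.45, 0.30, 0.52, 0.22, 0.70])    # study effect sizes
se = np.array([0.25, 0.18, 0.10, 0.21, 0.08, 0.30])   # their standard errors

z = d / se
precision = 1 / se
res = stats.linregress(precision, z)

# Test the intercept against zero (t-test with n - 2 degrees of freedom)
t = res.intercept / res.intercept_stderr
p = 2 * stats.t.sf(abs(t), df=len(d) - 2)
print(f"Egger intercept = {res.intercept:.2f}, t = {t:.2f}, p = {p:.3f}")
```

Note that funnel-plot-asymmetry tests like this one have low power with few studies, which is one reason selection model approaches are now often preferred.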
To do
The field of mixed methods research continues to evolve, pushing the boundaries of methodological innovation to address complex and multifaceted research problems. This keynote address introduces the Integrated Mixed Methods Transformation Approach (IMMTA) as a meta-framework that systematically transforms monomethod research designs into fully integrated mixed methods research approaches. IMMTA fosters seamless integration of qualitative and quantitative elements across all research stages, resulting in richer, more comprehensive findings and maximizing methodological rigor. By embedding integration at all phases—design, data collection, analysis, and interpretation—IMMTA enhances the depth and applicability of research, particularly in interdisciplinary settings such as those at RAND.
In the second part of my keynote address, I will explore Critical Dialectical Pluralism (CDP) 2.0, an evolution of its predecessor, CDP 1.0, now positioned as a transformative multidimensional metaparadigm and metaphilosophy for mixed methods research. Grounded in the five pillars of social justice, inclusion, diversity, equity, and social responsibility (SIDES), CDP 2.0 represents a shift toward socially responsive and ethically engaged research practices. This meta-framework promotes participant empowerment by redefining their role as co-researchers and challenges traditional research hierarchies to foster an egalitarian and impactful research paradigm.
By bridging IMMTA and CDP 2.0, this keynote address offers a transformative perspective on mixed methods research, one that is methodologically rigorous and ethically profound. Attendees will leave with an enriched understanding of how to apply these paradigms to advance research, making an impact on policy and practice. This session promises to be a forward-looking discussion that reimagines the future of integrated research methodologies.