Are we capturing individual differences? Evaluating the test-retest reliability of experimental tasks used to measure social cognitive abilities.

Charlotte R. Pennington*, Kayley Birch-Hurst, Matthew Ploszajski, Kait Clark, Craig Hedge, Daniel J. Shaw

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Social cognitive skills are crucial for positive interpersonal relationships, health, and wellbeing and encompass both automatic and reflexive processes. To assess this myriad of skills, researchers have developed numerous experimental tasks that measure automatic imitation, emotion recognition, empathy, perspective taking, and intergroup bias and have used these to reveal important individual differences in social cognition. However, the very reason these tasks produce robust experimental effects – low between-participant variability – can make their use as correlational tools problematic. We performed an evaluation of test–retest reliability for common experimental tasks that measure social cognition. One hundred and fifty participants completed the race-Implicit Association Test (r-IAT), Stimulus–Response Compatibility (SRC) task, Emotional Go/No-Go (eGNG) task, Dot Perspective-Taking (DPT) task, and State Affective Empathy (SAE) task, as well as the Interpersonal Reactivity Index (IRI) and indices of Explicit Bias (EB) across two sessions within 3 weeks. Estimates of test–retest reliability varied considerably between tasks and their indices: the eGNG task had good reliability (ICC = 0.63–0.69); the SAE task had moderate-to-good reliability (ICC = 0.56–0.77); the r-IAT had moderate reliability (ICC = 0.49); the DPT task had poor-to-good reliability (ICC = 0.24–0.60); and the SRC task had poor reliability (ICC = 0.09–0.29). The IRI had good-to-excellent reliability (ICC = 0.76–0.83) and EB had good reliability (ICC = 0.70–0.77). Experimental tasks of social cognition are used routinely to assess individual differences, but their suitability for this is rarely evaluated. Researchers investigating individual differences must assess the test–retest reliability of their measures.
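The abstract benchmarks each task with intraclass correlation coefficients (ICCs) across two sessions. As an illustration only, below is a minimal Python sketch of ICC(3,1), one common test–retest variant (two-way mixed effects, consistency, single measurement); this record does not specify which ICC formulation the authors used, and the simulated data are fabricated for demonstration.

```python
import numpy as np

# Illustrative sketch, not the authors' analysis: ICC(3,1) from a
# two-way ANOVA decomposition of a participants x sessions matrix.
def icc_3_1(scores):
    """scores: (n_participants, k_sessions) array of task indices."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between-participant
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between-session
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Fabricated example: 150 participants, two sessions, equal true-score
# and error variance, so the expected ICC is roughly 0.5.
rng = np.random.default_rng(1)
true_score = rng.normal(0.0, 1.0, 150)
test = true_score + rng.normal(0.0, 1.0, 150)
retest = true_score + rng.normal(0.0, 1.0, 150)
print(round(icc_3_1(np.column_stack([test, retest])), 2))
```

With two sessions (k = 2) the expression reduces to (MS_participants − MS_error) / (MS_participants + MS_error), which makes the abstract's central point concrete: the low between-participant variability that produces robust experimental effects shrinks MS_participants relative to MS_error and so drives test–retest reliability toward zero.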
Original language: English
Article number: 82
Number of pages: 19
Journal: Behavior Research Methods
Volume: 57
Issue number: 2
Early online date: 31 Jan 2025
DOIs
Publication status: Published - Feb 2025

Bibliographical note

Copyright © The Author(s) 2025. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/

Data Access Statement

All experimental materials, data, and analysis scripts are publicly available via the Open Science Framework: https://osf.io/q569f/

Code availability: All analysis scripts are publicly available at https://osf.io/q569f/

Keywords

  • test-retest
  • social cognition
  • reliability
  • social behaviour
  • task reliability
  • individual differences
