Identifying relationships between cognitive processes across tasks, contexts, and time

Abstract

It is commonly assumed that a specific testing occasion (task, design, procedure, etc.) provides insights that generalize beyond that occasion. This assumption is rarely tested carefully against data. We develop a statistically principled method to directly estimate the correlation between latent components of cognitive processing across tasks, contexts, and time. This method simultaneously estimates individual-participant parameters of a cognitive model at each testing occasion, group-level parameters representing across-participant parameter averages and variances, and across-task correlations. The approach provides a natural way to “borrow” strength across testing occasions, which can increase the precision of parameter estimates across all testing occasions. Two example applications demonstrate that the method is practical in standard designs. The examples, and a simulation study, also provide evidence about the reliability and validity of parameter estimates from the linear ballistic accumulator model. We conclude by highlighting the potential of the parameter-correlation method to provide an “assumption-light” tool for estimating the relatedness of cognitive processes across tasks, contexts, and time.

References

  1. Brown, S. D., & Heathcote, A. (2008). The simplest complete model of choice response time: Linear ballistic accumulation. Cognitive Psychology, 57(3), 153–178.

  2. Demant, K. M., Vinberg, M., Kessing, L. V., & Miskowiak, K. W. (2015). Effects of short-term cognitive remediation on cognitive dysfunction in partially or fully remitted individuals with bipolar disorder: Results of a randomised controlled trial. PLoS One, 10(6), e0127955.

  3. Folstein, M. F., Folstein, S. E., & McHugh, P. R. (1975). Mini-mental state: A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12(3), 189–198.

  4. Forstmann, B. U., Dutilh, G., Brown, S., Neumann, J., Von Cramon, D. Y., Ridderinkhof, K. R., & Wagenmakers, E. -J. (2008). Striatum and pre-SMA facilitate decision-making under time pressure. Proceedings of the National Academy of Sciences, 105(45), 17538–17542.

  5. Forstmann, B. U., Tittgemeyer, M., Wagenmakers, E. -J., Derrfuss, J., Imperati, D., & Brown, S. D. (2011). The speed-accuracy tradeoff in the elderly brain: A structural model-based approach. The Journal of Neuroscience, 31(47), 17242–17249.

  6. Gunawan, D., Hawkins, G. E., Tran, M. -N., Kohn, R., & Brown, S. D. (2020). New estimation approaches for the hierarchical linear ballistic accumulator model. Journal of Mathematical Psychology, 96, 102368.

  7. Heathcote, A., Suraev, A., Curley, S., Gong, Q., & Love, J. (2015). Decision processes and the slowing of simple choices in schizophrenia. Journal of Abnormal Psychology, 124, 961–974.

  8. Hedge, C., Vivian-Griffiths, S., Powell, G., Bompas, A., & Sumner, P. (2019). Slow and steady? Strategic adjustments in response caution are moderately reliable and correlate across tasks. Consciousness and Cognition, 75, 102797.

  9. Ho, T. C., Yang, G., Wu, J., Cassey, P., Brown, S. D., Hoang, N., & Yang, T.T. (2014). Functional connectivity of negative emotional processing in adolescent depression. Journal of Affective Disorders, 155, 65–74.

  10. Huang, A., & Wand, M. P. (2013). Simple marginally noninformative prior distributions for covariance matrices. Bayesian Analysis, 8(2), 439–452.

  11. John, A. P., Yeak, K., Ayres, H., & Dragovic, M. (2017). Successful implementation of a cognitive remediation program in everyday clinical practice for individuals living with schizophrenia. Psychiatric Rehabilitation Journal, 40(1), 87.

  12. Kaneda, Y., Sumiyoshi, T., Keefe, R., Ishimoto, Y., Numata, S., & Ohmori, T. (2007). Brief assessment of cognition in schizophrenia: Validation of the Japanese version. Psychiatry and Clinical Neurosciences, 61(6), 602–609.

  13. Kvam, P.D., Romeu, R. J., Turner, B., Vassileva, J., & Busemeyer, J. R. (2020). Testing the factor structure underlying behavior using joint cognitive models: Impulsivity in delay discounting and Cambridge gambling tasks. Psychological Methods. https://doi.org/10.1037/met0000264

  14. Lerche, V., & Voss, A. (2017). Retest reliability of the parameters of the Ratcliff diffusion model. Psychological Research, 81(3), 629–652.

  15. Logan, G. D., & Cowan, W. B. (1984). On the ability to inhibit thought and action: A theory of an act of control. Psychological Review, 91, 295–327.

  16. Matzke, D., Hughes, M., Badcock, J. C., Michie, P., & Heathcote, A. (2017). Failures of cognitive control or attention? The case of stop-signal deficits in schizophrenia. Attention, Perception, & Psychophysics, 79(4), 1078–1086.

  17. Matzke, D., Love, J., & Heathcote, A. (2017). A Bayesian approach for estimating the probability of trigger failures in the stop-signal paradigm. Behavior Research Methods, 49(1), 267–281.

  18. Morey, R. D., & Rouder, J. N. (2013). BayesFactor: Computation of Bayes factors for simple designs [Computer software manual]. (R package version 0.9.4).

  19. Mueller, C. J., White, C. N., & Kuchinke, L. (2019). Individual differences in emotion processing: How similar are diffusion model parameters across tasks? Psychological Research, 83(6), 1172–1183.

  20. Ratcliff, R. (1978). A theory of memory retrieval. Psychological Review, 85, 59–108.

  21. Ratcliff, R., & Rouder, J. N. (1998). Modeling response times for two-choice decisions. Psychological Science, 9(5), 347–356.

  22. Ratcliff, R., Smith, P. L., Brown, S. D., & McKoon, G. (2016). Diffusion decision model: Current issues and history. Trends in Cognitive Sciences, 20, 260–281.

  23. Ratcliff, R., Spieler, D., & McKoon, G. (2004). Analysis of group differences in processing speed: Where are the models of processing? Psychonomic Bulletin & Review, 11, 755–769.

  24. Ratcliff, R., Thapar, A., & McKoon, G. (2006). Aging, practice, and perceptual tasks: A diffusion model analysis. Psychology and Aging, 21, 353–371.

  25. Ratcliff, R., Thapar, A., & McKoon, G. (2007). Application of the diffusion model to two-choice tasks for adults 75–90 years old. Psychology and Aging, 22, 56–66.

  26. Ratcliff, R., Thapar, A., & McKoon, G. (2010). Individual differences, aging, and IQ in two-choice tasks. Cognitive Psychology, 60(3), 127–157.

  27. Ratcliff, R., Thompson, C. A., & McKoon, G. (2015). Modeling individual differences in response time and accuracy in numeracy. Cognition, 137, 115–136.

  28. Robbins, T., James, M., Owen, A., Sahakian, B., McInnes, L., & Rabbitt, P. (1996). A neural systems approach to the cognitive psychology of aging: Studies with CANTAB on a large sample of the normal elderly population. In Methodology of frontal and executive function (pp. 215–238). Hove: Lawrence Erlbaum Associates.

  29. Rouder, J. N., & Haaf, J. M. (2019). A psychometrics of individual differences in experimental tasks. Psychonomic Bulletin & Review, 26(2), 452–467.

  30. Rouder, J. N., Kumar, A., & Haaf, J.M. (2019). Why most studies of individual differences with inhibition tasks are bound to fail. https://doi.org/10.31234/osf.io/3cjr5.

  31. Terry, A., Marley, A. A. J., Barnwal, A., Wagenmakers, E. -J., Heathcote, A., & Brown, S. D. (2015). Generalising the drift rate distribution for linear ballistic accumulators. Journal of Mathematical Psychology, 68, 49–58.

  32. Turner, B. M., Forstmann, B. U., Wagenmakers, E. -J., Brown, S. D., Sederberg, P. B., & Steyvers, M. (2013). A Bayesian framework for simultaneously modeling neural and behavioral data. NeuroImage, 72, 193–206.

  33. Turner, B. M., Sederberg, P. B., Brown, S. D., & Steyvers, M. (2013). A method for efficiently sampling from distributions with correlated dimensions. Psychological Methods, 18, 368–384.

  34. Van Maanen, L., Forstmann, B. U., Keuken, M. C., Wagenmakers, E. -J., & Heathcote, A. J. (2016). The impact of MRI scanner environment on perceptual decision-making. Behavior Research Methods, 48, 184–200.

  35. Voss, A., Rothermund, K., & Voss, J. (2004). Interpreting the parameters of the diffusion model: An empirical validation. Memory & Cognition, 32, 1206–1220.

  36. Weigard, A., & Huang-Pollock, C. (2014). A diffusion modeling approach to understanding contextual cueing effects in children with ADHD. Journal of Child Psychology and Psychiatry, 55(12), 1336–1344.

  37. White, C. N., Ratcliff, R., Vasey, M. W., & McKoon, G. (2010). Anxiety enhances threat processing without competition among multiple inputs: A diffusion model analysis. Emotion, 10(5), 662.

Author information

Correspondence to Guy E. Hawkins.

This research was partially supported by the Australian Research Council (ARC) Discovery Project scheme (DP180102195, DP180103613). Hawkins was partially supported by an ARC Discovery Early Career Researcher Award (DECRA; DE170100177).

Appendices

Appendix A: The new experiment

A.1 Method

A.1.1 Design

The experiment used a 3 (task) × 3 (set size) within-subjects design: all three tasks had a three-level manipulation of the number of items in the stimulus array (set size). In the search task, participants were required to look for a target (which was always present) among one, three, or seven distractors (i.e., search set sizes of 2, 4, and 8). The stop-signal task was identical to the search task, except that on 25% of trials a stop-signal was presented after the onset of the search array. The time between the onset of the search array and the stop-signal (the stop-signal delay) was dynamically adjusted for each participant and each set size, using a staircase algorithm. In the match task, participants were required to identify whether the currently presented stimulus set was a match (the same shapes and colors) or not a match (at least one difference) to the stimulus set presented on the previous trial. The number of stimuli on screen in each trial was either one, two, or three, and this was manipulated between blocks of trials. Response time and the response itself were recorded on all trials.

A.1.2 Participants

Participants were students from first- and second-year psychology courses at the University of Newcastle who received course credit for their participation. Informed consent was obtained from all participants. Participants had the opportunity to complete the task online (N = 106) or in a lab (N = 81).

Although 187 students participated in the study, only 148 participants were included in the combined analysis. Participants were excluded if more than 0.05% of their trials were non-responses under the “too fast” or “too slow” feedback cutoffs defined in the procedure (n = 8 match, n = 10 search, n = 13 stop), or if their accuracy was lower than 75%, 85%, or 90% for the match, search, and stop-signal tasks, respectively (n = 17 match, n = 8 search, n = 27 stop). The exclusion criteria were set by inspecting the data and removing outliers, that is, participants performing considerably worse at a task than the bulk of the sample. This resulted in n = 145 complete data sets for the match task, n = 157 for the search task, and n = 133 for the stop-signal task. There were n = 110 participants with valid data for all three tasks; these were the participants used in all analyses.
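
The exclusion rules above combine a non-response criterion with a task-specific accuracy criterion. A minimal sketch of that logic, with thresholds taken from the text (the function and variable names are ours, and the data structure is assumed):

```python
# Per-task accuracy cutoffs and the non-response cutoff described above.
ACCURACY_CUTOFF = {"match": 0.75, "search": 0.85, "stop": 0.90}
NONRESPONSE_CUTOFF = 0.05 / 100  # 0.05% of trials, as stated in the text

def keep_participant(task, n_trials, n_nonresponses, accuracy):
    """Return True if a participant's data for `task` passes both criteria."""
    if n_nonresponses / n_trials > NONRESPONSE_CUTOFF:
        return False
    return accuracy >= ACCURACY_CUTOFF[task]
```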

A.1.3 Materials and stimuli

All three tasks were written in JavaScript and HTML5. Although it was impossible to keep screen size and resolution identical across participants who completed the task online, the relative size and positioning of stimuli were constant. Instructions at the beginning of the experiment required participants to alter their zoom settings, to ensure maximum consistency in the displays across participants. On a 13.3-inch screen at 1920 × 1080 resolution, with the participant 60 cm from the screen, each shape subtended approximately 1° of visual angle and was approximately 5° of visual angle from the center of the screen. The fixation point was a small cross subtending much less than 1° of visual angle. Stimuli only ever appeared in eight different locations, representing equally spaced points around a circle of 5° radius. For stimulus displays with fewer than eight stimuli, locations were sampled randomly without replacement from the eight possible positions.
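
These display geometries can be checked with the standard visual-angle formula; the sketch below is illustrative only (the function name is ours):

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle (degrees) subtended by an object of linear size
    `size_cm` viewed from `distance_cm`: 2 * atan(size / (2 * distance))."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))
```

At a 60-cm viewing distance, a shape roughly 1 cm across subtends approximately 1° of visual angle, consistent with the display described above.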

The search arrays for all three tasks used just four stimuli: a red circle, green circle, red square, and green square; see Fig. 8. A small gap in the shape, on either the left or right side, was used as the decision stimulus in the search and stop-signal tasks. There was no gap in any stimulus during the match task. The three colors used (red and green for the stimuli, and blue for the stop-signal) were presented at the maximum intensity of their respective hue in the computer’s RGB color model.

Fig. 8

Screenshots from: (left) the search task, set size eight, with the green circle as the target; (middle) the stop-signal task, set size eight, target red square, with the stop-signal present; and (right) the match task, set size two

A.1.4 Procedure

Participants completed all three tasks in one sitting with opportunities for self-timed breaks within and between tasks. Participants were randomly allocated to one of the six possible task orders. Each task contained on-screen instructions with examples, followed by a series of practice trials and then three experimental blocks with a fixed number of trials each.

For the search task, participants were first presented with instructions that identified which of the four stimuli would constitute their target stimulus; e.g., “search for a red square”. The target was randomly allocated to participants at the start of the task and remained the same for the duration of that task. The same process occurred in the stop-signal task, so the target could change across tasks but not within a task. Participants were informed that all shapes had a gap on either the left or the right side, and that once they had located the target they should indicate via the “z” and “/” keys whether the gap was on the left or right side of the target, respectively. Participants were told to respond as quickly as possible.

There were three blocks of 200 trials each, with ten practice trials at the start. At the beginning of each trial a fixation cross was presented for 700 ms. This was then replaced by the search array. The location of the target and the target gap side were randomly chosen at the beginning of each trial. The number of distractors, their shape, color, location, and gap side were also randomly chosen at the start of each trial. A trial concluded after a response. If a response was faster than 250 ms or slower than 2000 ms, then feedback of “TOO FAST” or “TOO SLOW” was provided, displayed for 1500 ms or 5000 ms, respectively. Participants also received accuracy feedback for the first ten experimental trials. This feedback was presented for 1000 ms and 2500 ms for correct and error responses, respectively.
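
The response-time feedback rule can be summarized as a simple lookup, with cutoffs and display durations taken from the text (the function name is ours):

```python
def rt_feedback(rt_ms):
    """Return (message, display duration in ms) for a search-task response,
    or (None, 0) if the response time triggered no feedback."""
    if rt_ms < 250:
        return "TOO FAST", 1500
    if rt_ms > 2000:
        return "TOO SLOW", 5000
    return None, 0
```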

Stop-signal tasks typically involve a very simple, almost automatic task on most trials, in which participants rapidly press a key to respond; these are called the “go” trials. On the remaining trials (the “stop” trials), a stop-signal appears after some delay from the onset of the trial, and participants must withhold their response. In our stop-signal task, the go trials were identical to the search task. All details of the search task were retained, except that in the instructions participants were shown a large blue square (see Fig. 8) and told “when you see this symbol DO NOT RESPOND”. They were reminded to respond as quickly as possible whenever the symbol was not presented, to encourage the easy, automatic response style.

Each trial had an independent and identical 25% probability of being a stop-signal trial. At the beginning of the experiment the stop-signal delay (SSD; the time between the presentation of the stimuli and the presentation of the stop-signal) was set to zero for all set sizes, and then adjusted by a staircase procedure independently for each set size. After each successful inhibition, the SSD was increased by 50 ms (making it harder to inhibit), and after each failed inhibition, the SSD was decreased by 50 ms, with a minimum of zero. These staircases converge to the SSDs corresponding to 50% successful inhibition in each set size. Figure 8 shows the stop-signal display, consisting of a blue square in the center of the screen, inside the eight-pointed star of shapes (subtending approximately 5° × 5° of visual angle), and a larger square outline outside the eight-pointed star (approximately 15° × 15° of visual angle, with an outline approximately 1.75° of visual angle wide). To reduce so-called “trigger failures” (see Matzke et al., 2017), the stop-signal remained on screen until the end of the trial. To further discourage participants from ignoring the stop-signal, feedback was provided after every stop trial: successful inhibitions produced “Good stopping!”, while failed inhibitions produced “You should have stopped”.
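
The one-up/one-down staircase described above can be sketched in a few lines; the 50-ms step and zero floor are from the text, while the function name is ours:

```python
def update_ssd(ssd_ms, inhibited):
    """Adjust the stop-signal delay (SSD) after a stop trial.

    A successful inhibition lengthens the delay (harder to stop next time);
    a failed inhibition shortens it, never going below zero."""
    if inhibited:
        return ssd_ms + 50
    return max(0, ssd_ms - 50)
```

One such staircase is tracked per set size; because the delay moves up after every success and down after every failure, it converges to the delay yielding roughly 50% successful inhibition.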

The match task commenced with on-screen instructions informing participants that green and red circles and squares would be shown on each trial, and that they needed to remember what they saw for the next trial. At first they would be shown only one shape to remember, then two, and then three shapes. If the stimulus array on any trial consisted of the same stimuli as the previous trial (i.e., the same shapes and colors), participants were to press the key indicating a match (“/” key). If any shape or color differed, participants were to indicate a non-match (“z” key; see Fig. 9). Participants were explicitly instructed that the position of the stimuli on screen was irrelevant.

Fig. 9

Example trial sequence from set size two in the match task. The correct response for the second screen would be “non-match”, and the correct response for the third screen would be “match”. Both colors and shapes must be the same, but location does not matter

Unlike the other two tasks, set size was not randomized from trial to trial in the match task, because match vs. non-match is not well defined for arrays of unequal size. Instead, set sizes occurred in blocks in a fixed order from one to three, with 100 trials per block. Half of the trials were randomly selected to be match trials, and the other half were non-match trials. If the upcoming trial was a match trial, the stimuli were kept fixed from the previous trial, but their locations were randomly resampled. If the upcoming trial was a non-match trial, both the stimuli and their locations were randomly sampled, subject to the constraint that a match did not occur by chance. The feedback was the same as for the search task, except that the timeout for “too slow” responses was 3000 ms and accuracy feedback continued for the duration of the match task. These changes were implemented after pilot testing, to allow for the greater difficulty of the match task.
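
The match/non-match trial generation described above can be sketched as follows. This is a hypothetical reconstruction under the constraints given in the text (matches keep the same stimuli with resampled locations; non-matches resample until the arrays differ); all names are ours:

```python
import random

SHAPES = ["circle", "square"]
COLORS = ["red", "green"]
LOCATIONS = list(range(8))  # the eight equally spaced display positions

def sample_stimuli(set_size):
    """A random list of (shape, color) stimuli."""
    return [(random.choice(SHAPES), random.choice(COLORS))
            for _ in range(set_size)]

def next_trial(previous_stimuli, is_match):
    """Generate the next trial's stimuli and locations.

    Match trials keep the previous shapes/colors but resample locations;
    non-match trials resample stimuli until they differ from the previous
    array (locations are ignored when comparing, as in the task)."""
    n = len(previous_stimuli)
    if is_match:
        return previous_stimuli, random.sample(LOCATIONS, n)
    while True:
        stimuli = sample_stimuli(n)
        if sorted(stimuli) != sorted(previous_stimuli):
            return stimuli, random.sample(LOCATIONS, n)
```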

A.1.5 Results

Figure 10 shows the mean response time (RT) and accuracy for different conditions, and for each of the three tasks.

Fig. 10

Accuracy (top row) and response time (RT; bottom row) for the three tasks (columns) in the experiment. Accuracy and median RT were calculated for each participant and each condition. Lines show the averages of these across conditions for data, and symbols show the same but for posterior predictive data generated by the LBA model described in the main text. Red and green colors indicate trials in which the features of the target stimulus appeared in the distractor items (“conjunction”) and when they did not (“feature”), respectively. Error bars show ± 1 standard error of the mean for differences between participants in the model fits

Bayesian repeated measures ANOVAs (Morey and Rouder, 2013) were conducted on the mean RT and accuracy data for each set size, separately for the three tasks. Figure 10 shows that there is a strong effect of set size on RT for all three tasks, reflected in the RT Bayes factors in Table 3. This is not true of accuracy. Although the match task shows an effect of set size on accuracy, for both the search task and the stop task accuracy did not change reliably across set size; see Table 3. This may be due to ceiling effects, as in both of those tasks average accuracy was around 95% or above in all conditions. Even in the match task, accuracy did not decrease monotonically over set size as expected. Instead, there was a small increase from the smallest set size (one) to the medium set size (two), and then a decrease at the largest set size (three). This is most likely due to practice effects, as the conditions were administered in blocks of increasing set size.

Table 3 Bayes factors in favor of a model including an effect of set size vs. a null model, for the three tasks, for mean RT and accuracy

The RT data are simpler to interpret. The only noteworthy point is that RT was substantially faster in the search task than in both the stop and match tasks. As the RTs presented in the table and figure for the stop task are from the go trials (which were identical to trials in the search task), this increase in RT from the search to the stop task suggests that participants slowed their responses when the possibility of a stop-signal was introduced.

Appendix B: Estimated correlation matrices

Tables 4 and 5 show the full correlation matrices for the two applications of the new method reported in the main text.

Table 4 The lower triangle of the estimated posterior mean of the correlation matrix for Forstmann et al.’s (2008) experiment; the main text gives further details
Table 5 The lower triangle of the estimated posterior mean of the correlation matrix for our experiment; the main text gives further details

Cite this article

Wall, L., Gunawan, D., Brown, S.D. et al. Identifying relationships between cognitive processes across tasks, contexts, and time. Behav Res 53, 78–95 (2021). https://doi.org/10.3758/s13428-020-01405-4

Keywords

  • Cognitive model
  • Correlation
  • Covariance
  • Individual differences
  • Latent processes
  • LBA model