
Individual Differences in Cortical Processing Speed Predict Cognitive Abilities: a Model-Based Cognitive Neuroscience Account

  • Anna-Lena Schubert
  • Michael D. Nunez
  • Dirk Hagemann
  • Joachim Vandekerckhove

Abstract

Previous research has shown that individuals with greater cognitive abilities display a greater speed of higher-order cognitive processing. These results suggest that speeded neural information processing may facilitate evidence accumulation during decision making and memory updating and thus yield advantages in general cognitive abilities. We used a hierarchical Bayesian cognitive modeling approach to test the hypothesis that individual differences in the velocity of evidence accumulation mediate the relationship between neural processing speed and cognitive abilities. We found that a higher neural speed predicted both the velocity of evidence accumulation across behavioral tasks and cognitive ability test scores. However, only a negligible part of the association between neural processing speed and cognitive abilities was mediated by individual differences in the velocity of evidence accumulation. The model demonstrated impressive forecasting abilities by predicting 36% of individual variation in cognitive ability test scores in an entirely new sample solely based on their electrophysiological and behavioral data. Our results suggest that individual differences in neural processing speed affect a plethora of higher-order cognitive processes, which only in concert explain the large association between neural processing speed and cognitive abilities, rather than the association being entirely explained by differences in evidence accumulation speeds.

Keywords

Cognitive abilities · Processing speed · Cognitive latent variable model · Reaction times · ERP latencies · Diffusion model

Introduction

Individual differences in cognitive abilities are important predictors of real-world achievements such as job performance and highest level of educational attainment (Schmidt and Hunter 2004). Cognitive ability differences also predict differences in individuals’ health (Deary 2008; Der et al. 2009), happiness (Nikolaev and McGee 2016), and well-being (Pesta et al. 2010). However, the fundamental biological processes that give rise to these individual differences in cognitive abilities remain largely unexplored. In this study, we explore how individual differences in cognitive abilities are associated with individual differences in neural processing speed, and how this association can be explained by individual differences in the velocity of evidence accumulation as an intermediate cognitive process.

Previous research has suggested that those individuals with greater cognitive abilities have a higher speed of information processing, typically measured as reaction or inspection times in elementary cognitive tasks on a behavioral level (Kyllonen and Zu 2016; Sheppard and Vernon 2008), or as latencies of event-related potential (ERP) components on a neurophysiological level (e.g., Bazana and Stelmack 2002; Schubert et al. 2015; Troche et al. 2015). Neuroimaging studies have shown that the association between the speed of information processing and cognitive abilities may reflect individual differences in white-matter tract integrity, either as an overall brain property (Penke et al. 2012) or in specific brain regions such as the forceps minor and the corticospinal tract (Kievit et al. 2016).

However, those with greater cognitive abilities do not seem to benefit from a higher speed of information processing during all stages of information processing. Instead, individuals with greater cognitive abilities show a higher speed of information processing only in higher-order cognitive processes such as decision making and memory updating (Schmiedek et al. 2007; Schubert et al. 2017). In particular, the velocity of evidence accumulation during decision making has been repeatedly associated with individual differences in cognitive abilities (Schmiedek et al. 2007; Schmitz and Wilhelm 2016; Schubert et al. 2015; van Ravenzwaaij et al. 2011). Moreover, cognitive abilities have been specifically associated with the latencies of ERP components reflecting higher-order cognitive functions such as memory and context updating (Bazana and Stelmack 2002; McGarry-Roberts et al. 1992; Schubert et al. 2017; Troche et al. 2009). Taken together, these results suggest that a greater speed of information processing may facilitate evidence accumulation during decision making and memory updating and may give rise to advantages in general cognitive abilities. In the present study, we explore this hypothesis by using a hierarchical Bayesian cognitive modeling approach to investigate if individual differences in the velocity of evidence accumulation mediate the relationship between neural processing speed and general cognitive abilities.

Measuring the Speed of Higher-Order Cognitive Processes

Reaction time measures are affected by a variety of cognitive and motivational processes, and differences across individuals are not solely due to differences in the specific processes of interest (Nunez et al. 2015; Schubert et al. 2015). Therefore, mean reaction times and differences in reaction times between certain experimental conditions provide only imprecise measurements of the speed of specific higher-order cognitive processes. One approach to measuring the speed of higher-order cognitive processes is to use validated mathematical models of decision making, which allow estimating the speed and efficiency of specific cognitive processes (Voss et al. 2004). One of the most influential model types used to jointly describe reaction time distributions and accuracies in binary choice tasks is the diffusion model. Diffusion models assume that information accumulation follows a continuous, stochastic Wiener process that terminates once one of two decision thresholds has been reached (Stone 1960; Ratcliff 1978; Ratcliff and McKoon 2008). That is, it is assumed that on any given trial an individual accumulates evidence for one choice over another in a random walk with an infinitesimal time step (while neural coding may be more sequential in nature, the infinitesimal approximation should hold for small time steps). The change in relative evidence \(E_t\) is predicted to follow a Wiener (i.e., Brownian motion) process with an average evidence accumulation rate δ and instantaneous variance σ² (Ross 2014).

Typically, the variance σ² is fixed to some standardized value for reasons of identifiability (but see Nunez et al. 2017). The drift rate (δ) measures the relative velocity of evidence accumulation during decision making, and individual differences in this parameter have been suggested to be associated with individual differences in cognitive abilities (Schmiedek et al. 2007; Ratcliff et al. 2010, 2011; Schubert et al. 2015; Schmitz and Wilhelm 2016). Drift rates are expressed in evidence units per second relative to a predetermined decision criterion (the boundary separation α), which reflects speed–accuracy trade-offs (Voss et al. 2004). In addition, a basic diffusion model includes one further parameter describing the decision process: the non-decision time (t_er) encompasses all non-decisional processes such as encoding and motor reaction time.
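To make this generative process concrete, the following minimal Python sketch simulates single trials from such a diffusion process via Euler discretization of the Wiener process. The function name and parameter values are illustrative, not those estimated in this study.

```python
import numpy as np

def simulate_diffusion_trial(drift, boundary, ndt, dt=0.001, sigma=1.0, rng=None):
    """Simulate one two-choice trial from a Wiener diffusion process.

    Evidence starts midway between the boundaries (unbiased start) and drifts
    at rate `drift` with instantaneous standard deviation `sigma` until it
    crosses 0 or `boundary`; `ndt` adds the non-decision time (t_er).
    Returns (reaction time in seconds, choice: 1 = upper, 0 = lower).
    """
    rng = rng or np.random.default_rng()
    evidence, t = boundary / 2.0, 0.0     # unbiased starting point
    while 0.0 < evidence < boundary:
        evidence += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ndt + t, int(evidence >= boundary)

# A faster drift rate yields faster and more accurate simulated responses.
rng = np.random.default_rng(1)
for delta in (1.0, 3.0):
    trials = [simulate_diffusion_trial(delta, 1.0, 0.3, rng=rng) for _ in range(2000)]
    rts, accs = zip(*trials)
    print(f"drift={delta}: mean RT={np.mean(rts):.3f} s, accuracy={np.mean(accs):.2f}")
```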

It is not surprising that the drift rate parameter in particular has become widely popular in individual differences research (Frischkorn and Schubert 2018), because it allows quantifying the speed of information uptake free of confounding process parameters such as encoding and motor times or decision cautiousness, which are captured by other model parameters and are largely irrelevant for cognitive abilities research. Individual differences in drift rates have been shown to exhibit trait-like properties (i.e., they show temporal stability and trans-situational consistency; Schubert et al. 2016) and to be associated with individual differences in cognitive abilities (Ratcliff et al. 2010; 2011; Schmiedek et al. 2007; Schmitz and Wilhelm 2016; Schubert et al. 2015), attention (Nunez et al. 2015), and word recognition (Yap et al. 2012). The drift rate can even be interpreted in the framework of item response theory (IRT), in which it can under certain assumptions be decomposed into an ability and difficulty parameter (van der Maas et al. 2011).

Moreover, several studies suggest a direct link between drift rates and neural processing correlates in the EEG. In particular, it has been shown that the P3, an ERP component typically occurring about 250–500 ms after stimulus onset with a positive deflection that is maximal at parietal electrodes (Polich 2007), is a neural correlate of the evidence accumulation process captured in the drift rate (Kelly and O’Connell 2013; O’Connell et al. 2012; Ratcliff et al. 2009, 2016; van Ravenzwaaij et al. 2017). O’Connell et al. (2012) and Kelly and O’Connell (2013) even suggested that the buildup rate of this centroparietal positive potential may directly reflect the rate of evidence accumulation on a neural level.

Particularly intriguing from an individual-differences perspective is the observation that individual differences in P3 amplitudes across conditions have been shown to explain about 74% of the variance in drift rates δ (Ratcliff et al. 2009). Because both individual differences in drift rates and individual differences in P3 characteristics have been shown to explain cognitive abilities, a theoretical framework of the neurocognitive processes underlying cognitive abilities needs to specify if individual differences in P3 characteristics and drift rates contribute jointly or independently to intelligence differences.

Bridging the Gap Between Neural and Behavioral Correlates of Cognitive Abilities to Outline a Cognitive Theory of Intelligence

As of yet, researchers from the fields of mathematical modeling and cognitive neuroscience have contributed largely independently to our understanding of the basic processes underlying individual differences in cognitive abilities. While mathematical modeling researchers have suggested that the velocity of evidence accumulation may be specifically related to cognitive abilities (Ratcliff et al. 2010, 2011; Schmiedek et al. 2007; Schubert et al. 2015), cognitive neuroscience researchers have characterized the time course of information processing and identified structural and functional neural correlates of cognitive abilities (Basten et al. 2015; Neubauer and Fink 2009; Jung and Haier 2007). However, neurophysiological correlates of cognitive abilities still need to be integrated into a theoretical framework that outlines how advantages in neural processing translate into advantages in cognitive information processing, and in turn into advantages in cognitive abilities, if they are to meaningfully explain the processes underlying individual differences in intelligence.

Based on the associations of P3 latencies and drift rates with intelligence, it may be proposed that the relationship between ERP latencies reflecting higher-order cognition and cognitive abilities is mediated by individual differences in drift rates. Such a mediation account is empirically supported by the finding that reaction times partly mediate the relationship between ERP latencies and cognitive abilities (Schubert et al. 2015). Moreover, it has been shown that advantages in a larger number of white-matter tract integrity measures gave rise to advantages in a smaller number of behavioral processing speed measures, which in turn explained about 60% of the variance in fluid intelligence in a many-to-one fashion (Kievit et al. 2016). On the other hand, individual differences in both neural processing speed and drift rates may reflect some confounding variable (e.g., functional brain properties) that is also substantially related to cognitive abilities. This confounding variable account was supported by a recent study that failed to find any transfer of an experimentally induced increase in neural and behavioral processing speed (via transdermal nicotine administration) to intelligence test scores (Schubert et al. 2018). Candidate confounding variables may be properties of the salience network, which have been associated both with P3 elicitation and with individual differences in cognitive abilities (Hilger et al. 2017; Menon and Uddin 2010; Soltani and Knight 2000).

Recent advancements in the emerging field of model-based cognitive neuroscience have demonstrated the advantages of integrating mathematical modeling and cognitive neuroscience to generate and test theoretical accounts that jointly account for neural correlates and cognitive models of psychological processes (e.g., Forstmann et al. 2011; Nunez et al. 2017; Palmeri et al. 2017; Turner et al. 2017). In the present study, we used a model-based cognitive neuroscience approach to test the hypothesis that the relationship between ERP latencies reflecting higher-order cognition and cognitive abilities is mediated by individual differences in drift rates. If evidence in favor of the mediation hypothesis is found, the mediation model will provide a clear theoretical outline of how advantages in neural processing speed give rise to advantages in cognitive abilities. However, if evidence against the mediation model is found, this will imply that a confounding variable is likely to explain the association of neural processing and drift rates with cognitive abilities.

A Model-Based Cognitive Neuroscience Account of Individual Differences in Cognitive Abilities

Jointly analyzing behavioral and brain data improves inferences about human cognition because both measures are assumed to reflect properties of the same latent cognitive process. In particular, the joint analysis of behavioral and brain data allows explicit tests of theories regarding the cognitive processes and mechanisms governing the association between neural correlates and observable behavior. This simultaneous analysis can be achieved in a hierarchical Bayesian framework using formal mathematical models such as the diffusion model to constrain or inform inferences based on the brain data (Forstmann et al. 2011; Turner et al. 2017). The hierarchical Bayesian framework provides many advantages (Lee 2011; Shiffrin et al. 2008). First and foremost, joint models are fit to all data simultaneously and do not require separate parameter estimation stages that lead to an underestimation of parameter uncertainty or standard errors (Vandekerckhove 2014). Both empirical and simulation studies have shown that ignoring the hierarchy in hierarchically structured data can bias inferences drawn from these data (Boehm et al. 2018; Vandekerckhove 2014).

Second, hierarchical Bayesian models can easily handle low observation counts or missing data structures (Lee and Wagenmakers 2014), which is an ideal property when the cost of collecting neural measurements is high. In particular, Bayesian Markov Chain Monte Carlo (MCMC) sampling finds posterior distributions of model parameters without the need for strong assumptions regarding the sampling distribution of these parameters (Levy and Choi 2013). Moreover, Bayesian statistical modeling approaches do not rely on asymptotic theory (Lee and Song 2004). These two properties make convergence issues in multivariate regression models in smaller samples less likely. Another favorable property of Bayesian hierarchical modeling is shrinkage, which describes the phenomenon that individual parameter estimates are informed by parameter estimates for the rest of the sample. Because less reliable and outlier estimates are pulled toward the group mean, shrinkage has been used in neuroimaging research to improve the reliability of individual functional connectivity estimates by 25 to 30% (Dai and Guo 2017; Mejia et al. 2018; Shou et al. 2014). Taken together, these desirable properties of hierarchical Bayesian models open up the possibility to use multivariate regression models such as structural equation models (SEM) or latent growth curve models in neuroimaging research, where sample sizes are usually smaller than in behavioral research due to the cost associated with the collection of neural measures.
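As a toy illustration of shrinkage (not part of the reported analyses), the following sketch shows how posterior means in a normal-normal hierarchy pull noisy person-level estimates toward the group mean; all values and variances are invented and assumed known for clarity.

```python
import numpy as np

# Illustrative sketch of shrinkage: posterior means of per-person effects in a
# normal-normal hierarchical model are precision-weighted compromises between
# each person's own (noisy) estimate and the group mean. Values are made up.
rng = np.random.default_rng(7)
group_mean, group_sd, noise_sd = 0.0, 1.0, 2.0   # assumed known for clarity
true_effects = rng.normal(group_mean, group_sd, size=10)
observed = true_effects + rng.normal(0.0, noise_sd, size=10)  # noisy estimates

# Shrinkage weight: ratio of data precision to total precision.
w = (1 / noise_sd**2) / (1 / noise_sd**2 + 1 / group_sd**2)
posterior_means = w * observed + (1 - w) * group_mean

print(f"shrinkage weight on the data: {w:.2f}")    # 0.20 here: strong pull
print("observed :", np.round(observed, 2))
print("shrunken :", np.round(posterior_means, 2))  # outliers move toward 0
```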

The joint analysis of behavioral and neural data can be expanded into a cognitive latent variable model (CLVM) by including data from multiple conditions and/or tasks and by introducing covariates such as cognitive ability tests or personality questionnaires into the hierarchical model (Vandekerckhove et al. 2011; Vandekerckhove 2014). In addition to jointly modeling behavioral and neural data, the cognitive latent variable framework allows estimating correlations between higher-order variables, which reflect the covariances between behavioral, neural, and cognitive abilities data across experimental tasks or ability tests. As such, a CLVM is a computationally expensive but highly flexible tool that strongly resembles structural equation modeling (SEM) in that it allows specifying associations between latent variables and distinguishing between constructs and their measurements. Vandekerckhove (2014) demonstrated the advantages of a CLVM in comparison to a more conventional two-stage analysis when modeling the latent association between evidence accumulation rates in executive function tasks and psychometric measures of dysphoria.

In the present study, we constructed CLVMs to assess the latent relationship between latencies of ERP components reflecting higher-order processing (P2, N2, P3), reaction times and accuracies in elementary cognitive tasks, and general cognitive abilities (see Fig. 1). For this purpose, we reanalyzed data from a study with multiple measurement occasions previously reported in Schubert et al. (2017). In particular, we wanted to test if the association between latencies of ERP components associated with higher-order cognitive functions and general cognitive abilities established with conventional structural equation modeling could be explained by individual differences in the velocity of evidence accumulation.
Fig. 1

Simple visualization of both linking models (such that the mediation-linking model includes dashed connections). Shaded nodes represent observed data across participants i. Bi, Δi, and gi represent the highest-order latent variables of neural processing speed (left, describing shared variance across ERP latencies), evidence accumulation velocity (top, describing shared variance across reaction time distributions), and cognitive ability (right, describing shared variance across intelligence test scores)

For this purpose, we constructed one measurement model for each of the three variable domains (ERP latencies, behavioral data, intelligence test performance). In each of these measurement models, a superordinate latent variable provides an estimate of the common variance of conditions or subtests within each variable domain. This latent variable can be considered a latent trait free of measurement error and task-specific variances. The main reason for estimating those latent traits is that they allow the estimation of individual differences on the construct level and are therefore not restricted to specific measurements or operationalizations of constructs. For ERP latencies, this latent variable reflects an error-free estimate of the neural processing speed of higher-order cognitive processes. For behavioral data, this latent variable reflects an error-free estimate of velocity of evidence accumulation across different elementary cognitive tasks and their conditions. While we used a cognitive model (the diffusion model) to describe performance in these cognitive tasks, we could also have estimated behavioral processing speed as mean reaction times in these tasks. Finally, for intelligence test performance, the superordinate latent variable reflects an error-free measurement of general intelligence across different intelligence subtests.

Each of these latent traits contains a surplus meaning that allows the generalization of any results to other measurements of the same construct; i.e., any association between general intelligence and neural processing speed should not only hold for the specific tests used in the present study, but also for similar cognitive ability tests. To test the mediation hypothesis, we used only these superordinate latent variables and regressed general intelligence on neural processing speed and on evidence accumulation velocity, which was in turn regressed on neural processing speed. Hence, the core of our hypothesis, that individual differences in the velocity of evidence accumulation mediate the association between neural processing speed and general intelligence, is reflected in this regression model of latent variables. The measurement models giving rise to the latent variables only serve to provide error-free and task-/test-general estimates of these three traits.

We also conducted out-of-sample forecasts to validate how well this mediation model was able to predict individual cognitive ability test scores solely based on new participants’ electrophysiological and behavioral data. We expected that a greater speed of neural information processing would facilitate evidence accumulation during decision making and memory updating, and that this advantage in the velocity of evidence accumulation would mediate the predicted association between neural processing speed and general cognitive abilities.

Materials and Methods

Participants

N = 122 participants (72 females, 50 males) from different occupational and educational backgrounds participated in three sessions of the study. They were recruited via local newspaper advertisements, social media platforms, and flyer distributions in the Rhine-Neckar metropolitan region. Participants were between 18 and 60 years old (M = 36.7, Med = 35.0, SD = 13.6), had normal or corrected-to-normal vision, and reported no history of mental illness. All participants gave written informed consent prior to their participation in the experiment. The study was approved by the ethics committee of the Faculty of Behavioral and Cultural Studies, Heidelberg University.

Procedure

The study consisted of three sessions that were each approximately 4 months apart. Participants completed the experimental tasks in the first and third sessions while their EEG was recorded in a dimly lit, sound-attenuated cabin. The order of tasks (choice reaction time task, recognition memory task, letter matching task) was the same for all participants and both sessions. During the second session, participants completed the cognitive ability tests, a personality questionnaire (data reported in Kretzschmar et al. 2018), and a demographic questionnaire. Each session lasted approximately 3–3.5 h, with EEG being collected for approximately 2.5 h. Participants were given breaks between tasks and conditions to reduce mental fatigue.

Measures

Experimental Tasks

Choice Reaction Time Task (CR)

Participants completed a choice reaction time task with two conditions, a two-alternative (CR2) and a four-alternative (CR4) choice condition. Four white squares were presented in a row on a black screen. Participants’ middle and index fingers rested on four keys directly underneath the squares. After a delay of 1000–1500 ms, a cross appeared in one of the four squares and participants had to press the corresponding key as quickly and accurately as possible. The screen remained unchanged for 1000 ms after their response to allow the recording of post-decision neural processes. Then, a black screen was shown for 1000–1500 ms between subsequent trials; the length of the inter-trial interval (ITI) was uniformly distributed. See the left part of Fig. 2 for an overview of the experimental procedure. While the task may suggest that the stimulus might simply “pop out,” resulting in immediate stimulus detection after its onset, this is not corroborated by empirical data. An increase in the logarithm of the number of stimulus alternatives leads to a linear increase in RTs (Hick’s law; Hick 1952), which indicates that evidence is accumulated continuously until a decision point is reached and that this process takes longer the more stimulus alternatives are presented, either because more evidence has to be considered or because the process gets noisier. The slope of a regression across choice alternatives in Hick-like tasks is supposed to reflect the “rate of gain of information” (Hick 1952), which is conceptually very similar to the drift rate as a measure of the rate of evidence accumulation.
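As a quick worked example using the session 1 mean RTs reported in Table 1, a least-squares fit of RT against the logarithm of the number of alternatives recovers such a slope. With only two set sizes the linear fit is exact, so this merely illustrates the computation of the "rate of gain of information."

```python
import numpy as np

# Illustrative check of Hick's law using the session 1 mean RTs from Table 1.
# The slope ("rate of gain of information") is expressed in ms per bit of
# stimulus information; with two data points the fit is exact.
n_alternatives = np.array([2, 4])
mean_rt_ms = np.array([382.79, 477.22])        # CR2 and CR4, session 1
bits = np.log2(n_alternatives)

slope, intercept = np.polyfit(bits, mean_rt_ms, 1)
print(f"RT ~ {intercept:.1f} ms + {slope:.1f} ms/bit")  # ~288.4 ms + 94.4 ms/bit
```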
Fig. 2

Participants completed three experimental tasks. The choice reaction time task (CR) consisted of 2-choice (CR2) and 4-choice (CR4) conditions with 200 trials each, the letter matching task of a physical identity (PI) and name identity (NI) condition with 300 trials each, and the recognition memory task (RM) of memory set sizes 1 (RM1), 3 (RM3), and 5 (RM5) with 100 trials each

In the two-choice condition, the number of choices was reduced to two squares in which the cross could appear for 50 subsequent trials. In the four-choice condition, the cross could appear in any of the four squares. Both conditions began with ten practice trials with immediate feedback, followed by 200 test trials without feedback. The order of conditions was counterbalanced across participants. In the four-choice condition, we treated all three responses that were not the correct one as incorrect, allowing us to model the decision process with two decision thresholds. Due to the high accuracy in the four-choice condition, it is unlikely that this simplification of the decision process has distorted the results, which is also supported by the similar and high factor loadings of the latent choice reaction time factor on the two- and four-choice conditions (see “Results”).

Letter Matching Task (LM)

Participants saw two white letters on a black screen and had to decide whether they were physically (physical identity condition) or semantically (name identity condition) identical by pressing one of two keys. Letters were identical in 50% of the trials. Each trial was followed by an inter-trial interval (ITI) of 1000–1500 ms. See the middle part of Fig. 2 for an overview of the experimental procedure. Conditions were presented block-wise. Each condition began with ten practice trials with immediate feedback followed by 300 test trials without feedback. All participants completed the physical identity condition first at the first measurement occasion, and second at the second measurement occasion.

Recognition Memory Task (RM)

Participants viewed memory sets of white, numerical digits (0 to 9) on a black screen. Digits were presented sequentially for 1000 ms each followed by a blank inter-stimulus interval shown for 400–600 ms. After the final digit was presented, participants saw a black screen with a white question mark for 1800–2200 ms. Subsequently, they were shown a single digit and had to decide whether the digit had been included in the previously presented memory set by pressing one of two keys. Each trial was followed by a uniformly distributed ITI of 1000–1500 ms. The probe digit was included in the memory set in 50% of the trials. There were three conditions of the experiment with the memory set consisting of either one, three, or five digits. See the right part of Fig. 2 for an overview of the experimental procedure in the set size 3 condition. The three conditions were presented block-wise and the order of presentation was counterbalanced across participants. Each condition consisted of ten practice trials with immediate feedback followed by 100 test trials without feedback.

Cognitive Abilities Tests

Berlin Intelligence Structure Test (BIS)

We administered the Berlin intelligence structure test (Jäger and Süß 1997), which distinguishes between four operation-related (processing speed, memory, creativity, processing capacity) and three content-related (verbal, numerical, figural) components of cognitive abilities. Each of the 45 tasks included in the test consists of a combination of one operation-related and one content-related component. Following the manual, we calculated participants’ scores on the four operation-related components by aggregating the normalized z-scores of tasks reflecting the specific operational components irrespective of content. The mean score of the processing capacity (PC) component was M = 101.70 (SD = 7.99), the mean score of the processing speed (PS) component was M = 98.00 (SD = 7.10), the mean score of the memory (M) component was M = 99.40 (SD = 6.51), and the mean score of the creativity (C) component was M = 98.02 (SD = 6.14). We then transformed these scores to z-scores for further analyses.

Advanced Progressive Matrices (APM)

Participants completed a computer-adapted version of Raven’s Advanced Progressive Matrices (Raven et al. 1994). The APM is a fluid intelligence test that consists of 36 items. Each item consists of a 3 × 3 matrix with geometric figures that follow certain logical rules and symmetries. The last element of the matrix is missing and must be chosen out of eight alternatives without a time limit (see Fig. 3 for a fictional sample item). Following the manual, participants’ performance was calculated as the number of correctly solved items of the second set. Moreover, we calculated performance in the odd and even trials of the test separately to construct two indicators of latent APM performance. We then transformed these raw test scores to z-scores for further analyses. Participants solved on average M = 23.43 (SD = 6.71) items correctly, which corresponds to a mean IQ score of M = 98.80 (SD = 15.68). Performance on even trials, Meven = 12.23 (SD = 3.51) correctly solved items, was comparable to performance on odd trials, Modd = 11.20 (SD = 3.52) correctly solved items.
Fig. 3

Example stimuli of Raven’s Progressive Matrices. Each item consists of a 3 × 3-matrix with geometric figures that follow certain logical rules and symmetries. The last element of the matrix is missing and must be chosen out of eight alternatives

EEG Recording

Participants’ EEG was recorded with 32 equidistant silver/silver chloride electrodes, a 32-channel BrainAmp DC amplifier (Brain Products, Munich), and a sampling rate of 1000 Hz (software bandpass filter of 0.1–100 Hz with a slope of 12 dB/octave). In addition, participants’ electrooculogram (EOG) was recorded bipolarly with two electrodes positioned above and below the left eye and two electrodes positioned at the outer corners of the eyes. Electrode impedances were kept below 5 kΩ during recording. Data were collected with a central electrode reference and later re-referenced offline to the average activity of all electrodes (average reference). The data were filtered offline with a low-pass filter of 16 Hz with a slope of 12 dB/octave.

Data Analysis

Behavioral Data

To remove outliers in the behavioral data, we first discarded any reaction time faster than 100 ms or slower than 3000 ms. In a second step, we discarded any trial with a logarithmized reaction time exceeding ± 3 standard deviations from the mean logarithmized reaction time of each condition. Variations of these criteria (i.e., less strict criteria) did not affect the covariance structure between variables, suggesting adequate robustness.
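A minimal sketch of this two-step trimming procedure (illustrative variable and function names; not the original analysis code, which is linked in the repository below):

```python
import numpy as np

# Minimal sketch of the two-step trimming described above, applied to one
# condition's reaction times (in ms).
def trim_rts(rts_ms):
    rts = np.asarray(rts_ms, dtype=float)
    rts = rts[(rts >= 100) & (rts <= 3000)]        # step 1: absolute cutoffs
    log_rts = np.log(rts)                          # step 2: +/- 3 SD on log RTs
    z = (log_rts - log_rts.mean()) / log_rts.std()
    return rts[np.abs(z) <= 3]

rng = np.random.default_rng(0)
raw = np.exp(rng.normal(6.2, 0.3, size=500))       # roughly lognormal RTs ~500 ms
raw = np.concatenate([raw, [50, 80, 4000, 2900]])  # a few artificial outliers
print(len(raw), "->", len(trim_rts(raw)), "trials retained")
```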

Evoked Electrophysiological Measures

Event-related potentials (ERPs) were analyzed separately for each task and condition. ERPs were calculated by averaging all experimental trials, time-locked to the onset of the task-relevant visual stimuli, with windows of interest that were 1000 ms long with a preceding baseline of 200 ms. We corrected for ocular artifacts with the regression procedure suggested by Gratton et al. (1983). Windows of EEG data with amplitudes exceeding ± 70 μV at least once within the time window, with amplitude changes exceeding 100 μV within 100 ms, or with activity lower than 0.5 μV were discarded as artifacts.

Latencies of three ERP components were calculated for each participant in each experiment. Grand-average waveforms of event-related potentials are presented in Fig. 4. P2 peak latencies were determined as the largest positive local maximum at the fronto-central electrode on the midline, which roughly corresponds to the Fz electrode in the 10-20 system, in a 120 to 320 ms time window. N2 and P3 peak latencies were determined as the largest negative and positive local maxima at the parietal electrode on the midline, which roughly corresponds to the Pz electrode in the 10-20 system, in a 140 to 370 ms time window (N2) and a 200 to 630 ms time window (P3), respectively. Peak latencies were determined separately for each condition of each experimental task, then averaged across conditions within each experiment, and then z-standardized for further analyses. Prior to averaging across experimental conditions, we discarded any peak latencies exceeding ± 3 SDs from the mean peak latency of each condition. If any peak latencies were discarded, the average across conditions was calculated based on the remaining conditions.
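The following sketch illustrates window-based latency extraction for a single averaged waveform, simplified to the window maximum rather than full local-peak detection. It assumes a 1000-Hz sampling rate and a 200-ms baseline, matching the recording parameters above, so that one sample corresponds to one millisecond.

```python
import numpy as np

# Minimal sketch of window-based peak latency extraction for one averaged
# waveform. Assumes 1000-Hz sampling and a 200-ms baseline, so index 200
# corresponds to stimulus onset and 1 sample = 1 ms.
def peak_latency_ms(erp, window_ms, baseline_ms=200, positive=True):
    start, stop = baseline_ms + window_ms[0], baseline_ms + window_ms[1]
    segment = erp[start:stop]
    idx = np.argmax(segment) if positive else np.argmin(segment)
    return window_ms[0] + idx                      # ms after stimulus onset

# Example: P3 search window of 200-630 ms on a synthetic parietal waveform.
t = np.arange(-200, 1000)                          # ms relative to onset
erp = 5 * np.exp(-((t - 350) ** 2) / (2 * 60.0 ** 2))  # fake positivity at 350 ms
print(peak_latency_ms(erp, (200, 630)))            # -> 350
```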
Fig. 4

Grand averages of event-related potentials at frontal, central, and parietal electrodes over midline. ERPs were elicited by stimulus onset and averaged across laboratory sessions and conditions for each experimental task

Cognitive Latent Variable Models

We constructed hierarchical Bayesian models to assess the latent relationship between reaction times, latencies of the three ERP components (P2, N2, P3), and cognitive ability test scores. For this purpose, we defined three separate sub-models describing the domain-specific associations between (a) ERP latencies in experimental tasks across two measurement occasions, (b) behavioral data in experimental tasks across two measurement occasions, and (c) performance in cognitive ability tests.

Then, we constructed two models using either (1) only ERP latencies or (2) ERP latencies and behavioral data to predict performance in cognitive ability tests. To test the hypothesis that drift rates mediate the relationship between neural processing speed and cognitive abilities, we compared performance of a direct regression model, in which ERP latencies predicted cognitive abilities (“Regression Model”), to a mediation model, in which the effect of ERP latencies on cognitive abilities was mediated by drift rates (“Mediation Model”).

We used Just Another Gibbs Sampler (JAGS; Plummer 2003) with a module that adds a diffusion model distribution to JAGS (jags-wiener; Wabersich and Vandekerckhove 2014) to find parameter estimates for the hierarchical model. Each model was fit with three Markov Chain Monte Carlo (MCMC) chains run in parallel. Each chain contained 2000 burn-in samples and 100,000 additional samples with a thinning parameter of 10, resulting in 10,000 posterior samples per chain. Posterior samples from the three chains were combined to one posterior sample consisting of 30,000 samples for each model parameter. Model convergence was evaluated based on the Gelman-Rubin convergence statistic R̂, which compares the estimated between-chains and within-chain variances for each model parameter (Gelman and Rubin 1992). Negligible differences between these variances were indicated by R̂ values close to 1.
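For a single parameter, the Gelman-Rubin statistic can be computed from the chains as follows (a minimal sketch; JAGS front ends often report R̂ directly):

```python
import numpy as np

# Minimal sketch of the Gelman-Rubin statistic for one parameter, given an
# array of posterior samples with shape (n_chains, n_samples_per_chain).
def gelman_rubin(chains):
    chains = np.asarray(chains)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled posterior variance estimate
    return np.sqrt(var_hat / W)

# Three well-mixed chains sampling the same distribution should give R-hat ~ 1.
rng = np.random.default_rng(3)
chains = rng.normal(0.0, 1.0, size=(3, 10_000))
print(f"R-hat = {gelman_rubin(chains):.3f}")
```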

Submodel: ERP Latencies in Experimental Tasks

ERP latencies were modeled in a hierarchical structural equation model (SEM) inspired by the parameter expansion approach suggested by Merkle and Rosseel (2018). Each of the three ERP latencies (P2, N2, P3) was quantified in three tasks at two sessions. Hence, six observed variables (3 tasks j × 2 sessions m) loaded onto each of the three first-order, component (c)-specific ERP factors η(P2), η(N2), and η(P3). These three latent components in turn loaded onto a second-order latent factor B that was estimated per participant i.

Latent factors and observed variables had normally distributed prior and hyperprior distributions. The means of these priors reflected linear regressions on the respective higher-order factors. For reasons of identifiability, the loading γ(P2) of the first lower-order factor η(P2) on the higher-order factor B was fixed to 1, while the other loadings, γ(N2) and γ(P3), were given standard normal priors: γ(P2) = 1 and \(\gamma_{(N2)}, \gamma_{(P3)} \sim \mathcal{N}(0,1)\).

Finally, precisions ψ (inverses of variances) of all latent variables were modeled as gamma-distributed variables: \({\Psi}_{B}, \psi_{(P2)}, \psi_{(N2)}, \psi_{(P3)} \sim {\Gamma}(1, 0.5)\).
$$\begin{aligned} \eta_{i(P2)} &\sim \mathcal{N}\!\left(\gamma_{(P2)} \cdot B_{i} \,,\ \psi_{(P2)}\right)\\ \eta_{i(N2)} &\sim \mathcal{N}\!\left(\gamma_{(N2)} \cdot B_{i} \,,\ \psi_{(N2)}\right)\\ \eta_{i(P3)} &\sim \mathcal{N}\!\left(\gamma_{(P3)} \cdot B_{i} \,,\ \psi_{(P3)}\right) \end{aligned}$$
For the second-order latent factor,
$$B_{i} \sim \mathcal{N}\!\left(0 \,,\ {\Psi}_{B}\right)$$
Subsequently, the observed latencies ERPicjm of ERP components c, tasks j, and measurement occasions m for each participant i were regressed onto the first-order latent variables. These regressions were defined by the respective factor loadings λcjm, the respective first-order latent variables ηic, and the respective precisions θcjm. Factor loadings λcjm on the first-order latent variables were fixed to 1 for task j = CR and measurement occasion m = 1 for all three ERP components for reasons of identifiability. See the bottom left parts of Figs. 5, 6, and 7 for a graphical illustration of the measurement model of ERP latencies.
$$\begin{aligned} ERP_{icjm} &\sim \mathcal{N}\!\left(\lambda_{cjm} \cdot \eta_{ic} \,,\ \theta_{cjm}\right)\\ \lambda_{c(CR)1} &= 1\\ \lambda_{cjm} &\sim \mathcal{N}(0,1) \quad \forall\ (j,m) \neq (CR, 1)\\ \theta_{cjm} &\sim {\Gamma}(1, 0.5) \end{aligned}$$
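A generative sketch of this submodel illustrates the prior structure. Note that JAGS parameterizes the normal distribution by its precision, so precisions are converted to standard deviations here; the participant count and seed are arbitrary.

```python
import numpy as np

# Generative sketch of the ERP latency submodel: draws synthetic z-standardized
# latencies from the priors above. JAGS normals are precision-parameterized,
# so precisions are converted to standard deviations (sd = 1/sqrt(precision)).
rng = np.random.default_rng(42)
n_sub, components, tasks, sessions = 92, ["P2", "N2", "P3"], ["CR", "RM", "LM"], [1, 2]

psi_B = rng.gamma(1, 1 / 0.5)                        # Gamma(shape=1, rate=0.5)
B = rng.normal(0, 1 / np.sqrt(psi_B), size=n_sub)    # second-order factor per person

erp = {}
for c in components:
    gamma_c = 1.0 if c == "P2" else rng.normal(0, 1)      # loading, fixed to 1 for P2
    psi_c = rng.gamma(1, 1 / 0.5)
    eta_c = rng.normal(gamma_c * B, 1 / np.sqrt(psi_c))   # first-order factor
    for j in tasks:
        for m in sessions:
            lam = 1.0 if (j == "CR" and m == 1) else rng.normal(0, 1)
            theta = rng.gamma(1, 1 / 0.5)
            erp[(c, j, m)] = rng.normal(lam * eta_c, 1 / np.sqrt(theta))

print(len(erp), "observed variables for", n_sub, "simulated participants")  # 18 variables
```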
Fig. 5

Graphical visualization of both the regression-linking and mediation-linking models (such that the mediation-linking model includes dashed connections). An alternate way of understanding the neurocognitive models presented in this manuscript is by viewing the graphical notation for hierarchical models as described by Lee and Wagenmakers (2014). Shaded nodes represent observed data while unshaded nodes represent unknown (fitted) parameters. Arrows represent direction of influence such that hierarchical parameters influence lower level parameters and observed data. Plates denote the number of observations for each variable and data point of participant i, experimental task j, experimental condition k, measurement occasion m, ERP component c, cognitive abilities task t, and trial n. Behavioral data y is a vector of both reaction time and accuracy observations

Fig. 6

Structural equation modeling visualization of the regression linking model. Posterior medians of standardized regression weights are shown next to paths. Asterisks indicate factor loadings fixed to 1. CR/CR2/CR4 = choice reaction time task with two or four alternatives; RM/RM1/RM3/RM5 = recognition memory task with memory set size of 1, 3, or 5; LM/PI/NI = letter matching task with physical identity or name identity condition; PC, processing capacity; PS, processing speed; M, memory; C, creativity

Fig. 7

Structural equation modeling visualization of the mediation linking model. Posterior medians of standardized regression weights are shown next to paths. Asterisks indicate factor loadings fixed to 1. CR/CR2/CR4 = choice reaction time task with two or four alternatives; RM/RM1/RM3/RM5 = recognition memory task with memory set size of 1, 3, or 5; LM/PI/NI = letter matching task with physical identity or name identity condition; PC, processing capacity; PS, processing speed; M, memory; C, creativity

Submodel: Behavioral Data in Experimental Tasks

We used a combination of the SEM approach based on parameter expansion described above and the hierarchical diffusion model approach described by Vandekerckhove et al. (2011) to model individual differences in reaction times and accuracies in experimental tasks j, conditions k, and measurement occasions m.

In a first step, we modeled task-, condition-, and measurement occasion-specific drift rates in a hierarchical SEM with three task-specific first-order factors ηij. These three latent components loaded onto a second-order latent factor Δi. Again, latent factors and observed variables had normally distributed priors and hyperpriors. The means of these priors reflected linear regressions of the respective higher-order factors.

For reasons of identifiability, the loading γ(CR) of the first lower-order factor η(CR) on the higher-order factor Δ was fixed to 1, while the other loadings, γ(RM) and γ(LM), were given standard normal priors: γ(CR) = 1 and \(\gamma_{(RM)}, \gamma_{(LM)} \sim \mathcal{N}(0,1)\). Precisions ψ (inverses of variances) of all latent variables were modeled as gamma-distributed variables: \(\psi_{(CR)}, \psi_{(RM)}, \psi_{(LM)} \sim {\Gamma}(1, 0.5)\).
$$\begin{aligned} \eta_{i(CR)} &\sim \mathcal{N}\!\left(\gamma_{(CR)} \cdot {\Delta}_{i} \,,\ \psi_{(CR)}\right)\\ \eta_{i(RM)} &\sim \mathcal{N}\!\left(\gamma_{(RM)} \cdot {\Delta}_{i} \,,\ \psi_{(RM)}\right)\\ \eta_{i(LM)} &\sim \mathcal{N}\!\left(\gamma_{(LM)} \cdot {\Delta}_{i} \,,\ \psi_{(LM)}\right) \end{aligned}$$
Subsequently, the task-, condition-, and measurement occasion-specific drift rates δijkm were regressed onto the first-order latent variables ηij. Factor loadings on the respective first-order latent variables were fixed to 1 for condition k = 1, referring to the condition with the lowest information processing demands within each task, and measurement occasion m = 1 for all three tasks for reasons of identifiability. The other loadings λjkm were given standard normal priors: \(\lambda_{jkm} \sim \mathcal{N}(0,1)\). Precisions of drift rates were modeled as gamma-distributed variables: \(\theta_{jkm} \sim {\Gamma}(1, 0.5)\). In addition, we estimated intercepts νjkm for the lowest-order drift rates, because the behavioral data were not z-standardized: \(\nu_{jkm} \sim \mathcal{N}(2, 1.5^{2})\).
$$\delta_{ijkm} \sim \mathcal{N}\!\left(\nu_{jkm} + \lambda_{jkm} \cdot \eta_{ij} \,,\ \theta_{jkm}\right)$$
In a second step, these drift rates were entered into the diffusion model distribution in addition to task-, condition-, measurement occasion-, and person-specific boundary separation αijkm and non-decision time τijkm parameters (with the starting point parameter fixed at 0.5). Both boundary separation parameters and non-decision times were given normal priors: \(\alpha_{ijkm} \sim \mathcal{N}(1, 0.5^{2})\), \(\tau_{ijkm} \sim \mathcal{N}(0.3, 0.2^{2})\). See the top parts of Figs. 5, 6, and 7 for a graphical illustration of the measurement model of behavioral data in experimental tasks.
$$\mathbf{y}_{ijkmn} \sim \mathit{Wiener}\!\left(\alpha_{ijkm},\ 0.5,\ \tau_{ijkm},\ \delta_{ijkm}\right)$$
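The following sketch illustrates how the drift rate hierarchy composes. For illustration, the person-level factor Δi is drawn from a standard normal; the resulting condition-specific drift rates, together with the person-specific boundary separations and non-decision times, would then feed the Wiener first-passage likelihood (e.g., via an Euler simulator like the one sketched in the introduction).

```python
import numpy as np

# Generative sketch of the drift rate hierarchy: person-level evidence
# accumulation Delta_i cascades down to task- and condition-specific drift
# rates delta_ijkm. For illustration only; Delta_i ~ N(0,1) is an assumption.
rng = np.random.default_rng(11)
n_sub = 92
conditions = {"CR": ["CR2", "CR4"], "RM": ["RM1", "RM3", "RM5"], "LM": ["PI", "NI"]}

Delta = rng.normal(0, 1, size=n_sub)                  # person-level factor
drift = {}
for j, ks in conditions.items():
    gamma_j = 1.0 if j == "CR" else rng.normal(0, 1)  # loading fixed to 1 for CR
    psi_j = rng.gamma(1, 1 / 0.5)                     # precision, Gamma(1, rate 0.5)
    eta_j = rng.normal(gamma_j * Delta, 1 / np.sqrt(psi_j))
    for m in (1, 2):
        for k, cond in enumerate(ks):
            lam = 1.0 if (k == 0 and m == 1) else rng.normal(0, 1)
            nu = rng.normal(2, 1.5)                   # intercept: drifts unstandardized
            theta = rng.gamma(1, 1 / 0.5)
            drift[(j, cond, m)] = rng.normal(nu + lam * eta_j, 1 / np.sqrt(theta))

# These would enter the Wiener likelihood together with the drift rates above.
alpha = rng.normal(1.0, 0.5, size=n_sub)              # boundary separation priors
tau = rng.normal(0.3, 0.2, size=n_sub)                # non-decision time priors
print(len(drift), "condition-specific drift rate vectors")  # 7 conditions x 2 sessions
```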

Submodel: Performance in Cognitive Abilities Tests

Performance in the two cognitive abilities tests was modeled with a SEM. The four operation-related components of the BIS and the two halves of the APM were loaded onto a first-order latent factor gi.

Subsequently, the observed test scores IQit per cognitive ability test t were regressed onto the first-order latent variable gi. For reasons of identifiability, the loading λ1 of the processing capacity score of the BIS on the factor g was fixed to 1, while the other loadings, λ2, λ3, λ4, λ5, and λ6, were given standard normal priors: λ1 = 1 and \(\lambda_{2}, \lambda_{3}, \lambda_{4}, \lambda_{5}, \lambda_{6} \sim \mathcal{N}(0,1)\). Precisions θ (inverses of variances) of observed IQ scores were given gamma-distributed priors: \(\theta_{t} \sim {\Gamma}(1, 0.5)\). See the bottom right parts of Figs. 5, 6, and 7 for a graphical illustration of the measurement model of cognitive abilities tests.
$$IQ_{it} \sim \mathcal{N}\!\left(\lambda_{t} \cdot g_{i} \,,\ \theta_{t}\right)$$

Linking Models

Finally, we linked all submodels in two linking structures. Whereas the three submodels only established latent measurement models for each of the three variable domains (neural data, behavioral data, and cognitive abilities data), the two linking structures specified structural associations between variable domains. Hence, the comparison of the two linking models contained the critical comparison: If the velocity of evidence accumulation mediated the relationship between neural speed and cognitive abilities, the mediation model should outperform a direct regression of cognitive abilities on ERP latencies.

We therefore specified two linking structures. In the first linking structure, we specified a regression model and predicted cognitive abilities test scores solely through neural processing speed by regressing the latent cognitive abilities factor gi on the latent ERP latency factor Bi (see Fig. 1 and compare to Fig. 6), while the latent drift rate factor Δi was unrelated to the other two latent variables.
$$\begin{aligned} g_{i} &\sim \mathcal{N}\!\left(\beta \cdot B_{i} \,,\ {\Psi}_{g}\right)\\ {\Delta}_{i} &\sim \mathcal{N}\!\left(0 \,,\ {\Psi}_{\Delta}\right)\\ \beta &\sim \mathcal{N}(0,1)\\ {\Psi}_{g}, {\Psi}_{\Delta} &\sim {\Gamma}(1, 0.5) \end{aligned}$$
The second linking structure consisted of a mediation model, in which the latent cognitive abilities factor gi was regressed onto both the latent ERP latency factor Bi and the latent drift rate factor Δi, which was in turn regressed onto the latent ERP latency factor Bi (see Fig. 7).
$$\begin{aligned} g_{i} &\sim \mathcal{N}\!\left(\beta_{1} \cdot B_{i} + \beta_{2} \cdot {\Delta}_{i} \,,\ {\Psi}_{g}\right)\\ {\Delta}_{i} &\sim \mathcal{N}\!\left(\beta_{3} \cdot B_{i} \,,\ {\Psi}_{\Delta}\right)\\ \beta_{1}, \beta_{2}, \beta_{3} &\sim \mathcal{N}(0,1)\\ {\Psi}_{g}, {\Psi}_{\Delta} &\sim {\Gamma}(1, 0.5) \end{aligned}$$

The data of 92 randomly drawn participants (of 114 total; drawn without replacement) were used as a training set to find posterior distributions of cognitive latent variables (i.e., samples from probability distributions that reflect certainty/uncertainty about parameter estimates as reflected by the data). Standardized regression weights were calculated by multiplying unstandardized regression weights by the ratio of the standard deviation of the predictor (the higher-order latent variable) to that of the criterion (the lower-order latent or observed variable): \(\beta = b \cdot \frac{\sigma_{\text{predictor}}}{\sigma_{\text{criterion}}}\). The indirect mediation effect βindirect was calculated by multiplying the standardized regression weights β2 and β3 in the mediation model, as discussed by Baron and Kenny (1986). We report the median and the 2.5th and 97.5th percentiles, forming a 95% equal-tailed credible interval (CI), to describe the posterior distributions of standardized regression weights.
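Computationally, the indirect effect and its credible interval follow directly from the posterior samples. In this sketch the samples are fake draws whose means merely echo the magnitudes reported in the “Results” section, not the actual posteriors.

```python
import numpy as np

# Minimal sketch of posterior summarization for the indirect effect: multiply
# the standardized beta2 and beta3 samples draw by draw, then report the median
# and a 95% equal-tailed credible interval. Samples here are fake placeholders.
rng = np.random.default_rng(5)
beta2 = rng.normal(0.23, 0.15, size=30_000)   # e.g., g on Delta (standardized)
beta3 = rng.normal(0.17, 0.07, size=30_000)   # e.g., Delta on B (standardized)

beta_indirect = beta2 * beta3                 # Baron & Kenny (1986) product method
median = np.median(beta_indirect)
lo, hi = np.percentile(beta_indirect, [2.5, 97.5])
print(f"beta_indirect = {median:.2f}, 95% CI [{lo:.2f}; {hi:.2f}]")
```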

Model Evaluation

The performance of both linking structures was compared based on their in-sample prediction ability, their Deviance Information Criterion (DIC; Spiegelhalter et al. 2014), and, crucially, their ability to predict new participants’ data out of sample.
Table 1

Mean accuracies and reaction times in ms (SD in parentheses) for all conditions of the three experimental tasks

Task                         Session 1                          Session 2
                             Accuracy     RT (ms)               Accuracy     RT (ms)
Choice reaction time task
  CR2                        .99 (.01)    382.79 (58.02)        1.00 (.01)   381.27 (61.01)
  CR4                        .99 (.01)    477.22 (82.64)        .98 (.02)    467.31 (85.70)
Recognition memory task
  Set size 1                 .97 (.02)    590.96 (115.67)       .98 (.02)    584.02 (135.64)
  Set size 3                 .97 (.02)    728.46 (167.21)       .98 (.03)    706.61 (176.81)
  Set size 5                 .97 (.03)    890.03 (240.74)       .95 (.09)    850.98 (223.18)
Letter matching task
  Physical identity          .98 (.02)    617.79 (93.93)        .98 (.02)    605.19 (102.41)
  Name identity              .98 (.02)    699.50 (113.02)       .97 (.02)    704.38 (126.36)

In-Sample Prediction

Fitting the model with the training set, we created posterior predictive distributions by simulating new neural, behavioral, and cognitive abilities data separately for each participant based on each participant’s posterior distributions of model parameters and on model specifications. Hence, we simulated two posterior predictive data sets for each of the 92 participants in the training set: one based on the model specifications and parameter estimates of the regression model, and the other based on the model specifications and parameter estimates of the mediation model. Subsequently, we assessed how strongly these simulated data were related to the observed data for the whole sample of 92 participants, separately for each of the two candidate models. For this purpose, we compared (a) observed and predicted ERP latencies for each ERP component c, experimental task j, and session m; (b) observed and predicted RT distributions and accuracies for each condition k, experimental task j, and session m; and (c) observed and predicted IQ test scores for each subtest t. Because accuracies in elementary cognitive tasks are typically near ceiling, the prediction of accuracies was considered less critical than the prediction of the other three variables in the present study. RT distributions were compared via their 25th, 50th, and 75th percentiles. To quantify the association between observed and predicted values, we calculated \(R^{2}_{pred}\) as the proportion of variance of values T (ERP latencies, percentiles of the RT distribution, accuracies in the experimental tasks, cognitive abilities test scores) explained by model predictions. This statistic is based on the mean squared error of prediction of T, MSEPT, and the estimated variance of T across participants, \(\widehat{Var(T)}\).
$$R^{2}_{pred} = 1 - \frac{\sum_{i=1}^{I}\left(T_{(i)} - T_{pred(i)}\right)^{2}/(I-1)}{\sum_{i=1}^{I}\left(T_{(i)} - \overline{T}\right)^{2}/(I-1)} = 1 - \frac{MSEP_{T}}{\widehat{Var(T)}}$$
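A direct translation of this statistic into Python (with made-up data for illustration):

```python
import numpy as np

# Direct translation of the R^2_pred formula above: one minus the ratio of the
# mean squared error of prediction to the observed variance across participants.
def r2_pred(observed, predicted):
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    msep = np.sum((observed - predicted) ** 2) / (len(observed) - 1)
    var = np.sum((observed - observed.mean()) ** 2) / (len(observed) - 1)
    return 1 - msep / var

# Out of sample, predictions can deviate more than the data vary, so r2_pred
# (unlike an ordinary in-sample R^2) can be negative.
obs = np.array([0.5, -1.2, 0.3, 1.8, -0.7])
print(r2_pred(obs, obs * 0.8))   # decent predictions -> positive
print(r2_pred(obs, -obs))        # worse than predicting the mean -> negative
```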

Deviance Information Criterion (DIC)

The DIC is a measure of goodness of fit for hierarchical models that penalizes model complexity (Spiegelhalter et al. 2014). It can be thought of as an extension of the Akaike information criterion (AIC) to hierarchical models that enforce shrinkage, in which the number of parameters k is no longer useful as a penalty for model complexity. Another alternative is the Bayesian information criterion (BIC), which approximates the logarithm of the Bayes factor (i.e., the ratio of Bayesian probabilities of two comparison hypotheses) but is difficult to estimate in most hierarchical models (Kass and Raftery 1995). Because the DIC is easy to estimate and implement in JAGS (Plummer 2003), we used it as our model comparison metric. Smaller DIC values indicate more favorable models. However, we consider out-of-sample prediction of new participants’ data to be the ultimate test of the models, one that natively penalizes the complexity that leads to overfitting of in-sample data.

Out-of-Sample Prediction

A test set of 22 new participants (the randomly drawn remaining participants) was used to find a second set of posterior predictive distributions for each participant. This test set allowed us to assess how well models were able to predict new participants’ data in one domain (e.g., cognitive abilities) based on data from the other two domains (e.g., electrophysiological and behavioral data). We iteratively predicted data from each of the three domains (electrophysiological, behavioral, and cognitive abilities data) by the other two for each new participant and each of the two models. Out-of-sample prediction was then evaluated in each of the three data domains using \(R^{2}_{pred}\) as a measure of variance explained in variables of one domain by variables from the other two domains. Note that there is no constraint of \(R^{2}_{pred}\) in out-of-sample evaluation to values above 0. Negative values indicate that there is more deviation of the predicted values from the true values than there is variance in the true values themselves.

Open-Source Data and Analysis Code

MATLAB, Python, and JAGS analysis code and data are available at https://osf.io/de75n/ and in the following repository (as of February 2018): https://github.com/mdnunez/ERPIQRT/

Results

Mean performance (reaction times and accuracies) in the three experimental tasks is shown in Table 1. Grand-average waveforms of event-related potentials are presented in Fig. 4. See Table 2 for mean ERP latencies in both sessions.
Table 2

Mean ERP latencies in ms (SD in parentheses) averaged across conditions of each of the three experimental tasks

Task                           P2              N2              P3
Session 1
  Choice reaction time task    211.54 (32.82)  206.15 (27.71)  330.67 (44.26)
  Recognition memory task      234.08 (34.48)  251.11 (42.05)  374.35 (74.76)
  Letter matching task         222.26 (33.74)  247.87 (36.80)  414.97 (86.45)
Session 2
  Choice reaction time task    208.44 (33.77)  210.38 (29.62)  324.40 (42.04)
  Recognition memory task      230.35 (28.19)  248.48 (43.74)  382.39 (81.13)
  Letter matching task         218.16 (25.27)  240.02 (44.65)  377.74 (75.09)

In-Sample Prediction

The first linking model (see Figs. 5 and 6), in which cognitive abilities were solely predicted by neural processing speed, provided an acceptable account of the training data. On average, it explained 63% of the variance in cognitive abilities tests, 62% of the variance in ERP latencies, 87% of the variance in the 25th percentile of the RT distribution, 89% of the variance in the 50th percentile (median) of the RT distribution, 83% of the variance in the 75th percentile of the RT distribution, and 30% of the variance in accuracies in reaction time tasks. Note that the cognitive latent variable model may have explained more variance in reaction times than in ERP latencies and cognitive abilities tests because the measurement model of reaction times was more complex (allowing the task-, condition-, and session-specific estimation of boundary separation and non-decision time parameters not depicted in the structural equation model visualization) than the other two more parsimonious measurement models. The DIC of the overall hierarchical model with the first linking structure was −3.2012 × 10⁵, making it the model favored by the DIC (compared to the second linking structure’s DIC below). The latent neural processing speed variable predicted the latent cognitive abilities variable to a large degree, β = .84, 95% CI [.75; .91], suggesting that participants with greater cognitive abilities showed a substantially higher neural processing speed.

The second linking model (see Figs. 5 and 7), in which the effect of neural processing speed was partly mediated by drift rates, also provided a good account of the training data. It explained on average 63% of the variance in cognitive abilities tests, 63% of the variance in ERP latencies, 89% of the variance in the 25th percentile of the RT distribution, 90% of the variance in the 50th percentile (median) of the RT distribution, 83% of the variance in the 75th percentile of the RT distribution, and 25% of the variance in accuracies in reaction time tasks. The explained variance is therefore nearly identical to the first linking model. The DIC of the model with the second linking structure was −3.2007 × 10⁵, a larger, and thus unfavored, DIC compared to the previous model. Again, the latent neural processing speed variable predicted the latent cognitive abilities variable, β1 = .78, 95% CI [.63; .89]. Individual latent neural processing speeds also predicted individual latent drift rates, β3 = .17, 95% CI [.05; .33]. However, there was only weak evidence that greater latent drift rates predicted greater cognitive abilities, β2 = .23, 95% CI [−.05; .52]. In addition, we found some evidence for a negligible indirect effect of neural processing speed on cognitive ability test scores that was mediated by drift rates, βindirect = .04, 95% CI [−.01; .09]. See Fig. 8 for posterior density distributions of the standardized regression weights. The difference between the two models’ DICs, ΔDIC = 43.27, indicated that the mediation model could not provide a better account of the data than the more parsimonious regression model.
Fig. 8

Posterior density distributions of the standardized regression weights of the mediation linking model. Boxes indicate the interquartile range with the median as a horizontal line. β1, regression of latent cognitive abilities factor on latent neural processing speed factor; β2, regression of latent cognitive abilities factor on latent drift rate factor; β3, regression of latent drift rate factor on latent neural processing speed factor; βindirect, indirect effect

Out-of-Sample Prediction of New Participants

To evaluate the models' ability to predict a new participant's unknown data in one domain (e.g., unknown cognitive ability test scores) from observed data in another domain (e.g., observed ERP latencies), we assessed out-of-sample prediction ability for both models in a test set of 22 randomly drawn participants.

Given a new participant’s ERP and RT data, the regression linking model (see Fig. 6) made somewhat accurate predictions of that participant’s cognitive abilities test scores and ERP latencies. That is, out-of-sample prediction explained 39% of the variance in cognitive abilities tests across participants and tasks and 22% of the variance in ERP latencies across participants and tasks. However, out-of-sample prediction of reaction time data was not successful, R² = −.51 in the 25th percentile of the RT distribution, R² = −.50 in the 50th (median) percentile of the RT distribution, and R² = −.67 in the 75th percentile of the RT distribution. Accuracies could also not be predicted successfully, R² = −1.22. Note that \(R^{2}_{pred}\) is not constrained to values above 0 in out-of-sample prediction. Hence, negative values indicate that the predicted values deviated more from the true values than the true values varied around their own mean. The lack of successful prediction of behavioral data is not surprising, as the regression model contained no link between drift rates and the other covariates.
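To make the meaning of a negative \(R^{2}_{pred}\) concrete, the following minimal Python sketch (ours, with invented numbers; not the authors' analysis code) computes the statistic and shows how predictions that deviate systematically from the truth drive it below zero:

```python
import numpy as np

def r2_pred(y_true, y_pred):
    """Out-of-sample R^2: 1 - SS_residual / SS_total.

    Unlike in-sample R^2, this quantity has no lower bound: if the squared
    deviations of the predictions from the true values exceed the variance
    of the true values around their own mean, the statistic is negative.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Invented illustration: systematically biased predictions of RT percentiles (s)
y_true = [0.45, 0.52, 0.61, 0.48, 0.55]
y_pred = [0.60, 0.70, 0.80, 0.65, 0.72]
print(r2_pred(y_true, y_pred))  # about -8.6: far worse than predicting the mean
```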

The mediation linking model (see Fig. 7) produced very similar predictions of participants’ cognitive ability test scores and ERP latencies. Out-of-sample prediction explained 36% of the variance in cognitive abilities tests across participants and tasks and 23% of the variance in ERP latencies across participants and tasks. Again, prediction of out-of-sample reaction time data was not successful, R² = −1.10 in the 25th percentile of the RT distribution, R² = −.96 in the 50th (median) percentile of the RT distribution, R² = −2.09 in the 75th percentile of the RT distribution, and R² = −1.46 for accuracies in the reaction time tasks. This lack of successful prediction of the behavioral data indicates that the covariation of drift rates with ERP latencies and intelligence test scores on the latent level was insufficient to account for observed reaction time data in specific tasks and conditions. The predictive failure likely results from the small latent association of drift rates with ERP latencies and cognitive abilities, but also from large proportions of task- and condition-specific variance in condition-specific drift rates that was not predicted by any covariates.

Discussion

We investigated whether the association between neural processing speed and general cognitive abilities was mediated by the velocity of evidence accumulation. For this purpose, we used a Bayesian cognitive latent variable modeling approach that allowed the joint modeling of behavioral, neural, and cognitive abilities data and estimation of relationships between higher-order latent variables. The cognitive latent variable model was able to predict a substantial amount of variance in cognitive ability test scores in new participants solely based on those participants’ cortical processing speeds.

We observed a strong association between neural processing speed and general cognitive abilities, such that individuals with greater cognitive abilities showed shorter latencies of ERP components associated with higher-order cognition. Moreover, we found that individuals with a greater neural processing speed also showed a greater velocity of evidence accumulation. Given an individual’s speed of neural information processing and evidence accumulation, we could predict about 40% of the variance in their intelligence test scores. However, the association between neural processing speed and general cognitive abilities was only mediated by drift rates to a very small degree, and the more complex mediation model did not provide a better account of the data than the more parsimonious regression model.¹

These results support the idea that a greater speed of neural information processing facilitates evidence accumulation, and that this increase in the velocity of evidence accumulation translates only to a negligible degree into advantages in general cognitive abilities. Although previous studies reported substantial correlations between drift rates and cognitive abilities (Schmiedek et al. 2007; Schmitz and Wilhelm 2016; van Ravenzwaaij et al. 2011), and although preliminary results suggested that measures of neural processing speed and drift rates can load onto the same factor (Schubert et al. 2015), the present study provided the first direct test of the hypothesis that the velocity of evidence accumulation mediates the relationship between neural processing speed and cognitive abilities. Our results suggest that only a very small amount of the shared variance between neural processing speed and cognitive abilities can be explained by individual differences in the velocity of evidence accumulation as a mediating cognitive process. In the following sections, we offer three conceptual explanations for why the velocity of evidence accumulation may explain only little of the natural variation in human cognitive abilities associated with cerebral processing speed. Subsequently, we discuss methodological advantages, challenges, and possible extensions of the cognitive latent variable model used in the present study.

1. A Common Latent Process

Both neural processing speed and the velocity of evidence accumulation may reflect properties of the same latent process that is related to general cognitive abilities. However, the drift rate may be a less pure measure of this latent process, or may be contaminated by properties of other processes unrelated to cognitive abilities. This position is supported by our observation of an association between ERP latencies and drift rates, and by our result that drift rates at least partially mediated the relationship between ERP latencies and cognitive abilities. Moreover, this explanation is consistent with previous research suggesting that the P3 may be a neural correlate of the evidence accumulation process captured by drift rates (Kelly and O’Connell 2013; O’Connell et al. 2012; Ratcliff et al. 2009, 2016; van Ravenzwaaij et al. 2017). That the associations between neural processing speed and drift rates were lower than the correlations reported in the literature may be due to three differences from previous studies: First, the current study focused on ERP latencies as measures of neural processing speed, whereas previous studies analyzed the relationship between drift rates and amplitude- and capacity-related measures of the EEG. Second, previous studies focused mostly on late centro-parietal potentials, whereas the current study included ERP components with a more diverse time course and topography. Third, we related only the latent neural processing speed factor, which reflected the shared variance between different ERP latencies across different tasks, to the latent drift rate factor, and did not inspect task- or component-specific correlations. Considering the psychometric properties of both ERP latencies and drift rates (Schubert et al. 2015; Schubert et al. 2017), it is highly likely that associations between ERP latencies and drift rates would have been higher if we had modeled correlations separately for each condition of each experimental task. However, this task- or condition-specific variance in ERP latencies and drift rates is not of interest with regard to general cognitive abilities.

2. Other Candidate Cognitive Processes

The velocity of evidence accumulation may not be the appropriate candidate process mediating the relationship between neural processing speed and cognitive abilities. Instead, shorter latencies of ERP components associated with higher-order cognitive processing may reflect a faster inhibition of extraneous processes and may thus be a neural correlate of the efficiency of selective attention (Polich 2007). The idea that attentional processes underlie individual differences in cognitive abilities has been discussed numerous times. Process overlap theory (Kovacs and Conway 2016), for example, proposes that a limited number of domain-general and domain-specific cognitive processes contribute to individual differences in general cognitive abilities. In the framework of process overlap theory, attentional processes represent a central domain-general bottleneck that constrains cognitive performance across different tasks. This notion is supported by several studies reporting substantial associations between measures of attentional control and executive processes and general cognitive abilities (e.g., Unsworth et al. 2014; Wongupparaj et al. 2015).

Additionally, a greater neural processing speed may directly facilitate the storage and updating of information in working memory (Polich 2007), and may thus lead to a greater working memory capacity, which may positively affect performance in a large number of cognitive tasks. This notion is supported by numerous studies reporting large and even near-unity correlations between measures of cognitive abilities and working memory capacity (e.g., Engle et al. 1999; Conway et al. 2002; Kyllonen and Christal 1990). Individual differences in these working memory processes may not be reflected in drift rates estimated in simple binary decision tasks. Instead, future studies could use mathematical models of working memory, such as mathematical implementations of the time-based resource sharing model (Barrouillet et al. 2004) or the SOB-CS (Oberauer et al. 2012), to explicitly model individual differences in parameters of working memory processes and relate these parameters to neural data in a cognitive latent variable model.

Finally, it might even be possible that several cognitive processes mediate the relationship between neural processing speed and cognitive abilities, and that parameters of each single cognitive process only account for a small amount of the substantial association. Larger multivariate studies incorporating cognitive models of these candidate cognitive processes would be required to quantify additive and multiplicative effects of different cognitive processes on the relationship between neural processing speed and general cognitive abilities.

3. Brain Properties as Confounding Variables

Individual differences in neural processing speed may reflect structural properties of the brain that give rise to individual differences in cognitive abilities. Brain properties may be related both to neural processing speed and general cognitive abilities and may thus explain the substantial association between the two variables. Previous research has shown that individuals with greater cognitive abilities show greater nodal efficiency in the right anterior insula and the dorsal anterior cingulate cortex (Hilger et al. 2017). These brain regions are core components of the salience network, which is assumed to be responsible for the detection of salient information and its evaluation with regard to behavioral relevance and an individual’s goals (Downar et al. 2002; Menon and Uddin 2010; Seeley et al. 2007). Dynamic source imaging and lesion studies have revealed that the relative timing of responses of the anterior insula and the dorsal anterior cingulate cortex to stimuli can be indexed by the N2b/P3a component of the ERP, followed by an elicitation of the P3b in neocortical regions in response to the attentional shift (Soltani and Knight 2000; Menon and Uddin 2010). Hence, a more efficient functional organization of the salience network may affect the timing of these ERP components and may also positively affect performance in cognitive ability tests by facilitating the goal-driven selection of task-relevant information.

Cognitive Latent Variable Models

The use of cognitive latent variable models (CLVMs) allows the simultaneous modeling of cognitive, neural, and behavioral data across different tasks and ability tests. CLVMs thus allow estimating latent correlations between different measurement domains that are free of unsystematic measurement error. This property is particularly useful when dealing with time-related electrophysiological data, whose reliability has been shown to be very inconsistent (Cassidy et al. 2012; Schubert et al. 2017). Moreover, CLVMs allow modeling the shared variance between diffusion model parameters across different tasks and conditions in a hierarchical way and can thus solve the problem of low-to-moderate consistencies of model parameters in individual differences research (Schubert et al. 2016).

Three advantages of the hierarchical Bayesian approach were highlighted by the present study: First, the CLVM demonstrated advantages over classical structural equation modeling approaches in its predictive abilities in small-to-moderate samples. The model was developed on only 92 participants and successfully predicted 62 to 89% of the within-sample variance in neural, behavioral, and cognitive abilities data. A conventional structural equation model with the same number of free parameters would require a substantially larger sample size. Following the rule of thumb to collect at least five observations per estimated parameter (Bentler and Chou 1987), the same model would require a sample size of at least 480 participants in a conventional SEM framework. Taking into account the ratio of indicators to free parameters r (r = number of indicators/number of free parameters), a sample size of at least 930 participants would be required according to the equation n = 50 ⋅ r² − 450 ⋅ r + 1100 proposed by Westland (2010) based on the simulation results of Marsh et al. (1998). Such large sample sizes are hardly feasible for neuroimaging research outside large-scale collaborative projects. The Bayesian approach presented here enabled us to fit a structural equation model of great complexity to a sample of only 92 participants. Most importantly, one of the main results previously obtained with a more parsimonious conventional structural equation model applied to the same data set (i.e., the strong association between neural processing speed and cognitive abilities reported by Schubert et al. 2017) was adequately recovered by the Bayesian model.
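For concreteness, the two sample-size heuristics can be reproduced in a few lines of Python (a sketch on our part; the 96 free parameters and the indicator-to-parameter ratio of roughly 0.4 are back-calculated from the numbers in the text, not values reported by the authors):

```python
# Bentler & Chou (1987): at least five observations per free parameter.
# The reported 480 participants for the same model imply roughly 96 free
# parameters (back-calculated assumption, not a reported value).
n_free_parameters = 480 // 5            # = 96

n_bentler_chou = 5 * n_free_parameters  # = 480

# Westland (2010), based on Marsh et al. (1998): n = 50*r^2 - 450*r + 1100,
# where r = number of indicators / number of free parameters. A ratio of
# about 0.4 reproduces the reported lower bound of roughly 930.
r = 0.4
n_westland = 50 * r**2 - 450 * r + 1100  # = 928.0, i.e., roughly 930

print(n_bentler_chou, n_westland)
```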

Moreover, the latent drift rate trait and the task-, condition-, and state-specific boundary separation and non-decision time parameters could account for nearly 90% of the variance in the in-sample reaction time data. In comparison, latent diffusion model parameter traits have been shown to account for only 36 to 39% of the variance in single-task parameter estimates in a conventional structural equation model (Schubert et al. 2016). This in-sample prediction ability demonstrates that it may be beneficial to model only parameters with known trait properties (e.g., drift rate; see Schubert et al. 2016) as hierarchical factors, while the other model parameters known to be more strongly affected by task-specific influences (e.g., non-decision time and boundary separation; see Schubert et al. 2016) are estimated separately for each task and condition.
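This division of labor can be illustrated with a small generative sketch in Python (ours, not the authors' JAGS implementation; all parameter values are invented): a single drift rate trait varies across participants and feeds the task-specific drift rates, whereas boundary separation and non-decision time are drawn independently for each task.

```python
import numpy as np

rng = np.random.default_rng(2018)

def simulate_trial(drift, boundary, ndt, dt=1e-3, sigma=1.0):
    """One two-choice diffusion trial, simulated as an Euler random walk."""
    x, t = boundary / 2.0, 0.0  # unbiased starting point between boundaries
    while 0.0 < x < boundary:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t + ndt, x >= boundary  # (reaction time, upper boundary reached?)

n_participants, n_tasks = 10, 3
# Hierarchical part: one drift rate trait per participant ...
drift_trait = rng.normal(2.5, 0.6, size=n_participants)

for i in range(n_participants):
    for j in range(n_tasks):
        # ... feeding task-specific drift rates that scatter around the trait,
        drift = drift_trait[i] + rng.normal(0.0, 0.4)
        # while boundary and non-decision time carry no trait structure.
        boundary = rng.uniform(1.0, 2.0)
        ndt = rng.uniform(0.2, 0.4)
        rt, upper = simulate_trial(drift, boundary, ndt)
```

Inverting this generative structure with hierarchical Bayesian estimation is what pools the drift rate information across tasks while leaving the other parameters free.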

Second, both the cognitive model and the structural model were fitted to the data in a single step, allowing an accurate representation of parameter uncertainty in posterior distributions (Vandekerckhove 2014), whereas previous studies relating diffusion model parameters to cognitive ability tests have relied on a two-step process (e.g., Schmiedek et al. 2007; Schmitz and Wilhelm 2016; Schubert et al. 2015).

Third, posterior distributions of model parameters were used to predict cognitive ability test scores from neural and behavioral data in a second independent sample. This is the first study to show that posterior predictives of regression weights relating ERP latencies, behavioral data, and cognitive ability test scores can be used to generalize to another independent sample and to predict a substantial amount of new individuals’ cognitive ability test scores solely on the basis of their electrophysiological and behavioral data. That about 40% of new participants’ variance in intelligence test scores could be predicted by the model demonstrates that individual differences in cortical and behavioral processing speed are closely related to general intelligence, and that both models retained their ability to predict previously unseen data despite their complexity.
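A hedged sketch of this prediction step (again ours; all quantities are hypothetical stand-ins for what the fitted model would supply): given posterior draws of the standardized regression weight and of a new participant's latent neural speed estimated from their ERP data alone, the posterior predictive distribution of their ability score follows by propagating both sources of uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for posterior draws the fitted model would provide:
beta_draws = rng.normal(0.84, 0.04, size=4000)   # regression weight (point
                                                 # estimate .84 as reported)
speed_draws = rng.normal(0.5, 0.2, size=4000)    # new participant's latent
                                                 # neural speed (from ERPs)

# Propagate both uncertainties into a posterior predictive distribution
# of the new participant's standardized cognitive ability score.
ability_draws = beta_draws * speed_draws

point_prediction = ability_draws.mean()
credible_interval = np.percentile(ability_draws, [2.5, 97.5])
print(point_prediction, credible_interval)
```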

The model developed in the present study can easily be adjusted to include different sources of neural data, such as functional magnetic resonance imaging or diffusion tensor imaging data, and to relate these data to diffusion model parameters and cognitive ability tests. Within the same hierarchical framework, parameters of different cognitive models could be related to neural and cognitive abilities data. This would, for example, allow testing hypotheses about the relationship between parameters of working memory processes and neural and cognitive abilities data. The flexibility of the hierarchical Bayesian approach allows specifying model and linking structures directly guided by theoretical assumptions, which in turn allows direct comparisons of competing theories. In related areas of research, the joint modeling of neural and behavioral data has contributed to our understanding of episodes of mind wandering (Mittner et al. 2014; Hawkins et al. 2015), the dynamic inhibitory processes underlying intertemporal choice (Turner et al. 2018), stopping behavior (Sebastian et al. 2018), the role of attention in perceptual decision making (Nunez et al. 2017), the neurocognitive processes contributing to individual differences in mental rotation (van Ravenzwaaij et al. 2017), and the neurocognitive mechanisms underlying several other cognitive processes. All of these fields of research are of great relevance for individual differences research and may contribute to our understanding of the neurocognitive mechanisms underlying general cognitive abilities. In order to relate covariates to joint models of neural and cognitive behavioral data, different linking strategies have been suggested, ranging from simple regression models to multivariate factor-analytical approaches (e.g., Turner et al. 2017; Turner et al. 2017; Ly et al. 2017; de Hollander et al. 2016).

Limitations

One limitation of the present study is that the tasks used to assess individual differences in the efficiency of information processing were so-called elementary cognitive tasks: cognitively relatively undemanding tasks typically used in individual differences research to minimize the influence of strategy use and previous task experience on performance. However, cognitively more demanding tasks might yield a stronger association between the velocity of evidence accumulation and cognitive abilities. Whether drift rates based on performance in more demanding tasks, such as working memory tasks, mediate the association between neural processing speed and cognitive abilities remains an open question. In addition, low error rates may have limited the estimation and interpretation of diffusion model parameters. In particular, drift rate and boundary separation parameters become difficult to identify in tasks with few incorrect responses. Although the diffusion model provided a good account of the behavioral data in all three tasks, drift rate parameters might have reflected participants’ decision times to a larger degree than their evidence accumulation rates.

Conclusions

We used a cognitive latent variable modeling approach to show that a higher neural information processing speed predicted both the velocity of evidence accumulation and general cognitive abilities, and that only a negligible part of the association between neural processing speed and cognitive abilities was mediated by individual differences in the velocity of evidence accumulation. The model demonstrated impressive forecasting abilities by predicting 35 to 40% of the variance in individual cognitive ability test scores in an entirely new sample solely based on their electrophysiological and behavioral data.

Our results illustrate, however, that the assumption of a unidirectional causal cascade, in which a higher neural processing speed facilitates evidence accumulation, which in turn gives rise to advantages in general cognitive abilities, was not supported by the data. This result provides important novel insights for intelligence research, because the strong associations between both neural and behavioral processing speed and cognitive abilities reported in previous studies may have suggested that a greater neural processing speed gives rise to greater cognitive abilities by facilitating the velocity of evidence accumulation (Schmiedek et al. 2007; Schubert et al. 2017). Our results contradict this hypothesis and instead suggest that neural correlates of higher-order information processing and drift rates might reflect the same latent process that is strongly related to general intelligence. Future research will reveal whether structural or functional brain properties act as confounding variables that give rise to the association between mental speed and mental abilities by affecting both the speed of information processing and general cognitive abilities.

Footnotes

  1.

    We fitted another variant of the mediation model, in which reaction times were described by a normal distribution instead of a diffusion model, to evaluate the benefits of diffusion modeling and the generalizability of our results (for details regarding modeling choices and results, see the online repository). The model predicted the same amount of in-sample variance in ERP latencies and intelligence test scores, but was less accurate in predicting reaction time data (75–84% of explained variance in percentiles of the RT distribution). The out-of-sample prediction of both reaction time data and cognitive ability test scores also deteriorated, with R² values ranging from −2.40 to −1.79 for the percentiles of the RT distribution and only 30% of explained variance in cognitive ability test scores. Taken together, these results illustrate the benefits of diffusion modeling and support the notion of a small mediating effect of drift rate, as the predictability of cognitive abilities decreased when drift was not included in the model.

Acknowledgments

The authors thank Gidon T. Frischkorn, Ramesh Srinivasan, and members of the Human Neuroscience Laboratory for their constructive criticism on work related to this manuscript.

Funding Information

This work was supported by the National Science Foundation [No. 1658303] and the G.A.-Lienert-Foundation.

References

  1. Baron, R.M., & Kenny, D.A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173.
  2. Barrouillet, P., Bernardin, S., Camos, V. (2004). Time constraints and resource sharing in adults’ working memory spans. Journal of Experimental Psychology: General, 133(1), 83–100. https://doi.org/10.1037/0096-3445.133.1.83
  3. Basten, U., Hilger, K., Fiebach, C.J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27. https://doi.org/10.1016/j.intell.2015.04.009
  4. Bazana, P.G., & Stelmack, R.M. (2002). Intelligence and information processing during an auditory discrimination task with backward masking: An event-related potential analysis. Journal of Personality and Social Psychology, 83(4), 998–1008.
  5. Bentler, P.M., & Chou, C.-P. (1987). Practical issues in structural modeling. Sociological Methods & Research, 16(1), 78–117. https://doi.org/10.1177/0049124187016001004
  6. Boehm, U., Marsman, M., Matzke, D., Wagenmakers, E.-J. (2018). On the importance of avoiding shortcuts in applying cognitive models to hierarchical data. Behavior Research Methods. https://doi.org/10.3758/s13428-018-1054-3
  7. Cassidy, S.M., Robertson, I.H., O’Connell, R.G. (2012). Retest reliability of event-related potentials: Evidence from a variety of paradigms. Psychophysiology, 49(5), 659–664. https://doi.org/10.1111/j.1469-8986.2011.01349.x
  8. Conway, A.R., Cowan, N., Bunting, M.F., Therriault, D.J., Minkoff, S.R. (2002). A latent variable analysis of working memory capacity, short-term memory capacity, processing speed, and general fluid intelligence. Intelligence, 30(2), 163–183. https://doi.org/10.1016/S0160-2896(01)00096-4
  9. Dai, T., & Guo, Y. (2017). Predicting individual brain functional connectivity using a Bayesian hierarchical model. NeuroImage, 147, 772–787. https://doi.org/10.1016/j.neuroimage.2016.11.048
  10. Deary, I. (2008). Why do intelligent people live longer? Nature, 456(7219), 175–176. https://doi.org/10.1038/456175a
  11. de Hollander, G., Forstmann, B.U., Brown, S.D. (2016). Different ways of linking behavioral and neural data via computational cognitive models. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 1(2), 101–109. https://doi.org/10.1016/j.bpsc.2015.11.004
  12. Der, G., Batty, G.D., Deary, I.J. (2009). The association between IQ in adolescence and a range of health outcomes at 40 in the 1979 US National Longitudinal Study of Youth. Intelligence, 37(6), 573–580. https://doi.org/10.1016/j.intell.2008.12.002
  13. Downar, J., Crawley, A.P., Mikulis, D.J., Davis, K.D. (2002). A cortical network sensitive to stimulus salience in a neutral behavioral context across multiple sensory modalities. Journal of Neurophysiology, 87(1), 615–620. https://doi.org/10.1152/jn.00636.2001
  14. Engle, R.W., Tuholski, S.W., Laughlin, J.E., Conway, A.R. (1999). Working memory, short-term memory, and general fluid intelligence: A latent-variable approach. Journal of Experimental Psychology: General, 128(3), 309–331.
  15. Forstmann, B.U., Wagenmakers, E.-J., Eichele, T., Brown, S., Serences, J.T. (2011). Reciprocal relations between cognitive neuroscience and formal cognitive models: Opposites attract? Trends in Cognitive Sciences, 15(6), 272–279. https://doi.org/10.1016/j.tics.2011.04.002
  16. Frischkorn, G.T., & Schubert, A.-L. (2018). Cognitive models in intelligence research: Advantages and recommendations for their application. Journal of Intelligence, 6(3), 1–22. https://doi.org/10.3390/jintelligence6030034
  17. Gelman, A., & Rubin, D.B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7(4), 457–472. https://doi.org/10.1214/ss/1177011136
  18. Gratton, G., Coles, M.G., Donchin, E. (1983). A new method for off-line removal of ocular artifact. Electroencephalography and Clinical Neurophysiology, 55(4), 468–484. https://doi.org/10.1016/0013-4694(83)90135-9
  19. Hawkins, G.E., Mittner, M., Boekel, W., Heathcote, A., Forstmann, B.U. (2015). Toward a model-based cognitive neuroscience of mind wandering. Neuroscience, 310, 290–305. https://doi.org/10.1016/j.neuroscience.2015.09.053
  20. Hick, W.E. (1952). On the rate of gain of information. Quarterly Journal of Experimental Psychology, 4(1), 11–26. https://doi.org/10.1080/17470215208416600
  21. Hilger, K., Ekman, M., Fiebach, C.J., Basten, U. (2017). Efficient hubs in the intelligent brain: Nodal efficiency of hub regions in the salience network is associated with general intelligence. Intelligence, 60(Supplement C), 10–25. https://doi.org/10.1016/j.intell.2016.11.001
  22. Jäger, A.O., & Süß, H.M. (1997). Berliner Intelligenzstruktur-Test, Form 4. Göttingen: Hogrefe.
  23. Jung, R.E., & Haier, R.J. (2007). The parieto-frontal integration theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–154. https://doi.org/10.1017/S0140525X07001185
  24. Kass, R.E., & Raftery, A.E. (1995). Bayes factors. Journal of the American Statistical Association, 90(430), 773–795. https://doi.org/10.1080/01621459.1995.10476572
  25. Kelly, S.P., & O’Connell, R.G. (2013). Internal and external influences on the rate of sensory evidence accumulation in the human brain. Journal of Neuroscience, 33(50), 19434–19441. https://doi.org/10.1523/JNEUROSCI.3355-13.2013
  26. Kievit, R.A., Davis, S.W., Griffiths, J., Correia, M.M., Cam-CAN, Henson, R.N. (2016). A watershed model of individual differences in fluid intelligence. Neuropsychologia, 91, 186–198. https://doi.org/10.1016/j.neuropsychologia.2016.08.008
  27. Kovacs, K., & Conway, A.R.A. (2016). Process overlap theory: A unified account of the general factor of intelligence. Psychological Inquiry, 27(3), 151–177. https://doi.org/10.1080/1047840X.2016.1153946
  28. Kretzschmar, A., Spengler, M., Schubert, A.-L., Steinmayr, R., Ziegler, M. (2018). The relation of personality and intelligence – What can the Brunswik symmetry principle tell us? Journal of Intelligence, 6(3), 1–38. https://doi.org/10.3390/jintelligence6030030
  29. Kyllonen, P.C., & Christal, R.E. (1990). Reasoning ability is (little more than) working-memory capacity?! Intelligence, 14(4), 389–433. https://doi.org/10.1016/S0160-2896(05)80012-1
  30. Kyllonen, P.C., & Zu, J. (2016). Use of response time for measuring cognitive ability. Journal of Intelligence, 4(4), 1–29. https://doi.org/10.3390/jintelligence4040014
  31. Lee, M.D. (2011). How cognitive modeling can benefit from hierarchical Bayesian models. Journal of Mathematical Psychology, 55(1), 1–7. https://doi.org/10.1016/j.jmp.2010.08.013
  32. Lee, M.D., & Wagenmakers, E.-J. (2014). Bayesian cognitive modeling: A practical course. Cambridge: Cambridge University Press.
  33. Lee, S.Y., & Song, X.Y. (2004). Evaluation of the Bayesian and maximum likelihood approaches in analyzing structural equation models with small sample sizes. Multivariate Behavioral Research, 39(4), 653–686. https://doi.org/10.1207/s15327906mbr3904_4
  34. Levy, R., & Choi, J. (2013). Bayesian structural equation modeling. In Hancock, G.R., & Mueller, R.O. (Eds.), Structural equation modeling: A second course (pp. 563–623). Charlotte, NC: Information Age.
  35. Ly, A., Boehm, U., Heathcote, A., Turner, B.M., Forstmann, B., Marsman, M., Matzke, D. (2017). A flexible and efficient hierarchical Bayesian approach to the exploration of individual differences in cognitive-model-based neuroscience. In Computational models of brain and behavior (pp. 467–479). Wiley-Blackwell. https://doi.org/10.1002/9781119159193.ch34
  36. Marsh, H.W., Hau, K.T., Balla, J.R., Grayson, D. (1998). Is more ever too much? The number of indicators per factor in confirmatory factor analysis. Multivariate Behavioral Research, 33(2), 181–220. https://doi.org/10.1207/s15327906mbr3302_1
  37. McGarry-Roberts, P.A., Stelmack, R.M., Campbell, K.B. (1992). Intelligence, reaction time, and event-related potentials. Intelligence, 16(3–4), 289–313. https://doi.org/10.1016/0160-2896(92)90011-F
  38. Mejia, A.F., Nebel, M.B., Barber, A.D., Choe, A.S., Pekar, J.J., Caffo, B.S., Lindquist, M.A. (2018). Improved estimation of subject-level functional connectivity using full and partial correlation with empirical Bayes shrinkage. NeuroImage, 172, 478–491. https://doi.org/10.1016/j.neuroimage.2018.01.029
  39. Menon, V., & Uddin, L.Q. (2010). Saliency, switching, attention and control: A network model of insula function. Brain Structure and Function, 214(5), 655–667. https://doi.org/10.1007/s00429-010-0262-0
  40. Merkle, E., & Rosseel, Y. (2018). blavaan: Bayesian structural equation models via parameter expansion. Journal of Statistical Software, 85(4), 1–30. https://doi.org/10.18637/jss.v085.i04
  41. Mittner, M., Boekel, W., Tucker, A.M., Turner, B.M., Heathcote, A., Forstmann, B.U. (2014). When the brain takes a break: A model-based analysis of mind wandering. Journal of Neuroscience, 34(49), 16286–16295. https://doi.org/10.1523/JNEUROSCI.2062-14.2014
  42. Neubauer, A.C., & Fink, A. (2009). Intelligence and neural efficiency. Neuroscience & Biobehavioral Reviews, 33(7), 1004–1023. https://doi.org/10.1016/j.neubiorev.2009.04.001
  43. Nikolaev, B., & McGee, J.J. (2016). Relative verbal intelligence and happiness. Intelligence, 59, 1–7. https://doi.org/10.1016/j.intell.2016.09.002
  44. Nunez, M.D., Srinivasan, R., Vandekerckhove, J. (2015). Individual differences in attention influence perceptual decision making. Frontiers in Psychology, 8, 18. https://doi.org/10.3389/fpsyg.2015.00018
  45. Nunez, M.D., Vandekerckhove, J., Srinivasan, R. (2017). How attention influences perceptual decision making: Single-trial EEG correlates of drift-diffusion model parameters. Journal of Mathematical Psychology, 76, 117–130. https://doi.org/10.1016/j.jmp.2016.03.003
  46. Oberauer, K., Lewandowsky, S., Farrell, S., Jarrold, C., Greaves, M. (2012). Modeling working memory: An interference model of complex span. Psychonomic Bulletin & Review, 19(5), 779–819. https://doi.org/10.3758/s13423-012-0272-4
  47. O’Connell, R.G., Dockree, P.M., Kelly, S.P. (2012). A supramodal accumulation-to-bound signal that determines perceptual decisions in humans. Nature Neuroscience, 15(12), 1729–1735. https://doi.org/10.1038/nn.3248
  48. Palmeri, T.J., Love, B.C., Turner, B.M. (2017). Model-based cognitive neuroscience. Journal of Mathematical Psychology. https://doi.org/10.1016/j.jmp.2016.10.010
  49. Penke, L., Maniega, S.M., Bastin, M.E., Valdes Hernandez, M.C., Murray, C., Royle, N.A., Deary, I.J. (2012). Brain white matter tract integrity as a neural foundation for general intelligence. Molecular Psychiatry, 17(10), 1026–1030. https://doi.org/10.1038/mp.2012.66
  50. Pesta, B.J., McDaniel, M.A., Bertsch, S. (2010). Toward an index of well-being for the fifty U.S. states. Intelligence, 38(1), 160–168. https://doi.org/10.1016/j.intell.2009.09.006
  51. Plummer, M. (2003). JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. In Proceedings of the 3rd International Workshop on Distributed Statistical Computing.
  52. Polich, J. (2007). Updating P300: An integrative theory of P3a and P3b. Clinical Neurophysiology, 118(10), 2128–2148. https://doi.org/10.1016/j.clinph.2007.04.019
  53. Ratcliff, R. (1978). A theory of memory retrieval. Psychological Review, 85(2), 59–108.
  54. Ratcliff, R., & McKoon, G. (2008). The diffusion decision model: Theory and data for two-choice decision tasks. Neural Computation, 20(4), 873–922. https://doi.org/10.1162/neco.2008.12-06-420
  55. Ratcliff, R., Philiastides, M.G., Sajda, P. (2009). Quality of evidence for perceptual decision making is indexed by trial-to-trial variability of the EEG. Proceedings of the National Academy of Sciences, 106(16), 6539–6544. https://doi.org/10.1073/pnas.0812589106
  56. Ratcliff, R., Sederberg, P.B., Smith, T.A., Childers, R. (2016). A single trial analysis of EEG in recognition memory: Tracking the neural correlates of memory strength. Neuropsychologia, 93(Pt A), 128–141. https://doi.org/10.1016/j.neuropsychologia.2016.09.026
  57. Ratcliff, R., Thapar, A., McKoon, G. (2010). Individual differences, aging, and IQ in two-choice tasks. Cognitive Psychology, 60(3), 127–157. https://doi.org/10.1016/j.cogpsych.2009.09.001
  58. Ratcliff, R., Thapar, A., McKoon, G. (2011). Effects of aging and IQ on item and associative memory. Journal of Experimental Psychology: General, 140(3), 464–487. https://doi.org/10.1037/a0023810
  59. Raven, J.C., Court, J.H., Raven, J. (1994). Manual for Raven’s Progressive Matrices and Mill Hill Vocabulary Scales: Advanced Progressive Matrices. Oxford: Oxford University Press.
  60. Ross, S.M. (2014). Introduction to probability models. New York: Academic Press.
  61. Schmidt, F.L., & Hunter, J. (2004). General mental ability in the world of work: Occupational attainment and job performance. In Work and organisational psychology: Research methodology; assessment and selection; organisational change and development; human resource and performance management; emerging trends: Innovation/globalisation/technology (pp. 35–58). Sage Publications.
  62. Schmiedek, F., Oberauer, K., Wilhelm, O., Süß, H.-M., Wittmann, W.W. (2007). Individual differences in components of reaction time distributions and their relations to working memory and intelligence. Journal of Experimental Psychology: General, 136(3), 414–429. https://doi.org/10.1037/0096-3445.136.3.414
  63. Schmitz, F., & Wilhelm, O. (2016). Modeling mental speed: Decomposing response time distributions in elementary cognitive tasks and correlations with working memory capacity and fluid intelligence. Journal of Intelligence, 4(13), 1–23. https://doi.org/10.3390/jintelligence4040013
  64. Schubert, A.-L., Frischkorn, G.T., Hagemann, D., Voss, A. (2016). Trait characteristics of diffusion model parameters. Journal of Intelligence, 4(7), 1–22. https://doi.org/10.3390/jintelligence4030007
  65. Schubert, A.-L., Hagemann, D., Frischkorn, G.T. (2017). Is general intelligence little more than the speed of higher-order processing? Journal of Experimental Psychology: General, 146(10), 1498–1512. https://doi.org/10.1037/xge0000325
  66. Schubert, A.-L., Hagemann, D., Frischkorn, G.T., Herpertz, S.C. (2018). Faster, but not smarter: An experimental analysis of the relationship between mental speed and mental abilities. Intelligence, 71, 66–75. https://doi.org/10.1016/j.intell.2018.10.005
  67. Schubert, A.-L., Hagemann, D., Voss, A., Schankin, A., Bergmann, K. (2015). Decomposing the relationship between mental speed and mental abilities. Intelligence, 51, 28–46. https://doi.org/10.1016/j.intell.2015.05.002
  68. Sebastian, A., Forstmann, B.U., Matzke, D. (2018). Towards a model-based cognitive neuroscience of stopping – a neuroimaging perspective. Neuroscience & Biobehavioral Reviews, 90, 130–136. https://doi.org/10.1016/j.neubiorev.2018.04.011
  69. Seeley, W.W., Menon, V., Schatzberg, A.F., Keller, J., Glover, G.H., Kenna, H., Greicius, M.D. (2007). Dissociable intrinsic connectivity networks for salience processing and executive control. Journal of Neuroscience, 27(9), 2349–2356. https://doi.org/10.1523/JNEUROSCI.5587-06.2007
  70. Sheppard, L.D., & Vernon, P.A. (2008). Intelligence and speed of information-processing: A review of 50 years of research. Personality and Individual Differences, 44(3), 535–551. https://doi.org/10.1016/j.paid.2007.09.015
  71. Shiffrin, R.M., Lee, M.D., Kim, W., Wagenmakers, E.-J. (2008). A survey of model evaluation approaches with a tutorial on hierarchical Bayesian methods. Cognitive Science, 32(8), 1248–1284. https://doi.org/10.1080/03640210802414826
  72. Shou, H., Eloyan, A., Nebel, M.B., Mejia, A., Pekar, J.J., Mostofsky, S., Crainiceanu, C.M. (2014). Shrinkage prediction of seed-voxel brain connectivity using resting state fMRI. NeuroImage, 102, 938–944. https://doi.org/10.1016/j.neuroimage.2014.05.043
  73. Soltani, M., & Knight, R.T. (2000). Neural origins of the P300. Critical Reviews in Neurobiology, 14(3–4), 199–224.
  74. Spiegelhalter, D.J., Best, N.G., Carlin, B.P., van der Linde, A. (2014). The deviance information criterion: 12 years on. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(3), 485–493. https://doi.org/10.1111/rssb.12062
  75. Stone, M. (1960). Models for choice-reaction time. Psychometrika, 25(3), 251–260.
  76. Troche, S.J., Houlihan, M.E., Stelmack, R.M., Rammsayer, T.H. (2009). Mental ability, P300, and mismatch negativity: Analysis of frequency and duration discrimination. Intelligence, 37(4), 365–373. https://doi.org/10.1016/j.intell.2009.03.002
  77. Troche, S.J., Indermühle, R., Leuthold, H., Rammsayer, T.H. (2015). Intelligence and the psychological refractory period: A lateralized readiness potential study. Intelligence, 53, 138–144. https://doi.org/10.1016/j.intell.2015.10.003
  78. Turner, B.M., Forstmann, B.U., Love, B.C., Palmeri, T.J., van Maanen, L. (2017). Approaches to analysis in model-based cognitive neuroscience. Journal of Mathematical Psychology. https://doi.org/10.1016/j.jmp.2016.01.001
  79. Turner, B.M., Rodriguez, C.A., Liu, Q., Molloy, M.F., Hoogendijk, M., McClure, S.M. (2018). On the neural and mechanistic bases of self-control. Cerebral Cortex. https://doi.org/10.1093/cercor/bhx355
  80. Turner, B.M., Wang, T., Merkle, E.C. (2017). Factor analysis linking functions for simultaneously modeling neural and behavioral data. NeuroImage, 153, 28–48. https://doi.org/10.1016/j.neuroimage.2017.03.044
  81. Unsworth, N., Fukuda, K., Awh, E., Vogel, E.K. (2014). Working memory and fluid intelligence: Capacity, attention control, and secondary memory retrieval. Cognitive Psychology, 71, 1–26. https://doi.org/10.1016/j.cogpsych.2014.01.003
  82. Vandekerckhove, J. (2014). A cognitive latent variable model for the simultaneous analysis of behavioral and personality data. Journal of Mathematical Psychology, 60, 58–71. https://doi.org/10.1016/j.jmp.2014.06.004
  83. Vandekerckhove, J., Tuerlinckx, F., Lee, M.D. (2011). Hierarchical diffusion models for two-choice response times. Psychological Methods, 16(1), 44–62. https://doi.org/10.1037/a0021765
  84. van der Maas, H.L.J., Molenaar, D., Maris, G., Kievit, R.A., Borsboom, D. (2011). Cognitive psychology meets psychometric theory: On the relation between process models for decision making and latent variable models for individual differences. Psychological Review, 118(2), 339–356.
  85. van Ravenzwaaij, D., Brown, S., Wagenmakers, E.-J. (2011). An integrated perspective on the relation between response speed and intelligence. Cognition, 119(3), 381–393. https://doi.org/10.1016/j.cognition.2011.02.002
  86. van Ravenzwaaij, D., Provost, A., Brown, S.D. (2017). A confirmatory approach for integrating neural and behavioral data into a single model. Journal of Mathematical Psychology, 76, 131–141. https://doi.org/10.1016/j.jmp.2016.04.005
  87. Voss, A., Rothermund, K., Voss, J. (2004). Interpreting the parameters of the diffusion model: An empirical validation. Memory & Cognition, 32(7), 1206–1220.
  88. Wabersich, D., & Vandekerckhove, J. (2014). Extending JAGS: A tutorial on adding custom distributions to JAGS (with a diffusion model example). Behavior Research Methods, 46(1), 15–28. https://doi.org/10.3758/s13428-013-0369-3
  89. Westland, J.C. (2010). Lower bounds on sample size in structural equation modeling. Electronic Commerce Research and Applications, 9(6), 476–487. https://doi.org/10.1016/j.elerap.2010.07.003
  90. Wongupparaj, P., Kumari, V., Morris, R.G. (2015). The relation between a multicomponent working memory and intelligence: The roles of central executive and short-term storage functions. Intelligence, 53(Supplement C), 166–180. https://doi.org/10.1016/j.intell.2015.10.007
  91. Yap, M.J., Balota, D.A., Sibley, D.E., Ratcliff, R. (2012). Individual differences in visual word recognition: Insights from the English Lexicon Project. Journal of Experimental Psychology: Human Perception and Performance, 38(1), 53–79. https://doi.org/10.1037/a0024177

Copyright information

© Society for Mathematical Psychology 2018

Authors and Affiliations

  1. Institute of Psychology, Heidelberg University, Heidelberg, Germany
  2. Department of Cognitive Sciences, University of California, Irvine, USA
  3. Department of Biomedical Engineering, University of California, Irvine, USA
  4. Department of Statistics, University of California, Irvine, USA
  5. Institute of Mathematical Behavioral Sciences, University of California, Irvine, USA
