
Psychonomic Bulletin & Review, Volume 25, Issue 6, pp 2200–2207

Eye-movement patterns in face recognition are associated with cognitive decline in older adults

  • Cynthia Y. H. Chan
  • Antoni B. Chan
  • Tatia M. C. Lee
  • Janet H. Hsiao
Brief Report

Abstract

The Hidden Markov Modeling approach for eye-movement data analysis is able to quantitatively assess differences and similarities among individual patterns. Here we applied this approach to examine the relationship between eye-movement patterns in face recognition and age-related cognitive decline. We found that significantly more older than young adults adopted “holistic” patterns, in which most eye fixations landed around the face center, as opposed to “analytic” patterns, in which eye movements switched among the two eyes and the face center. Participants showing analytic patterns had better recognition performance than those with holistic patterns, regardless of age. Interestingly, in older adults, lower cognitive status (as assessed by the Montreal Cognitive Assessment), particularly in executive and visual attention functioning (as assessed by the Tower of London and Trail Making Tests), was associated with a higher likelihood of holistic patterns. This result suggests the possibility of using eye movements as an easily deployable screening assessment for cognitive decline in older adults.

Keywords

Eye movement · Aging · Face recognition · Cognitive ability · Hidden Markov Model (HMM)

Introduction

Aging is associated with declines in cognitive functioning that can significantly impair quality of life. With the rapidly growing aging population, early identification of cognitive decline has become an urgent issue. Current methods are typically based on neuroimaging technologies (Sperling et al., 2011), which are expensive, expertise-demanding, time-consuming, and not widely available. In contrast, eye movements have been shown to reflect the mental processes underlying cognition (e.g., Henderson, Williams, & Falk, 2005; Rayner, 1979) and can potentially serve as an easily deployable, low-cost screening method. Indeed, recent research suggests that executive and visuospatial function deficits in neurodegenerative disorders such as mild cognitive impairment (MCI) and Alzheimer’s disease (AD) may emerge earlier than memory impairment, and these deficits are likely to be reflected in eye-movement patterns (Alichniewicz, Brunner, Klünemann, & Greenlee, 2013; Pereira, Camargo, Aprahamian, & Forlenza, 2014). Consistent with this finding, AD patients were reported to make less efficient and less exploratory eye movements in cognitive tasks than age-matched healthy controls (Mosimann, Felblinger, Ballinari, Hess, & Müri, 2004). Thus, eye-movement pattern analysis may be important for detecting the early presence of cognitive impairment in the course of MCI or AD.

Nevertheless, there is currently limited research on the relationship between eye-movement patterns and cognitive abilities in older adults. This may be due to limitations of the eye-movement analysis methods available. Most methods focus on group-level analysis of fixation locations, such as using predefined regions of interest (ROIs; Henderson et al., 2005) or fixation heat maps (Caldara & Miellet, 2011). These methods fall short of capturing individual differences in eye movements in the spatial (e.g., person-specific ROIs) and temporal (e.g., transitions between ROIs) dimensions.

Indeed, recent studies have demonstrated substantial individual differences in eye-movement patterns in cognitive tasks. For example, Peterson and Eckstein (2013) reported that individuals had different preferred fixation locations when viewing human faces, and that these preferences persisted over time. Kanan, Bseiso, Ray, Hsiao, and Cottrell (2015) reported that participants’ identities could be inferred from their eye movements in various face judgment tasks using machine learning algorithms. These idiosyncratic eye-movement patterns may be indicative of individual differences in underlying mental processes.

To capture individual differences in eye-movement analysis, Chuk, Chan, and Hsiao (2014) proposed an approach based on the Hidden Markov Model (HMM, a type of time-series model in machine learning), in which each participant’s eye-movement pattern is summarized with an HMM. The approach rests on the assumption that the current eye fixation is conditioned on previous fixations; thus, eye movements may be considered a Markovian stochastic process, which can be better understood using HMMs. An HMM contains a number of hidden states, each representing an ROI on the stimulus. A participant’s eye movements among the ROIs are summarized in a transition probability matrix. Because the HMM parameters are estimated from the participant’s eye-movement data, this approach discovers person-specific ROIs, and the transitions among them, in a data-driven fashion. In addition, participants’ HMMs can be clustered according to their similarities to discover common patterns. The degree to which an individual’s pattern resembles a common pattern can be calculated as the likelihood of his/her eye-movement data being generated by the common-pattern HMM. This likelihood reflects individual differences along a continuum and can be used to discover relationships between eye-movement patterns and other cognitive measures.
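The generative structure just described can be sketched in a few lines of Python. The sketch below is illustrative only: the ROI means, covariances, prior, and transition matrix are invented values (the actual approach estimates them from each participant's data), and the forward algorithm computes the log-likelihood of a fixation sequence under the model.

```python
import math

# Hypothetical 2-ROI model (all values invented for illustration):
# ROI means (x, y in pixels), diagonal covariances, a prior over the
# starting ROI, and a transition probability matrix.
rois = [
    {"mean": (256, 230), "var": (400.0, 400.0)},   # around the face centre
    {"mean": (200, 160), "var": (300.0, 300.0)},   # around the left eye
]
prior = [0.7, 0.3]                 # P(first fixation in each ROI)
trans = [[0.8, 0.2],               # P(next ROI | current ROI)
         [0.4, 0.6]]

def gauss2d(fix, roi):
    """Density of a fixation under a 2D Gaussian ROI (diagonal covariance)."""
    (x, y), (mx, my), (vx, vy) = fix, roi["mean"], roi["var"]
    return (math.exp(-0.5 * ((x - mx) ** 2 / vx + (y - my) ** 2 / vy))
            / (2 * math.pi * math.sqrt(vx * vy)))

def log_likelihood(fixations):
    """Forward algorithm: log P(fixation sequence | HMM)."""
    alpha = [prior[k] * gauss2d(fixations[0], rois[k]) for k in range(len(rois))]
    for fix in fixations[1:]:
        alpha = [sum(alpha[j] * trans[j][k] for j in range(len(rois)))
                 * gauss2d(fix, rois[k]) for k in range(len(rois))]
    return math.log(sum(alpha))

# Log-likelihood of a short scan path: centre -> eye -> centre
print(log_likelihood([(255, 228), (205, 163), (250, 232)]))
```

Comparing such log-likelihoods across candidate models is exactly how the approach quantifies how well a common pattern explains an individual's data.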

Using this method, Chuk et al. (2014) discovered two common eye-movement patterns in face recognition: the holistic pattern, in which participants’ ROIs clustered around the vertical face midline and fixations most frequently landed at the face center (i.e., the nose), and the analytic pattern, in which participants’ ROIs were more spread out, located on individual facial features, and fixations transitioned among the ROIs with similar frequencies (Fig. 1). They also found that more young adults exhibited analytic than holistic patterns, and that participants who adopted analytic patterns outperformed those with holistic patterns (Chuk, Chan, & Hsiao, in press). This result demonstrates the power of the HMM method for discovering links between eye movements and task performance.
Fig. 1

The representative holistic (top) and analytic (bottom) patterns discovered by Chuk et al. (2014) among young Asian adults. The background image shows the average face of the face images used in the experiment

Thus, here we applied this HMM approach to examine the link between eye movements and cognitive decline in aging. Compared with young adults, older adults have been found to show worse performance (Lamont, Stewart-Williams, & Podd, 2005) and different eye-movement patterns (Firestone, Turk-Browne, & Ryan, 2007) in face recognition. This difference in eye-movement patterns may be related to age-related cognitive decline, which typically involves impaired executive and visuospatial functions supported by the prefrontal and parietal cortex (Pereira et al., 2014; Zhang, Han, Verhaeghen, & Nilsson, 2007). Here we recruited healthy older adults to perform a face recognition memory task and examined how their eye movements differed from those of the young adults reported in Chuk et al. (2014). We modeled each participant’s eye movements with an HMM and clustered the HMMs according to their similarities to discover patterns specific to older adults. We then examined the association between individual patterns and cognitive performance through the likelihood analysis. We hypothesized that eye-movement patterns specific to older adults would be related to their poorer face recognition performance and cognitive ability.

Experiment 1

Method

Participants

Forty older adults (age 65–80 years; six did not finish the experiment and were excluded from the analysis, leaving 34: M = 69.9 years, SD = 3.98) and 34 young adults (age 18–24 years; M = 20.3, SD = 1.71; 32 young adults’ data were from Chuk et al.’s 2014 study), all Asian, were recruited. All were right-handed and had normal or corrected-to-normal vision. All older participants had normal cognitive functioning as assessed by the Montreal Cognitive Assessment (MoCA), a brief cognitive screening tool for MCI (≥ 22 out of 30; Yeung et al., 2014; M = 26.85, SD = 1.74). The research protocol was approved by the Ethics Review Board at the University of Hong Kong.

Materials and apparatus

Following Chuk et al. (2014), 40 frontal-view, gray-scale young Asian face images (20 female; age 18–30 years) with neutral expressions and standardized distances between the eyes and the mouth were used. They were cropped according to the face shape, without hair (Fig. 1). Each face image subtended about 8° of visual angle vertically and 6° horizontally on the screen at a 60-cm viewing distance, approximately equivalent to a 14-cm-wide real face viewed from a distance of 133 cm. Eye movements were recorded with an EyeLink 1000 (SR Research).

Procedure

Participants sat in front of a computer with their head resting on a chinrest. A 9-point calibration procedure was conducted prior to the experiment. The right eye (or the left eye, if the right eye’s tracking error exceeded 1° of visual angle during calibration) was tracked throughout the experiment, using pupil-with-corneal-reflection tracking. In the study phase, participants viewed 20 face images one at a time, for 5 s each. In the test phase, participants were shown 20 old and 20 new face images one at a time and judged whether they had seen each face in the study phase. In each trial, after a 500-ms central fixation cross, the face was displayed randomly at either the upper or the lower center of the screen.

Eye movement data analysis

The Eye Movement analysis with Hidden Markov Models (EMHMM) toolbox (available at http://visal.cs.cityu.edu.hk/research/emhmm/) was used to analyze the eye-movement data. A participant’s eye movements during the test phase were modeled by an HMM. The (hidden) states of the HMM represent regions of interest (ROIs) on the stimulus, and fixations within an ROI are distributed according to a 2D Gaussian. The model assumes that the ROI of the next fixation depends solely on the current ROI; the probabilities of fixations transiting among the ROIs are summarized in a transition matrix, and a vector of priors represents the probability that the first fixation lands in each ROI (Fig. 1).

Following previous studies (Chuk et al., 2014, 2017), we used the first three fixations in each trial in our data analysis, since these early fixations were associated with face recognition performance whereas later fixations were not (Chuk et al., 2017).1 We used a variational Bayesian approach to simultaneously estimate the HMM parameters, including the Gaussian ROIs, the transition matrix, and the vector of priors (for details, see Chuk et al., 2014). To determine the optimal number of ROIs for each participant, we trained six HMMs with the number of ROIs ranging from one to six and selected the HMM with the highest log-likelihood of the data. Each HMM was trained 100 times with different initial Gaussian ROIs to avoid convergence to a local maximum.
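The model-selection step described above amounts to comparing log-likelihoods across candidate models and keeping the best. A minimal sketch, with invented log-likelihood values standing in for the variationally estimated HMMs:

```python
# Hypothetical log-likelihoods of one participant's fixation data under HMMs
# trained with one to six ROIs (values invented for illustration; the EMHMM
# toolbox obtains each via variational Bayesian estimation, retraining 100
# times from random initializations).
loglik_by_n_rois = {1: -1802.3, 2: -1714.6, 3: -1688.1,
                    4: -1690.5, 5: -1695.2, 6: -1701.8}

# The optimal number of ROIs is the one yielding the highest log-likelihood.
best_n = max(loglik_by_n_rois, key=loglik_by_n_rois.get)
print(best_n)  # 3
```

Here three ROIs wins, which matches the modal choice among participants reported in the next section.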

To discover common eye-movement patterns among participants, we clustered all the young and older participants’ HMMs into subgroups based on their similarities using the variational hierarchical EM algorithm (Coviello, Chan, & Lanckriet, 2014), and produced a representative HMM for each subgroup. Since most participants’ HMMs resulted in three hidden states, we set the number of hidden states of the representative HMMs to three in the algorithm. We ran the algorithm 100 times and chose the clustering output with the highest expected log-likelihood. A representative holistic HMM and a representative analytic HMM were generated in the current study. To quantify the degree to which an individual’s eye-movement pattern resembles the representative holistic and analytic HMMs, we computed an H-A scale for each participant: the difference between the log-likelihoods of the individual’s eye-movement data under the representative holistic and analytic HMMs, divided by the sum of their absolute values:
$$ \text{H-A scale} = \frac{\text{Holistic log-likelihood} - \text{Analytic log-likelihood}}{\left|\text{Holistic log-likelihood}\right| + \left|\text{Analytic log-likelihood}\right|} $$

A positive value indicates a more holistic pattern; a negative value indicates a more analytic pattern. We then examined correlations between the H-A scale and cognitive measures.
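As a worked example of the formula, with invented log-likelihood values for one hypothetical participant:

```python
def ha_scale(holistic_ll, analytic_ll):
    """H-A scale: difference of the two log-likelihoods normalized by the
    sum of their absolute values. Positive -> more holistic pattern;
    negative -> more analytic pattern."""
    return (holistic_ll - analytic_ll) / (abs(holistic_ll) + abs(analytic_ll))

# Invented values: the data are slightly better explained by the holistic HMM
# (a less negative log-likelihood), so the scale comes out positive.
print(ha_scale(-1520.4, -1587.9))
```

Because the normalization keeps the value in a fixed range, H-A scales are comparable across participants and can be correlated with cognitive measures directly.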

Results

For the eye-movement data, Fig. 2a shows the representative HMMs of young and older adults when grouped by age, whereas Fig. 2b shows the representative HMMs of the two common patterns discovered by clustering according to HMM similarities. The pattern in Fig. 2b (top) resembled the holistic pattern: a scan path typically started at the nose/mouth region and then stayed around that region. The other pattern resembled the analytic pattern: a scan path typically started at the vertical midline and then transited with similar probabilities among the midline, the left eye, and the right eye (i.e., more frequent gaze transitions between the eyes; Chuk et al., 2014; Chuk et al., in press). The two representative HMMs differed significantly from each other based on Kullback-Leibler divergence estimation (using data from holistic HMMs, t(32) = 7.37, p < .001, d = .74; using data from analytic HMMs, t(34) = 8.02, p < .001, d = 1.18; Chuk et al., 2014). This can be visualized in the difference map in Fig. 2b (bottom), which shows clear spatial separation on facial features between the two patterns, in contrast to the difference map between the two age groups (Fig. 2a, bottom). This was because both age groups contained participants with holistic and analytic patterns (Table 1). A chi-square test showed that significantly more older adults exhibited holistic patterns and more young adults exhibited analytic patterns, χ2(1) = 4.77, p = .029, V = .27. This difference was not clearly visible in the age-group heat maps (Fig. 2a), demonstrating the power of the eye-movement HMM (EMHMM) approach in capturing individual differences.
Fig. 2

The representative hidden Markov models (HMMs) of (a) young and older adults’ eye-movement patterns and (b) the holistic and analytic patterns discovered through clustering according to similarities. Each representative HMM included three regions of interest (ROIs), indicated by different colors. The smaller images show the assignment of actual fixations to the ROIs and the corresponding heat map. The transition probabilities of eye movements among the ROIs are summarized in the transition matrix; the priors represent the probability that the first fixation in a trial lands in each ROI. The difference heat map between the two patterns is shown at the bottom. (c) Examples of older adults’ individual HMMs along the H-A scale

Table 1

Numbers of young and older participants classified as having holistic and analytic patterns using eye-movement Hidden Markov Model (EMHMM), and their face recognition performance and eye-movement behavior

  

|  | Holistic | Analytic |
| --- | --- | --- |
| **No. of participants** (*n*) |  |  |
| Total | 33 | 35 |
| Young | 12 | 22 |
| Old | 21 | 13 |
| **Face recognition performance (d′)**, *M* (*SD*) |  |  |
| Total | 1.04 (0.62) | 1.47 (0.66) |
| Young | 1.19 (0.85) | 1.64 (0.64) |
| Old | 0.95 (0.45) | 1.18 (0.63) |
| **Reaction time (s)**, *M* (*SD*) |  |  |
| Total | 1.94 (0.65) | 1.84 (0.49) |
| Young | 1.58 (0.63) | 1.71 (0.52) |
| Old | 2.15 (0.57) | 2.04 (0.36) |
| **No. of fixations per trial**, *M* (*SD*) |  |  |
| Total | 5.56 (1.81) | 5.57 (1.95) |
| Young | 4.59 (1.24) | 5.32 (2.21) |
| Old | 6.11 (1.87) | 5.98 (1.40) |
| **Mean saccade length (degrees)**, *M* (*SD*) |  |  |
| Total | 1.83 (0.41) | 1.67 (0.32) |
| Young | 1.58 (0.32) | 1.57 (0.31) |
| Old | 1.98 (0.38) | 1.84 (0.26) |

To understand how age and eye-movement patterns influenced participants’ recognition performance and behavior, we performed 2 (age group: young vs. old) × 2 (eye-movement pattern: analytic vs. holistic) between-subjects ANOVAs on d′, reaction time, number of fixations per trial, and mean saccade length. The results showed main effects of age group: compared with young adults, older adults had lower d′, F(1, 64) = 4.84, p = .031, ηp2 = .07, longer reaction times, F(1, 64) = 11.14, p = .001, ηp2 = .15, more fixations per trial, F(1, 64) = 5.70, p = .020, ηp2 = .08, and longer saccade lengths, F(1, 64) = 16.36, p < .001, ηp2 = .20. There was also a main effect of eye-movement pattern: analytic patterns yielded higher d′ than holistic patterns, F(1, 64) = 4.56, p = .037, ηp2 = .07. Note that there was no main effect of eye-movement pattern on reaction time, F(1, 64) = 0.01, p = .927, ηp2 < .001, number of fixations per trial, F(1, 64) = 0.45, p = .507, ηp2 = .01, or saccade length, F(1, 64) = 0.74, p = .392, ηp2 = .01. This result suggests that the advantage of analytic patterns in face recognition was not due to longer viewing times or more fixations on the face, but to differences in the spatial extent of the ROIs and the transitions among them. There was no interaction between eye-movement pattern and age group on any of these measures (all Fs < 0.88, all ps > .352), indicating that the advantage of analytic patterns in recognition performance did not differ between the age groups.
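For reference, the d′ sensitivity measure analyzed above is computed from old/new judgments as z(hit rate) − z(false-alarm rate). The sketch below uses a common 1/(2N) correction for extreme rates; the paper does not state which correction, if any, was applied, and the counts are invented.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a 1/(2N) clamp so that
    rates of 0 or 1 stay finite (a common convention, assumed here)."""
    z = NormalDist().inv_cdf
    n_old, n_new = hits + misses, false_alarms + correct_rejections
    hr = min(max(hits / n_old, 1 / (2 * n_old)), 1 - 1 / (2 * n_old))
    far = min(max(false_alarms / n_new, 1 / (2 * n_new)), 1 - 1 / (2 * n_new))
    return z(hr) - z(far)

# 20 old and 20 new faces, as in the test phase (response counts invented):
print(round(d_prime(15, 5, 6, 14), 2))
```

A hit rate of .75 against a false-alarm rate of .30 gives a d′ of about 1.2, in the range reported in Table 1.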

Consistent with the above findings, participants’ recognition performance in d′ was negatively correlated with the H-A scale, r(66) = -.25, p = .041: the lower the performance, the more holistic the pattern. In addition, older adults’ MoCA scores were negatively correlated with the H-A scale, r(32) = -.48, p = .004 (Fig. 3a), suggesting that the lower the cognitive status, the more holistic the pattern. Figure 2c shows some older adults’ patterns along the H-A scale.
Fig. 3

Scatter plot of Montreal Cognitive Assessment (MoCA) score and the H-A scale in (a) Experiment 1 and (b) Experiment 2 in older adults

Experiment 2

To examine whether the representative holistic and analytic models obtained in Experiment 1 can potentially be used to quantify eye-movement patterns in new participants for cognitive screening purposes, we recruited 38 healthy older adults (without clinical diagnoses of neurodegenerative disorders or head injury, and able to live independently as assessed by the Lawton Instrumental Activities of Daily Living scale; Tong & Man, 2002) to perform a similar face recognition task with frontal-view, colored Asian face images (20 young and 20 older adult faces; half of them female). Five participants did not complete the task and were excluded. The remaining 33 participants (age 65–80 years; M = 71.5, SD = 4.89) had MoCA scores ranging from 17 to 30 (M = 23.03, SD = 3.15). Five additional tests commonly used to assess age-related cognitive decline were included to examine the associations between eye-movement patterns and different cognitive abilities, including executive functioning, visual attention, and verbal processing:

1. Tower of London test (TOL; Culbertson & Zillmer, 1999), assessing executive functioning and planning. The test comprised ten problems of increasing difficulty; participants had a maximum of 20 moves and 120 s for each problem. The total number of moves and the total time were measured.

2. Trail Making Test (TMT; Reitan, 1958), assessing visual attention and cognitive processing speed. Participants connected 25 circles, numbered 1 to 25, in sequential order. The completion time was recorded.

3. Verbal and spatial two-back tasks (Lau, Eskes, Morrison, Rajda, & Spurr, 2010), assessing working memory. Each task consisted of 52 trials. In each trial of the verbal/visuospatial task, participants judged whether the presented symbol/symbol location was the same as the one presented two trials back. The symbol was presented for 1,000 ms, followed by a 2,500-ms blank screen. d′ was measured.

4. Chinese Rey Auditory Verbal Learning Test (C-RAVLT; Lee, Yuen, & Chan, 2002), assessing verbal memory. Participants learned a list of 15 words, each learning trial followed by free recall, five times. They then learned an interference word list with free recall before recalling the first list again, and performed a delayed recall of the first list after 30 min.

5. Category Verbal Fluency Test (CVFT; Lam, Ho, Lui, & Tam, 2006), assessing verbal fluency. Participants generated exemplars of two categories, animal and vegetable/fruit, for 1 min each. The number of exemplars generated was recorded.

Table 2 shows the results. The H-A scale, calculated using the representative holistic and analytic HMMs obtained in Experiment 1 (M = 0.00237, SD = 0.021), was negatively correlated with MoCA score (Fig. 3b), consistent with Experiment 1. The H-A scale also correlated with performance on the TOL and TMT, and marginally with the working memory measures, but not with delayed verbal memory or verbal fluency, suggesting that it may be particularly related to executive functioning and visual attention abilities.
Table 2

Descriptive statistics of the cognitive tests and the Pearson’s correlation with the H-A scale

 

|  | *N* | *M* | *SD* | *r* | *p* |
| --- | --- | --- | --- | --- | --- |
| **General Cognitive Assessment – MoCA** |  |  |  |  |  |
| MoCA score (out of 30) | 33 | 23.03 | 3.15 | –.44 | .010* |
| **Executive Functioning – Tower of London (TOL)** |  |  |  |  |  |
| Total moves (min. 55) | 32ᵃ | 112.06 | 35.05 | .36 | .043* |
| Total time (max. 1,200 s) | 32ᵃ | 712.57 | 257.22 | .35 | .048* |
| **Visual Attention – Trail Making Test (TMT)** |  |  |  |  |  |
| Total time (s) | 33 | 54.93 | 29.53 | .37 | .034* |
| **Working Memory – N-back** |  |  |  |  |  |
| Verbal 2-back d′ | 30ᵇ | 0.58 | 1.33 | –.35 | .062+ |
| Spatial 2-back d′ | 28ᶜ | 0.70 | 0.91 | –.35 | .066+ |
| **Verbal Memory – Chinese Rey Auditory Verbal Learning Test (C-RAVLT)** |  |  |  |  |  |
| List A: 1st trial recall | 33 | 4.52 | 2.11 | .10 | .574 |
| List A: 2nd trial recall | 33 | 7.73 | 2.57 | –.17 | .359 |
| List A: 3rd trial recall | 33 | 9.48 | 2.03 | –.19 | .287 |
| List A: 4th trial recall | 33 | 10.45 | 2.27 | –.27 | .124 |
| List A: 5th trial recall | 33 | 10.94 | 2.30 | –.44 | .010* |
| List B: recall | 33 | 4.45 | 2.05 | –.25 | .154 |
| List B: post-interference recall | 33 | 8.12 | 4.00 | –.17 | .337 |
| List A: delayed recall | 33 | 7.94 | 3.61 | –.27 | .130 |
| **Category Verbal Fluency Test (CVFT)** |  |  |  |  |  |
| Animal (from MoCA) | 33 | 17.52 | 3.78 | –.03 | .861 |
| Vegetable/Fruit | 33 | 18.15 | 3.67 | –.22 | .219 |
| Total (Animal + Vegetable/Fruit) | 33 | 35.67 | 6.04 | –.15 | .394 |

The descriptive statistics columns give *N*, *M*, and *SD*; the last two columns give Pearson’s *r* with the H-A scale and the corresponding *p* value.

*Significant at p < .05; + marginally significant at p < .10

ᵃ One participant did not complete the task

ᵇ Two participants did not complete the task, and one participant was excluded due to low response rate (< 2 SD below the mean)

ᶜ Two participants did not complete the task, and three participants were excluded due to low response rate (< 2 SD below the mean)

Discussion

Here we investigated older adults’ eye movements in face recognition and their association with cognitive status using an HMM-based approach (Chuk et al., 2014). In Experiment 1, we recruited healthy older adults and compared their eye movements with those of the healthy young adults in Chuk et al.’s study. Consistent with previous findings, we discovered holistic and analytic patterns through clustering (Fig. 2b). Interestingly, significantly more older adults than young adults adopted holistic patterns, consistent with the previous finding that older adults fixate more often at the nose and less often at the eyes than young adults (Firestone et al., 2007). Note that this difference between young and older adults was not readily observable in the group-level analysis (Fig. 2a), because both age groups contained participants using holistic and analytic patterns. In contrast, the holistic and analytic patterns discovered through clustering showed clear separation on facial features (Fig. 2b), demonstrating the power of machine learning methods.

In addition, we found that participants using analytic patterns outperformed those using holistic patterns regardless of age. This advantage was not due to longer response times, more fixations, or longer saccade lengths, since participants using the two patterns did not differ on these measures. Also, the use of analytic patterns (a lower H-A scale) was correlated with better recognition performance. These findings were consistent with our previous studies (Chuk et al., in press; Chuk, Crookes, Hayward, Chan, & Hsiao, 2017). Since more older adults adopted holistic patterns than young adults, the poorer recognition performance in older adults was likely due to their use of holistic patterns. Miellet, Caldara, and Schyns (2011) used face images in which two different identities were embedded in the global and local image information, respectively, in a face recognition task. They found that eye fixations on the eyes of the face images were associated with perception of local information, whereas fixations at the face center were associated with perception of global information. This phenomenon suggests that participants with analytic patterns (looking at both the eyes and the face center) may engage both local and global processing, whereas those with holistic patterns (mainly looking at the face center) may engage mainly global processing. In the literature, global configural information has been reported to play a more important role in face recognition than local featural information (Bartlett & Searcy, 1993). However, this view has been increasingly challenged, as recent studies suggest that both featural and configural information are important (Burton, Schweinberger, Jenkins, & Kaufmann, 2015; Cabeza & Kato, 2000). Similarly, in computer vision, a combination of local and global representations leads to the best face recognition performance (Bonnen, Klare, & Jain, 2013). Our results are consistent with these findings, suggesting that active retrieval of local featural information in addition to global configural information through analytic eye-movement patterns may be optimal for face recognition.

Interestingly, older adults’ use of holistic patterns was associated with their cognitive status: the more holistic the pattern (the higher the H-A scale), the lower the MoCA score. This correlation was replicated when the representative holistic and analytic HMMs developed in Experiment 1 were used to assess the eye-movement patterns of new participants performing the same task with different stimuli in Experiment 2. This result suggests the possibility of developing representative models from the population for cognitive screening purposes. In addition, the H-A scale was particularly associated with executive functioning and visual attention abilities as assessed by the TOL and TMT. Consistent with this finding, in an fMRI study, Chan et al. (2016) showed that in healthy young adults, holistic eye-movement patterns in face recognition were associated with lower activation in areas important for executive functioning and top-down control of visual attention, including the dorsolateral prefrontal cortex, frontal eye field, and intraparietal sulcus (Noudoost, Chang, Steinmetz, & Moore, 2010). Indeed, recent research has suggested that executive and visuospatial function deficits in MCI/AD may emerge earlier than memory impairments (Pereira et al., 2014), and these deficits are likely to be reflected in eye movements (Mosimann et al., 2004), as top-down control of visual attention is an important mechanism for eye-movement planning (Hoffman & Subramaniam, 1995). Future work will examine the specificity and sensitivity of eye-movement measures in reflecting cognitive decline in patients with MCI or neurodegenerative disorders.

In conclusion, by analyzing eye-movement data with an HMM-based approach, we show that older adults’ poorer face recognition performance is related to their use of holistic patterns. This pattern may be associated with engagement of global face processing but not local feature processing, which may also be important for recognition. More importantly, we demonstrate for the first time in the literature an explicit link between eye-movement patterns and cognitive status in older adults. This association may be related to decline in visuospatial attention and executive functioning, which has recently been suggested to emerge earlier than memory impairments in MCI and AD. The HMM approach made it possible to quantify similarities among eye-movement patterns and consequently reveal this association. This result provides strong evidence for the potential use of eye tracking as an efficient, low-cost method for early detection of cognitive decline and neurodegenerative changes in older adults, as well as of atypical cognition in conditions such as autism and schizophrenia (Benson et al., 2012).

Footnotes

  1. A similar phenomenon was observed here: the association between eye-movement pattern and MoCA score was significant only during the first three fixations.

Notes

Acknowledgements

We thank Tim Chuk for technical support, and Alison Hui, Joe Chan, and Wing Yan Sze for help in data collection. This research was supported by RGC (#17402814 to Hsiao; CityU110513 to Chan; #17613815 to Lee). We thank the Editor, Prof. Matthew Peterson, and two anonymous reviewers for the helpful comments.

Supplementary material

ESM 1 (DOCX, 133 kb)

References

  1. Alichniewicz, K., Brunner, F., Klünemann, H. H., & Greenlee, M. W. (2013). Neural correlates of saccadic inhibition in healthy elderly and patients with amnestic mild cognitive impairment. Front. Psychol., 4, 467.
  2. Bartlett, J. C., & Searcy, J. (1993). Inversion and configuration of faces. Cogn. Psychol., 25(3), 281–316.
  3. Benson, P. J., Beedie, S. A., Shephard, E., Giegling, I., Rujescu, D., & St Clair, D. (2012). Simple viewing tests can detect eye movement abnormalities that distinguish schizophrenia cases from controls with exceptional accuracy. Biol. Psychiatry, 72(9), 716–724.
  4. Bonnen, K., Klare, B., & Jain, A. (2013). Component-based representation in automated face recognition. IEEE Trans. Inf. Forensics Secur., 8(1), 239–253.
  5. Burton, A. M., Schweinberger, S. R., Jenkins, R., & Kaufmann, J. M. (2015). Arguments against a configural processing account of familiar face recognition. Perspect. Psychol. Sci., 10(4), 482–496.
  6. Cabeza, R., & Kato, T. (2000). Features are also important: Contributions of featural and configural processing to face recognition. Psychol. Sci., 11(5), 429–433.
  7. Caldara, R., & Miellet, S. (2011). iMap: A novel method for statistical fixation mapping of eye-movement data. Behav. Res. Methods, 43(3), 864–878.
  8. Chan, C. Y. H., Wong, J. J., Chan, A. B., Lee, T. M. C., & Hsiao, J. H. (2016). Analytic eye-movement patterns in face recognition are associated with better performance and more top-down control of visual attention: An fMRI study. In A. Papafragou, D. Grodner, D. Mirman, & J. C. Trueswell (Eds.), Proceedings of the 38th Annual Conference of the Cognitive Science Society (pp. 854–859). Austin, TX: Cognitive Science Society.
  9. Chuk, T., Chan, A. B., & Hsiao, J. H. (2014). Understanding eye movements in face recognition using hidden Markov models. J. Vision, 14(11), 1–14.
  10. Chuk, T., Chan, A. B., & Hsiao, J. H. (in press). Is having similar eye-movement patterns during face learning and recognition beneficial for recognition performance? Evidence from hidden Markov modeling. Vision Res.
  11. Chuk, T., Crookes, K., Hayward, W. G., Chan, A. B., & Hsiao, J. H. (2017). Hidden Markov model analysis reveals the advantage of analytic eye-movement patterns in face recognition across cultures. Cognition, 169, 102–117.
  12. Coviello, E., Chan, A. B., & Lanckriet, G. R. G. (2014). Clustering hidden Markov models with variational HEM. J. Mach. Learn. Res., 15(1), 697–747.
  13. Culbertson, W. C., & Zillmer, E. A. (1999). The Tower of London, Drexel University, research version: Examiner's manual. New York, NY: Multi-Health Systems.
  14. Firestone, A., Turk-Browne, N. B., & Ryan, J. D. (2007). Age-related deficits in face recognition are related to underlying changes in scanning behavior. Neuropsychol. Dev. Cogn. B Aging Neuropsychol. Cogn., 14(6), 594–607.
  15. Henderson, J. M., Williams, C. C., & Falk, R. J. (2005). Eye movements are functional during face learning. Mem. Cognition, 33(1), 98–106.
  16. Hoffman, J. E., & Subramaniam, B. (1995). The role of visual attention in saccadic eye movements. Percept. Psychophys., 57(6), 787–795.
  17. Kanan, C., Bseiso, D. N., Ray, N. A., Hsiao, J. H., & Cottrell, G. W. (2015). Humans have idiosyncratic and task-specific scanpaths for judging faces. Vision Res., 108, 67–76.
  18. Lam, L. C. W., Ho, P., Lui, V. W. C., & Tam, C. W. C. (2006). Reduced semantic fluency as an additional screening tool for subjects with questionable dementia. Dement. Geriatr. Cogn. Disord., 22(2), 159–164.
  19. Lamont, A. C., Stewart-Williams, S., & Podd, J. (2005). Face recognition and aging: Effects of target age and memory load. Mem. Cognition, 33(6), 1017–1024.
  20. Lau, E. Y. Y., Eskes, G. A., Morrison, D. L., Rajda, M., & Spurr, K. F. (2010). Executive function in patients with obstructive sleep apnea treated with continuous positive airway pressure. J. Int. Neuropsychol. Soc., 16, 1077–1088.
  21. Lee, T. M., Yuen, K. S., & Chan, C. C. (2002). Normative data for neuropsychological measures of fluency, attention, and memory measures for Hong Kong Chinese. J. Clin. Exp. Neuropsychol., 24(5), 615–632.
  22. Miellet, S., Caldara, R., & Schyns, P. G. (2011). Local Jekyll and global Hyde: The dual identity of face identification. Psychol. Sci., 22(12), 1518–1526.
  23. Mosimann, U. P., Felblinger, J., Ballinari, P., Hess, C. W., & Müri, R. M. (2004). Visual exploration behaviour during clock reading in Alzheimer’s disease. Brain, 127(2), 431-438.CrossRefGoogle Scholar
  24. Noudoost, B., Chang, M. H., Steinmetz, N. A., & Moore, T. (2010). Top-down control of visual attention. Curr. Opin Neurobiol, 20(2), 183-190.CrossRefGoogle Scholar
  25. Pereira, M. L. F., Camargo, M. V., Aprahamian, I., & Forlenza, O. V. (2014). Eye movement analysis and cognitive processing: detecting indicators of conversion to Alzheimer’s. Neuropsychiatr. Dis. Treat., 10, 1273-1285.CrossRefGoogle Scholar
  26. Peterson M. F., Eckstein M. P. (2013). Individual differences in eye movements during face identification reflect observer-specific optimal points of fixation. Psychol. Sci., 24 (7), 1216–1225.CrossRefGoogle Scholar
  27. Rayner K. (1979). Eye guidance in reading: Fixation locations within words. Perception, 8, 21–30.CrossRefGoogle Scholar
  28. Reitan, R. M. (1958). The validity of the Trail Making Test as an indicator of organic brain damage. Percept. Mot. Skills, 8, 271-276CrossRefGoogle Scholar
  29. Sperling, R. A., Aisen, P. S., Beckett, L. A., Bennett, D. A., Craft, S., … Phelps C.H. (2011). Toward defining the preclinical stages of Alzheimer's disease: recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimers Dement., 7(3), 280-292.CrossRefGoogle Scholar
  30. Tong, A. Y., & Man, D. W. (2002). The validation of the Hong Kong Chinese version of the Lawton Instrumental Activities of Daily Living Scale for institutionalized elderly persons. OTJR (Thorofare N J), 22(4), 132-142.Google Scholar
  31. Yeung, P. Y., Wong, L. L., Chan, C. C., Leung, J. L., & Yung, C. Y. (2014). A validation study of the Hong Kong version of Montreal Cognitive Assessment (HK-MoCA) in Chinese older adults in Hong Kong. Hong Kong Med. J., 20(6), 504-510.Google Scholar
  32. Zhang, Y., Han, B., Verhaeghen, P., & Nilsson, L. G. (2007). Executive functioning in older adults with mild cognitive impairment: MCI has effects on planning, but not on inhibition. Aging Neuropsychol. C., 14(6), 557-570.Google Scholar

Copyright information

© Psychonomic Society, Inc. 2017

Authors and Affiliations

  • Cynthia Y. H. Chan (1)
  • Antoni B. Chan (2)
  • Tatia M. C. Lee (3)
  • Janet H. Hsiao (1)

  1. Department of Psychology, University of Hong Kong, Pok Fu Lam, Hong Kong
  2. Department of Computer Science, City University of Hong Kong, Kowloon Tong, Hong Kong
  3. Laboratory of Neuropsychology, University of Hong Kong, Pok Fu Lam, Hong Kong
