Normative values for a tablet computer-based application to assess chromatic contrast sensitivity
Tablet computer displays are amenable to the development of vision tests in a portable form. Assessing color vision using an easily accessible and portable test may help in the self-monitoring of vision-related changes in ocular/systemic conditions and assist in the early detection of disease processes. Tablet computer-based games were developed with different levels of gamification as a more portable option to assess chromatic contrast sensitivity. Game 1 was designed as a clinical version with no gaming elements. Game 2 was a gamified version of game 1 (with added fun elements: feedback, scores, and sounds), and game 3 was a complete game with a vision task nested within it. The current study aimed to determine the normative values and evaluate the repeatability of the tablet computer-based games in comparison with an established test, the Cambridge Colour Test (CCT) Trivector test. Normally sighted individuals [N = 100, median (range) age 19.0 years (18–56 years)] had their chromatic contrast sensitivity evaluated binocularly using the three games and the CCT. Games 1 and 2 and the CCT showed similar absolute thresholds and tolerance intervals, whereas game 3 had significantly lower values than games 1 and 2 and the CCT, due to differences in the visual task. With the exception of game 3 for blue–yellow, the CCT and the tablet computer-based games showed similar repeatability, with comparable 95% limits of agreement. The custom-designed games are portable, rapid, and may find application in routine clinical practice, especially for testing younger populations.
Keywords: Color vision · Tablet computer · Games · CCT Trivector test · Repeatability · iPad
Chromatic contrast sensitivity (CCS) is defined as the ability to discriminate between stimuli based on their chromaticity difference alone, independent of any luminance contrast (Jacobs, 1993). Tests of CCS have clinical utility to assess or detect deficiencies in color vision. Color vision deficiencies may be congenital or secondary to disease and often manifest as a decreased ability to differentiate between shades of a color or between two or more colors. Ocular conditions such as diabetic retinopathy (DR), age-related macular degeneration (AMD), and glaucoma are known to affect color vision prior to affecting visual acuity (VA) (Greenstein, Hood, Ritch, Steinberger, & Carr, 1989; O'Neill-Biba, Sivaprasad, Rodriguez-Carmona, Wolf, & Barbur, 2010). In individuals with diabetes mellitus (DM), color vision impairment has been documented to emerge in the early stages of the disease and may precede the development of DR (Feitosa-Santana et al., 2006; Kurtenbach, Schiefer, Neu, & Zrenner, 1999; Ventura, Costa, et al. 2003). Therefore, the assessment of acquired color vision abnormalities is a useful measure for clinicians as it may be the earliest manifestation of a disease condition.
Color vision is usually assessed in the clinical environment using screening tools rather than by estimating thresholds or sensitivity, as threshold assessment traditionally requires specialized, calibrated laboratory equipment and a trained person to administer the test. Currently available clinical tests of color vision include screening tests, such as pseudoisochromatic (PIC) plate tests, and tests that determine color discrimination thresholds. Screening tests are designed to fail individuals with even mild color vision deficiencies and have a single pass-or-fail criterion. Tests that measure color discrimination thresholds quantify the discrimination abilities of an individual and also help in understanding the severity of a color vision deficiency. Discrimination tests must be designed with care: some have been found to require certain cognitive skills to perform (Cranwell, Pearce, Loveridge, & Hurlbert, 2015; Dain & Ling, 2009) or may have a theoretical bias towards tritan errors (Dain, 2004; Lakowski, 1969; Melamud, Hagstrom, & Traboulsi, 2004). Moreover, these tests require administration by a trained clinician and dedicated instrumentation. Along with these clinical color vision tests, there are other commercially available computer-based tests, such as the Cambridge Colour Test (CCT) (Mollon & Regan, 2000), the modified CCT for children (Goulart et al., 2008), and the Colour Assessment and Diagnosis (CAD) test (Seshadri, Christensen, Lakshminarayanan, & Bassi, 2005), which do measure color thresholds and may be found in clinics that specialize in color vision. If color threshold measures are to be used as a diagnostic indicator of visual system dysfunction in chronic visual conditions such as DR, AMD, and glaucoma, then it would be beneficial if computer-based tests could be developed to have a simple interface that allows patients to self-administer the tests, and a small form factor (Anderson, Burford, & Emmerton, 2016).
Despite their diagnostic utility, the CCT and CAD tests are not designed for unsupervised self-administration and their physical dimensions preclude easy portability. Moreover, self-monitoring requires repeated testing, so it would also be beneficial if such tests were designed to be attractive to maintain the attention and compliance of users (Anderson et al., 2016).
Touchscreen technology found in tablet computers and personal mobile telephones has been harnessed as a supporting platform for the development of mobile health applications (apps). Technology has become increasingly personalized, so that many individuals are in close proximity to such devices. Tablet computers are affordable, portable, and handy, and so are well placed to be developed into portable vision tests to monitor for any changes in vision due to systemic/ocular conditions. Several authors have reported the development and use of vision testing apps on tablet computers (Aslam et al., 2013; Dorr, Lesmes, Lu, & Bex, 2013; Kollbaum, Jansen, Kollbaum, & Bullimore, 2014; Mulligan, 2013; Rodriguez-Vallejo, Remon, Monsoriu, & Furlan, 2015), and also their use as tools for psychophysical experiments (Turpin, Lawson, & McKendrick, 2014). Thus, the ability to assess color vision routinely, either in clinical practice or at home, would be facilitated by the development of a portable and easy-to-administer instrument, such as a tablet computer-based app. The ability to self-administer color vision tests would facilitate a monitoring role for such technology, which may be addressed by the gamification of these vision tests. Furthermore, it has been suggested that presenting vision tests as computer games (Abramov et al., 1984) or on portable tablet computers as digital games (Nguyen, Do, Chia, Wang, & Duh, 2014) may help in the assessment of vision in an engaging manner. In fact, tests of color vision designed for their entertainment value have reached the popular news (MailOnline-Australia, 2015), indicating that color vision tests have scope to be made more fun and engaging.
Therefore, three tablet computer-based games were developed with different levels of gamification, to assess chromatic contrast sensitivity (to detect small departures from normal chromatic contrast sensitivity but not to diagnose any congenital color vision abnormalities). The purpose of the present study was to determine the normal range of chromatic contrast thresholds (tolerance intervals) using the custom-designed tablet computer-based app and to test the repeatability of these new designs in comparison with an established test, the CCT Trivector test.
A total of 100 healthy control participants [median (range) age of 19.0 years (18–56 years)] with a VA of 6/6 or better, measured with a Bailey Lovie LogMAR VA chart, were recruited into the study. The VA was measured both monocularly and binocularly with the participants’ habitual correction. As monocular and binocular vision was better than 6/6 for all participants, the binocular values are reported here for brevity. All participants were screened for red–green or blue–yellow congenital color vision deficiencies using Ishihara’s pseudoisochromatic plate test (Ishihara, 1917) and the Standard Pseudoisochromatic Plate I (Mäntyjärvi, 1987) tests, respectively. The study protocol was approved by the Human Research Ethics Advisory (HREA: #14225) of the University of New South Wales and all the procedures followed the tenets of the Declaration of Helsinki. All the participants gave their written informed consent, after explanation about the study procedures prior to any testing. All participants were tested for their chromatic contrast thresholds using the three custom-designed tablet computer-based games and the CCT Trivector test.
Chromatic contrast sensitivity tests and procedure
The tablet computer-based application
The iPad mini with Retina display (Apple Inc.; display resolution: 2,048 × 1,536 pixels at 326 pixels per inch; 8-bit color depth; screen size: 7.9 in.; measured screen luminance of 406 cd/m2) was calibrated for its display characteristics prior to the development of the vision testing application (Bodduluri, Boon, & Dain, 2016). These characteristics informed the design of the visual stimuli, ensuring that all stimuli presented were within the capability of the device’s display to reproduce accurately. Three custom-designed games were developed with different levels of gaming elements and were designed to assess chromatic contrast sensitivity. The three games had the same stimulus background with a fixed chromaticity of u’ = 0.197, v’ = 0.466 (corresponding to a central gray: R = G = B = 127, with a luminance of 88 cd/m2) and a stimulus that varied in chromaticity relative to the background according to a psychophysical staircase procedure. Dithering was employed in the presentation of the stimuli to enable accurate chromaticity display. In the test design, games 1 and 2 employed luminance noise in both the background and the stimulus, in order to avoid any luminance cues assisting in the identification of the stimulus. Game 3 did not employ any luminance noise, due to its specific design characteristics (explained below).
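As a consistency check on the background specification, the chromaticity of a display gray can be computed from its RGB values. The sketch below is a simplified illustration assuming an sRGB-like display; the actual device characterization (Bodduluri et al., 2016) used device-specific measurements rather than nominal sRGB values.

```python
def srgb_to_linear(c8):
    """Undo the sRGB transfer function for one 8-bit channel."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_u_v_prime(r, g, b):
    """8-bit RGB -> CIE 1976 u'v' chromaticity, assuming sRGB primaries (D65)."""
    R, G, B = (srgb_to_linear(c) for c in (r, g, b))
    # linear RGB -> CIE XYZ (nominal sRGB/D65 matrix)
    X = 0.4124 * R + 0.3576 * G + 0.1805 * B
    Y = 0.2126 * R + 0.7152 * G + 0.0722 * B
    Z = 0.0193 * R + 0.1192 * G + 0.9505 * B
    d = X + 15 * Y + 3 * Z
    return 4 * X / d, 9 * Y / d
```

For any neutral gray on a nominal sRGB display this returns the D65 white-point chromaticity (u’ ≈ 0.198, v’ ≈ 0.468), close to the u’ = 0.197, v’ = 0.466 reported for the calibrated iPad background.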
Game 1, the “Color detective” game (Fig. 2a), was designed to be a clinical version of the vision test, without gaming elements. The design of the test was informed by a child-friendly version of the CCT (Goulart et al., 2008), with the background and stimulus composed of circles of varying size with small variations of added luminance noise (±15% of the given RGB units). This luminance noise of ±15% was considered more than enough to mask any luminance cues from the iPad display (Bodduluri et al., 2016). The test stimulus was a roughly circular amorphous patch that differed in chromaticity from the gray background, so any luminance artefact would be relatively small. Game 2, “Color combo rush” (Fig. 2b), was a gamified version of the clinical test (game 1). Thus, it was designed to be similar to game 1 in terms of visual stimulus and task, with added fun elements such as feedback (correct/wrong), scores, and sound to facilitate self-administration, engagement, and enjoyment (Fig. 2b).
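The ±15% luminance noise can be implemented by scaling each display element’s RGB triple by an independent random factor. A minimal sketch follows; the exact noise distribution used in the games is not specified above, so a uniform draw is assumed for illustration.

```python
import random

def add_luminance_noise(rgb, amplitude=0.15):
    """Scale one element's RGB triple by a random luminance factor in
    [1 - amplitude, 1 + amplitude], clipping to the 8-bit range.
    Chromaticity is approximately preserved because all three channels
    share the same factor; only luminance is perturbed."""
    f = 1.0 + random.uniform(-amplitude, amplitude)
    return tuple(max(0, min(255, round(c * f))) for c in rgb)
```

Applied independently to every circle in the background and stimulus, this kind of noise removes luminance as a reliable cue, forcing the observer to rely on chromaticity alone.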
Game 3, “Flying ace” (Fig. 2c), was a complete game which included the vision test. The design of the visual stimulus differed in shape and appearance, and the task was different from games 1 and 2. No luminance noise was employed because, within the region where the stimuli for the psychophysical task were placed, the variations in luminance were insignificant (Bodduluri et al., 2016). Unlike games 1 and 2, game 3 was a two-part game. In the first part, the chromatic contrast thresholds were assessed using an “odd one out” task: four stars were presented in a diamond configuration, on a gray background, in the center of the tablet computer’s display. Three of the stars were filled with the same background gray, but the fourth star was the target stimulus (colored). Thus it was a four-alternative forced-choice task, where the choice was to pick the “odd one out” (the colored star) from the four stars. In the second part, the task was to use a flicking action on the touchscreen to launch a plane towards a target. The target was stationary, but wind speed and direction could vary, and clouds could obscure the view (Fig. 2c). The task was designed to have numerous variations and not be too demanding on cognitive ability. Points were awarded if the plane intersected the target. In game 3, the color assessment and game aspects were separated so that the game would look like a game without compromising the requirements of the vision assessment.
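All three games estimate thresholds with a psychophysical staircase driven by forced-choice responses. The staircase rules of the games themselves are not detailed above, so the sketch below shows a generic two-down/one-up geometric staircase; `respond` is a hypothetical callback returning True for a correct “odd one out” response, and levels are expressed in Δu’v’ × 10–4.

```python
def run_staircase(respond, start=200.0, factor=2.0, n_reversals=6):
    """Illustrative 2-down/1-up geometric staircase: halve the chromatic
    contrast after two consecutive correct responses, double it after any
    error, and estimate threshold as the geometric mean of the reversal
    levels. This converges near the 70.7%-correct level; it is a
    simplification, not the games' actual rule."""
    level, run, reversals, last_dir = start, 0, [], 0
    while len(reversals) < n_reversals:
        if respond(level):
            run += 1
            if run == 2:                  # two correct in a row -> harder
                run = 0
                if last_dir == +1:        # direction changed: a reversal
                    reversals.append(level)
                last_dir = -1
                level /= factor
        else:                             # any error -> easier
            run = 0
            if last_dir == -1:            # direction changed: a reversal
                reversals.append(level)
            last_dir = +1
            level *= factor
    prod = 1.0
    for r in reversals:
        prod *= r
    return prod ** (1.0 / len(reversals))
```

With a deterministic observer who is correct whenever the level is at or above 50, the staircase oscillates between 25 and 50 and returns their geometric mean, ≈ 35.4.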
Cambridge Colour Test: Trivector test
The Cambridge Colour Test (CCT) Trivector test, CCT v1.5 (Cambridge Research Systems (CRS) Ltd., Rochester, UK) was run on a cathode ray tube (CRT) monitor (HP p1230, HP, UK) that was calibrated using the manufacturer-specified guidelines provided by CRS (ColorCAL II Colorimeter, VSG 72.12.40F1) prior to the experiment.
Analysis of our pilot data showed no significant differences between monocular and binocular assessment of chromatic contrast thresholds (F(1,4) = 1.55, p = 0.28), and thus all testing was performed binocularly with the habitual correction (wherever applicable). The order of testing was randomized (across the CCT Trivector test and the three games) to minimize any learning and fatigue effects. A total of 100 participants were recruited, and repeatability measurements were obtained on 93 of them (71 participants on the same day, with a 20- to 30-min interval between the administrations – intrasession; the remaining 22 participants had the second set of measurements on a different day – intersession). As there were no significant differences between the intra- and intersession measurements, all the data were combined to analyze repeatability. The chromatic contrast thresholds are reported as Δu’v’ × 10–4.
As the data were not normally distributed, descriptive statistics are given as the median and interquartile range (IQR) wherever applicable. Tolerance intervals were calculated to report the lower and upper limits of the normative values for the three games as well as for the CCT Trivector test. The protan and deutan chromatic contrast thresholds from the CCT Trivector test were averaged for comparison with the red–green thresholds from the tablet computer-based games, while the tritan thresholds were compared directly with the blue–yellow thresholds. Friedman analysis of variance (ANOVA) with post-hoc Wilcoxon signed-rank tests was used to compare the chromatic contrast thresholds from the tablet computer-based app with those of the CCT Trivector test. The corresponding Bonferroni-adjusted significance level was set at p < 0.008 for pairwise comparisons. The Wilcoxon signed-rank test was used to test the repeatability of the tests, and a complementary measure of agreement was given through Bland–Altman analysis (Bland & Altman, 1999; Carkeet & Goh, 2016). Bland–Altman plots were constructed using the mean (±95% limits of agreement (LoA), calculated as 1.96 × SD) for normally distributed differences and the median (±95% LoA, calculated using percentiles) for non-normally distributed differences. The repeatability of the tablet computer-based app was compared with that of the CCT Trivector test by determining the 95% LoA. Statistical Package for the Social Sciences (SPSS) version 22 was used for the statistical analyses, and Minitab version 17 for determining the tolerance intervals for the tablet computer-based games.
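For normally distributed test–retest differences, the 95% limits of agreement reduce to the mean difference ± 1.96 × SD of the differences. A minimal sketch of that computation follows (the percentile-based version used for skewed differences is analogous):

```python
from statistics import mean, stdev

def bland_altman_loa(test, retest):
    """Return (mean difference, (lower, upper) 95% limits of agreement)
    for paired test-retest measurements, computed as mean +/- 1.96 * SD
    of the per-participant differences."""
    diffs = [a - b for a, b in zip(test, retest)]
    d = mean(diffs)
    s = stdev(diffs)  # sample SD of the differences
    return d, (d - 1.96 * s, d + 1.96 * s)
```

A narrower interval between the limits indicates better agreement between the first and second administrations; comparing these intervals across tests is how the games’ repeatability was benchmarked against the CCT Trivector test.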
Tolerance intervals for tablet computer-based application
The median (interquartile range) and tolerance intervals for the three games and for the Cambridge Colour Test (CCT) Trivector test. Thresholds are reported as Δu’v’ × 10–4
Comparison between CCT Trivector test and the three games
Repeatability of CCT Trivector test and tablet computer-based application
The current study documented adult normative limits (tolerance intervals), and the repeatability, of a custom-designed tablet computer-based app comprising three games, in comparison with an established test, the CCT Trivector test. The tolerance intervals for the three games and the CCT are lower than the published norms in the CCT manual (Mollon & Regan, 2000). However, the upper tolerance limits for the red–green thresholds for games 1 and 2 were slightly higher than those reported by Ventura, Silveira, et al. (2003) (76 u’v’ × 10–4 in Ventura et al. versus 104 u’v’ × 10–4 for game 1 and 88.8 u’v’ × 10–4 for game 2).
The chromatic contrast thresholds obtained using games 1 and 2 were comparable with the published norms of the CCT Trivector test (Ventura, Silveira, et al., 2003) and also with those obtained in adults using the child-friendly version of the CCT Trivector test (Goulart et al., 2008). This is likely to be due to similarities in test design and stimulus characteristics, such as the pseudoisochromatic design, the presence of luminance noise, and the similar stimulus presentation, i.e., the colored patch (in the case of the child-friendly version of the CCT). It must be noted that although these similarities suggest that the findings for games 1 and 2 should be more similar to the CCT findings than to those of game 3, it does not necessarily follow that games 1 and 2 and the CCT are expected to behave identically. That is because there are also differences between these tests, including the stimulus size, test distance, the psychophysical methods (such as staircase characteristics), the color axes used to determine chromatic thresholds, and the stimulus durations employed.
There was a significant difference between the chromatic contrast thresholds obtained using game 3 (lower thresholds) and that of the CCT Trivector test and the other two games. This was expected as the test design and the stimulus characteristics of game 3 were different compared with the other games and the CCT test; most notably by the use of a black outline for the stars, the absence of luminance noise to mask artefactual luminance clues, and the “odd one out” task procedure. Moreover, luminance noise can affect the appearance of a stimulus and in turn the estimated visual thresholds. Therefore, the lower thresholds obtained using game 3 could possibly be due to the absence of luminance noise.
In order to understand the variability in the spread of chromatic contrast thresholds on repeated administrations, the 95% LoA for all the tests were compared. The 95% LoA of the three games and the CCT Trivector test were comparable, except for the blue–yellow thresholds of game 3, which showed greater variability, indicating poorer repeatability. The test–retest variability is relatively high compared to the spread of scores, which may be a drawback. However, this finding is similar to the CCT Trivector test (Mollon & Regan, 2000), with the exception of the game 3 blue–yellow test, and given that the CCT Trivector test has been used to report statistically significant differences in color vision in people with diabetes (Gualtieri, Feitosa-Santana, Lago, Nishi, & Ventura, 2013), in those with above-community levels of exposure to occupational solvents (Costa et al., 2012), and in smokers (Fernandes & Santos, 2017), it may be reasoned that the tablet computer-based games have the same potential to provide clinically significant measures.
The red–green threshold findings also suggest that a psychophysical task can be nested within a game, requiring the participant to switch regularly between the game and the psychophysical task and to perform demanding cognitive tasks in the game component (in this case, intersecting the target with the plane while making judgments about speed, trajectory, and anticipated disruption due to wind), without unduly affecting the measured psychophysical thresholds.
Although there was no significant difference between the first and second sets of measurements, and hence the results were repeatable, the chromatic contrast thresholds were lower on the second administration of the test for all types of thresholds tested. This may represent a small learning effect. The improvement ranged from 2% to 10%, with a mean global improvement of 4% (corresponding to 1.5 u’v’ × 10–4) in the second administration. However, this difference is not clinically significant and is well within the 95% LoA for repeatability.
The analysis shows that small color differences along color axes, such as chromatic contrast thresholds, can be measured using the iPad mini with Retina display, with results comparable to those obtained using the CCT. The increasing sophistication of tablet devices and their display resolutions has enabled the development of apps that can be used in the assessment of visual function (Dorr et al., 2013; Kollbaum et al., 2014; Rodriguez-Vallejo et al., 2015). Looking ahead, the development of these kinds of vision apps is likely to enable testing with greater technical sophistication, such as within a gaming environment. This study’s findings also highlight that it is important to consider how more sophisticated forms of game tasks and psychophysical tasks may interact, particularly how measures of psychophysical thresholds may be affected.
Although this study provides a portable tablet computer-based vision testing app, it has certain limitations, as discussed here. It may not be possible to use this app interchangeably with other generations of tablet computers from the same or other manufacturers, or with other tablet computer operating systems (e.g., Android or Windows). This is due to differing manufacturing specifications and variations in the display screen technologies of other tablet computers and their color gamut sizes. For example, as reported by Dain, Kwan, and Wong (2016), a single stimulus look-up table cannot be used for different models of smartphones from the same manufacturer (the iPhone 4S and iPhone 5), as this may lead to significant reproduction errors due to the factors listed above. Therefore, care must be taken to separately calibrate and develop device-specific stimulus look-up tables when applying this vision test to other devices.
In summary, the three tablet computer-based games, with the exception of the blue–yellow component of game 3, have been found to provide estimates of chromatic contrast thresholds in a self-administrable and portable format. Additionally, games 1 and 2, in which the test and background stimulus are of similar shape and contain luminance noise, yield visual thresholds comparable with the CCT Trivector test. Their portable and self-administrable design will allow these tablet computer-based games to be used to assess chromatic contrast thresholds outside the research laboratory or in a routine clinical setting. The games presented in the current study were designed to assess normal age-related variations and acquired deficits in chromatic contrast sensitivity, not to detect and diagnose congenital color vision deficiencies. Further work would still be required to understand how tablet computer displays may be used to detect and diagnose congenital color vision deficiencies, which would require the stimuli to be presented along color confusion axes, as in the CCT (Mollon & Regan, 2000).
Compliance with ethical standards
None of the authors has any potential competing interests.
- Abramov, I., Hainline, L., Turkel, J., Lemerise, E., Smith, H., Gordon, J., & Petry, S. (1984). Rocket-ship psychophysics: Assessing visual functioning in young children. Investigative Ophthalmology & Visual Science, 25(11), 1307–1315.
- Aslam, T. M., Murray, I. J., Lai, M. Y. T., Linton, E., Tahir, H. J., & Parry, N. R. A. (2013). An assessment of a modern touch-screen tablet computer with reference to core physical characteristics necessary for clinical vision testing. Journal of the Royal Society Interface, 10(84), 20130239. doi: 10.1098/rsif.2013.0239
- Bodduluri, L., Boon, M. Y., & Dain, S. J. (2016). Evaluation of tablet computers for visual function assessment. Behavior Research Methods, 1–11. doi: 10.3758/s13428-016-0725-1
- Carkeet, A., & Goh, Y. T. (2016). Confidence and coverage for Bland–Altman limits of agreement and their approximate confidence intervals. Statistical Methods in Medical Research. doi: 10.1177/0962280216665419
- Costa, T. L., Barboni, M. T. S., de Araujo Moura, A. L., Bonci, D. M. O., Gualtieri, M., de Lima Silveira, L. C., & Ventura, D. F. (2012). Long-term occupational exposure to organic solvents affects color vision, contrast sensitivity and visual fields. PLoS ONE, 7(8), e42961.
- Cranwell, M. B., Pearce, B., Loveridge, C., & Hurlbert, A. C. (2015). Performance on the Farnsworth–Munsell 100-Hue Test is significantly related to nonverbal IQ. Investigative Ophthalmology & Visual Science, 56(5), 3171–3178. doi: 10.1167/iovs.14-16094
- Dain, S. J., & Ling, B. Y. (2009). Cognitive abilities of children on a gray seriation test. Optometry and Vision Science, 86(6), E701–E707. doi: 10.1097/OPX.0b013e3181a59d46
- Feitosa-Santana, C., Oiwa, N. N., Paramei, G. V., Bimler, D., Costa, M. F., Lago, M., … Ventura, D. F. (2006). Color space distortions in patients with type 2 diabetes mellitus. Visual Neuroscience, 23(3–4), 663–668. doi: 10.1017/s0952523806233546
- Fernandes, T. M. d. P., & Santos, N. A. d. (2017). Comparison of color discrimination in chronic heavy smokers and healthy subjects [version 1; referees: awaiting peer review]. F1000Research, 6, 85. doi: 10.12688/f1000research.10714.1
- Greenstein, V. C., Hood, D. C., Ritch, R., Steinberger, D., & Carr, R. E. (1989). S (blue) cone pathway vulnerability in retinitis pigmentosa, diabetes and glaucoma. Investigative Ophthalmology & Visual Science, 30(8), 1732–1737.
- Ishihara, S. (1917). Test for colour-blindness. Tokyo: Hongo Harukicho.
- MailOnline-Australia. (2015). Can YOU spot the odd one out? KukuKube puts colour vision to the test. Retrieved from http://www.dailymail.co.uk/sciencetech/article-3033455/How-good-colour-vision-KukuKube-app-tests-ability-subtle-differences-shade-leave-cross-eyed.html
- Mollon, J. D., & Regan, B. C. (2000). Cambridge Colour Test handbook, version 1.1. Cambridge Research Systems Ltd.
- Nguyen, L. C., Do, E. Y.-L., Chia, A., Wang, Y., & Duh, H. B.-L. (2014). DoDo game, a color vision deficiency screening test for young children. In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems. Toronto, Ontario, Canada.
- Ventura, D., Costa, M., Gualtieri, M., Nishi, M., Bernick, M., Bonci, D., & De Souza, J. (2003). Early vision loss in diabetic patients assessed by the Cambridge Colour Test. In J. D. Mollon, J. Pokorny, & K. Knoblauch (Eds.), Normal and defective colour vision (pp. 395–403). Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780198525301.003.0042
- Ventura, D., Silveira, L., Rodrigues, A., De Souza, J., Gualtieri, M., Bonci, D., & Costa, M. (2003). Preliminary norms for the Cambridge Colour Test. In J. D. Mollon, J. Pokorny, & K. Knoblauch (Eds.), Normal and defective colour vision (pp. 331–339). Oxford: Oxford University Press.