Application of Dynamic Features of the Pupil for Iris Presentation Attack Detection
This chapter presents a comprehensive study on the application of the stimulated pupillary light reflex to presentation attack detection (PAD) in iris recognition systems. A pupil, when stimulated by visible light in a predefined manner, may offer sophisticated dynamic liveness features that cannot be acquired from dead eyes or from static objects such as printed contact lenses, paper printouts, or prosthetic eyes. Modeling pupil dynamics requires a few seconds of observation under varying light conditions, which can be supplied by a visible light source added to the near-infrared illuminants already used in iris image acquisition. The central element of the presented approach is accurate modeling and classification of pupil dynamics, which makes mimicking an actual eye reaction difficult. This chapter discusses new data-driven models of pupil dynamics based on recurrent neural networks and compares their PAD performance to solutions based on the parametric Clynes–Kohn model combined with various classification techniques. Experiments with 166 distinct eyes of 84 subjects show that the best data-driven solution, based on long short-term memory, correctly recognized 99.97% of attack presentations and 98.62% of normal pupil reactions. The approach using the parametric Clynes–Kohn model of pupil dynamics perfectly recognized abnormalities and correctly recognized 99.97% of normal pupil reactions on the same dataset with the same evaluation protocol. The data-driven solutions thus compare favorably to the parametric approach, which requires model identification in exchange for slightly better performance. We also show that observation times may be as short as 3 s when using the parametric model, and as short as 2 s when applying the recurrent neural network, without substantial loss in accuracy.
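The data-driven pipeline can be pictured as a recurrent network that consumes the measured pupil-size time series and emits a single liveness score. The following is a minimal NumPy sketch of that idea, with randomly initialized (untrained) weights and a hypothetical hidden-state size; the chapter's actual LSTM architecture and trained weights are not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Minimal LSTM cell plus a logistic read-out, illustrating how a
    pupil-size time series maps to a single score in (0, 1).
    Weights are random here; a real PAD model would be trained."""
    def __init__(self, hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        # input at each time step is the scalar pupil size
        self.W = rng.normal(0.0, 0.1, (4 * hidden, hidden + 1))
        self.b = np.zeros(4 * hidden)
        self.w_out = rng.normal(0.0, 0.1, hidden)
        self.hidden = hidden

    def score(self, series):
        h = np.zeros(self.hidden)
        c = np.zeros(self.hidden)
        for x in series:
            z = self.W @ np.concatenate(([x], h)) + self.b
            i, f, g, o = np.split(z, 4)   # input, forget, cell, output gates
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        # final hidden state -> scalar liveness score
        return sigmoid(self.w_out @ h)
```

A 2–3 s observation sampled at typical iris-camera frame rates yields a series of a few dozen samples, short enough for this kind of recurrent read-out.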
Along with this chapter we also offer: (a) all time series representing pupil dynamics for the 166 distinct eyes used in this study, (b) the weights of the trained recurrent neural network offering the best performance, (c) the source code of the reference PAD implementation based on the Clynes–Kohn parametric model, and (d) all PAD scores needed to reproduce the plots presented in this chapter. To the best of our knowledge, this chapter proposes the first database of pupil measurements dedicated to presentation attack detection and the first evaluation of recurrent neural network-based modeling of pupil dynamics for PAD.
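For the parametric route, model identification means fitting a pupil-reaction curve to the measured series and judging how eye-like the result is. Below is a hypothetical, simplified sketch (a single exponential constriction with latency, not the actual Clynes–Kohn formulation): the fitted constriction amplitude and the fit residual can together serve as PAD features, since a static object yields a near-zero amplitude while a non-physiological series fits poorly.

```python
import numpy as np

def fit_pupil_reaction(t, r, delays, taus):
    """Fit r(t) ~ r0 - amp * (1 - exp(-(t - delay)/tau)) for t > delay.
    Coarse grid search over (delay, tau); baseline r0 and amplitude amp
    are solved in closed form by linear least squares at each grid point.
    Returns (best RMS residual, fitted amplitude) as candidate PAD features.
    This is a simplified stand-in, not the Clynes-Kohn model itself."""
    best_rms, best_amp = np.inf, 0.0
    for d in delays:
        for tau in taus:
            shape = 1.0 - np.exp(-np.maximum(t - d, 0.0) / tau)
            A = np.column_stack([np.ones_like(t), -shape])
            coef, *_ = np.linalg.lstsq(A, r, rcond=None)
            rms = np.sqrt(np.mean((A @ coef - r) ** 2))
            if rms < best_rms:
                best_rms, best_amp = rms, coef[1]
    return best_rms, best_amp
```

Solving the two linear parameters in closed form keeps the search two-dimensional, so even a coarse grid identifies the model within the short observation window the chapter considers.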
The authors would like to thank Mr. Rafal Brize and Mr. Mateusz Trokielewicz, who collected the iris images in varying light conditions under the supervision of the first author. The application of the Kohn and Clynes model was inspired by the research of Dr. Marcin Chochowski, who used the parameters of this model as individual features in biometric recognition. The first author, together with Prof. Pacut and Dr. Chochowski, holds US Patent No. 8,061,842, which partially covers the parametric model-based PAD ideas presented in this work.
- 1. ISO/IEC: Information technology – Biometric presentation attack detection – Part 1: Framework, ISO/IEC 30107-1 (2016)
- 3. Wei Z, Qiu X, Sun Z, Tan T (2008) Counterfeit iris detection based on texture analysis. In: International conference on pattern recognition, pp 1–4. https://doi.org/10.1109/ICPR.2008.4761673
- 4. Doyle JS, Bowyer KW, Flynn PJ (2013) Variation in accuracy of textured contact lens detection based on sensor and lens pattern. In: IEEE international conference on biometrics: theory applications and systems (BTAS), pp 1–7. https://doi.org/10.1109/BTAS.2013.6712745
- 7. Zhang L, Zhou Z, Li H (2012) Binary Gabor pattern: an efficient and robust descriptor for texture classification. In: IEEE international conference on image processing (ICIP), pp 81–84. https://doi.org/10.1109/ICIP.2012.6466800
- 9. Dalal N, Triggs B (2005) Histograms of oriented gradients for human detection. In: IEEE international conference on computer vision and pattern recognition (CVPR), vol 1, pp 886–893. https://doi.org/10.1109/CVPR.2005.177
- 13. Yambay D, Becker B, Kohli N, Yadav D, Czajka A, Bowyer KW, Schuckers S, Singh R, Vatsa M, Noore A, Gragnaniello D, Sansone C, Verdoliva L, He L, Ru Y, Li H, Liu N, Sun Z, Tan T (2017) LivDet iris 2017 – iris liveness detection competition 2017. In: IEEE international joint conference on biometrics (IJCB), pp 1–6
- 16. Pacut A, Czajka A (2006) Aliveness detection for iris biometrics. In: IEEE international Carnahan conferences security technology (ICCST), pp 122–129. https://doi.org/10.1109/CCST.2006.313440
- 17. Czajka A, Pacut A, Chochowski M (2011) Method of eye aliveness testing and device for eye aliveness testing. United States Patent, US 8,061,842
- 20. Sutra G, Dorizzi B, Garcia-Salicetti S, Othman N (2017) A biometric reference system for iris. OSIRIS version 4.1. http://svnext.it-sudparis.eu/svnview2-eph/ref_syst/iris_osiris_v4.1/. Accessed 1 Aug 2017
- 24. Gers FA, Schmidhuber J (2000) Recurrent nets that time and count. In: International joint conference on neural network (IJCNN), vol 3, pp 189–194. https://doi.org/10.1109/IJCNN.2000.861302
- 26. Cho K, van Merriënboer B, Bahdanau D, Bengio Y (2014) On the properties of neural machine translation: encoder–decoder approaches. In: Workshop on syntax, semantics and structure in statistical translation (SSST), pp 1–6
- 27. Hinton G, Srivastava N, Swersky K (2017) Neural networks for machine learning. Lecture 6a: overview of mini-batch gradient descent. http://www.cs.toronto.edu/~tijmen/csc321. Accessed 28 Apr 2017
- 28. Glorot X, Bengio Y (2010) Understanding the difficulty of training deep feedforward neural networks. In: Teh YW, Titterington M (eds) International conference on artificial intelligence and statistics (AISTATS), Proceedings of machine learning research, vol 9. PMLR, Chia Laguna Resort, Sardinia, Italy, pp 249–256