Multimedia Tools and Applications, Volume 78, Issue 21, pp 30373–30395

Human authentication based on fusion of thermal and visible face images

  • Ayan Seal
  • Chinmaya Panigrahy
Article

Abstract

In the recent past, a considerable amount of research has been devoted to improving the performance of face authentication systems in uncontrolled environments, particularly under varying illumination. However, performance has not improved significantly because visible face images depend on illumination. To overcome this limitation of visible face images, researchers have turned to infrared (IR) face images; these, however, are also not completely independent of illumination. Fusion of visible and thermal face images is therefore an active alternative in the research community. In this work, a fusion method is introduced that fuses visible and IR images for face authentication. The proposed method relies on the translation-invariant à-trous wavelet transform and the fractal dimension estimated with the differential box-counting method. Five popular fusion metrics, namely the ratio of spatial frequency error, normalized mutual information, edge information, the universal image quality index, and the extended frequency comparison index, are used to measure the effectiveness of the proposed fusion algorithm quantitatively against four state-of-the-art methods. A new similarity measure is also proposed to check how close a fused face image is to the others. All experiments are performed on three databases, namely the IRIS benchmark face database, the UGC-JU face database, and the SCface face database. The results show that the proposed fusion method, together with the similarity measure for face authentication, outperforms all four state-of-the-art methods in terms of accuracy, precision, and recall.
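The differential box-counting (DBC) estimator at the heart of the proposed fusion method can be sketched as follows. This is a minimal illustration of the standard Sarkar–Chaudhuri formulation, not the authors' exact implementation; the box sizes, the ceil-based box count, and the least-squares fit are common choices rather than parameters taken from the paper.

```python
import numpy as np

def fractal_dimension_dbc(img, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a square grayscale image with
    the classic differential box-counting method. `img` is a 2-D uint8
    array whose side length is divisible by every box size."""
    M = img.shape[0]
    G = 256  # number of gray levels
    log_inv_r, log_Nr = [], []
    for s in box_sizes:
        h = s * G / M                      # box height along the intensity axis
        Nr = 0
        for i in range(0, M, s):
            for j in range(0, M, s):
                block = img[i:i + s, j:j + s]
                # number of boxes needed to cover this block's intensity span
                l = int(np.ceil((int(block.max()) + 1) / h))
                k = int(np.ceil((int(block.min()) + 1) / h))
                Nr += l - k + 1
        r = s / M
        log_inv_r.append(np.log(1.0 / r))
        log_Nr.append(np.log(Nr))
    # the fractal dimension is the slope of log(N_r) versus log(1/r)
    slope, _ = np.polyfit(log_inv_r, log_Nr, 1)
    return slope
```

For a perfectly flat image every block needs exactly one box, so N_r = (1/r)^2 and the estimate is 2; rougher textures yield values between 2 and 3, which is what makes the estimate usable as a per-region activity measure when deciding which source image should dominate the fused result.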

Keywords

Visible face · Thermal face · Fractal dimension · Maximum bipartite matching · Image fusion

Notes

Acknowledgments

Ayan Seal thanks Media Lab Asia, Ministry of Electronics and Information Technology, Government of India, for providing a young faculty research fellowship. Portions of the research in this paper use the SCface database of facial images. Credit is hereby given to the University of Zagreb, Faculty of Electrical Engineering and Computing, for providing the database of facial images. We thank the anonymous reviewers for their many insightful comments and suggestions.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, India
