Match Score Level Fusion of Face and Gait at a Distance

  • Bir Bhanu
  • Ju Han
Part of the Advances in Pattern Recognition book series (ACVPR)


This chapter introduces a video-based method for recognizing non-cooperating individuals at a distance who present a side view to the camera. Information from two biometric sources, the side face and gait, is integrated for recognition. For the side face, an enhanced side face image (ESFI), a higher-resolution image than one obtained directly from a single video frame, is constructed by integrating face information from multiple video frames. For gait, the gait energy image (GEI), a compact spatio-temporal representation of gait in video, is used to characterize human walking properties. Face and gait features are extracted separately from ESFI and GEI, respectively, using a combination of principal component analysis (PCA) and multiple discriminant analysis (MDA). They are then integrated at the match score level using different fusion strategies. The approach is tested on a database of video sequences of 45 people collected over seven months, and the different fusion methods are compared and analyzed. The experimental results show that (a) better face features are extracted from ESFI than from the original side face images; (b) synchronization of face and gait is not necessary for the face template (ESFI) and the gait template (GEI), since the synthetic match scores combine information from both; and (c) integrating information from the side face and gait is effective for human recognition in video.
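The two core operations described above, averaging silhouettes into a GEI template and combining face and gait match scores at the score level, can be sketched as follows. This is a minimal illustration, not the chapter's implementation: the weighted sum rule shown is only one of the fusion strategies the chapter compares, and the min-max normalization and `w_face` weight are assumptions introduced here for the example.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average aligned, size-normalized binary silhouettes over a gait
    cycle to form a Gait Energy Image (GEI)."""
    stack = np.stack([s.astype(float) for s in silhouettes], axis=0)
    return stack.mean(axis=0)

def fuse_sum_rule(face_scores, gait_scores, w_face=0.5):
    """Weighted sum-rule fusion of match scores from the two modalities.
    Scores are min-max normalized first; the weight w_face is a
    hypothetical parameter, not a value from the chapter."""
    def minmax(s):
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
    return w_face * minmax(face_scores) + (1 - w_face) * minmax(gait_scores)
```

Because the GEI already averages over a full gait cycle and the ESFI fuses several frames, the two templates summarize different time spans of the same sequence, which is why frame-level synchronization is not required before their scores are fused.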





Copyright information

© Springer-Verlag London Limited 2010

Authors and Affiliations

  1. Bourns College of Engineering, University of California, Riverside, USA
  2. Lawrence Berkeley National Laboratory, University of California, Berkeley, USA
