
Interpreting Dynamic Meanings by Integrating Gesture and Posture Recognition System

  • Omer Rashid Ahmed
  • Ayoub Al-Hamadi
  • Bernd Michaelis
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6468)

Abstract

Integrating information from different systems supports enhanced functionality; however, it requires rigorously pre-determined criteria for the fusion. This paper proposes a novel approach that determines the integration criteria using a particle filter to fuse hand gesture and posture recognition systems at the decision level. Decision-level fusion requires the classification of hand gesture and posture symbols: a Hidden Markov Model (HMM) classifies alphabets and numbers in the hand gesture recognition system, whereas the posture recognition system classifies ASL finger-spelling signs (alphabets and numbers) using a Support Vector Machine (SVM). These classification results are fed into the integration framework to compute contribution weights. For this purpose, the Condensation algorithm approximates the optimal a-posteriori probability from the a-priori probability and a Gaussian-based likelihood function, making the weights independent of classification ambiguities. Considering recognition as a problem of regular grammar, we develop production rules based on a context-free grammar (CFG) for a restaurant scenario. On the basis of the contribution weights, the recognized outcomes are mapped onto the CFG rules to infer meaningful expressions. Experiments on 500 different combinations of restaurant orders yield an overall inference accuracy of 98.3%, which demonstrates the significance of the proposed approach.
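The abstract describes two key steps: Condensation-based (particle filter) estimation of contribution weights, and mapping of recognized symbols onto CFG production rules. The minimal Python sketch below illustrates both under stated assumptions only; the functions `contribution_weights` and `parse_order`, the classifier scores, the Gaussian spread `sigma`, the particle count, and the toy grammar are hypothetical placeholders, not the authors' implementation or the parameters reported in the paper.

```python
import numpy as np

# Illustrative Condensation-style weighting step and toy CFG check.
# All values and rules below are hypothetical placeholders, not taken
# from the paper.

def contribution_weights(gesture_score, posture_score,
                         n_particles=200, sigma=0.1, rng=None):
    """Approximate a posterior over the gesture modality's contribution weight.

    gesture_score / posture_score: confidences of the HMM (gesture) and
    SVM (posture) classifiers for the current symbol, in [0, 1].
    Returns (w_gesture, w_posture) summing to 1.
    """
    rng = rng or np.random.default_rng(0)
    # Prior: particles for the gesture weight drawn uniformly in [0, 1].
    particles = rng.uniform(0.0, 1.0, n_particles)
    # Gaussian likelihood: particles whose implied fused confidence lies
    # close to the better single-modality confidence receive higher weight.
    fused = particles * gesture_score + (1.0 - particles) * posture_score
    target = max(gesture_score, posture_score)
    likelihood = np.exp(-0.5 * ((fused - target) / sigma) ** 2)
    likelihood /= likelihood.sum()
    w_gesture = float(np.sum(particles * likelihood))  # posterior mean
    return w_gesture, 1.0 - w_gesture

# Toy restaurant-order grammar, e.g.
#   ORDER    -> ITEM QUANTITY
#   ITEM     -> 'A' | 'B' | 'C'     (letters spelled by gesture/posture)
#   QUANTITY -> '1' | ... | '9'
ITEMS, QUANTITIES = set("ABC"), set("123456789")

def parse_order(symbols):
    """Accept a two-symbol sequence ITEM QUANTITY, else return None."""
    if len(symbols) == 2 and symbols[0] in ITEMS and symbols[1] in QUANTITIES:
        return f"order: item {symbols[0]}, quantity {symbols[1]}"
    return None

if __name__ == "__main__":
    w_g, w_p = contribution_weights(gesture_score=0.82, posture_score=0.64)
    # Take each symbol from the modality with the larger contribution weight.
    symbol = "A" if w_g >= w_p else "B"
    print(f"weights: gesture={w_g:.2f}, posture={w_p:.2f}")
    print(parse_order([symbol, "3"]))
```

In this sketch the posterior mean of the particle weights decides how much each modality contributes to a single decision; a fuller treatment would resample and propagate the particles over successive frames, as the Condensation algorithm does, rather than drawing a fresh uniform prior for every symbol.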

Keywords

Gesture Recognition · Hand Gesture · Context-Free Grammar · Hand Gesture Recognition · Meaningful Expression



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Omer Rashid Ahmed ¹
  • Ayoub Al-Hamadi ¹
  • Bernd Michaelis ¹
  1. Institute for Electronics, Signal Processing and Communications (IESK), Otto-von-Guericke-University Magdeburg, Germany
