
Towards Multimodal Capture, Annotation and Semantic Retrieval from Performing Arts

  • Rajkumar Kannan
  • Frederic Andres
  • Fernando Ferri
  • Patrizia Grifoni
Part of the Communications in Computer and Information Science book series (CCIS, volume 193)

Abstract

Well-annotated dance media are an essential part of a nation's identity, transcending cultural and language barriers. Many dance video archives present serious authoring and access problems because of the multimodal nature of human communication and the complex spatio-temporal relationships that exist between dancers. A multimodal dance document comprises video of the dancers in space and time, their dance steps expressed through gestures and emotions, and the accompanying song and music. This work presents the architecture of an annotation system that captures information directly through sensors and compares and interprets it using a context model and a user model in order to annotate, index, and access multimodal documents.
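Only the abstract is reproduced here, but as a rough illustration of the pipeline it describes (sensor capture, interpretation against a context/user model, annotation, and semantic retrieval), the following Python sketch models a multimodal dance document. All names (SensorReading, Annotation, MultimodalDanceDocument) and the interpreter interface are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the capture-annotate-retrieve pipeline outlined
# in the abstract; all class and field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Tuple


@dataclass
class SensorReading:
    """One time-stamped capture from a single modality."""
    timestamp: float          # seconds from the start of the performance
    modality: str             # e.g. "video", "gesture", "audio"
    payload: Dict[str, Any]   # raw feature values from the capture device


@dataclass
class Annotation:
    """A semantic label attached to a time interval of the document."""
    start: float
    end: float
    label: str                # e.g. "spin", "joy", "chorus"
    confidence: float


@dataclass
class MultimodalDanceDocument:
    """Video, gesture/emotion, and music streams plus their annotations."""
    readings: List[SensorReading] = field(default_factory=list)
    annotations: List[Annotation] = field(default_factory=list)

    def annotate(
        self,
        interpreter: Callable[[SensorReading, Dict[str, Any]], Tuple[str, float]],
        context: Dict[str, Any],
    ) -> None:
        """Interpret each raw reading against a context/user model."""
        for reading in self.readings:
            label, confidence = interpreter(reading, context)
            self.annotations.append(
                Annotation(reading.timestamp, reading.timestamp, label, confidence)
            )

    def query(self, label: str) -> List[Annotation]:
        """Retrieve annotated intervals by semantic label."""
        return [a for a in self.annotations if a.label == label]
```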

Keywords

Multimodal data · Semantic retrieval · Sensors · Multimedia indexing



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Rajkumar Kannan¹
  • Frederic Andres²
  • Fernando Ferri³
  • Patrizia Grifoni³
  1. Bishop Heber College (Autonomous), Tiruchirappalli, India
  2. National Institute of Informatics, Tokyo, Japan
  3. IRPPS-CNR, Rome, Italy
