Abstract
Well-annotated dance media are an essential part of a nation's identity, transcending cultural and language barriers. Many dance video archives suffer from serious authoring and access problems because of the multimodal nature of human communication and the complex spatio-temporal relationships that exist between dancers. A multimodal dance document consists of video of dancers in space and time, their dance steps expressed through gestures and emotions, and the accompanying song and music. This work presents the architecture of an annotation system that captures information directly through sensors, then compares and interprets it using a context model and a user model, in order to annotate, index and access multimodal documents.
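The pipeline the abstract outlines (sensor capture, interpretation against context and user models, then annotation and indexing) could be sketched roughly as follows. All class names, function names, and the dict-based "models" here are hypothetical illustrations under simplifying assumptions, not the authors' actual system.

```python
from dataclasses import dataclass

@dataclass
class SensorSample:
    """A raw reading from a motion or audio sensor (hypothetical schema)."""
    timestamp: float
    modality: str   # e.g. "gesture", "audio", "position"
    value: dict

@dataclass
class Annotation:
    """A semantic label attached to a time span of the dance video."""
    start: float
    end: float
    label: str

def interpret(samples, context_model, user_model):
    """Map raw sensor samples to annotations via context/user knowledge.

    For illustration, the 'models' are plain dicts mapping raw gesture
    codes to dance-step labels, with the user model taking precedence;
    a real system would use far richer representations.
    """
    annotations = []
    for s in samples:
        code = s.value.get("code")
        label = user_model.get(code) or context_model.get(code)
        if label:
            # Assume each recognized gesture spans one second of video.
            annotations.append(Annotation(s.timestamp, s.timestamp + 1.0, label))
    return annotations

# Usage: one sample resolved by the user model, one by the context model.
samples = [
    SensorSample(0.0, "gesture", {"code": "g1"}),
    SensorSample(2.5, "gesture", {"code": "g2"}),
]
index = interpret(samples, context_model={"g2": "spin"}, user_model={"g1": "arabesque"})
```

The resulting `Annotation` spans could then serve as the index entries used to retrieve segments of the multimodal document.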
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Kannan, R., Andres, F., Ferri, F., Grifoni, P. (2011). Towards Multimodal Capture, Annotation and Semantic Retrieval from Performing Arts. In: Abraham, A., Mauri, J.L., Buford, J.F., Suzuki, J., Thampi, S.M. (eds) Advances in Computing and Communications. ACC 2011. Communications in Computer and Information Science, vol 193. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-22726-4_10
Print ISBN: 978-3-642-22725-7
Online ISBN: 978-3-642-22726-4