
Application of Audio and Video Processing Methods for Language Research and Documentation: The AVATecH Project

  • Conference paper
  • Published in: Human Language Technology Challenges for Computer Science and Linguistics (LTC 2011)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 8387)

Abstract

That all modern languages evolve and change is a well-known fact. Recently, however, this change has reached a pace never seen before, resulting in the loss of the vast amount of information encoded in every language. In order to preserve such rich heritage, and to carry out linguistic research, properly annotated recordings of world languages are necessary. Since creating those annotations is a very laborious task, taking up to 100 times longer than the length of the annotated media, innovative audio and video processing algorithms are needed to improve the efficiency and quality of the annotation process. This is the scope of the AVATecH project presented in this article.
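The effort figure quoted in the abstract (manual annotation taking up to 100 times the media length) can be made concrete with a back-of-the-envelope sketch; the function name and the default slowdown factor below are illustrative assumptions, not taken from the paper:

```python
def annotation_hours(media_minutes: float, slowdown: float = 100.0) -> float:
    """Estimate manual annotation effort in person-hours for a recording,
    assuming annotation takes `slowdown` times the media duration."""
    return media_minutes * slowdown / 60.0

# A single 30-minute field recording can thus demand roughly 50 person-hours
# of manual annotation work, which motivates automated pre-annotation.
print(annotation_hours(30))  # -> 50.0
```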

AVATecH is a joint project of the Max Planck and Fraunhofer Institutes, started in 2009 and funded by MPG and FhG. Some of the research leading to these results has received funding from the European Commission's 7th Framework Programme under grant agreement no. 238405 (CLARA).


Notes

  1. http://www.mpi.nl/dobes
  2. http://www.hrelp.org/
  3. http://www.paradisec.org.au/
  4. http://www.iis.fraunhofer.de/bf/bsy/fue/isyst
  5. https://www.idiap.ch/dataset/ami/
  6. http://tla.mpi.nl/projects_info/auvis


Author information

Correspondence to Przemyslaw Lenkiewicz.


Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Lenkiewicz, P. et al. (2014). Application of Audio and Video Processing Methods for Language Research and Documentation: The AVATecH Project. In: Vetulani, Z., Mariani, J. (eds) Human Language Technology Challenges for Computer Science and Linguistics. LTC 2011. Lecture Notes in Computer Science, vol 8387. Springer, Cham. https://doi.org/10.1007/978-3-319-08958-4_24


  • DOI: https://doi.org/10.1007/978-3-319-08958-4_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-08957-7

  • Online ISBN: 978-3-319-08958-4
