Auditory Browsing Interface of Ambient and Parallel Sound Expression for Supporting One-to-many Communication

  • Tomoko Yonezawa
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9189)

Abstract

In this paper, we introduce an auditory browsing system that supports one-to-many communication in parallel with an ongoing discourse, lecture, or presentation. From the viewpoint of active participation, the audience's live reactions should be reflected in the main speech. To browse the numerous live comments from the audience, the speaker leans her/his head toward a particular section of a virtual audience group. We adopt the metaphor of “looking inside”: the audience's voice comments are repositioned and overlaid in the auditory space according to the length of each voice, regardless of the real audience members' seating, and the speaker browses them by facing the corresponding direction. As a result, the speaker could browse the audience's comments and showed communicative behaviors when she/he was interested in the utterances of a particular group of the audience.
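A minimal sketch in Python may make the “looking inside” interaction concrete: the speaker's head yaw selects one sector of the virtual audience, voices in the focused sector are mixed louder than the ambient background, and each sector is panned across the stereo field. All names, the sector count, and the gain values below are hypothetical assumptions for illustration, not the implementation described in this paper.

```python
# Hypothetical sketch of the "looking inside" browsing metaphor: head yaw
# picks a sector of a *virtual* audience, whose comments are overlaid with
# sector-dependent gain and stereo pan. Not the authors' implementation.
import math
from dataclasses import dataclass

N_SECTORS = 5          # assumed: virtual audience split into 5 angular sectors
FIELD_DEG = 120.0      # assumed: horizontal field covered by the virtual audience

@dataclass
class VoiceComment:
    text: str
    duration_s: float  # length of the recorded voice comment
    sector: int        # virtual position (assigned by duration, not real seat)

def sector_from_yaw(yaw_deg: float) -> int:
    """Map head yaw (negative = left) to a virtual audience sector index."""
    half = FIELD_DEG / 2.0
    clamped = max(-half, min(half, yaw_deg))
    frac = (clamped + half) / FIELD_DEG          # 0.0 (left) .. 1.0 (right)
    return min(N_SECTORS - 1, int(frac * N_SECTORS))

def mixing_gains(focused: int) -> list[float]:
    """Raise the focused sector above an ambient level, so unfocused
    comments stay audible as background sound."""
    return [1.0 if s == focused else 0.25 for s in range(N_SECTORS)]

def stereo_pan(sector: int) -> tuple[float, float]:
    """Constant-power pan: sector 0 is hard left, the last sector hard right."""
    pos = sector / (N_SECTORS - 1)               # 0.0 .. 1.0
    angle = pos * math.pi / 2.0
    return math.cos(angle), math.sin(angle)      # (left gain, right gain)

# Usage: the speaker leans toward the left side of the virtual audience.
comments = [
    VoiceComment("Could you repeat the last slide?", 2.1, sector=0),
    VoiceComment("Great example!", 0.9, sector=2),
    VoiceComment("What about noisy rooms?", 1.8, sector=4),
]
focused = sector_from_yaw(yaw_deg=-40.0)
gains = mixing_gains(focused)
for c in comments:
    left, right = stereo_pan(c.sector)
    g = gains[c.sector]
    print(f"{c.text!r}: gain={g:.2f}, L={g * left:.2f}, R={g * right:.2f}")
```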

Keywords

Auditory space · One-to-many parallel communication · Browsing interface · Audience interaction

Acknowledgement

This research was supported in part by KAKENHI 24300047 and KAKENHI 25700021. The author would like to thank the participants in the experiment.


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Kansai University, Takatsuki, Japan
