Real-Time Feedback System for Monitoring and Facilitating Discussions

  • Sanat Sarda
  • Martin Constable
  • Justin Dauwels
  • Shoko Dauwels (Okutsu)
  • Mohamed Elgendi
  • Zhou Mengyu
  • Umer Rasheed
  • Yasir Tahir
  • Daniel Thalmann
  • Nadia Magnenat-Thalmann
Conference paper

Abstract

In this chapter, we present a system that provides real-time feedback about an ongoing discussion. Various speech statistics, such as speaking length, number of speaker turns and speaking-turn duration, are computed and displayed in real time. In social monitoring, such statistics have been used to interpret the talking mannerisms of people and to gain insights into human social characteristics and behaviour. However, such analysis is usually conducted offline, after the discussion has ended. In contrast, our system analyses the speakers and provides feedback to them in real time during the discussion, a novel approach with many potential applications. The proposed system consists of portable, easy-to-use equipment for recording the conversations. A user-friendly graphical user interface displays statistics about the ongoing discussion, and customized individual feedback can be provided to participants during the conversation. Such a closed-loop design may help individuals contribute effectively to the group discussion, potentially leading to more productive and perhaps shorter meetings. Here we present preliminary results on two-person face-to-face discussions. In the longer term, our system may prove useful, e.g. for coaching purposes and for facilitating business meetings.
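The chapter does not give implementation details, but the statistics named above (speaking length, speaker turns, turn duration) can be illustrated with a short sketch. The code below is an assumption-laden illustration, not the authors' implementation: it takes per-frame voice-activity labels (0 = silence, 1 or 2 = the active speaker) at an assumed frame length and accumulates the per-speaker statistics incrementally, as a real-time system might.

```python
# Illustrative sketch (not from the chapter): computing speaking length,
# number of turns, and turn durations from per-frame voice-activity labels.
# The 0.1 s frame length and the two-speaker labelling scheme are assumptions.

FRAME_SEC = 0.1  # assumed analysis frame length in seconds

def speech_stats(vad, frame_sec=FRAME_SEC):
    """vad: iterable of per-frame labels (0 = silence, 1 or 2 = active speaker)."""
    stats = {spk: {"speaking_time": 0.0, "turns": 0, "turn_durations": []}
             for spk in (1, 2)}
    current, run = None, 0
    for label in list(vad) + [0]:      # trailing sentinel flushes the last turn
        if label == current:
            run += 1
            continue
        if current in stats:           # a speaking turn just ended
            duration = run * frame_sec
            stats[current]["turns"] += 1
            stats[current]["turn_durations"].append(duration)
            stats[current]["speaking_time"] += duration
        current, run = label, 1
    return stats

# Example: speaker 1 talks, pause, speaker 2 talks, speaker 1 interjects, ...
frames = [1, 1, 1, 0, 2, 2, 1, 1, 0, 0, 2]
s = speech_stats(frames)
```

In a live system the same accumulation would run on streaming voice-activity decisions rather than a finished list, so the displayed statistics update as the discussion unfolds.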

Keywords

Graphical user interface, Audio signal, Video signal, Voice activity detection, Coaching session

Notes

Acknowledgments

This research project is supported in part by the Institute for Media Innovation (Seed Grant M4080824) and the Nanyang Business School, both at Nanyang Technological University (NTU), Singapore. We would like to thank Mr. Vincent Teo and his colleagues at the Wee Kim Wee School of Communication of NTU for the technical support. We are grateful to the lab managers and colleagues at the Control Engineering Lab at NTU for their valuable support, and we thank the participants for the test recordings.


Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Sanat Sarda (1)
  • Martin Constable (2)
  • Justin Dauwels (1)
  • Shoko Dauwels (Okutsu) (3)
  • Mohamed Elgendi (4)
  • Zhou Mengyu (2)
  • Umer Rasheed (1)
  • Yasir Tahir (4)
  • Daniel Thalmann (4)
  • Nadia Magnenat-Thalmann (4)
  1. School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
  2. School of Art, Design, and Media, Nanyang Technological University, Singapore
  3. Centre of Innovation Research in Cultural Intelligence and Leadership (CIRCQL), Nanyang Business School, Singapore
  4. Institute for Media Innovation, Nanyang Technological University, Singapore