Multimodal Joke Generation and Paralinguistic Personalization for a Socially-Aware Robot

  • Hannes Ritschel
  • Thomas Kiderle
  • Klaus Weber
  • Florian Lingenfelser
  • Tobias Baur
  • Elisabeth André
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12092)

Abstract

Robot humor is typically scripted by a human author. This work presents a socially-aware robot that generates multimodal jokes for real-time human-robot dialog, including appropriate prosody and non-verbal behaviors. The robot personalizes its paralinguistic presentation strategy with socially-aware reinforcement learning, which interprets human social signals and aims to maximize user amusement.
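
The adaptation loop described in the abstract can be pictured as a bandit-style learner: the robot selects a paralinguistic presentation strategy for a joke, reads the listener's social signals as a reward, and shifts its preferences accordingly. The Python sketch below is purely illustrative; the action names, the signal-to-reward weights, and the epsilon-greedy learner are our assumptions, not the authors' implementation.

```python
import random

# Illustrative sketch only: a bandit-style learner over hypothetical
# paralinguistic presentation strategies. All names and weights are assumptions.
ACTIONS = ["pause_before_punchline", "raised_pitch", "neutral_prosody"]

EPSILON = 0.1  # exploration rate
ALPHA = 0.2    # learning rate
q_values = {a: 0.0 for a in ACTIONS}

def estimate_amusement(smile_prob: float, laughter_prob: float) -> float:
    """Map recognized social signals (e.g., smile/laughter classifier
    outputs) to a scalar reward in [0, 1]; the weights are assumptions."""
    return 0.6 * smile_prob + 0.4 * laughter_prob

def select_strategy() -> str:
    # Epsilon-greedy: mostly exploit the best-known strategy, sometimes explore.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(q_values, key=q_values.get)

def update(strategy: str, reward: float) -> None:
    # Move the value estimate toward the observed amusement.
    q_values[strategy] += ALPHA * (reward - q_values[strategy])

# One joke-telling turn: choose a delivery, observe the user, learn.
strategy = select_strategy()
smile, laughter = 0.8, 0.3  # stand-ins for real-time recognizer outputs
update(strategy, estimate_amusement(smile, laughter))
```

A tabular epsilon-greedy learner is the simplest stand-in here; a full system would condition on context and use function approximation, but the loop of act, observe social signals, and update remains the same.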

Keywords

Robot humor · Non-verbal behavior · Personalization

Acknowledgment

This research was funded by the European Union PRESENT project, grant agreement No. 856879.

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Hannes Ritschel¹ (corresponding author)
  • Thomas Kiderle¹
  • Klaus Weber¹
  • Florian Lingenfelser¹
  • Tobias Baur¹
  • Elisabeth André¹

  1. Human-Centered Multimedia, Augsburg University, Augsburg, Germany
