Abstract
Robot humor is typically scripted by humans. This work presents a socially-aware robot that generates multimodal jokes for use in real-time human-robot dialogs, including appropriate prosody and non-verbal behaviors. It personalizes the paralinguistic presentation strategy with socially-aware reinforcement learning, which interprets human social signals and aims to maximize user amusement.
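The adaptation loop described above can be sketched, under assumptions, as a simple epsilon-greedy bandit over paralinguistic presentation strategies, where the reward is an amusement estimate derived from observed social signals. All names (the strategy set, `amusement_reward`, its weighting) are illustrative, not the paper's actual method or signal-processing pipeline.

```python
import random

# Illustrative set of paralinguistic presentation strategies (hypothetical).
STRATEGIES = ["neutral", "ironic_prosody", "pause_before_punchline", "smile_and_gaze"]

class PresentationAdapter:
    """Epsilon-greedy selection of a presentation strategy per joke."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.value = {s: 0.0 for s in STRATEGIES}   # estimated amusement per strategy
        self.count = {s: 0 for s in STRATEGIES}

    def choose(self):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if random.random() < self.epsilon:
            return random.choice(STRATEGIES)
        return max(STRATEGIES, key=lambda s: self.value[s])

    def update(self, strategy, reward):
        # Incremental mean update of the estimated amusement for this strategy.
        self.count[strategy] += 1
        self.value[strategy] += (reward - self.value[strategy]) / self.count[strategy]

def amusement_reward(smile_intensity, laughter_detected):
    # Hypothetical reward: smile intensity in [0, 1] plus a laughter bonus.
    return smile_intensity + (1.0 if laughter_detected else 0.0)
```

In use, the robot would tell a joke with the chosen strategy, read the user's reaction (e.g., smiles, laughter) from a social-signal recognizer, convert it to a scalar reward, and call `update` so future jokes drift toward the strategies this particular user finds funniest.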
Acknowledgment
This research was funded by the European Union PRESENT project, grant agreement No. 856879.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Ritschel, H., Kiderle, T., Weber, K., Lingenfelser, F., Baur, T., André, E. (2020). Multimodal Joke Generation and Paralinguistic Personalization for a Socially-Aware Robot. In: Demazeau, Y., Holvoet, T., Corchado, J., Costantini, S. (eds) Advances in Practical Applications of Agents, Multi-Agent Systems, and Trustworthiness. The PAAMS Collection. PAAMS 2020. Lecture Notes in Computer Science(), vol 12092. Springer, Cham. https://doi.org/10.1007/978-3-030-49778-1_22
Print ISBN: 978-3-030-49777-4
Online ISBN: 978-3-030-49778-1