
DualKeepon: a human–robot interaction testbed to study linguistic features of speech

  • Hoang-Long Cao
  • Lars Christian Jensen
  • Xuan Nhan Nghiem
  • Huong Vu
  • Albert De Beir
  • Pablo Gomez Esteban
  • Greet Van de Perre
  • Dirk Lefeber
  • Bram Vanderborght
Original Research Paper

Abstract

In this paper, we present DualKeepon, a novel dual-robot testbed for carrying out pairwise comparisons of linguistic features of speech in human–robot interaction. Our solution, built around a modified version of the MyKeepon robotic toy developed by BeatBots, is a portable, open-source system that lets researchers set up experiments quickly and intuitively. We provide an online tutorial with all materials required to replicate the system. We present two human–robot interaction studies to demonstrate the testbed. The first study investigates how robots that use filled pauses are perceived. The second study investigates how social roles, realized through different prosodic and lexical speaking profiles, affect trust. Results show that the proposed testbed is a helpful tool for linguistic studies. Beyond the basic setup, advanced users can connect the system to other robot platforms, e.g., NAO and Pepper.
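As an illustration of the kind of pairwise speech-profile comparison the testbed supports, a similar contrast can be scripted on a NAO robot through the NAOqi text-to-speech service. The sketch below is not the testbed's own code; the robot IP address and the two example utterances (one fluent, one with a filled pause) are placeholder assumptions.

```python
# Minimal sketch: contrasting two speech profiles on NAO via NAOqi text-to-speech.
# Assumes the naoqi Python SDK is installed; the IP address and utterances are placeholders.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # placeholder robot address
tts = ALProxy("ALTextToSpeech", ROBOT_IP, 9559)

# Condition A: fluent utterance
tts.say("I think the answer is forty-two.")

# Condition B: same utterance with a filled pause and a slower speaking rate
tts.setParameter("speed", 80.0)
tts.say("I think, uhm, the answer is forty-two.")
```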

Keywords

Keepon · Social robot · Human–robot interaction · NAO · Low-cost robotics · Linguistics

Notes

Acknowledgements

The authors would like to acknowledge the helpful comments of the anonymous reviewers on the earlier versions of this paper. The work leading to these results has received funding from the EC FP7 project DREAM (grant no. 611391) and the ICON project ROBO-CURE.


Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  • Hoang-Long Cao¹
  • Lars Christian Jensen²
  • Xuan Nhan Nghiem¹
  • Huong Vu¹
  • Albert De Beir¹
  • Pablo Gomez Esteban¹
  • Greet Van de Perre¹
  • Dirk Lefeber¹
  • Bram Vanderborght¹

  1. Robotics and Multibody Mechanics Research Group, Vrije Universiteit Brussel and Flanders Make, Brussels, Belgium
  2. Department of Design and Communication, University of Southern Denmark, Sønderborg, Denmark
