Increasing the Role of Data Analytics in m-Learning Conversational Applications

Chapter
Part of the Lecture Notes on Data Engineering and Communications Technologies book series (LNDECT, volume 11)

Abstract

Technological integration is currently a key factor in teaching and learning. New handheld interaction devices (such as smartphones and tablets) are opening new learning scenarios that require more sophisticated applications and learning strategies. This chapter focuses on the wide variety of educational applications that multimodal conversational systems offer. We also describe a framework based on conversational interfaces for mobile learning that enhances the learning process and experience. Our approach focuses on the use of NLP techniques, such as speech and text analytics, to adapt and personalize students' conversational interfaces. Using this framework, we have developed a practical app that offers different kinds of educational exercises and academic information, which can be easily adapted to the pedagogical contents and the students' progress.
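The chapter describes the framework at the architectural level rather than as code. As a rough illustration of the progress-based adaptation the abstract refers to, the following minimal Python sketch keeps a per-topic record of student scores and maps estimated mastery to an exercise difficulty level. All class names, fields, and thresholds are hypothetical and not taken from the chapter.

```python
"""Illustrative sketch of progress-based exercise adaptation.

Hypothetical names and thresholds; the chapter does not publish code.
"""
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class StudentModel:
    # Rolling record of exercise scores (0.0-1.0) per topic.
    scores: dict = field(default_factory=dict)

    def update(self, topic: str, score: float) -> None:
        self.scores.setdefault(topic, []).append(score)

    def mastery(self, topic: str) -> float:
        history = self.scores.get(topic, [])
        return mean(history) if history else 0.0


def next_difficulty(model: StudentModel, topic: str) -> str:
    """Map estimated mastery to the difficulty of the next exercise."""
    m = model.mastery(topic)
    if m < 0.4:
        return "basic"
    if m < 0.75:
        return "intermediate"
    return "advanced"


if __name__ == "__main__":
    model = StudentModel()
    model.update("fractions", 0.3)
    model.update("fractions", 0.6)
    print(next_difficulty(model, "fractions"))  # mean 0.45 -> "intermediate"
```

In a full system of the kind the chapter describes, the score updates would be fed by the conversational interface (e.g., from speech or text analysis of the student's answers) rather than set by hand.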

Keywords

Mobile learning (m-learning) · Data analytics · Conversational interfaces · Multimodal · User modeling · Context of the interaction · Adaptation of the provided services


Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. Department of Computer Science, Carlos III University of Madrid, Leganés, Spain
  2. Department of Languages and Computer Systems, University of Granada, CITIC-UGR, Granada, Spain
