Learning Analytics in Mobile Applications Based on Multimodal Interaction

  • José Miguel Mota (email author)
  • Iván Ruiz-Rube
  • Juan Manuel Dodero
  • Tatiana Person
  • Inmaculada Arnedillo-Sánchez
Part of the Lecture Notes on Data Engineering and Communications Technologies book series (LNDECT, volume 11)


One of the most valuable skills for teachers is the ability to produce their own digital solutions, translating teaching concepts into end-user computer systems. However, this often requires the involvement of computing specialists; consequently, the development of educational programming environments remains a challenge. Learning experiences based on multimodal interaction applications (gesture interaction, voice recognition or artificial vision) are becoming commonplace in education because they motivate and involve students. This chapter analyses the state of the art in learning analytics (LA) techniques and user-friendly authoring tools. It presents a tool to support the creation of multimodal interactive applications equipped with non-intrusive monitoring and analytics capabilities, enabling teachers with no programming skills to create interactive, LA-enriched learning scenarios. To this end, the tool includes several components that manage LA activities, ranging from automatically capturing users' interaction with mobile applications, to querying data and retrieving metrics, to visualising tables and charts.


Learning analytics · Mobile apps · Visual programming language · Language learning · Human-machine interaction
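To make the abstract's pipeline concrete, the sketch below illustrates (in a minimal, hypothetical form, not the chapter's actual implementation) the three LA activities it names: capturing a user's interaction events non-intrusively as simple records, querying them to retrieve a metric, and leaving the result ready for tabular or chart visualisation. All function and field names here are illustrative assumptions.

```python
from collections import Counter
from datetime import datetime, timezone

def capture_event(log, user, verb, target):
    """Append one interaction record, e.g. a tap, gesture or voice command.

    The flat dict structure is an illustrative assumption; real systems
    often use richer statement formats for this purpose.
    """
    log.append({
        "user": user,
        "verb": verb,          # e.g. "tapped", "spoke", "gestured"
        "target": target,      # the UI element or learning activity touched
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

def interactions_per_user(log):
    """Query the captured data for a simple metric: events per user."""
    return Counter(event["user"] for event in log)

# Simulated session: events captured automatically as learners interact.
log = []
capture_event(log, "alice", "tapped", "vocabulary-card-3")
capture_event(log, "alice", "spoke", "pronunciation-exercise")
capture_event(log, "bob", "gestured", "rotate-3d-model")

# The metric is now ready to be rendered as a table or bar chart.
print(interactions_per_user(log))
```

The same capture/query split scales to the other metrics the chapter mentions: once interactions are stored as uniform records, each new metric is just another aggregation over the log.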

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • José Miguel Mota (1), email author
  • Iván Ruiz-Rube (1)
  • Juan Manuel Dodero (1)
  • Tatiana Person (1)
  • Inmaculada Arnedillo-Sánchez (2)
  1. University of Cádiz, Puerto Real, Cádiz, Spain
  2. School of Computer Science and Statistics, Trinity College, Ireland