Abstract
Nowadays, we can use immersive interaction and display technologies in collaborative analytical reasoning and decision making scenarios. To support heterogeneous professional communities of practice in their digital transformation, it is necessary not only to provide the technologies but also to understand the work practices under transformation as well as the security, privacy and other concerns of the communities. Our approach is a comprehensive and evolutionary socio-technological learning analytics and design process leading to a flexible infrastructure in which professional communities can co-create their wearable enhanced learning solution. At its core, we present a multi-sensor fusion recorder and player that records multi-actor activity sequences via human activity recognition and computationally supports immersive learning analytics for training scenarios. Our approach enables cross-domain collaboration by fusing, aggregating and visualizing sensor data coming from wearables and modern production systems. The software is open source and based on the outcomes of several nationally and internationally funded projects.
1 Introduction
Learning analytics [1] has become a central discipline for planning, monitoring and evaluating interventions in learning processes. One of its goals is the use of big data for this purpose. The field has developed quickly, and applications of learning analytics cover both formal learning processes on different levels (micro, meso, macro) and informal learning processes. The foci of analysis are typically learning processes on the cognitive level, e.g. learning languages and sciences. While we agree with this focus, we suggest adding to the tool set learning analytics for learning processes that include manual activities, which are common at assembly, production and picking workplaces. We see here a turn from the mind to the body and, with that, an integration of declarative and procedural knowledge [2]. In organizational learning theories, Nonaka and Takeuchi drew an early distinction between these kinds of knowledge [3]. Their motivating example was bakers kneading bread, observed for the design of a kneading machine: despite their engineering and design knowledge, the designers learned how to knead only by observing and practicing the techniques of real bakers. We know from research on language acquisition that declarative and procedural knowledge depend on each other. In [4] we already argued that learning analytics in informal learning has a social dimension. In contrast to formal learning, where we mostly learn from teachers, informal learning often means learning by observing the practice of an expert and learning with peers while sharing a practice. Communities of Practice (CoP) [5] are groups of people who interact frequently to learn from each other. CoP has been applied widely as a social learning theory, yet its relationship to learning analytics is not well understood. We think that we need to incorporate practice theories and their methodologies.
But we need to transform research practices as well, as work is digitally transformed.
Nowadays, empirical research on practice theories at workplaces is carried out using ethnography, sometimes supported by multimedia recordings. Digital ethnography is an emerging field of research in which data gathering methods in particular are supported by digital tools. We propose a digital ethnography with fine-grained descriptions of workplaces and activities, and recordings made with sensor technologies. With that, we can store and analyze recorded observations, share best practices and use them for training. We also propose a shift from researcher-centered to community-centered data gathering methods. Consider a small example of data gathering in runners' communities. Wearables are becoming more social in many application domains. While their primary purpose is to interact with the person wearing the device, users commonly find it attractive to share the data recorded by the device with a specified community. Despite persistent privacy concerns, data sharing happens quite frequently in communities. One use of shared data is sustaining long-term motivation for doing sports. Fitbit, for example, lets users share fitness data such as step counts from its activity trackers with friends. If such information were provided while running, the feedback would be more immersive. But how can we provide the motivating feedback and leave out the frustrating one?
In this small example, we can identify major challenges for immersive community learning analytics. Learning analytics is the use of big data for the planning of, and intervention in, learning processes. Community learning analytics is the use of learning-related data under the control of communities of practice [5] with respect to their learning processes in open environments like the Web. A major challenge is the privacy and security of shared data. While users in many domains share their data voluntarily, the demands on privacy and data security are orders of magnitude higher in domains like learning. It will not be possible to store sensitive data in repositories of companies that cannot guarantee data security and protection. Another challenge is the technological openness of the approach. In the example, data can only be shared using the same hardware and software setup, while in learning analytics, data from different devices with heterogeneous data formats should be shared in a common repository under the full control of the community sharing the data, not under the control of a company. A third challenge is that the data are visualized via Web applications, not on the wearables themselves. The users cannot access the data during the learning process but only afterwards, and in a non-immersive way. Moreover, they receive a visual or non-visual presentation of the data and the analytics that is essentially generated by somebody else.
Our approach is to give the community complete control over continuous visual learning analytics in an immersive way. We see the following possible contributions.
- Communities shall be able to collaboratively edit learning analytics processes.
- Communities shall be able to analyze and collaboratively visualize learning traces.
- Communities shall be able to use learning analytics in an immersive manner.
- Communities shall be able to store and retrieve their own learning traces under their own control.
- Communities shall be able to protect their data.
To cover the stated requirements, we combine our existing community analytics platform SWEVA with an immersive analytics concept based on augmented reality head-mounted displays. Not only the learning process but also the analytics process is collaborative. An overview of community learning analytics with wearables is given in Fig. 1. On the left, we see an industrial workplace equipped with sensors that constantly measure human movements, air temperature and other environmental factors. The human worker wears a number of body sensors, including a gesture-recognition device, a mobile phone in a pocket, and an augmented reality headset. Both workplace settings and the necessary activities are formalized in a standardized description language. Together with the sensor data, these descriptions are fed into a human activity recognition machine learning algorithm. The classified output, i.e. which activity is carried out and with which qualities, is fed into a visual analytics Web cockpit. How the cockpit processes and visualizes data is defined in a community-aware, collaborative manner. With regard to communities, further aspects are visible, e.g. data sharing, privacy, and legislative challenges. Finally, the results are fed back to the worker as immersive analytics in the augmented reality headset or via other actuators at the workplace.
In this introduction we outlined our approach to collaborative immersive community analytics for wearable enhanced learning. In the next section, we provide more background information and review the state of the art. Major components of the approach are described in Sects. 3 and 4.
2 Background
In this section we discuss related work and introduce important technical concepts used in our implementation.
2.1 Related Work
Visual analytics [6] is an emerging, interdisciplinary field of research. It "combines automatic analysis techniques with interactive visualizations for an effective understanding, reasoning and decision making on the basis of very large and complex data". Immersive analytics is defined as "the use of engaging analysis tools to support data understanding and decision making" [7]; we see it as a sub-discipline of visual analytics. What distinguishes immersive analytics from visual analytics are data physicalization, situated analytics, embodied data exploration, spatial immersion, and multi-sensory presentation. Billinghurst et al. [8] define collaborative immersive analytics as "the shared use of immersive interaction and display technologies by more than one person for supporting collaborative analytical reasoning and decision making". In communities of practice [5], collaboration and mutual engagement are key features, and CoPs are defined as "groups of people who share a common concern or passion for something they do and who interact regularly to learn how to do it better". Practice theories [9,10,11,12,13,14] are rarely connected to visual analytics, since they do not yet use the available digital media and tools to their full extent. However, in an organizational context, the value of practice has been considered, e.g. by addressing tacit knowledge through ethnographic observations [3, 15].
2.2 Technical Background
In particular in augmented reality assisted training, immersive analytics plays an important role in understanding the learning process, providing meaningful feedback and engaging the learner. Several approaches and prototypes have been presented and surveyed [16,17,18,19,20,21,22], e.g. for assembling personal computers [17], learning culturalism [18], learning history [19], and learning science [21]. However, only few studies address learning at the workplace [17]. The European H2020 research project WEKIT (Wearable Experience for Knowledge Intensive Training) [23] produced both a content authoring tool (recorder) and a training tool (player) for the Microsoft HoloLens and the Mixed Reality Toolkit. The recorder captures multi-sensory input from a body-worn vest with several body sensors (heart rate, temperature) and from the HoloLens. The player reproduces the stored training content by augmenting the HoloLens view. The software has been tested and evaluated in three main scenarios: aircraft maintenance (support of inspections, decision making and safety), healthcare (usage of complex ultrasound diagnostic machines) and astronaut training (orbital and planetary missions). The retrieval of training scenarios is based on their description in ARLEM (Augmented Reality Learning Experience Model) [24], an emerging international standard supported by the IEEE. ARLEM describes the interactions between the physical world, the virtual world and the learners. At its core are an activity modeling language (activityML) to describe activities of agents (human or non-human) and a workplace modeling language (workplaceML) to describe the physical environment and the learning context. Concrete activities can be identified by an advanced multi-sensor fusion framework. Listing 1.1 shows a piece of XML describing an ARLEM activity.
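A hypothetical activityML fragment, with element and attribute names chosen for illustration rather than taken verbatim from the ARLEM specification, could look like this:

```xml
<activity id="ultrasound-inspection" language="activityML">
  <!-- one action step of a training sequence -->
  <action id="step-1">
    <enter>
      <message>Pick up the ultrasound probe</message>
      <!-- step completion detected by a recognized gesture -->
      <trigger mode="gesture" id="grab-probe"/>
    </enter>
    <exit>
      <message>Probe in hand, proceed to calibration</message>
    </exit>
  </action>
</activity>
```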
Human Activity Recognition (HAR) aims to recognize human activities with the help of many sensors, using dedicated machine learning approaches. Human activities can be defined as sequences of human and object movements in a given context; the HAR task is to recognize these sequences. HAR is an active research area with many challenges stemming from changing environments and light conditions (e.g. shadows), (multiple) moving objects, and sensor signal quality issues [25]. Non-visual and body-worn sensors [26,27,28] improve the quality of HAR by improving the overall accuracy of tracking and classification tasks, but only with the help of sensor fusion approaches. The process of combining information from different homogeneous or heterogeneous sensors to provide a better description of the environment is called multi-sensor data fusion [29]. To realize multi-sensor data fusion, we need to process data on different levels (sensor, fusion, processing), as presented in Fig. 2. The sensor level offers competitive, cooperative, and complementary strategies; the fusion level comprises data, feature and decision levels; and the processing level offers centralized, decentralized and hybrid strategies. Different combinations lead to different overall strategies (e.g. cooperative, decision-level and centralized). Machine learning techniques support the realization of such strategies.
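As an illustration of the decision-level, centralized fusion strategy (a minimal sketch, not the project's actual C# fusion framework), per-sensor classifier outputs can be combined by a weighted majority vote:

```python
from collections import Counter

def decision_level_fusion(classifier_outputs, weights=None):
    """Fuse per-sensor activity labels by a (weighted) majority vote.

    classifier_outputs: one predicted label per sensor-specific classifier.
    weights: optional per-classifier reliability weights.
    """
    if weights is None:
        weights = [1.0] * len(classifier_outputs)
    votes = Counter()
    for label, weight in zip(classifier_outputs, weights):
        votes[label] += weight
    # label with the highest accumulated vote weight wins
    return votes.most_common(1)[0][0]

# Three sensor-specific classifiers disagree; the EMG armband is
# weighted higher because it is more reliable for hand activities.
fused = decision_level_fusion(
    ["grasp", "idle", "grasp"], weights=[0.5, 0.3, 0.9]
)  # "grasp"
```

Feature-level fusion would instead concatenate the sensors' feature vectors before a single classifier; the decision-level variant shown here keeps the per-sensor classifiers independent, which eases adding or removing devices.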
HAR as human-object interaction covers only one aspect of collaboration in wearable enhanced learning. Human-human and human-robot collaboration are scenarios which are getting more and more attention in workplace learning. Humans collaborate with each other at the workplace, e.g. when assembling heavy parts or manufacturing complicated parts such as fiber-reinforced composites in lightweight construction. Therefore, current HAR approaches need to be extended for human-human as well as human-robot collaboration. Identified human activities can be written as Experience API (xAPI) statements, a software specification for exchanging learning data. This fulfills three purposes. First, we can add off-the-shelf xAPI-based analytics tools. Second, we can share xAPI data with other researchers and practitioners, even beyond the users of the ARLEM standard. Third, we can make use of standard learning record stores like Learning LockerFootnote 1. All three tasks of storing, measuring and analyzing contribute as an infrastructuring measure for CoP [30], helping them to define their own ecosystem from a wide choice of mutually compatible alternatives. For learning purposes the environment can be augmented with gamification [31, 32].
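As a sketch of how a recognized activity could be wrapped as an xAPI statement (the verb is the generic ADL "experienced" verb; the activity and extension IRIs are illustrative placeholders, not identifiers defined by ARLEM or xAPI):

```python
def har_to_xapi(actor_email, activity_label, confidence, timestamp):
    """Build a minimal xAPI statement dict for one recognized activity."""
    return {
        "actor": {"objectType": "Agent",
                  "mbox": f"mailto:{actor_email}"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/experienced",
                 "display": {"en-US": "experienced"}},
        "object": {"objectType": "Activity",
                   # hypothetical IRI scheme for recognized activities
                   "id": f"http://example.org/activities/{activity_label}"},
        "result": {"extensions": {
            # classifier confidence as a custom result extension
            "http://example.org/xapi/confidence": confidence}},
        "timestamp": timestamp,
    }

stmt = har_to_xapi("worker@example.org", "grasp-probe", 0.92,
                   "2019-05-01T10:15:00Z")
```

Such a dict can be posted as JSON to the statements endpoint of any conformant learning record store, e.g. Learning Locker.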
3 Concept
We describe a possible learning scenario to illustrate our idea. A community of medical doctors records manipulative actions for diagnostics with a complicated ultrasound device, using different wearable devices such as an electromyographic (EMG) sensor (Myo) and a Microsoft HoloLens. The motivation for recording this data is that some doctors received training with this specific device, while other doctors know ultrasound diagnostics but did not receive training for this specific ultrasound device. In a community of practice, newcomers learn from experts by observation and imitation. Consequently, recordings of experts and non-experts for this specific device are made. These recordings are stored in the community's own repository, e.g. the private learning record store of an ambulance center. In this sense, the data belongs to the community of medical doctors and does not necessarily leave the center. Modeling activities and workplaces are tasks for which medical doctors typically receive little training. The idea here is to include modeling and content creation specialists from companies specializing in these tasks. They deliver the necessary knowledge and training to the medical doctors, similar to the training with the ultrasound device. After receiving initial training, the doctors select analytic tools and visualization methods from a repository of templates in a collaborative online editor. Together, they can adjust the analytics and visualization process to their needs. They decide to use the stored learning traces as input for a privacy-preserving machine learning algorithm that extracts the learning progress of less experienced medical doctors compared to expert users of the ultrasound device.
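The scenario does not fix a concrete privacy-preserving algorithm; one possible sketch is local differential privacy via the Laplace mechanism, where noise is added to a performance measure (e.g. task completion time) on each doctor's own device before any trace is shared, while the community-level mean remains usable:

```python
import math
import random

def privatize(value, epsilon, sensitivity=1.0):
    """Add Laplace noise on the client (local differential privacy),
    so raw values never leave the doctor's device.

    epsilon: privacy budget; smaller means more noise, more privacy.
    """
    u = random.random() - 0.5
    scale = sensitivity / epsilon
    # inverse-CDF sampling of Laplace(0, scale); the max() guards the
    # log against the (measure-zero) edge case u == -0.5
    noise = -scale * math.copysign(1.0, u) * math.log(
        max(1e-12, 1.0 - 2.0 * abs(u)))
    return value + noise

def mean_progress(noisy_values):
    """Aggregate shared noisy values; the noise averages out
    over a large enough community."""
    return sum(noisy_values) / len(noisy_values)
```

With many contributions, the aggregate estimate converges to the true mean, so relative progress of novices versus experts can still be compared without exposing any individual's raw measurements.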
In a training session, the medical doctors use an augmented reality head-mounted device that recognizes their activities and displays collaboratively selected information for additional training support in the device's field of view.
In a critical assessment of the scenario, it is fair to say that we are not even close to its full operationalization. While device training by specialists is a profitable after-sales business and the idea that procedural medical knowledge can be shared within a community of practice is quite common in medical education, bringing the two ideas together may take some time. One challenge is the time needed for sharing practices on the job. Specialists in diagnosis with expensive devices may be too busy to spend time on sharing knowledge. More mature technical support may decrease the time needed to record learning materials and re-purpose them for training, but a lot of informal learning still takes place around the recording and training sessions.
We need to implement several tools in our conceptual architecture.
- The recorder consists of the ARLEM editor, the sensor fusion framework and the activity/gesture recognition modules. It processes the recorded data via the sensor fusion framework into sequences of human/robot activities and/or gestures.
- The player is an application that resides on the augmented reality device, in our case the Microsoft HoloLens. It uses the sensor fusion framework and the corresponding modules to predict activities and gestures based on the available training data.
In Fig. 3 we can see an example of a fusion task. The abbreviations follow Dasarathy's functional model [33].
The Social Web Environment for Visual Analytics (SWEVA) [34, 35] is a collaborative environment that lets CoP define tasks like those described above in a collaborative and visual manner. It empowers all community members to realize their analytical requirements themselves, using a toolset of ready-made modules for data gathering and processing. In a collaborative Web application, end users work together with developers to define data sources, aggregation and preprocessing steps, and to select suitable visualization means. The resulting visualizations are highly interactive with respect to definable input parameters. They can be exported to arbitrary third-party websites running in desktop, mobile and mixed reality browsers.
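SWEVA models analytics as chains of processing modules. The composition idea can be sketched as follows (a minimal sketch; the stage names are hypothetical, not SWEVA's actual module set):

```python
from functools import reduce

def pipeline(*stages):
    """Compose processing stages left to right:
    data source -> aggregation/preprocessing -> visualization input."""
    return lambda data: reduce(lambda acc, stage: stage(acc), stages, data)

# Illustrative stages over raw sensor readings
def drop_invalid(readings):
    """Filter out readings with negative (invalid) values."""
    return [r for r in readings if r["value"] >= 0]

def mean_value(readings):
    """Aggregate the remaining readings to a single mean."""
    return sum(r["value"] for r in readings) / len(readings)

summarize = pipeline(drop_invalid, mean_value)
summarize([{"value": 3.0}, {"value": -1.0}, {"value": 5.0}])  # 4.0
```

Because every stage is a plain function over the previous stage's output, community members can swap individual stages (e.g. a different aggregation) without touching the rest of the chain.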
4 Implementation
We implemented parts of the conceptual architecture in several EU-funded projects, also with the help of computer science students. The recorder uses an editor that allows the user to add sensors (e.g. the Myo electromyographic (EMG) sensor, Bluetooth RFCOMM, Bluetooth Low Energy (BLE), micro:bit, Microsoft Kinect, Microsoft HoloLens) as resources to an MQTT node by means of the open source library M2Mqtt. The ARLEM editor is implemented as a node.js application with RESTful services, a MySQL database to store the descriptions and a single-page Web application as its frontend. The multi-sensor fusion framework is written in C#; it has many configuration parameters and is easy to extend. The player is based on Microsoft's MixedRealityToolkit using Unity. The source code is open source and managed on GitHubFootnote 2. At the moment, the environment cannot describe, store or analyze human-human or human-robot collaboration use cases; a collaborative research proposal has already been submitted. In a special case, we recorded the interaction between a robot and a human, using a dedicated robot model for an emergency shutdown system of the robot.
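The recorder's topic and payload handling can be sketched in Python as follows (the topic scheme and field names are illustrative assumptions; actual publishing in our implementation goes through the C# M2Mqtt library, for which paho-mqtt would be the Python counterpart):

```python
import json
import time

def sensor_topic(workplace, device, sensor):
    """Hierarchical MQTT topic for one sensor stream
    (illustrative scheme, not the project's actual one)."""
    return f"{workplace}/{device}/{sensor}"

def encode_reading(value, unit):
    """JSON payload for a single sensor sample, UTF-8 encoded
    as expected by MQTT publish calls."""
    return json.dumps({"value": value, "unit": unit,
                       "ts": time.time()}).encode("utf-8")

topic = sensor_topic("assembly-line-1", "myo", "emg")
payload = encode_reading(0.42, "mV")
# Publishing is then a single call on an MQTT client, e.g. with paho-mqtt:
#   client.publish(topic, payload)
```

The hierarchical topic makes it easy for the fusion framework to subscribe to all sensors of one device (`assembly-line-1/myo/#`) or one workplace at once.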
Figure 4 shows the visualization of a human-robot interaction in SWEVA. Here, we see a robot (shown in blue) working together with a human (in red) on layering textile fibre composites. The goal of this visualization is to show potentially harmful situations in which a robot arm collides with a human. These situations are a real threat in all industries where human-robot collaboration happens; hence there is a long list of standards and other legislative regulations. The original sensor data comes from a Microsoft Kinect device that detects human movements; we combine it with machine data from the industrial robot. In SWEVA we designed the visualization of the process that forces the machine to stop in real time whenever there is a risk of collision. The 3D output is accessible from every Web browser, as it relies on state-of-the-art Web 3D technologies.
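The core of such a stop condition is a proximity check between tracked human joints and the robot arm. A minimal sketch (the 0.5 m safety margin and the point representations are assumptions for illustration; the real system acts on the robot controller):

```python
import math

def min_distance(human_joints, robot_points):
    """Smallest Euclidean distance between tracked human joints
    (e.g. a Kinect skeleton) and sampled points along the robot arm.
    All coordinates are (x, y, z) tuples in metres."""
    return min(math.dist(h, r)
               for h in human_joints for r in robot_points)

def emergency_stop_needed(human_joints, robot_points, safety_margin=0.5):
    """True if any human joint is within the safety margin of the arm."""
    return min_distance(human_joints, robot_points) < safety_margin

emergency_stop_needed([(0.0, 0.0, 1.0)],
                      [(0.3, 0.0, 1.0)])  # True: 0.3 m < 0.5 m
```

In practice the check runs per sensor frame, and the margin would be derived from robot speed and stopping distance as required by the applicable safety standards.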
5 Conclusions and Future Work
We have presented a comprehensive approach to enable collaborative immersive community analytics in wearable enhanced learning. Starting from the idea that learning analytics at the workplace is conceptually different from traditional learning analytics, we incorporated practice theories under the assumption of a digital transformation of their methodological foundations, i.e. digital ethnography. As a second major point, we introduced the idea that feedback is best delivered in an immersive manner during training, not afterwards. As a third line of argumentation, we discussed that the necessary collaborative processes are best situated in a community of practice.
Large portions of the concept have been realized in different EU projects but not yet fully integrated. A fully functional prototype will be ready soon for further testing and evaluation in realistic learning scenarios.
References
Ferguson, R.: Learning analytics: drivers, developments and challenges. Int. J. Technol. Enhanced Learn. 4(5–6), 304–317 (2012)
Ullman, M.T.: Contributions of memory circuits to language: the declarative/procedural model. Cognition 92, 231–270 (2004)
Nonaka, I., Takeuchi, H.: The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Oxford University Press, New York (1995)
Klamma, R.: Community learning analytics – challenges and opportunities. In: Wang, J.-F., Lau, R. (eds.) ICWL 2013. LNCS, vol. 8167, pp. 284–293. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-41175-5_29
Wenger, E.: Communities of Practice: Learning, Meaning, and Identity. Learning in doing. Cambridge University Press, Cambridge (1998)
Keim, D., Andrienko, G., Fekete, J.-D., Görg, C., Kohlhammer, J., Melançon, G.: Visual analytics: definition, process, and challenges. In: Kerren, A., Stasko, J.T., Fekete, J.-D., North, C. (eds.) Information Visualization. LNCS, vol. 4950, pp. 154–175. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-70956-5_7
Dwyer, T., et al.: Immersive analytics: an introduction. In: Marriott, K., et al. (eds.) Immersive Analytics. LNCS, vol. 11190, pp. 1–23. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01388-2_1
Billinghurst, M., Cordeil, M., Bezerianos, A., Margolis, T.: Collaborative immersive analytics. In: Marriott, K., et al. (eds.) Immersive Analytics. LNCS, vol. 11190, pp. 221–257. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01388-2_8
Bourdieu, P.: Outline of a Theory of Practice. Cambridge Studies in Social and Cultural Anthropology, vol. 16. Cambridge University Press, Cambridge (1977). English language edn
Giddens, A.: The Constitution of Society: Outline of the Theory of Structuration, 1st edn. University of California Press, Berkeley (1984)
Foucault, M.: The Archaeology of Knowledge: And the Discourse on Language. Dorset Press, New York (1987)
Savigny, E.V., Knorr-Cetina, K., Schatzki, T.R.: The Practice Turn in Contemporary Theory. Routledge, London (2001)
Schatzki, T.R.: Social Practices: A Wittgensteinian Approach to Human Activity and the Social. Digital print edn. Cambridge University Press, Cambridge (2008)
Butler, J.: Bodies That Matter: On the Discursive Limits of “Sex”. Routledge, London (2014)
Polanyi, M.: The Tacit Dimension. Anchor Books, Doubleday & Co., New York (1966)
Santos, M.E.C., Chen, A., Taketomi, T., Yamamoto, G., Miyazaki, J., Kato, H.: Augmented reality learning experiences: survey of prototype design and evaluation. IEEE Trans. Learn. Technol. 7(1), 38–56 (2014)
Chiang, H.K., Chou, Y.Y., Chang, L.C., Huang, C.Y., Kuo, F.L., Chen, H.W.: An augmented reality learning space for PC DIY. In: Proceedings of the 2nd Augmented Human International Conference, AH 2011, pp. 12:1–12:4. ACM, New York (2011)
Juan, M.C., Furió, D., Seguí, I., Aiju, N.R., Cano, J.: Lessons learnt from an experience with an augmented reality iphone learning game. In: Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology, ACE 2011, pp. 52:1–52:8. ACM, New York (2011)
Tsai, C.H., Huang, J.Y.: A mobile augmented reality based scaffolding platform for outdoor fieldtrip learning. In: 2014 IIAI 3rd International Conference on Advanced Applied Informatics, pp. 307–312 (2014)
Oh, S., Byun, Y.C.: The design and implementation of augmented reality learning systems. In: 2012 IEEE/ACIS 11th International Conference on Computer and Information Science, pp. 651–654 (2012)
Ables, A.: Augmented and virtual reality: discovering their uses in natural science classrooms and beyond. In: Proceedings of the 2017 ACM Annual Conference on SIGUCCS, SIGUCCS 2017, pp. 61–65. ACM, New York (2017)
Dass, N., Kim, J., Ford, S., Agarwal, S., Chau, D.H.: Augmenting coding: augmented reality for learning programming. In: Proceedings of the Sixth International Symposium of Chinese CHI, Chinese CHI 2018, pp. 156–159. ACM, New York (2018)
Limbu, B., Fominykh, M., Klemke, R., Specht, M., Wild, F.: Supporting training of expertise with wearable technologies: the WEKIT reference framework. In: Yu, S., Ally, M., Tsinakos, A. (eds.) Mobile and Ubiquitous Learning. PRRE, pp. 157–175. Springer, Singapore (2018). https://doi.org/10.1007/978-981-10-6144-8_10
Wild, F.: The future of learning at the workplace is augmented reality. Computer 49(10), 96–98 (2016)
Subetha, T., Chitrakala, S.: A survey on human activity recognition from videos. In: 2016 International Conference on Information Communication and Embedded Systems (ICICES), pp. 1–7 (2016)
Maurer, U., Smailagic, A., Siewiorek, D.P., Deisher, M.: Activity recognition and monitoring using multiple sensors on different body positions. In: 2011 Fifth FTRA International Conference on Multimedia and Ubiquitous Engineering, pp. 4–116 (2011)
Liu, S., Gao, R.X., John, D., Staudenmayer, J.W., Freedson, P.S.: Multisensor data fusion for physical activity assessment. IEEE Trans. Biomed. Eng. 59(3), 687–696 (2012)
Lara, O.D., Labrador, M.A.: A survey on human activity recognition using wearable sensors. IEEE Commun. Surv. Tutor. 15(3), 1192–1209 (2013)
Gravina, R., Alinia, P., Ghasemzadeh, H., Fortino, G.: Multi-sensor fusion in body sensor networks: state-of-the-art and research challenges. Inf. Fusion 35, 68–80 (2017)
de Lange, P., Göschlberger, B., Farrell, T., Klamma, R.: A microservice infrastructure for distributed communities of practice. In: Pammer-Schindler, V., Pérez-Sanagustín, M., Drachsler, H., Elferink, R., Scheffel, M. (eds.) EC-TEL 2018. LNCS, vol. 11082, pp. 172–186. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-98572-5_14
Klamma, R., Arifin, M.A.: Gamification of web-based learning services. In: Xie, H., Popescu, E., Hancke, G., Fernández Manjón, B. (eds.) ICWL 2017. LNCS, vol. 10473, pp. 43–48. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66733-1_5
Hensen, B., Koren, I., Klamma, R., Herrler, A.: An augmented reality framework for gamified learning. In: Hancke, G., Spaniol, M., Osathanunkul, K., Unankard, S., Klamma, R. (eds.) ICWL 2018. LNCS, vol. 11007, pp. 67–76. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-96565-9_7
Dasarathy, B.V.: Sensor fusion potential exploitation-innovative architectures and illustrative applications. Proc. IEEE 85(1), 24–38 (1997)
Koren, I., Klamma, R.: Community learning analytics with industry 4.0 and wearable sensor data. In: Beck, D., et al. (eds.) iLRN 2017. CCIS, vol. 725, pp. 142–151. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-60633-0_12
Koren, I., Klamma, R.: Enabling visual community learning analytics with Internet of things devices. Comput. Hum. Behav. 89, 385–394 (2018)
Acknowledgement
The authors would like to thank the German Research Foundation (DFG) for the kind support within the Cluster of Excellence “Internet of Production” (IoP) under the project id 390621612. This project has also received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreements No 687669 (WEKIT) and from the European Union’s Erasmus Plus programme, grant agreement 2017-1-NO01-KA203-034192.
© 2019 Springer Nature Switzerland AG
Klamma, R., Ali, R., Koren, I. (2019). Immersive Community Analytics for Wearable Enhanced Learning. In: Zaphiris, P., Ioannou, A. (eds) Learning and Collaboration Technologies. Ubiquitous and Virtual Environments for Learning and Collaboration. HCII 2019. Lecture Notes in Computer Science(), vol 11591. Springer, Cham. https://doi.org/10.1007/978-3-030-21817-1_13
Print ISBN: 978-3-030-21816-4
Online ISBN: 978-3-030-21817-1