Abstract
In this paper, we propose a novel interactive video-based learning system, Brain Tagging. By collecting both passive BCI and active HCI tagging information, the system generates the learners’ objective and subjective metadata for the video contents. It visualizes the learning pattern in a timeline chart consisting of BCI metadata (attention and meditation) and HCI metadata (good, question, and disagree). This helps to understand the learning process and performance, and thereby to improve e-learning.
1 Introduction
Video-based e-learning, owing to its effectiveness and freedom from time and place, has become a promising approach that makes learning more lifelong and on-demand [32]. It delivers considerable knowledge in an attractive and consistent way beyond time-space constraints. Learning platforms such as YouTube, TED, and MOOCs are keen to provide video-based contents for seamless learning. The video-based e-learning approach can also contribute to traditional teaching: it allows teachers to upload well-prepared videos according to their learning plans. By excluding the uncertainties of live delivery, the quality of teaching can be better sustained. This forms a new environment for teaching and learning.
From a research perspective, we consider the video-based e-learning approach to have great potential for quantitative evaluation – not only in afterward assessment but also in representing real-time performance. This drives the need for more effective methods of learning content evaluation and learner performance analysis [12]. Engaging learners to create metadata while accessing the video contents makes real-time assessment applicable.
Learners contribute to the metadata in both subjective and objective ways [25]. Learners’ subjective expressions, such as tagging or annotation, generate the subjective metadata. Gibeo [8] presents an example of a web-based annotation system that generates metadata from learners’ explicit cognition. By adding “gibeo.net” to the target webpage URL, the learner can select a part of the text on the page and label its quality with options such as important, wrong, good, or cool. This lets learners record their perceptions as feedback. The objective metadata are generated from learners’ physiological signals such as tiredness, emotion, and attention. Koelstra and Patras [11] present a multi-modal approach that collects both facial expressions and electroencephalography (EEG) of the learners and implicitly converts the information into affective tags.
With regard to assessment techniques, however, there is a lack of tools that can collect both subjective and objective information, which makes it difficult to compare the performance of the two approaches. In this paper we introduce an interactive video-based tagging system, Brain Tagging, that integrates subjective and objective metadata of the learning experience. We first review existing work on active tagging and passive tagging; the description of Brain Tagging follows and leads to the conclusion.
2 A Review of Affective Metadata
Studies of affective metadata are mainly derived from affective computing [19]. Affective computing is a broad area that deals with the detection, recognition, and interpretation of human emotions, with a high degree of integration among computer science, psychology, and cognitive science [24]. Existing approaches to affective metadata collection can be divided into two categories: active tagging records subjective reactions of expressed emotion, while passive tagging collects objective reactions, i.e., brain information, spontaneously while the user experiences the content.
2.1 Active Tagging: Knowledge Exchange and Experience Sharing
What do we tag for? In regard to teaching and learning, the collected tagging information can be used to assess learners’ performance and thereby improve their learning experiences. Active tagging often focuses on knowledge exchange and experience sharing: it serves to describe feedback that records the tagger’s experiences and beliefs [7]. Active tagging can contribute in two ways. One is the exchange and sharing of information among a group of learners. Co-tagging a shared material, e.g., selected videos, can greatly benefit collaborative learning. The best-known system is Annotea [8], which improves the collaborative development of the Semantic Web via shared web tags and annotations.
Another contribution of active tagging is to help single learners organize and define information. The “web discussion” feature of Microsoft Office 2000 provides a useful example: it allows collaborative tagging of any web page and thereby generates learner-centric tagging information. Gibeo is another example; it allows the user to specify the quality of highlighted text on a webpage with labels such as important, wrong, or cool. The JavaScript web tagging system Marginalia [19] presents a similar case, allowing users to highlight any part of the text and write associated comments in the margin of the pages.
In video-based e-learning, tagging techniques contribute to creating metadata for annotating, classifying, retrieving, or analyzing the content, which makes the videos more beneficial. Several examples illustrate this. The video analysis application EVA [14] allows users to tag events in real time for behavioral video analysis. Marquee [28], developed for video annotation and tagging, segments the video into time zones to help users ‘paint’ their selections. Both examples present an interesting way for users to ‘tag’ text-based or video-based learning contents, while the tagging results can be collected to improve the selection. This makes the systems more meaningful and powerful for sustained use.
2.2 Passive Tagging: The EEG Signal Analysis
Different from active tagging, which requires the user’s direct effort to record, passive tagging, i.e., human-centered implicit tagging, gathers tags and annotations without any effort from the users [3]. The main idea is to collect users’ spontaneous reactions as signals from which to identify tags. Passive tagging mostly relies on information carried in electroencephalography (EEG) signals produced during brain processing. The EEG headset is one collecting technique: a non-invasive monitoring device, worn as a cap or headset, that measures the voltage fluctuations resulting from ionic currents within the neurons of the brain. An international standard known as the 10–20 system determines the locations of the electrodes on the scalp [11]. Combinations of signals from different electrodes represent human states such as anger, attention, and meditation.
Brain-computer interfacing (BCI), built on advances in cognitive neuroscience and brainwave technologies, has found its way into representing activities in the brain [17]. The term “brain-computer communication,” coined by Vidal in 1973, indicates any computer-based system that produces detailed information on brain function [27]. Vidal developed a system that recorded scalp potentials over the visual cortex to determine the direction of eye gaze, and thus the direction of cursor movement. In 2002, Wolpaw proposed a BCI framework for individuals with severe motor deficits; the controlled process included both signal acquisition and signal processing to translate brainwave features into device control commands [29].
Most BCI applications allow the user to send voluntary, directed commands that control the connected computer system and communicate through it. Some studies [4, 31] propose an extended approach that integrates BCI technology with cognitive monitoring. This provides the computer system with valuable information about the user’s intentions, situational interpretations, and emotional states. The passive brain-computer interface (passive BCI), also called the implicit BCI, provides information from the user’s mental activities without requiring direct control of brain activity, which helps to interpret human emotions derived from brain activities.
According to Fowler [5] and Shen [20], the relationship between learning performance and arousal follows an inverted-U curve: people learn best when their emotions are at a moderate, optimal arousal level. These results serve as references in developing the assessment tool.
3 Brain Tagging: The System Design
Brain Tagging is a system that collects both the learners’ active expressions and their passive physiological signals for evaluation. The active expressions are the learners’ subjective opinions of the learning contents, such as good, question, or disagree. Those opinions stem from the learners’ cognition, which is hard to measure with BCI techniques. Passive signals, such as attention and meditation, are more direct, making them more amenable to machine measurement.
Benefiting from both BCI and HCI, Brain Tagging allows learners to evaluate video-based learning contents instantly while watching. This externalizes invisible cognitive and physiological data, called affective metadata, and applies them to assist the evaluation of the learning process.
3.1 Active HCI Tagging
Active tagging allows learners to record their opinions during the learning process. In some video-based or lecture-based learning situations, the learning process is difficult to interrupt; to satisfy annotating and retrieving needs, the appropriate method is to tag opinions in parallel with the process. The OATS (Open Annotation and Tagging System) project [1, 2] presents an example: it integrates a content-tagging and a note-taking tool that allow students to annotate the learning contents while gathering learner-centric metadata. The IBM In Sight project [26], another example, presents a way for collaborative editing with micro-tags. In micro-tagging mode, learners can attach a tag to a subset of a large media item, such as a segment of a video, and the whole video can be segmented into selected fragments according to the collaborative tagging results.
Our previous studies [6, 21,22,23], focusing on active tagging, have shown that it reduces the complicated and laborious work of assessing contents. We also learned that predefined opinion buttons capture learners’ opinions more effectively than free text. Accordingly, we removed textual tag input from Brain Tagging. Three tagging buttons are provided: the positive button “GOOD”, the neutral button “QUESTION”, and the negative button “DISAGREE”. The tagging outcomes are marked synchronously along the timeline chart to visualize the interaction process.
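The paper does not specify the internal data model behind the three buttons; as a minimal sketch (in Python, with hypothetical names), each click can be stored as a time-stamped tag event for later rendering on the timeline chart:

```python
from dataclasses import dataclass
from typing import List

# The three opinion buttons described in the text.
TAG_LABELS = ("GOOD", "QUESTION", "DISAGREE")

@dataclass
class TagEvent:
    label: str         # one of TAG_LABELS
    video_time: float  # seconds into the video when the button was clicked

class TagRecorder:
    """Collects button clicks as time-stamped tag events (hypothetical helper)."""

    def __init__(self) -> None:
        self.events: List[TagEvent] = []

    def tag(self, label: str, video_time: float) -> TagEvent:
        if label not in TAG_LABELS:
            raise ValueError(f"unknown tag label: {label!r}")
        event = TagEvent(label, video_time)
        self.events.append(event)
        return event
```

Because each event keeps the video time rather than wall-clock time, the tags stay aligned with the video even if playback is paused or replayed.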
3.2 Passive BCI Tagging
Passive BCI tagging collects the learners’ physiological signals. It aims to uncover implicit but objective, mostly emotional, information. Fowler [5] and Shen [20] both acknowledge the relationship between learning performance and emotions, and EEG techniques have been recognized as a useful tool for monitoring emotions. Many studies explore the possibility of using EEG to detect or differentiate basic human emotions such as happiness, joy, distress, surprise, anger, fear, disgust, and sadness [16, 18], or learning-related emotions such as engagement, boredom, and frustration [15, 20]. EEG techniques have also been used for automatic implicit emotional tagging of multimedia content, as an alternative to explicit approaches that require learners to tag the clips themselves [30].
Brain Tagging uses a NeuroSky commercial EEG headset to measure learners’ brainwaves and convert them into evaluable information [13]. The non-invasive EEG headset detects and translates the EEG power spectrum (alpha waves, beta waves, etc.) into two kinds of outcomes, ATTENTION and MEDITATION, each scaling the degree of the learner’s mental status from 1 to 100, with reference to Keller’s ARCS model [9, 10]. The outcomes are visualized as line charts along the video timeline. The interface and layout are presented below.
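The paper does not describe how the headset output is read by the software. Assuming NeuroSky's published ThinkGear serial stream format (two 0xAA sync bytes, a payload length, the payload, and a one's-complement checksum, with single-byte rows 0x04 and 0x05 carrying the 0-100 attention and meditation values), a parsing sketch might look like this; treat the exact byte layout as an assumption checked against the vendor documentation:

```python
def parse_thinkgear_payload(payload: bytes) -> dict:
    """Extract eSense values (attention, meditation; 0-100) from one payload."""
    out = {}
    i = 0
    while i < len(payload):
        code = payload[i]; i += 1
        if code >= 0x80:                       # multi-byte row: next byte is its length
            vlen = payload[i]; i += 1 + vlen   # skip raw-wave / EEG-power rows
            continue
        value = payload[i]; i += 1             # single-byte rows carry one value byte
        if code == 0x04:
            out["attention"] = value
        elif code == 0x05:
            out["meditation"] = value
    return out

def parse_packet(packet: bytes) -> dict:
    """Validate one [0xAA 0xAA length payload checksum] packet and parse it."""
    assert packet[0] == 0xAA and packet[1] == 0xAA, "missing sync bytes"
    plen = packet[2]
    payload = packet[3:3 + plen]
    checksum = packet[3 + plen]
    assert (~sum(payload)) & 0xFF == checksum, "bad checksum"
    return parse_thinkgear_payload(payload)
```

Each parsed sample can then be appended to the timeline buffer at the current video time, in the same way as the HCI tag events.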
3.3 The Integrated Interface
We expect the interface to be applicable, friendly, and encouraging to use. As shown in Fig. 1, the project information field allows the learner to create a new video-based learning project or to retrieve an old one. When creating a new project, the learner can input the video source from the computer webcam or from videos saved on the computer; the video is then displayed on the video canvas. The HCI mode provides an alternative way for the learner to tag opinions at precise moments of the video by clicking buttons. As mentioned above, we provide three tagging buttons: GOOD, QUESTION, and DISAGREE. The outcomes are marked on the timeline chart at the individual tagging times.
The learner is encouraged to wear the EEG headset and activate the BCI mode. Once the connection between the EEG headset and the computer is established, the learner’s attention and meditation data are automatically documented in the timeline chart along with the video (Fig. 2).
3.4 The Timeline Chart
Both the HCI and BCI tagging outcomes are integrated into the visualization data shown in the timeline chart. The BCI tagging information (ATTENTION and MEDITATION) is presented as continuous line charts: attention in red and meditation in green. The HCI tags (GOOD as a circle symbol, QUESTION as a triangle symbol, and DISAGREE as a cross symbol) are marked on the line charts according to the clicking time.
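How a tag click is placed onto the continuous line charts is not detailed in the paper; one plausible sketch (hypothetical structure and names) snaps each click to the nearest BCI sample so the symbol sits on the attention curve at the moment of the click:

```python
# Marker shapes for the three HCI buttons, as described in the text.
SYMBOLS = {"GOOD": "circle", "QUESTION": "triangle", "DISAGREE": "cross"}

def build_timeline(bci_samples, tag_events):
    """bci_samples: list of (t, attention, meditation); tag_events: list of (t, label).
    Returns the two line series plus tag markers snapped to the nearest BCI sample."""
    times = [t for t, _, _ in bci_samples]
    attention = [a for _, a, _ in bci_samples]
    meditation = [m for _, _, m in bci_samples]
    markers = []
    for t, label in tag_events:
        # index of the BCI sample closest in time to the click
        i = min(range(len(times)), key=lambda k: abs(times[k] - t))
        markers.append({"t": times[i], "y": attention[i], "symbol": SYMBOLS[label]})
    return {"t": times, "attention": attention, "meditation": meditation,
            "markers": markers}
```

The returned dictionary contains everything a charting library needs to draw the two lines and overlay the symbols.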
The timeline chart is designed this way for learner performance analysis and learning content evaluation. For learner performance analysis, the timeline chart documents and illustrates the learner’s interactive learning process, including the BCI and HCI tagging results. It provides the opportunity to analyze learning performance by observing the waviness of the BCI tagging chart: we can see how often and for how long attention and meditation stay at higher levels along the timeline. In addition, we can compare the relationship between the BCI tagging waves and the HCI tagging markers to understand the learner’s preferences and tendencies.
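Counting how often and for how long attention or meditation stays at a high level can be sketched as a simple run-length pass over the sampled values (a hypothetical helper; the paper does not give the analysis procedure, and the threshold is an illustrative assumption):

```python
def high_level_spans(samples, threshold=60):
    """samples: list of (t, value) sorted by t at a roughly fixed rate.
    Returns (span_count, total_seconds) for runs where value > threshold,
    counting each above-threshold sample as one sampling interval."""
    if len(samples) < 2:
        return 0, 0.0
    dt = samples[1][0] - samples[0][0]   # assume a fixed sampling interval
    spans, total, in_span = 0, 0.0, False
    for _, v in samples:
        if v > threshold:
            total += dt
            if not in_span:
                spans += 1
                in_span = True
        else:
            in_span = False
    return spans, total
```

Running this over both the attention and the meditation series gives the "how often and for how long" summary that the timeline chart visualizes.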
For learning content evaluation, the tagging results provide a novel way to assess and extract valuable video contents. For example, by collecting tags from many learners on the same video, we can analyze the contents and locate the specific segments with higher and denser tagging outcomes. In other words, we can evaluate not only the whole video but also particular video intervals.
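Locating the "denser" segments from many learners' tags can be sketched as a fixed-window density count; the window size and tag threshold below are illustrative assumptions, not values from the paper:

```python
def dense_segments(tag_times, video_length, window=10.0, min_tags=3):
    """Locate video windows whose aggregated tag count (across all learners)
    reaches min_tags. tag_times: flat list of click times from every learner.
    Returns a list of (start, end, count) tuples."""
    hot = []
    t = 0.0
    while t < video_length:
        count = sum(1 for x in tag_times if t <= x < t + window)
        if count >= min_tags:
            hot.append((t, min(t + window, video_length), count))
        t += window
    return hot
```

Filtering the input list by tag label (e.g., only QUESTION tags) would additionally distinguish segments that are popular from segments that are confusing.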
4 Conclusion
In this paper we have described the intention, the theoretical background, and the design of the BCI and HCI tagging system, Brain Tagging. Students who voluntarily participated in our pilot study were asked to watch the same video (see Fig. 3).
The results of the experiment are shown in Fig. 4. The HCI tagging results are marked as circles: good in green, question in purple, and disagree in blue, placed at the particular time points. The BCI tagging results are presented as curves: attention in blue and meditation in red. This helps to understand the learner’s learning patterns.
In conclusion, Brain Tagging provides an innovative method to visualize the learning pattern. The BCI and HCI tagging record the physiological and mental states during the learning process. As a metadata-collecting system, it provides the opportunity to further assess the learning content and the learner’s performance. This preliminary study presents only the results of a small group; there is still a long way to go before we can draw concrete conclusions about learner performance analysis and learning content evaluation. We expect to conduct more experiments in the following semester to collect data for learning pattern discovery, which will help us design better learning contents and processes based on learners’ physiological and cognitive performance.
References
Bateman, S., Brooks, C., Mccalla, G., Brusilovsky, P.: Applying collaborative tagging to e-learning. In: Proceedings of the 16th International World Wide Web Conference (WWW 2007), May 2007
Bateman, S., Farzan, R., Brusilovsky, P., McCalla, G.: OATS: the Open Annotation and Tagging System. In: Proceedings of I2LOR (2006)
Cadiz, J.J., Gupta, A., Grudin, J.: Using web annotations for asynchronous collaboration around documents. In: Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, pp. 309–318. ACM, December 2000
Cutrell, E., Tan, D.: BCI for passive input in HCI. In: Proceedings of CHI, vol. 8, pp. 1–3. ACM Press, New York, April 2008
Fowler, C.J.H.: The role of arousal in memory and attention, Doctoral dissertation, Royal Holloway, University of London (1977)
Hsu, Y.C., Jeng, T.S., Shen, Y.T., Chen, P.C.: SynTag: a web-based platform for labeling real-time video. In: Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, pp. 715–718. ACM, February 2012
John, A., Seligmann, D.: Collaborative tagging and expertise in the enterprise. In: Collaborative Web Tagging Workshop in Conjunction with WWW 2006, May 2006
Kahan, J., Koivunen, M.R., Prud’Hommeaux, E., Swick, R.R.: Annotea: an open RDF infrastructure for shared Web annotations. Comput. Netw. 39(5), 589–608 (2002)
Keller, J.M.: Motivational design of instruction. In: Instructional Design Theories and Models: An Overview of Their Current Status, vol. 1, pp. 383–434 (1983)
Keller, J.M.: Strategies for stimulating the motivation to learn. Perform. Improv. 26(8), 1–7 (1987)
Koelstra, S., Patras, I.: Fusion of facial expressions and EEG for implicit affective tagging. Image Vis. Comput. 31(2), 164–174 (2013)
Lin, C.Y., Tseng, B.L., Smith, J.R.: VideoAnnEx: IBM MPEG-7 annotation tool for multimedia indexing and concept learning. In: IEEE International Conference on Multimedia and Expo, pp. 1–2, July 2003
Luo, A., Sullivan, T.J.: A user-friendly SSVEP-based brain–computer interface using a time-domain classifier. J. Neural Eng. 7(2), 026010 (2010)
Mackay, W.E.: EVA: an experimental video annotator for symbolic analysis of video data. ACM SIGCHI Bull. 21(2), 68–71 (1989)
Mampusti, E.T., Ng, J.S., Quinto, J.J.I., Teng, G.L., Suarez, M.T.C., Trogo, R.S.: Measuring academic affective states of students via brainwave signals. In: 2011 Third International Conference on Knowledge and Systems Engineering (KSE), pp. 226–231. IEEE, October 2011
Nie, D., Wang, X.W., Shi, L.C., Lu, B.L.: EEG-based emotion recognition during watching movies. In: 2011 5th International IEEE/EMBS Conference on Neural Engineering (NER), pp. 667–670. IEEE, April 2011
Nijholt, A., Tan, D., Allison, B., del R Milan, J., Graimann, B.: Brain-computer interfaces for HCI and games. In: Extended Abstracts on Human Factors in Computing Systems, pp. 3925–3928. ACM, April 2008
Petrantonakis, P.C., Hadjileontiadis, L.J.: Emotion recognition from EEG using higher order crossings. IEEE Trans. Inf. Technol. Biomed. 14(2), 186–197 (2010)
Picard, R.W., Picard, R.: Affective Computing, vol. 252. MIT Press, Cambridge (1997)
Shen, L.P., Leon, E., Callaghan, V., Shen, R.M.: Exploratory research on an affective e-learning model. In: Proceedings of Workshop on Blended Learning, pp. 267–278, August 2007
Shen, Y.T., Lu, P.W.: Learning by annotating: a system development study of real-time synchronous supports for distributed learning in multiple locations. In: 2012 6th International Conference on New Trends in Information Science and Service Science and Data Mining (ISSDM), pp. 701–706. IEEE, October 2012
Shen, Y.T., Lu, P.W.: Engage the power of social community in the lecture-based learning by using the collaborative tagging system. J. Convergence Inf. Technol. 8(11), 485 (2013)
Shen, Y.T., Jeng, T.S., Hsu, Y.C.: A “live” interactive tagging interface for collaborative learning. In: Luo, Y. (ed.) CDVE 2011. LNCS, vol. 6874, pp. 102–109. Springer, Heidelberg (2010). doi:10.1007/978-3-642-23734-8_16
Tao, J., Tan, T.: Affective computing: a review. In: Tao, J., Tan, T., Picard, R.W. (eds.) ACII 2005. LNCS, vol. 3784, pp. 981–995. Springer, Heidelberg (2005). doi:10.1007/11573548_125
Tkalčič, M., Burnik, U., Košir, A.: Using affective parameters in a content-based recommender system for images. User Model. User-Adap. Inter. 20(4), 279–311 (2010)
Topkara, M., Rogowitz, B., Wood, S., Boston, J.: Collaborative editing of micro-tags. In: Extended Abstracts on Human Factors in Computing Systems, CHI 2009, pp. 4297–4302. ACM, April 2009
Vidal, J.J.: Toward direct brain-computer communication. Ann. Rev. Biophys. Bioeng. 2(1), 157–180 (1973)
Weber, K., Poon, A.: Marquee: a tool for real-time video logging. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 58–64. ACM, April 1994
Wolpaw, J.R., Birbaumer, N., McFarland, D.J., Pfurtscheller, G., Vaughan, T.M.: Brain-computer interfaces for communication and control. Clin. Neurophysiol. Off. J. Int. Fed. Clin. Neurophysiol. 113(6), 767–791 (2002)
Yazdani, A., Lee, J.S., Ebrahimi, T.: Implicit emotional tagging of multimedia using EEG signals and brain computer interface. In: Proceedings of the First SIGMM Workshop on Social Media, pp. 81–88. ACM, October 2009
Zander, T.O., Kothe, C.: Towards passive brain–computer interfaces: applying brain–computer interface technology to human–machine systems in general. J. Neural Eng. 8(2), 025005 (2011)
Zhang, D., Zhao, J.L., Zhou, L., Nunamaker, J.F.: Can e-learning replace traditional classroom learning? Evidence and implication of the evolving e-learning technology. Commun. ACM 47(5), 75–79 (2004)
Acknowledgements
The financial support from the Ministry of Science and Technology (MOST), including the project “Brain Tagging System” (MOST 105-2221-E-035-070-) and the project “Resilient Livable Smart City” (MOST 105-2627-M-035-008-), is gratefully acknowledged.
© 2017 Springer International Publishing AG
Shen, Y.T., Lu, P.W., Chen, X.M. (2017). Brain Tagging: A BCI and HCI Tagging System to Evaluate the Learning Contents. In: Zaphiris, P., Ioannou, A. (eds) Learning and Collaboration Technologies. Novel Learning Ecosystems. LCT 2017. Lecture Notes in Computer Science(), vol 10295. Springer, Cham. https://doi.org/10.1007/978-3-319-58509-3_5