
Visualization of Real World Activity on Group Work

  • Daisuke Deguchi
  • Kazuaki Kondo
  • Atsushi Shimada
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10922)

Abstract

Group work is widely introduced and practiced as a method for achieving learning goals efficiently through collaboration among group members. However, since most types of group work are carried out in the real environment, it is very difficult to perform formative assessment and real-time evaluation without feedback from students, so there is a strong demand for methods that support the evaluation of group work. To meet this demand, this paper proposes a method to visualize real world activity during group work by using first-person-view cameras and wearable sensors. The proposed method visualizes three scores: (1) individual attention, (2) hand visibility, and (3) individual activity. To evaluate the performance of the method and analyze the relationships between the scores, we conducted experiments on the “Marshmallow Challenge”, a collaborative task in which a group constructs a tower from marshmallows and spaghetti within a time limit. Through the experiments, we confirmed that the proposed method has the potential to become an evaluation tool for visualizing the activity of group work.
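
As a rough illustration of how such per-participant scores might be aggregated and visualized, the following Python sketch (not from the paper; every signal name, score definition, and parameter here is an assumption) smooths three hypothetical per-frame signals, one for each score, into time series and plots them for a single participant.

```python
# Illustrative sketch only: the paper does not publish code, and the exact
# score definitions are assumptions. Each score is treated as a per-frame
# value in [0, 1], smoothed over time and plotted for one participant.
import numpy as np
import matplotlib.pyplot as plt

def moving_average(x, window=30):
    """Smooth a per-frame signal with a simple moving average."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

# Hypothetical per-frame inputs for one participant:
#   attention[t] = 1 if the shared work area is inside the first-person view
#   hand_vis[t]  = 1 if the wearer's own hands are detected in the frame
#   accel_mag[t] = wearable accelerometer magnitude, normalized to [0, 1]
rng = np.random.default_rng(0)
n_frames = 600  # e.g. 20 s of video at 30 fps
attention = rng.integers(0, 2, n_frames).astype(float)
hand_vis = rng.integers(0, 2, n_frames).astype(float)
accel_mag = np.clip(rng.normal(0.4, 0.2, n_frames), 0.0, 1.0)

scores = {
    "individual attention": moving_average(attention),
    "hand visibility": moving_average(hand_vis),
    "individual activity": moving_average(accel_mag),
}

t = np.arange(n_frames) / 30.0  # time axis in seconds
for name, score in scores.items():
    plt.plot(t, score, label=name)
plt.xlabel("time [s]")
plt.ylabel("score")
plt.legend()
plt.show()
```

In practice the binary detections above would come from object/hand detectors run on the first-person video and the activity signal from the wearable sensor, but those pipelines are outside the scope of this sketch.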

Keywords

Visualization · Real world activity · Group work

Notes

Acknowledgement

Parts of this research were supported by JSPS KAKENHI Grant Number 16K12786.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Daisuke Deguchi (1)
  • Kazuaki Kondo (2)
  • Atsushi Shimada (3)

  1. Information Strategy Office, Information and Communications, Nagoya University, Nagoya, Japan
  2. Academic Center for Computing and Media Studies, Kyoto University, Kyoto, Japan
  3. Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan
