Summarization of Egocentric Moving Videos for Generating Walking Route Guidance

  • Masaya Okamoto
  • Keiji Yanai
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8333)


In this paper, we propose a method for summarizing an egocentric moving video (a video recorded by a moving wearable camera) to generate a walking route guidance video. To summarize an egocentric video, we analyze it with pedestrian-crosswalk detection and ego-motion classification, and estimate an importance score for each section of the video. Based on the estimated importance scores, we dynamically control the playback speed instead of generating a summarized video file in advance. For the experiments, we prepared an egocentric moving video dataset totaling more than one hour of footage and evaluated the crosswalk detection and ego-motion classification methods. A user study of the whole system showed that the proposed method clearly outperforms a simple baseline summarization method that does not analyze the video.
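The core idea of the abstract — mapping per-section importance scores to playback speeds so that important sections play at normal speed while unimportant ones are fast-forwarded — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the linear mapping, the score range [0, 1], and the speed bounds are assumptions for demonstration.

```python
def playback_speeds(importance, min_speed=1.0, max_speed=8.0):
    """Map per-section importance scores in [0, 1] to playback-speed
    multipliers: a score of 1.0 yields normal speed (min_speed),
    a score of 0.0 yields the fastest fast-forward (max_speed).

    This linear mapping is a hypothetical stand-in for the scoring
    scheme described in the paper.
    """
    return [max_speed - (max_speed - min_speed) * s for s in importance]

# Example: a crosswalk section (high importance) plays at 1x,
# a plain walking section (zero importance) at 8x.
speeds = playback_speeds([1.0, 0.0])
```

Because the speed is computed per section at playback time, no summarized video file needs to be written in advance, matching the dynamic-control approach the abstract describes.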


Keywords: Egocentric Vision · Video Summarization · Walking Route Guidance



Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Masaya Okamoto (1)
  • Keiji Yanai (1)

  1. Department of Informatics, The University of Electro-Communications, Chofu-shi, Japan
