
An Instant Approach with Visual Concepts and Query Formulation Based on Users’ Information Needs for Initial Retrieval of Lifelog Moments

  • Conference paper
  • First Online:

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11966)

Abstract

Smart devices, such as smartphones and wearable cameras, have become widely used, and lifelogging with such gadgets is now a common activity. Because this trend produces large volumes of personal lifelog records, it is important to support users’ efficient access to their personal lifelog archives. The NTCIR Lifelog task series has studied this retrieval setting as the Lifelog Semantic Access sub-task (LSAT): given a topic describing a user’s daily activity or event, e.g. “Find the moments when a user was eating any food at his/her desk at work”, as a query, a system retrieves the relevant images of those moments from the user’s lifelogging records. Although, at the NTCIR conferences, interactive systems, which can exploit searchers’ feedback during retrieval, have shown higher performance than automatic systems that run without user feedback, interactive systems depend on the quality of their initial results, which can be regarded as the output of an automatic system. We envision automatic retrieval that will later be used inside interactive systems. In this paper, therefore, guided by the principle that the system should be easy to implement so that it can be adopted later, we propose a method that scores lifelog moments using only the metadata generated by publicly available pretrained detectors together with word embeddings. Experimental results show that the proposed method outperforms the automatic retrieval systems presented in the NTCIR-14 Lifelog-3 task. We also show that retrieval can be further improved by about 0.3 in MAP with query formulation that considers the relevant/irrelevant descriptions of multimodal information in the query topics.
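To make the scoring idea concrete, below is a minimal sketch (not the authors’ released code) of how lifelog moments could be ranked by word2vec similarity between query terms and the visual concept labels that pretrained object/scene detectors output for each moment. The vector file name and the example concept labels are illustrative assumptions.

```python
# Minimal sketch: rank a lifelog moment by the embedding similarity between
# query terms and the concept labels detected in the moment's images.
import numpy as np
from gensim.models import KeyedVectors

# Assumed path to pretrained word2vec vectors (e.g. the Google News model).
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

def embed(terms):
    """Average the embeddings of in-vocabulary terms (zero vector if none)."""
    vecs = [vectors[t] for t in terms if t in vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(vectors.vector_size)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def score_moment(query_terms, concept_labels):
    """Similarity between the query and the detector output of one moment."""
    return cosine(embed(query_terms), embed(concept_labels))

# Example: topic terms vs. concepts a pretrained object/scene detector might
# emit for one moment (hypothetical labels, split into single words).
query = ["eating", "food", "desk", "work"]
moment_concepts = ["dining table", "laptop", "office", "sandwich"]
label_words = [w for label in moment_concepts for w in label.split()]
print(score_moment(query, label_words))
```

Moments would then be sorted by this score to produce the initial ranked list that an interactive system could refine with user feedback.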


Notes

  1. https://en.wikipedia.org/wiki/Autographer.
  2. http://www.moves-app.com/.
  3. https://www.fitbit.com/.
  4. https://code.google.com/word2vec/.
  5. https://zenodo.org/record/3445638.


Author information

Correspondence to Tokinori Suzuki.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Suzuki, T., Ikeda, D. (2019). An Instant Approach with Visual Concepts and Query Formulation Based on Users’ Information Needs for Initial Retrieval of Lifelog Moments. In: Kato, M., Liu, Y., Kando, N., Clarke, C. (eds) NII Testbeds and Community for Information Access Research. NTCIR 2019. Lecture Notes in Computer Science, vol 11966. Springer, Cham. https://doi.org/10.1007/978-3-030-36805-0_1

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-36805-0_1

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-36804-3

  • Online ISBN: 978-3-030-36805-0

  • eBook Packages: Computer Science (R0)
