A Large-Scale Visual Check-In System for TV Content-Aware Web with Client-Side Video Analysis Offloading

  • Shuichi Kurabayashi
  • Hiroki Hanaoka
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10570)


The intuitive linkage between TV and the web brings about new opportunities to motivate people to watch video content or visit websites. A check-in system that recognizes which specific programs are being watched by users is highly effective in promoting TV content. However, such a check-in system faces two technical problems: the temporal characteristics of broadcasting media, resulting in a massive number of simultaneous check-in requests, and the wide variation of audience environments, such as lighting, cameras, and TV devices. We propose a visual check-in system for linking websites and TV programs. The system identifies what program a user is watching by analyzing the visual features of a video captured with a smartphone. The key technology is a real-time video analysis framework that achieves both scalability to an enormous number of simultaneous requests and practical robustness in terms of content identification. We have constructed a special color scheme consisting of 120 (non-neutral) colors to absorb differences in the illumination levels of user environments. This color scheme plays an important role in offloading video analysis tasks onto the client-side in a tamper-proof way. Our system assigns a unique color scheme to each user and verifies a check-in request using the corresponding color scheme, thus preventing malicious users from sharing the analysis results with others. Experimental results using a real dataset demonstrate the accuracy and efficiency of the proposed method. We have applied the system to actual TV programs and clarified its scalability and precision in a production environment.


Keywords: Check-in · Content-awareness · Tamper-proof · Client-side offloading
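The abstract describes assigning each user a unique scheme of 120 non-neutral colors, so that a client-side analysis result is only valid for the user it was issued to. The paper itself does not publish its algorithm here, but the idea can be illustrated with a minimal sketch: `user_palette` and `quantize` are hypothetical names, the 40-channel-spread neutrality threshold and the histogram signature are assumptions for illustration, not the authors' method.

```python
import hashlib
import random

def user_palette(user_id: str, size: int = 120):
    """Derive a deterministic per-user palette of non-neutral RGB colors.

    Because the palette is derived from the user ID, the server can
    regenerate it and verify a check-in, while a result computed under
    one user's palette is meaningless under another's.
    """
    seed = int.from_bytes(hashlib.sha256(user_id.encode()).digest()[:4], "big")
    rng = random.Random(seed)
    palette = []
    while len(palette) < size:
        r, g, b = (rng.randrange(256) for _ in range(3))
        # Reject near-neutral (grayish) colors: channels too close together
        # are unreliable under varying room lighting.
        if max(r, g, b) - min(r, g, b) >= 40:
            palette.append((r, g, b))
    return palette

def quantize(frame_pixels, palette):
    """Map each captured pixel to its nearest palette index (squared RGB distance).

    A histogram over these indices could then serve as the client's
    check-in signature for the captured frame.
    """
    def nearest(px):
        return min(range(len(palette)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(px, palette[i])))
    return [nearest(px) for px in frame_pixels]
```

In this sketch the client quantizes captured frames against its own palette and submits the resulting index histogram; the server, knowing which palette it issued, recomputes the expected histogram from the broadcast frame, so a signature shared by a malicious user fails verification under any other user's palette.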



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Cygames Research, Cygames, Inc., Tokyo, Japan
