AniCode: authoring coded artifacts for network-free personalized animations

Abstract

Time-based media are used in applications ranging from demonstrating the operation of home appliances to explaining new scientific discoveries. However, creating effective time-based media is challenging. We introduce a new framework for authoring and consuming time-based media. An author encodes an animation in a printed code and affixes the code to an object. A consumer captures an image of the object through a mobile application, and the image together with the code is used to generate a video on the consumer's local device. Our system is designed to be low cost and easy to use. By not requiring an Internet connection to deliver the animation, the framework enhances the privacy of the communication. By requiring the user to have a direct line-of-sight view of the object, the framework provides personalized animations that decode only in the intended context. Animation schemes in the system include 2D and 3D geometric transformations, color transformation, and annotation. We demonstrate the new framework with sample applications from a wide range of domains. We evaluate the ease of use and effectiveness of our system with a user study.
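To make the authoring/consumption pipeline concrete, the following sketch shows how a printed code might carry an animation description that is decoded and played back entirely on the local device. The JSON schema, field names, and keyframe interpolation are illustrative assumptions, not the paper's actual encoding format.

```python
import json
import math

# Hypothetical payload an author might embed in a printed code (e.g. a 2D barcode).
# The schema here is an illustrative assumption, not AniCode's actual format.
payload = json.dumps({
    "target": "lever",          # object region the animation applies to
    "scheme": "2d_transform",   # one of the framework's animation schemes
    "keyframes": [
        {"t": 0.0, "rotate_deg": 0,  "translate": [0, 0]},
        {"t": 1.0, "rotate_deg": 90, "translate": [40, 0]},
    ],
})

def lerp(a, b, u):
    """Linear interpolation between a and b with parameter u in [0, 1]."""
    return a + (b - a) * u

def transform_at(code, t):
    """Decode the code and return the interpolated 2D transform at time t
    as a 3x3 homogeneous matrix (rotation followed by translation)."""
    anim = json.loads(code)
    k0, k1 = anim["keyframes"]
    u = (t - k0["t"]) / (k1["t"] - k0["t"])
    angle = math.radians(lerp(k0["rotate_deg"], k1["rotate_deg"], u))
    tx = lerp(k0["translate"][0], k1["translate"][0], u)
    ty = lerp(k0["translate"][1], k1["translate"][1], u)
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

# Halfway through the animation: 45 degrees of rotation, half the translation.
M = transform_at(payload, 0.5)
```

A mobile client would evaluate `transform_at` per frame and warp the captured image region accordingly, so no network round trip is needed once the code is scanned.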




Author information

Correspondence to Shiyu Qiu.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 347410 KB)

Supplementary material 2 (mp4 338771 KB)



About this article


Cite this article

Wang, Z., Qiu, S., Chen, Q. et al.: AniCode: authoring coded artifacts for network-free personalized animations. Vis. Comput. 35, 885–897 (2019). https://doi.org/10.1007/s00371-019-01681-y


Keywords

  • Authoring time-based media
  • Encoding animations
  • Personalized demonstrations
  • Network-free communication