Abstract
An Augmented Virtual Environment (AVE) fuses real-time video streams with virtual scenes to provide a new capability for run-time perception of the real world. Although this technique has been developed for many years, it still suffers from issues of fusion correctness, complexity, and image distortion during fly-through. Image distortion is common in AVE systems and is determined by the viewpoint from which the environment is rendered. Existing work lacks an evaluation of viewpoint quality and therefore fails to optimize the flight path for AVE. In this paper, we propose a novel method of viewpoint quality evaluation (VQE) that takes texture distortion as its evaluation metric, with texture stretch and object fragmentation as the main factors of distortion. We visually compare our method with viewpoint entropy on a campus scene, demonstrating that our method better reflects the degree of distortion. Furthermore, we conduct a user study showing that our method is well suited to viewpoint control for high-quality demonstration in AVE.
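For readers unfamiliar with the baseline mentioned above, viewpoint entropy scores a candidate viewpoint by the Shannon entropy of the projected face areas. The sketch below shows that standard formula together with a purely illustrative distortion score combining mean texture stretch and fragment count; the function names, weights, and normalisation in distortion_score are assumptions for illustration, not the paper's actual VQE definition.

import math

def viewpoint_entropy(projected_areas, background_area=0.0):
    """Classic viewpoint entropy (Vazquez et al., 2001):
    H(v) = -sum_i (a_i / a_t) * log(a_i / a_t),
    where a_i is the projected area of face i (plus the background)
    and a_t is the total projected area."""
    areas = [a for a in projected_areas if a > 0.0]
    if background_area > 0.0:
        areas.append(background_area)
    total = sum(areas)
    if total <= 0.0:
        return 0.0
    return -sum((a / total) * math.log(a / total) for a in areas)

def distortion_score(stretch_ratios, fragment_count, w_stretch=0.5, w_frag=0.5):
    """Hypothetical texture-distortion score: penalises the average
    texture stretch of projected texels and the number of object
    fragments visible from the viewpoint. Lower is better. Weights
    and normalisation are assumptions, not the paper's definition."""
    mean_stretch = sum(stretch_ratios) / len(stretch_ratios) if stretch_ratios else 0.0
    # A stretch ratio of 1 means no distortion, so only the excess is penalised.
    stretch_term = max(mean_stretch - 1.0, 0.0)
    return w_stretch * stretch_term + w_frag * fragment_count

if __name__ == "__main__":
    # Projected areas (in pixels) of three visible faces from a candidate viewpoint.
    print(viewpoint_entropy([1200.0, 800.0, 400.0], background_area=600.0))
    # Per-face texture stretch ratios and two visible object fragments.
    print(distortion_score([1.1, 1.8, 2.4], fragment_count=2))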
Acknowledgement
This work is supported by the Natural Science Foundation of China under Grants No. 61572061 and No. 61472020.
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
Meng, M., Zhou, Y., Tan, C., Zhou, Z. (2018). Viewpoint Quality Evaluation for Augmented Virtual Environment. In: Hong, R., Cheng, WH., Yamasaki, T., Wang, M., Ngo, CW. (eds) Advances in Multimedia Information Processing – PCM 2018. Lecture Notes in Computer Science, vol 11166. Springer, Cham. https://doi.org/10.1007/978-3-030-00764-5_21
DOI: https://doi.org/10.1007/978-3-030-00764-5_21