Perceptually Improved 3D Object Representation Based on Guided Adaptive Weighting of Feature Channels of a Visual-Attention Model
Real-time interaction in virtual environments composed of numerous objects modeled with a high number of faces remains an important issue in interactive virtual environment applications. A well-established approach to this problem is to simplify small or distant objects whose minor details are not informative for users. Several approaches exist in the literature to simplify a 3D mesh uniformly. A possible improvement is to take advantage of a visual attention model to distinguish regions of a model that are important from the point of view of the human visual system; these regions can then be preserved during simplification to improve the perceived quality of the model. In this article, we present an original application of biologically inspired visual attention for improved perception-based representation of 3D objects. To identify salient regions, an enhanced visual attention model is introduced that extracts information about color, intensity, and orientation, as in the classical bottom-up visual attention model, but that also considers supplementary features believed to guide the deployment of human visual attention, such as symmetry, curvature, contrast, entropy, and edge information. Unlike the classical model, in which these features contribute equally to the identification of salient regions, a novel solution is proposed to adjust each feature's contribution to the visual-attention model based on its compliance with points identified as salient by human subjects. An iterative approach is then proposed to extract salient points from the salient regions. Salient points derived from images taken from the best viewpoints of a 3D object are then projected onto the surface of the object to identify salient vertices, which are preserved during mesh simplification. The obtained results are compared with existing solutions from the literature to demonstrate the superiority of the proposed approach.
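Two ideas from the abstract can be illustrated with a minimal sketch (not the authors' implementation; all function names and the weighting rule are illustrative assumptions): weighting each feature channel by how well its responses agree with human-marked salient points, and extracting salient points one at a time from the combined saliency map with inhibition of return.

```python
# Minimal sketch of guided adaptive channel weighting and iterative
# salient point extraction. The compliance-based weighting here (mean
# normalized channel response at human-marked pixels) is a simple
# stand-in assumption, not the paper's exact formulation.
import numpy as np

def adaptive_weights(feature_maps, human_points):
    """Weight each feature channel by the mean response it produces at
    pixel locations marked as salient by human subjects."""
    scores = []
    for fmap in feature_maps:
        f = fmap / (fmap.max() + 1e-9)  # normalize channel to [0, 1]
        scores.append(np.mean([f[r, c] for r, c in human_points]))
    scores = np.asarray(scores)
    return scores / (scores.sum() + 1e-9)  # weights sum to ~1

def saliency_map(feature_maps, weights):
    """Linear combination of feature conspicuity maps."""
    return sum(w * f for w, f in zip(weights, feature_maps))

def extract_salient_points(smap, n_points=5, radius=3):
    """Pick global maxima one at a time, suppressing a neighbourhood
    around each winner (inhibition of return)."""
    s = smap.astype(float).copy()
    points = []
    for _ in range(n_points):
        r, c = np.unravel_index(np.argmax(s), s.shape)
        points.append((r, c))
        s[max(0, r - radius):r + radius + 1,
          max(0, c - radius):c + radius + 1] = -np.inf
    return points
```

As a usage example, two synthetic feature maps with a single human-marked point lying on the peak of the first channel yield a weight vector favoring that channel, and the first extracted point coincides with that peak.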
Keywords: Interest point and salient region detection · Visual attention · Visual perception · 3D mesh · Simplification · Level-of-detail
This work is supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC).
Compliance with Ethical Standards
Conflict of interest
The authors declare that they have no conflict of interest.