
An object expression system using depth-maps


Abstract

Most current augmented reality (AR) systems rely on data gloves or markers for interaction between objects and the background, which makes them inconvenient to use and reduces immersion. To strengthen immersion in AR, these additional input devices should be removed, which in turn requires that spatial coordinates be perceived accurately without attached markers. This paper proposes an object expression system that uses depth-maps to support interaction without any additional input devices and thereby improve immersion. Immersion is improved by projecting the acquired images onto a 2D space, extracting vanishing lines, calculating virtual spatial coordinates for the projected images, and varying the size of each inserted object according to the area of the virtual-coordinate region it occupies. Because the system creates 3D objects without requiring a 3D modeler, it also improves the efficiency of object creation.
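
The abstract describes a pipeline of projecting the captured image onto 2D coordinates, extracting vanishing lines, and resizing an inserted object according to its position in the recovered virtual space. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes OpenCV and NumPy, uses a standard Hough line transform to estimate a single vanishing point (the paper does not specify its detection method), and derives a depth-based scale factor from the distance between an insertion point and that vanishing point. The file name scene.jpg and the insertion point (400, 300) are hypothetical.

```python
# Minimal sketch (assumed libraries: OpenCV, NumPy): estimate a vanishing
# point from detected line segments and scale an inserted 2D object by its
# distance from that point, approximating the depth-based resizing the
# abstract describes. Not the authors' implementation.
import cv2
import numpy as np


def estimate_vanishing_point(gray):
    """Estimate a single vanishing point as the least-squares intersection
    of Hough line segments detected in the image."""
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                               minLineLength=40, maxLineGap=10)
    if segments is None:
        return None
    # Each segment (x1, y1, x2, y2) defines a line a*x + b*y = c.
    A, c = [], []
    for x1, y1, x2, y2 in segments[:, 0]:
        a, b = y2 - y1, x1 - x2
        norm = np.hypot(a, b)
        if norm < 1e-6:
            continue
        A.append([a / norm, b / norm])
        c.append((a * x1 + b * y1) / norm)
    # Least-squares point minimizing perpendicular distance to all lines.
    vp, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(c), rcond=None)
    return vp  # (x, y) in image coordinates


def depth_scale(point, vanishing_point, image_height, min_scale=0.2):
    """Scale factor for an object inserted at `point`: positions near the
    vanishing point are treated as far away and drawn smaller."""
    d = np.linalg.norm(np.asarray(point, dtype=float) - vanishing_point)
    return max(min_scale, min(1.0, d / image_height))


if __name__ == "__main__":
    frame = cv2.imread("scene.jpg")  # hypothetical input image
    if frame is None:
        raise SystemExit("scene.jpg not found")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    vp = estimate_vanishing_point(gray)
    if vp is not None:
        scale = depth_scale((400, 300), vp, frame.shape[0])
        print(f"vanishing point: {vp}, object scale: {scale:.2f}")
```

Under these assumptions, the scale factor plays the role of the area-dependent resizing the paper describes: an object inserted close to the vanishing point is rendered small, and one inserted near the image border is rendered at full size.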





Acknowledgment

This work was supported by the Industrial Strategic Technology Development Program (10040125, Development of the Integrated Environment Control S/W Platform for Constructing an Urbanized Vertical Farm) funded by the Ministry of Knowledge Economy (MKE, Korea).

Author information

Corresponding author

Correspondence to YangSun Lee.


About this article

Cite this article

Kim, JC., Ban, KJ., Park, D. et al. An object expression system using depth-maps. Multimed Tools Appl 63, 247–263 (2013). https://doi.org/10.1007/s11042-011-0955-2


