International Conference on Human-Computer Interaction

HCI 2015: HCI International 2015 – Posters’ Extended Abstracts, pp. 683–689

Construction of 3-Dimensional Virtual Environment Based on Photographed Image (the Acquisition and Processing of the Photographed Image)

  • Tetsuya Haneta
  • Hiroyo Ohishi
  • Tadasuke Furuya
  • Takahiro Takemoto
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 528)


In this study, we propose the construction of a 3-dimensional virtual environment of a bay area. Real images are well suited to supporting the construction of structures along shipping routes and the positioning of large ships arriving at a pier. When constructing berthing facilities such as container yards, we must consider their influence on shipping routes and the surrounding environment. Design drawings alone leave much unclear, so the actual environment must be observed to understand the present situation more precisely. A captain would likely wish to run a simulation based on real images before arriving in port. We therefore consider a method for constructing such a virtual environment.


Keywords: Virtual environment · Image-based rendering · Tour into the picture · Panoramic image

1 Background and Purpose of the Study

In recent years, studies on the construction of 3-dimensional virtual environments have flourished in fields such as Virtual Reality (VR) and Computer Graphics (CG). Data describing wide and complicated spaces are captured, rebuilt inside a computer, and visualized. With the growth in computing performance, the construction of large-scale 3-dimensional virtual environments close to reality has become feasible. Construction methods are classified roughly into model-based and image-based approaches.

In the model-based approach, the object is reproduced with 3-dimensional geometry, from which the virtual environment is constructed.

In the image-based approach, the virtual environment is constructed from photographed images. Using many photographs, for example of a cityscape, rendering becomes possible from viewpoints other than the original camera positions [1].

Studies on constructing 3-dimensional virtual environments of cityscapes and the like have already been reported. However, there are few studies on 3-dimensional virtual environments of congested sea areas and narrow waterways. A 3-dimensional virtual environment of a route could support the officer's watch. In addition, it can give a navigation officer who enters a route for the first time advance knowledge of that route. As a result, we expect it to help prevent accidents at sea.

Therefore, in this paper, we efficiently acquire photographed images of the scenery along a route from the training ship “Shioji-maru” owned by our university, apply an image-based method to these images, and propose a construction method for a “marine virtual environment” that reproduces the route as a 3-dimensional virtual environment. The environment allows the user to walk through it. Furthermore, we aim to construct a marine virtual environment in which the depth ordering of the structures is maintained as the position of the virtual viewpoint changes.

2 Constructing the Marine Virtual Environment

2.1 Overall Flow

First, we produce panoramic images (Fig. 1(a), (b)). We then separate foreground and background images from them (Fig. 1(c)); the foreground images are the buildings constructed along the route. The separated images of Fig. 1(c) contain extra regions, so we make these regions transparent using the alpha channel (Fig. 1(d)). The transparent foreground images are turned into billboards (Fig. 1(e)). Finally, we construct the marine virtual environment (Fig. 1(f)).
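The alpha-channel step above can be sketched as follows; a minimal NumPy example, assuming the structure mask has already been obtained (the function name `permeabilize` is ours, not from the paper):

```python
import numpy as np

def permeabilize(rgb, mask):
    """Attach an alpha channel to an RGB image: pixels outside the
    structure mask become fully transparent, so only the extracted
    building remains visible when the image is used as a billboard."""
    alpha = np.where(mask, 255, 0).astype(np.uint8)
    return np.dstack([rgb, alpha])

# Example: a 2x2 image where only the top-left pixel belongs to a structure.
rgb = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.array([[True, False], [False, False]])
rgba = permeabilize(rgb, mask)
```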
Fig. 1.

Overall flow

2.2 Preparation

First, we acquire the photographed images used for the marine virtual environment.

We captured them on the route around Hinode-Takeshiba Wharf from 13:30 to 13:40 on October 17.

We extract still images from the recorded video at an interval of 30 s; each captured image is 1,920 × 1,080 pixels. From the captured images we produce panoramic images of 10,737 × 847 pixels (Fig. 2).
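The sampling above can be sketched as follows; a minimal example where the frame rate is a parameter (the 30 fps value in the usage line is our assumption, not stated in the paper):

```python
def capture_frame_indices(duration_s, fps, interval_s=30):
    """Frame numbers to grab when sampling one still image every
    interval_s seconds from a video of duration_s seconds."""
    return [int(t * fps) for t in range(0, duration_s + 1, interval_s)]

# The 10-minute recording (13:30-13:40) sampled every 30 s yields 21 stills.
indices = capture_frame_indices(duration_s=600, fps=30)
```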
Fig. 2.

Panoramic images at each time: (a) at time t, (b) at t + 30 s, (c) at t + 60 s

2.3 Base of Marine Virtual Environmental Construction

The marine virtual environment is constructed with the method called “Tour Into the Picture” (TIP) [2]. TIP constructs a virtual environment from a single image: the image is divided into five rectangles (front wall, ceiling, floor, right wall, and left wall) based on a vanishing point, and these are mapped as textures into the virtual environment. However, the back wall, which faces the front wall from behind the photographer, is left blank in TIP. As a result, the user's sense of presence and immersion decreases.
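The five-region division of TIP can be sketched as follows. In the image plane the ceiling, floor, and side walls are quadrilaterals whose inner edges meet the rectangle drawn around the vanishing point; a minimal sketch with our own naming, not the authors' code:

```python
def tip_quads(w, h, inner):
    """Divide an image of size w x h into TIP's five regions, given the
    inner rectangle (x0, y0, x1, y1) drawn around the vanishing point.
    Each region is returned as four (x, y) corners; in 3D each quad is
    un-projected to a rectangle of the room-shaped environment."""
    x0, y0, x1, y1 = inner
    return {
        "front":   [(x0, y0), (x1, y0), (x1, y1), (x0, y1)],
        "ceiling": [(0, 0), (w, 0), (x1, y0), (x0, y0)],
        "floor":   [(x0, y1), (x1, y1), (w, h), (0, h)],
        "left":    [(0, 0), (x0, y0), (x0, y1), (0, h)],
        "right":   [(x1, y0), (w, 0), (w, h), (x1, y1)],
    }

# Example on a 1,920 x 1,080 frame with an inner rectangle near the centre.
quads = tip_quads(1920, 1080, inner=(800, 400, 1120, 680))
```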

Kang et al. [3] therefore use a panoramic image as the input to TIP. Based on a vanishing line, they construct the 3-dimensional virtual environment by mapping the image onto a cylinder. The visibility is good when looking around from the virtual viewpoint, but compared with the original TIP the feeling of depth is weakened, and maintaining the depth ordering of the structures is difficult.

We instead create the back wall by using panoramic images as the input to TIP and construct the virtual environment as a hexahedron. The panoramic image is divided into the rectangles shown in Fig. 3.
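As a sketch of this mapping, the panorama's width can be assigned to the four wall textures of the hexahedron. The equal-quarters split below is our simplifying assumption for illustration; the actual classification is the one given in Fig. 3:

```python
def wall_columns(pano_width):
    """Column ranges of a 360-degree panorama assigned to the four
    walls of the hexahedron, assuming an equal quarter per wall
    (ceiling and floor textures come from the top and bottom bands)."""
    q = pano_width // 4
    return {
        "front": (0, q),
        "right": (q, 2 * q),
        "back":  (2 * q, 3 * q),
        "left":  (3 * q, 4 * q),
    }

# The 10,737-pixel-wide panorama from Sect. 2.2.
cols = wall_columns(10737)
```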
Fig. 3.

Each rectangular classification in panoramic image

2.4 Separation of Foreground and Background Images

For structures built on both sides of the route, the depth ordering does not change as the viewpoint moves. However, a feeling of depth cannot be expressed with a 2-dimensional image alone. By separating foreground and background structures from the right and left wall images and placing them in the virtual environment at appropriate depths, the depth ordering of the structures can be maintained.

To maintain this ordering, we extract the structures from the right and left wall images. For the extraction we use “GrowCut”, an image-segmentation method.
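GrowCut evolves a cellular automaton in which labelled pixels “attack” their neighbours with a strength attenuated by colour difference. A minimal grayscale sketch, our own simplified implementation rather than the authors' code (border wrap-around is ignored here):

```python
import numpy as np

def growcut(image, labels, strength, iterations=20):
    """Minimal GrowCut on a grayscale image.  labels: 0 = unlabelled,
    1 = foreground seed, 2 = background seed; strength is the seed
    confidence in [0, 1].  A labelled pixel conquers a neighbour when
    its strength, attenuated by the intensity difference, exceeds the
    neighbour's current strength."""
    img = image.astype(float)
    max_diff = float(img.max() - img.min()) or 1.0
    lab = labels.copy()
    th = strength.astype(float).copy()
    for _ in range(iterations):
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nb_img = np.roll(img, (dy, dx), axis=(0, 1))
            nb_lab = np.roll(lab, (dy, dx), axis=(0, 1))
            nb_th = np.roll(th, (dy, dx), axis=(0, 1))
            g = 1.0 - np.abs(img - nb_img) / max_diff  # attenuation
            attack = (g * nb_th > th) & (nb_lab != 0)
            lab = np.where(attack, nb_lab, lab)
            th = np.where(attack, g * nb_th, th)
    return lab

# Example: a dark left half (structure) and bright right half (sky),
# with one seed pixel in each region.
img = np.zeros((8, 8)); img[:, 4:] = 255
labels = np.zeros((8, 8), dtype=int); labels[4, 1] = 1; labels[4, 6] = 2
strength = np.zeros((8, 8)); strength[4, 1] = 1.0; strength[4, 6] = 1.0
seg = growcut(img, labels, strength)
```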

Figures 4 and 5 show the extracted foreground and background images.
Fig. 4.

Foreground image

Fig. 5.

Background image

Furthermore, we extract every structure from the background image (Fig. 6).
Fig. 6.

Extract every structure

2.5 The Billboard of the Structures

A billboard is a plane that rotates according to the position of the virtual camera so that it always faces the camera. To maintain the depth ordering of the foreground and background structures, we convert them into billboards and place them in the marine virtual environment.
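The billboard rotation can be sketched as a yaw about the vertical axis so that the plane's normal points at the camera; a minimal example, where the z-forward/y-up coordinate convention is our assumption:

```python
import math

def billboard_yaw(obj_pos, cam_pos):
    """Yaw angle (radians, about the vertical y-axis) that rotates a
    plane at obj_pos so its front faces the camera at cam_pos."""
    dx = cam_pos[0] - obj_pos[0]
    dz = cam_pos[2] - obj_pos[2]
    return math.atan2(dx, dz)

# A camera straight ahead needs no rotation; one directly to the
# right needs a quarter turn.
ahead = billboard_yaw((0, 0, 0), (0, 0, 5))
side = billboard_yaw((0, 0, 0), (5, 0, 0))
```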

2.6 Construction of the Marine Virtual Environment

Figures 7 and 8 show the constructed marine virtual environment.
Fig. 7.

Front wall

Fig. 8.

Right wall: (a) viewed from the left, (b) viewed from the right

A hexahedral virtual environment was constructed. The user can look around from the virtual viewpoint and easily grasp the scenery of the route. In addition, the structures belonging to the background are realized as billboards, so on the right and left walls the scene is reproduced with a feeling of depth.

However, the height of the ground line differs between the right and left walls depending on the photography conditions.

2.7 Expanding the Marine Virtual Environment

In TIP, the textures are produced by projective transformation, which leaves blurred areas. To reduce the blurred areas and expand the scale of the environment, we connect several hexahedral virtual environments (Fig. 9).
Fig. 9.

Method of expanding

In the overlapping areas, the panoramic images should be produced so that they cover the same scene of the real world.

3 Conclusions

In this paper, we constructed a marine virtual environment. The user can look around from the virtual viewpoint and walk through the environment, and the depth ordering of the structures is maintained as the position of the virtual viewpoint changes.

However, because objects unnecessary for the construction, such as international signal flags, are included in the panoramic images, the number of hole regions increases. To keep holes to a minimum, the camera position for image acquisition must be chosen carefully. When holes do appear, a more realistic marine virtual environment can be constructed by filling them in [4].

4 Future Work

Placing the buoys that exist along the route in the marine virtual environment as billboards would help users grasp the positional relations of the whole route.

In addition, the marine virtual environment in this paper reproduces the scene under daytime lighting, but night navigation must also be considered. We aim to construct a marine virtual environment with virtual light sources to reproduce the route scene at night.

Furthermore, using the Point Cloud Library (PCL) [5], we will reproduce ships in the marine virtual environment and aim at the construction of a more realistic marine virtual environment.


  1. Debevec, P.E., Taylor, C.J., Malik, J.: Modeling and rendering architecture from photographs: a hybrid geometry- and image-based approach. In: Proceedings of SIGGRAPH 1996, New Orleans, Louisiana, pp. 11–20, August 4–9, 1996
  2. Horry, Y., Anjyo, K., Arai, K.: Tour into the picture: using a spidery mesh interface to make animation from a single image. In: ACM SIGGRAPH 1997, pp. 225–232 (1997)
  3. Kang, H.W., Pyo, S.H., Anjyo, K., Shin, S.Y.: Tour into the picture using a vanishing line and its extension to panoramic images. In: Proceedings of Eurographics 2001, pp. 132–141 (2001)
  4. Wexler, Y., Shechtman, E., Irani, M.: Space-time video completion. In: Proceedings of CVPR 2004, vol. 1, pp. I-120–I-127 (2004)
  5. Point Cloud Library.

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Tetsuya Haneta
  • Hiroyo Ohishi
  • Tadasuke Furuya
  • Takahiro Takemoto

  1. Tokyo University of Marine Science and Technology, Minato, Japan
