Visualization is one of the most standard applications of 3D video. Its essential functionality includes interactive free-viewpoint and 3D (pop-up) visualization of the captured scene as is. After presenting an ordinary 3D video visualization system, this chapter introduces a novel free-viewpoint visualization method for a 3D video stream of a single human in action. The novelty lies in visualizing the 3D video from the performer's own viewpoint. Ordinary free-viewpoint visualization methods render the object's action as viewed from outside the scene; we may call this an objective, or third-person, view of the action. With 3D video data, moreover, we can render a subjective, or first-person, view, in which the action is visualized as if it were captured by a head-mounted camera. Such subjective visualization is very useful for understanding where to look while performing juggling or traditional dances; in MAIKO dances, for example, eye motions are very important for expressing mental feelings.