An adaptive camera-selection algorithm to acquire higher-quality images
Various types of three-dimensional (3D) cameras are used to analyze real-world objects and environments. However, because most existing 3D camera systems capture scenes statically with a single camera type, the quality of the captured images can be limited. In this paper, we therefore build a hybrid camera system that combines passive triangulation (PT)- and active triangulation (AT)-based cameras, and we propose a new mechanism for estimating accurate 3D depth by adaptively switching between the two camera types depending on the complexity of the environment. The proposed method first extracts brightness and texture from the initial input images, the two major features representing the current state of the surrounding environment. It then generates a set of rules that, by analyzing these two features, dynamically selects whichever of the PT- and AT-based cameras can operate more suitably in the current environment. Experimental results demonstrate that the proposed adaptive camera-selection approach extracts 3D depth reliably, with reasonable performance in terms of accuracy and time.
Keywords: Camera selection · Feature extraction · Rule generation · Environmental complexity
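The selection mechanism summarized in the abstract can be sketched as a minimal rule-based chooser. The feature definitions (mean intensity for brightness, mean gradient magnitude for texture) and the threshold values below are illustrative assumptions, not the rules derived in the paper; the intuition they encode is that AT cameras project their own light and so tolerate dark or textureless scenes, while PT (stereo) matching needs ambient light and surface texture.

```python
import numpy as np

def extract_features(gray):
    """Extract brightness and texture features from a grayscale image.

    Brightness: mean pixel intensity (0-255).
    Texture: mean gradient magnitude, a simple stand-in for the
    paper's texture measure (assumed here for illustration).
    """
    brightness = float(gray.mean())
    gy, gx = np.gradient(gray.astype(float))
    texture = float(np.hypot(gx, gy).mean())
    return brightness, texture

def select_camera(brightness, texture,
                  bright_thresh=80.0, texture_thresh=10.0):
    """Illustrative rule set; thresholds are placeholders, not the
    paper's learned values."""
    if brightness < bright_thresh:
        return "AT"   # too dark for passive stereo matching
    if texture < texture_thresh:
        return "AT"   # textureless surfaces defeat stereo correspondence
    return "PT"       # bright, textured scene: passive stereo suffices

# Example: a dark, uniform frame routes to the active-triangulation camera.
dark_frame = np.full((64, 64), 20, dtype=np.uint8)
b, t = extract_features(dark_frame)
print(select_camera(b, t))  # prints "AT"
```

In the paper's system the rules are generated from the extracted features rather than fixed by hand; this sketch only shows the shape of the feature-extraction and camera-switching pipeline.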
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science, and Technology (2011-0021984).