Pixel-wise Background Segmentation with Moving Camera
This paper proposes a novel approach for background extraction from a scene captured by a moving camera. The proposed method uses a codebook, a compression technique for storing data from a long sequence of video frames. The codebook is used to construct a model that can segment out the foreground using only a few initial video frames as a training sequence. The model is dynamic: it keeps learning from new video frames throughout its lifetime while simultaneously producing output. The approach is pixel-wise, and the codebook for each pixel is built independently. Special emphasis is placed on image intensity, as the human eye is more sensitive to intensity variations. A two-layer model is used, in which codewords are promoted from the cache to the background model once they satisfy frequency and negative run-length conditions. Experimental results show the efficacy of the proposed method.
Keywords: Video Surveillance · Codebook · Background Modelling
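The two-layer, per-pixel update described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the intensity-only matching rule, the class and variable names, and the thresholds (`EPS`, `T_FREQ`, `T_MNRL`) are all assumptions chosen for clarity.

```python
# Hypothetical sketch of one pixel's two-layer codebook model.
# EPS, T_FREQ, and T_MNRL are illustrative thresholds, not values
# taken from the paper.

EPS = 10      # intensity tolerance when matching a codeword
T_FREQ = 5    # minimum match frequency before promotion to background
T_MNRL = 20   # maximum allowed negative run length (frames unmatched)

class Codeword:
    def __init__(self, intensity, t):
        self.lo = self.hi = intensity  # intensity range this codeword covers
        self.freq = 1                  # number of frames it has matched
        self.last = t                  # frame index of the last match
        self.mnrl = 0                  # longest gap between matches so far

class PixelModel:
    def __init__(self):
        self.background = []  # stable layer used for segmentation
        self.cache = []       # candidate layer for new observations

    def _match(self, layer, intensity, t):
        """Find and update a codeword in `layer` covering `intensity`."""
        for cw in layer:
            if cw.lo - EPS <= intensity <= cw.hi + EPS:
                cw.mnrl = max(cw.mnrl, t - cw.last)  # track the longest gap
                cw.lo = min(cw.lo, intensity)
                cw.hi = max(cw.hi, intensity)
                cw.freq += 1
                cw.last = t
                return cw
        return None

    def update(self, intensity, t):
        """Process one frame; return True if the pixel is background."""
        if self._match(self.background, intensity, t):
            return True
        cw = self._match(self.cache, intensity, t)
        if cw is None:
            self.cache.append(Codeword(intensity, t))  # new candidate
            return False
        # Promote when frequent enough and never absent for too long.
        if cw.freq >= T_FREQ and cw.mnrl <= T_MNRL:
            self.cache.remove(cw)
            self.background.append(cw)
            return True
        return False
```

In this sketch a pixel that keeps the same intensity is first held in the cache, then promoted to the background layer once it has matched `T_FREQ` times without ever disappearing for more than `T_MNRL` frames; a transient foreground intensity never accumulates enough matches and stays in the cache.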