Background Subtraction with Superpixel and k-means
In this paper, we present a background subtraction approach based on superpixels and k-means that aims to use less memory for the background model and less computation time for moving object detection. Superpixels group similar pixels into the same region, and k-means is then used to obtain the main color values of each superpixel. The mean and variance of each superpixel, together with its changes across previous frames, are used as discriminative features. The main contribution of this paper is a set of features suited to superpixel-based moving object detection. Tests on different videos show that the method achieves equal or better segmentation than competing techniques and can process 320 × 240 video at 114 fps, including post-processing, faster than most existing algorithms.
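The pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: fixed grid blocks stand in for superpixels, a tiny k-means extracts the dominant color per region, and a region is flagged as foreground when its dominant color or variance deviates from the background model; all function names and thresholds (`tau_c`, `tau_v`) are illustrative assumptions.

```python
import numpy as np

def dominant_color(pixels, k=2, iters=10, seed=0):
    # Tiny k-means: return the center of the largest cluster (main color).
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    counts = np.bincount(labels, minlength=k)
    return centers[counts.argmax()]

def region_features(frame, bs=8):
    # Grid blocks stand in for superpixels; real superpixels (e.g. SLIC)
    # would follow image edges instead of a fixed grid.
    h, w, _ = frame.shape
    feats = {}
    for y in range(0, h, bs):
        for x in range(0, w, bs):
            px = frame[y:y + bs, x:x + bs].reshape(-1, 3).astype(float)
            feats[(y, x)] = (dominant_color(px), px.mean(), px.var())
    return feats

def detect_foreground(bg_feats, frame, bs=8, tau_c=30.0, tau_v=200.0):
    # Flag a region when its dominant color or variance deviates from
    # the background model by more than the (illustrative) thresholds.
    h, w, _ = frame.shape
    mask = np.zeros((h, w), dtype=bool)
    for (y, x), (c, m, v) in region_features(frame, bs).items():
        c0, m0, v0 = bg_feats[(y, x)]
        if np.linalg.norm(c - c0) > tau_c or abs(v - v0) > tau_v:
            mask[y:y + bs, x:x + bs] = True
    return mask
```

In use, `region_features` would be computed once on a background frame (or a running average of frames) and `detect_foreground` applied per incoming frame; the per-region comparison is what keeps memory and computation low relative to per-pixel models.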
Keywords: Background subtraction · Superpixel · k-means
This work was supported by a grant from the National Natural Science Foundation of China (No. 61370109), a key project of the support program for outstanding young talents in Anhui provincial universities (No. gxyqZD2016013), a grant from the science and technology program to strengthen police forces (No. 1604d0802019), and a grant for academic and technical leaders and candidates of Anhui province (No. 2016H090).