
Video flickering removal using temporal reconstruction optimization

  • Chao Li
  • Zhihua Chen
  • Bin Sheng
  • Ping Li
  • Gaoqi He

Abstract

In this paper, we introduce an approach for removing the flicker that arises when image-based processing methods are applied to a video frame by frame. First, we propose a multi-frame video flicker removal method: we reconstruct each flickering frame from multiple temporally corresponding frames. Compared with traditional methods, which reconstruct a flickering frame from a single adjacent frame, reconstruction from multiple temporally corresponding frames reduces warping inaccuracy. We then optimize the method in two ways. First, we detect flickering frames in the video sequence with temporal consistency metrics; reconstructing only the flickering frames greatly accelerates the algorithm. Second, we use only the preceding temporally corresponding frames to reconstruct each output frame. We further accelerate flicker removal with GPU computation. Qualitative experimental results demonstrate the effectiveness of the proposed method, and with algorithmic optimization and GPU acceleration its running time also outperforms that of traditional video temporal coherence methods.
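To make the detect-then-reconstruct pipeline concrete, the following Python sketch illustrates the two ideas in the abstract: flagging flickering frames with a temporal consistency metric, and reconstructing a flagged frame from several preceding frames warped onto it. This is not the authors' implementation: the Farneback optical flow, the mean-luminance-change metric, the threshold tau, the window size k, and the simple averaging in place of the paper's reconstruction optimization are all illustrative assumptions.

```python
# A minimal sketch of multi-frame flicker removal (assumptions noted above).
import cv2
import numpy as np

def warp_to(ref_gray, cur_gray, ref_color):
    """Warp a previous frame onto the current one using dense optical flow."""
    # Flow from the current frame to the reference frame, so that sampling
    # the reference at (x + fx, y + fy) aligns it with the current frame.
    flow = cv2.calcOpticalFlowFarneback(cur_gray, ref_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = cur_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(ref_color, map_x, map_y, cv2.INTER_LINEAR)

def is_flickering(prev, cur, tau=8.0):
    """Placeholder temporal consistency metric: mean absolute intensity change.
    tau is an assumed threshold, not a value from the paper."""
    diff = np.abs(cur.astype(np.float32) - prev.astype(np.float32))
    return diff.mean() > tau

def reconstruct(frames, i, k=3):
    """Rebuild frame i from its k warped predecessors (the multi-frame idea).
    Averaging stands in for the paper's reconstruction optimization."""
    cur = frames[i]
    cur_gray = cv2.cvtColor(cur, cv2.COLOR_BGR2GRAY)
    stack = [cur.astype(np.float32)]
    for j in range(max(0, i - k), i):
        ref_gray = cv2.cvtColor(frames[j], cv2.COLOR_BGR2GRAY)
        stack.append(warp_to(ref_gray, cur_gray, frames[j]).astype(np.float32))
    return np.mean(stack, axis=0).astype(np.uint8)
```

A driver loop would call is_flickering on each consecutive pair of frames and replace only the flagged frames with reconstruct(frames, i), mirroring the acceleration the abstract attributes to reconstructing flickering frames alone.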

Keywords

Video processing · Flickering removal · Multiple frames · Temporal coherence · Spatial coherence


Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 61672228, Grant 61872241, Grant 61572316, and Grant 61370174, in part by the National Key Research and Development Program of China under Grant 2017YFE0104000 and Grant 2016YFC1300302, in part by the Macau Science and Technology Development Fund under Grant 0027/2018/A1, in part by the Science and Technology Commission of Shanghai Municipality under Grant 18410750700, Grant 17411952600, and Grant 16DZ0501100, and in part by the Shanghai Automotive Industry Science and Technology Development Foundation under Grant 1837.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai, China
  2. Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
  3. MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China
  4. Faculty of Information Technology, Macau University of Science and Technology, Macau, China
