Multimedia Tools and Applications, Volume 78, Issue 22, pp 32393–32417

PAD: a perceptual application-dependent metric for quality assessment of segmentation algorithms

  • Silvio R. R. Sanches
  • Antonio C. Sementille
  • Romero Tori
  • Ricardo Nakamura
  • Valdinei Freire
Article

Abstract

Extracting elements of interest from video frames is a necessary task in many applications, such as those that require replacing the original background. Quality assessment of foreground extraction algorithms is essential to find the best algorithm for a particular application. This paper presents an application-dependent objective metric capable of evaluating the quality of such algorithms by considering user perception. Our metric identifies the types of errors that cause the greatest annoyance, based on the regions of the scene where users tend to keep their attention during videoconference sessions. We demonstrate the effectiveness of our metric by evaluating bilayer segmentation algorithms. The results show that the metric is effective compared to others used to evaluate algorithms for videoconferencing systems.

Keywords

Objective metric · Segmentation quality · Segmentation evaluation · Videoconference

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Universidade Tecnológica Federal do Paraná, Cornélio Procópio, Brazil
  2. Universidade Estadual Paulista “Julio de Mesquita Filho”, Bauru, Brazil
  3. Universidade de São Paulo, São Paulo, Brazil