Machine Vision and Applications, Volume 29, Issue 4, pp 617–631

Interactive 1-bit feedback segmentation using transductive inference

Original Paper

Abstract

This paper presents an effective algorithm, interactive 1-bit feedback segmentation using transductive inference (FSTI), that interactively reasons out an image segmentation. In each round of interaction, FSTI queries the user about one superpixel, acquiring 1 bit of feedback that defines the label of that superpixel. The labeled superpixels collected so far are used to refine the segmentation and to generate the next query. The key insight is to treat interactive segmentation as a transductive inference problem and to suppress unnecessary queries via an intrinsic-graph-structure derived from the transductive inference. Experiments on five publicly available datasets show that selecting query superpixels according to the intrinsic-graph-structure improves segmentation accuracy. In addition, an efficient boundary refinement step is presented that improves segmentation quality by revising the misaligned boundaries of superpixels. The results make evident that the proposed FSTI algorithm provides a superior solution to the interactive image segmentation problem.
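
The query-and-propagate loop described above can be sketched in a few lines. The following Python snippet is an illustrative sketch, not the authors' implementation: it assumes the transductive step is a Zhou-et-al.-style label propagation over a superpixel affinity graph, F = (I - alpha*S)^(-1) Y, and it assumes the next query is simply the least-confident unlabeled superpixel. The affinity matrix W, the oracle callback, and the uncertainty rule are placeholders for FSTI's actual components.

import numpy as np

def propagate_labels(W, labels, alpha=0.99):
    # Transductive inference on the superpixel graph (Zhou et al.-style propagation):
    # solve (I - alpha * S) F = Y, where S is the symmetrically normalized affinity.
    # Assumes every superpixel has at least one positive affinity (nonzero degree).
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, labels.astype(float))

def interactive_loop(W, oracle, n_rounds=20):
    # 1-bit feedback loop: each round queries the user about one superpixel
    # (oracle(q) returns True for foreground) and re-propagates the labels.
    n = W.shape[0]
    labels = np.zeros(n)  # +1 foreground, -1 background, 0 unlabeled
    for _ in range(n_rounds):
        F = propagate_labels(W, labels)
        unlabeled = np.flatnonzero(labels == 0)
        if unlabeled.size == 0:
            break
        q = unlabeled[np.argmin(np.abs(F[unlabeled]))]  # least-confident superpixel
        labels[q] = 1.0 if oracle(q) else -1.0          # 1-bit user answer
    return propagate_labels(W, labels) > 0              # final binary segmentation

In this hypothetical sketch, one call to the oracle per round corresponds to one bit of user feedback, and the propagation step plays the role of the transductive inference that refines the segmentation between queries.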

Keywords

Interactive image segmentation · Transductive inference · Intrinsic-graph-structure

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

1. Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan
