Interactive Segmentation from 1-Bit Feedback

  • Ding-Jie Chen
  • Hwann-Tzong Chen
  • Long-Wen Chang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10111)


This paper presents an efficient algorithm for interactive image segmentation that responds to 1-bit user feedback. The goal of this type of segmentation is to propose a sequence of yes-or-no questions to the user. According to the 1-bit answers from the user, the segmentation algorithm progressively revises the questions and the segments, so that the segmentation result approaches the ideal region of interest (ROI) in the mind of the user. We define a question as the event of whether a chosen superpixel hits the ROI. In general, an interactive image segmentation algorithm should achieve high segmentation accuracy, low response time, and simple manipulation. We fulfill these demands by designing an efficient interactive segmentation algorithm driven by 1-bit user feedback. Our algorithm employs techniques from over-segmentation, entropy calculation, and transductive inference. Over-segmentation reduces the solution set of questions and the computational cost of transductive inference. Entropy calculation characterizes the query order of the superpixels. Transductive inference estimates the similarity between superpixels and partitions the superpixels into the ROI and the region of uninterest (ROU). Following the clues from the similarities between superpixels, we design a query-superpixel selection mechanism for human-machine interaction. Our key idea is to narrow down the solution set of questions, and then to propose the most informative question based on the similarities among the superpixels. We assess our method on four publicly available datasets. The experiments demonstrate that our method provides a plausible solution to the problem of interactive image segmentation with merely 1-bit user feedback.
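The abstract names three ingredients: transductive inference over superpixel similarities, entropy calculation to rank candidate questions, and a loop that folds each 1-bit answer back into the partition. As an illustration only (the paper's exact formulation is not reproduced on this page), the sketch below combines transductive label propagation over a superpixel affinity matrix `W` with maximum-entropy query selection; all function names, the `alpha` diffusion weight, and the two-column label encoding are hypothetical choices, not the authors' implementation.

```python
import numpy as np

def propagate(W, Y, alpha=0.5):
    """Transductive label propagation: F = (I - alpha*S)^(-1) Y, where S is
    the symmetrically normalized affinity matrix. alpha trades off local
    versus global consistency (a moderate value keeps propagation local)."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, Y)

def roi_probability(F, eps=1e-12):
    """Turn two-column scores (ROI, ROU) into a per-superpixel probability
    of ROI; superpixels with no propagated mass default to 0.5."""
    total = F[:, 0] + F[:, 1]
    return np.where(total > eps, F[:, 0] / np.maximum(total, eps), 0.5)

def next_query(p, asked):
    """Pick the unasked superpixel whose answer is most informative,
    i.e. the one with maximal binary entropy of its ROI probability."""
    p_c = np.clip(p, 1e-12, 1 - 1e-12)
    ent = -(p_c * np.log2(p_c) + (1 - p_c) * np.log2(1 - p_c))
    ent[list(asked)] = -np.inf
    return int(np.argmax(ent))

def interactive_segmentation(W, oracle, n_questions):
    """Ask n_questions yes/no questions of `oracle` (the user), accumulating
    1-bit answers as ROI/ROU labels, then threshold the final propagation."""
    n = W.shape[0]
    Y = np.zeros((n, 2))  # column 0: "yes, hits ROI"; column 1: "no, ROU"
    asked = set()
    for _ in range(n_questions):
        F = propagate(W, Y)
        q = next_query(roi_probability(F), asked)
        asked.add(q)
        Y[q, 0 if oracle(q) else 1] = 1.0
    return roi_probability(propagate(W, Y)) >= 0.5
```

On a toy affinity matrix with two tight clusters, a handful of yes/no answers is enough for the propagation to assign one cluster to the ROI and the other to the ROU, mirroring the progressive revision described above.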





This work was supported in part by MOST Grants 103-2221-E-007-045-MY3 and 103-2218-E-007-017-MY3 in Taiwan.

Supplementary material 1 (zip, 7254 KB)



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Ding-Jie Chen (1)
  • Hwann-Tzong Chen (1)
  • Long-Wen Chang (1)
  1. Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan
