Interactive Image Segmentation Method of Eye Movement Data and EEG Data

  • Jiacai Zhang
  • Song Liu
  • Jialiang Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10284)


Interactive image segmentation plays a vital role in applications such as image processing and computer vision. Traditional interactive methods rely on manually added interaction, such as sketching object edges or marking foreground and background with strokes or bounding boxes. Acquisition and decoding technology for physiological signals such as eye movement and electroencephalography (EEG) has matured, and building on this, this paper presents an interactive image segmentation method that uses eye movement trajectories and EEG as the interactive information. While the user observes an image, EEG and eye movement data are collected, and a more natural interactive segmentation of image objects is established on the basis of these physiological signals. The results show that segmentation based on brain-computer interaction has two advantages: first, it is hands-free and can therefore be applied in special settings; second, it achieves higher efficiency and better results in multi-target image segmentation. This research offers a new way to establish image segmentation based on human-computer cooperation.
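The abstract does not give implementation details, but the core idea of using gaze as interactive input can be illustrated with a minimal sketch: treat fixation points as foreground seeds and grow a region around them. Everything below is an assumption for illustration only; the function name `segment_from_fixations`, the region-growing rule, and the intensity tolerance `tol` are hypothetical and not taken from the paper, which may use an entirely different segmentation back end.

```python
import numpy as np
from collections import deque

def segment_from_fixations(image, fixations, tol=0.1):
    """Grow a foreground mask outward from gaze fixation seeds.

    image: 2-D float array (grayscale, values in [0, 1]).
    fixations: list of (row, col) fixation points, assumed to land
        on the object the viewer attended to.
    tol: maximum intensity difference to a neighboring region pixel
        for a new pixel to join the region.
    """
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque(fixations)
    for r, c in fixations:
        mask[r, c] = True
    # Breadth-first region growing over 4-connected neighbors.
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(image[nr, nc] - image[r, c]) <= tol:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask

# Toy example: a bright 4x4 object on a dark background,
# with two simulated fixations inside the object.
img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0
fg = segment_from_fixations(img, [(4, 4), (5, 5)])
print(int(fg.sum()))  # 16: the object's pixels are recovered
```

In a full pipeline along the lines the abstract describes, the EEG signal could additionally gate which fixations count as intentional selections (e.g., via attention-related activity), but that mapping is not specified here and is left out of the sketch.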


Keywords: Interactive image segmentation · Human-computer cooperation



This work is supported by the NSFC Key Program (91520202) and General Program (61375116). This work is also supported by the Beijing Advanced Innovation Center for Future Education under grant No. BJAICFE2016IR-003.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Beijing Normal University, Beijing, People’s Republic of China
