Saliency driven image manipulation

  • Roey Mechrez
  • Eli Shechtman
  • Lihi Zelnik-Manor
Special Issue Paper


Have you ever taken a picture only to find out that an unimportant background object ended up being overly salient? Or one of those team sports photographs where your favorite player blends in with the rest? Wouldn't it be nice if you could tweak these pictures just a little bit so that the distractor would be attenuated and your favorite player would stand out among her peers? Manipulating images in order to control the saliency of objects is the goal of this paper. We propose an approach that considers the internal color and saliency properties of the image. It changes the saliency map via an optimization framework that relies on patch-based manipulation, using only patches from within the same image to maintain its appearance characteristics. Comparing our method with previous ones shows significant improvement, both in the achieved saliency manipulation and in the realistic appearance of the resulting images.
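The abstract describes an optimization that manipulates saliency while drawing only on patches from within the same image. As a rough, hypothetical illustration of the internal patch-matching building block (this is not the paper's algorithm; the brute-force SSD search and all function names below are assumptions made for clarity), one might sketch it as:

```python
import numpy as np


def extract_patches(img, psize):
    """Collect all overlapping psize x psize patches of a 2D grayscale image,
    flattened into rows, together with their top-left coordinates."""
    h, w = img.shape
    patches, coords = [], []
    for y in range(h - psize + 1):
        for x in range(w - psize + 1):
            patches.append(img[y:y + psize, x:x + psize].ravel())
            coords.append((y, x))
    return np.array(patches), coords


def nearest_internal_patch(img, query, psize, exclude=None):
    """Find the patch inside `img` most similar to `query` under sum of
    squared differences, optionally excluding one location (e.g., the
    query's own position, so a patch never matches itself)."""
    patches, coords = extract_patches(img, psize)
    dists = np.sum((patches - query.ravel()) ** 2, axis=1)
    if exclude is not None and exclude in coords:
        dists[coords.index(exclude)] = np.inf
    best = int(np.argmin(dists))
    return coords[best], float(dists[best])
```

In a full system this brute-force search would be replaced by an approximate nearest-neighbor scheme such as PatchMatch, and the matched patches would be blended under the saliency objective rather than copied directly.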


Keywords: Saliency manipulation · Attention retargeting · Image editing



This research was supported by the Israel Science Foundation under Grant 1089/16, by the Ollendorf Foundation and by Adobe.



Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. Technion – Israel Institute of Technology, Haifa, Israel
  2. Adobe Research, Seattle, USA
