
T-SAMnet: A Segmentation Driven Network for Image Manipulation Detection

  • Conference paper
Neural Information Processing (ICONIP 2019)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1143)


Abstract

Although current image manipulation detection algorithms have achieved notable breakthroughs, they still struggle to detect multiple types of tampering and to produce classification, detection, and segmentation results simultaneously. We propose a two-stream network driven by a segmentation mask, called T-SAMnet, in which RGB images and noise images provide semantic features and noise-inconsistency features for the network, respectively. The RGB stream generates the tampered-region detection bounding box and segmentation mask, and is then fused with the noise stream to produce the classification result. The segmentation mask both supervises the network in learning the characteristics of tampered regions and feeds back, as a segmentation attention mechanism, to constrain the detection branch. The experimental results demonstrate that our method achieves state-of-the-art performance on three standard image manipulation detection datasets.
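The fusion described above, where the predicted segmentation mask acts as a spatial attention map over the RGB-stream features before they are combined with the noise-stream features for classification, can be sketched in a few lines. This is an illustrative sketch with hypothetical shapes and a simple elementwise gating and fusion; it is not the authors' implementation, which uses learned deep features and bilinear pooling.

```python
import numpy as np

def seg_attention_fuse(rgb_feat, noise_feat, seg_mask):
    """Sketch of segmentation-attention fusion (hypothetical shapes).

    rgb_feat, noise_feat: (C, H, W) feature maps from the two streams.
    seg_mask: (H, W) predicted tampering mask with values in [0, 1].
    The mask gates the RGB features spatially (attention), the two
    streams are fused elementwise, and a global average pool yields a
    per-channel descriptor for the classifier.
    """
    attended = rgb_feat * seg_mask[None, :, :]  # mask as spatial attention
    fused = attended * noise_feat               # elementwise fusion (sketch)
    return fused.mean(axis=(1, 2))              # global average pool -> (C,)

rng = np.random.default_rng(0)
rgb = rng.random((8, 16, 16))
noise = rng.random((8, 16, 16))
mask = (rng.random((16, 16)) > 0.5).astype(float)
descriptor = seg_attention_fuse(rgb, noise, mask)
print(descriptor.shape)  # (8,)
```

In the paper's pipeline the mask is itself a network output, so this feedback path lets segmentation quality directly shape the features the classifier sees.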

This work was supported in part by the National Natural Science Foundation of China under Grants 61571382, 81671766, 61571005, 81671674, 61671309, 61971369 and U1605252, in part by the Fundamental Research Funds for the Central Universities under Grants 20720160075 and 20720180059, in part by the CCF-Tencent open fund, and the Natural Science Foundation of Fujian Province of China (No. 2017J01126).



Author information


Corresponding author

Correspondence to En Cheng.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Pan, J., Chen, Y., Huang, Y., Ding, X., Cheng, E. (2019). T-SAMnet: A Segmentation Driven Network for Image Manipulation Detection. In: Gedeon, T., Wong, K., Lee, M. (eds) Neural Information Processing. ICONIP 2019. Communications in Computer and Information Science, vol 1143. Springer, Cham. https://doi.org/10.1007/978-3-030-36802-9_4


  • DOI: https://doi.org/10.1007/978-3-030-36802-9_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-36801-2

  • Online ISBN: 978-3-030-36802-9

  • eBook Packages: Computer Science, Computer Science (R0)
