Fragile image watermarking scheme based on VQ index sharing and self-embedding
In this paper, we propose a self-embedding fragile watermarking scheme using vector quantization (VQ) and index sharing. First, the principal content of the original image is compactly represented by a series of VQ indices. Then, after permutation, the binary bits of the VQ indices are extended by a random binary matrix to generate reference bits, so that every reference bit shares information of VQ index bits drawn from different regions of the whole image. The image is embedded with watermark bits, comprising hash bits for tampering localization and reference bits for content recovery, and is transmitted to the receiver. Tampered regions in the received, suspicious image can be accurately located and then recovered by VQ index reconstruction. Experimental results demonstrate that the proposed scheme achieves successful content recovery at higher tampering rates and obtains better visual quality of the recovered results than previously reported schemes.
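The reference-bit generation step described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes the VQ index bits are first permuted under a secret key and then extended by a pseudo-random binary matrix with arithmetic over GF(2); the function name, seed handling, and sizes are all hypothetical.

```python
import numpy as np

def generate_reference_bits(index_bits, n_ref, seed=0):
    """Illustrative sketch of reference-bit generation by index sharing.

    The VQ index bits are permuted (key-driven), then multiplied by a
    pseudo-random binary matrix modulo 2, so each reference bit mixes
    index bits originating from different regions of the image.
    """
    rng = np.random.default_rng(seed)             # stands in for a secret key
    bits = np.asarray(index_bits, dtype=np.uint8)
    permuted = rng.permutation(bits)              # permutation of index bits
    # Random binary extension matrix: n_ref rows, one per reference bit.
    A = rng.integers(0, 2, size=(n_ref, bits.size), dtype=np.uint8)
    return (A @ permuted) % 2                     # matrix product over GF(2)

# Toy example: 16 VQ index bits extended to 24 reference bits.
ref = generate_reference_bits(
    [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0], n_ref=24)
```

Because each row of the extension matrix has nonzero entries scattered over all positions, any single reference bit depends on index bits from many image regions, which is what allows reconstruction of a tampered region from the intact remainder.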
Keywords: Fragile watermarking · Vector quantization · Self-embedding · Tampering localization · Content recovery
This work was supported by the National Natural Science Foundation of China (61303203, 61232016, U1405254), the Natural Science Foundation of Shanghai, China (13ZR1428400), the Innovation Program of Shanghai Municipal Education Commission (14YZ087), Shanghai Engineering Center Project of Massive Internet of Things Technology for Smart Home (GCZX14014), Research Base Special Project of Hujiang Foundation (C14001), Hujiang Foundation of China (C14002), the PAPD Fund, and the Open Project Program of Shenzhen Key Laboratory of Media Security.