Robust additive watermarking in the DTCWT domain based on perceptual masking
Abstract
In this paper, a robust additive image watermarking system operating in the Dual-Tree Complex Wavelet Transform (DTCWT) domain is proposed. The system takes advantage of a new perceptual masking model that exploits the Human Visual System (HVS) characteristics at the embedding stage. It also uses an efficient watermark detection structure, called the Rao test, to verify the presence of the candidate watermark. This structure relies on the statistical modeling of high-frequency DTCWT coefficients by the Generalized Gaussian distribution. Experimental results show that the proposed system outperforms related state-of-the-art watermarking systems in terms of imperceptibility and robustness.
Keywords
DTCWT · Perceptual masking · Additive watermarking · Watermark detection
1 Introduction
Recent advances in information technologies have enabled users to access, manipulate and distribute digital multimedia easily, allowing massive production and sharing of digital data. However, issues have arisen regarding the protection of intellectual property because the current technology also facilitates unauthorized copying and illegal distribution of multimedia. To overcome these issues, security approaches such as encryption, watermarking, and perceptual hashing have been reported in the literature.
Digital watermarking has evolved very quickly and is gaining more and more interest in practical applications. Besides copyright protection, digital watermarking has been introduced in digital copy tracking, broadcast monitoring, steganography and data authentication. There are two classes of digital watermarking: multi-bit and one-bit watermarking. In multi-bit watermarking, the watermark consists of a sequence of bits representing meaningful information such as an ID or a binary logo. In this case, the role of the decoding scheme is to extract, bit by bit, the full version of the watermark in order to recover the hidden information [1, 7, 15, 16, 24, 25, 26, 27, 35, 37, 39, 40, 43]. In one-bit watermarking, however, the watermark serves as a verification code, and the role of the detector is to check the presence or absence of the watermark [2, 13, 21, 22, 30]. In practice, one-bit watermarking can be used in copy detection, copyright protection, and broadcast monitoring. The key idea of watermark embedding is to introduce controlled modifications to all or some selected samples of the host data. These modifications can be performed in the spatial domain or in a transform domain. Although spatial-domain methods are simple to apply and implement, embedding in a transform domain provides higher performance in terms of imperceptibility and robustness. Commonly used transforms include the Discrete Wavelet Transform (DWT), the Discrete Cosine Transform (DCT), the Singular Value Decomposition (SVD) and the Discrete Fourier Transform (DFT). Due to its desirable features, especially its ability to exploit the Human Visual System (HVS) characteristics, the DWT is viewed as one of the most broadly used and studied domains in the field of digital watermarking.
However, this transform has two major drawbacks: (i) lack of shift invariance, which means that small shifts in the input data lead to major variations in the distribution of energy between DWT coefficients at different scales; and (ii) poor directional selectivity for diagonal features [20].
To overcome these limitations, Kingsbury has derived a new kind of wavelet transform called the Dual-Tree Complex Wavelet Transform (DTCWT) [19], which combines desirable properties of the DWT and the Complex Wavelet Transform (CWT), namely: (i) near shift invariance; (ii) good directional selectivity; (iii) perfect reconstruction; (iv) limited redundancy; and (v) low computational complexity [20]. Due to these advantageous properties, the DTCWT has become an attractive embedding domain for designing efficient watermarking systems. The first work in this context was proposed by Loo and Kingsbury [29], and many works have since built upon the idea of the DTCWT domain for watermarking images [3, 7, 12, 21, 27, 31, 43] and videos [4, 5, 9, 34]. In image watermarking, most of the work published in the literature is concerned with the multi-bit approach [3, 7, 12, 27, 31, 43] and, to the best of our knowledge, very little effort has been put into one-bit watermarking. The only work worth mentioning here was reported in [21], where the authors proposed two blind additive watermark detection structures in the DTCWT domain. The authors first demonstrated that the concatenated real and imaginary components of the DTCWT detail subbands can be statistically modeled by the Generalized Gaussian Distribution (GGD). Then, they adjusted a likelihood-ratio-based detector, initially proposed in [13], and the Rao detector, as reported in [33], to operate in the DTCWT domain. The authors found that the Rao-based detector is more practical and provides better results than the likelihood-ratio-based detector. In video watermarking, a number of one-bit watermarking techniques have been published [4, 5, 9, 34]. Recently, Asikuzzaman et al. [5] presented three versions of a blind additive watermarking algorithm to combat illegal video distribution.
The watermark is additively embedded in all the third-level DTCWT subbands of the video chrominance channel, and detection is carried out using a normalized cross-correlation rule. In their first version, the authors built upon previous work published in [4] to detect the watermark only from the subbands where it was originally embedded (i.e., the subbands of level 3 of the DTCWT decomposition). The second version was designed to resist resolution-downscaling attacks by extracting the watermark from any level of the DTCWT decomposition, depending on the downscaling rate, rather than only from the third-level subbands. Unlike these two versions, which use a symmetric-key approach, the third version is based on a keyless detection approach where the watermark is detected using only information extracted from the frames. This version can resist temporal desynchronization attacks such as frame dropping, frame insertion, or frame-rate conversion.
In this paper, a blind additive watermarking system for still images operating in the DTCWT domain is proposed. In order to overcome the problem of controlling the watermark imperceptibility in additive watermarking, a new perceptual masking model is proposed. This model builds upon the work of [44], but is adjusted here to operate in one-bit additive watermarking in the DTCWT domain. Note that the system developed in [44] is a multi-bit watermarking scheme using a multiplicative rule and operating in the DWT domain. The proposed model exploits HVS characteristics, namely frequency band sensitivity, brightness masking, and texture masking, to quantify the amount of unnoticeable change in the DTCWT domain. It is worth mentioning that no masking model exploiting the aforementioned characteristics and operating in the DTCWT domain has been reported in the literature. At the watermark detection stage, we have introduced and adapted a well-known watermark detector based on the Rao test. As is known, the performance of this detector relies heavily on the statistical modeling of the host data. Therefore, the DTCWT coefficients are modeled by a GGD, as suggested in previous works. Extensive experiments have been carried out to assess the performance of the proposed system, and results show its efficiency in terms of imperceptibility and robustness, with a clear superiority over related schemes. Also, through experiments, we have demonstrated that it is possible to achieve good detection performance with fixed GGD parameters rather than estimating them for each image. This reduces the computational complexity at the detection stage.
The rest of the paper is structured as follows. Section 2 provides a brief introduction to the DTCWT. Section 3 describes the proposed watermarking system. Experimental results are reported and discussed in Section 4. Conclusions are drawn in Section 5.
2 Introduction to the dual-tree complex wavelet transform
The Dual-Tree Complex Wavelet Transform was first introduced by Kingsbury [19]. This transform has gained special attention because it exhibits the desirable properties of both the DWT and the CWT, namely perfect reconstruction, computational efficiency, approximate shift invariance and directionally selective filters [20]. Instead of the single filter tree of the original DWT, the DTCWT uses two filter trees to produce two sets of coefficients, which are combined to obtain complex coefficients. In practice, the DTCWT is implemented using two real DWTs with different sets of filters: the first DWT generates the real part of the transform while the second gives the imaginary part. This makes the transform redundant with a factor of 2^{d} for d-dimensional signals. To invert the DTCWT, the real part and the imaginary part are each inverted using the inverse of the corresponding real DWT to obtain two real signals. These two signals are then averaged to reconstruct the final signal [36].
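The two-tree analysis/averaged-synthesis structure described above can be sketched in a few lines of Python. This is an illustrative toy, not Kingsbury's actual filter bank: both trees use Haar filters, and tree B operates on a one-sample circular shift of the signal as a crude stand-in for the half-sample delay of the q-shift filters. The point is only to show the structure: two real transforms, complex coefficients formed from their detail outputs, and reconstruction by averaging the two inverses.

```python
import math

def haar_analysis(x):
    # One level of a Haar DWT (a stand-in for each tree's real filter bank).
    s = 1.0 / math.sqrt(2.0)
    lo = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    hi = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return lo, hi

def haar_synthesis(lo, hi):
    # Exact inverse of haar_analysis.
    s = 1.0 / math.sqrt(2.0)
    x = []
    for l, h in zip(lo, hi):
        x.append(s * (l + h))
        x.append(s * (l - h))
    return x

def dtcwt_forward(x):
    # Tree A runs on the signal; tree B on a one-sample circular shift
    # (illustrative substitute for the true half-sample-delayed filters).
    shifted = x[1:] + x[:1]
    loA, hiA = haar_analysis(x)
    loB, hiB = haar_analysis(shifted)
    # Complex detail coefficients: real part from tree A, imaginary from tree B.
    detail = [complex(a, b) for a, b in zip(hiA, hiB)]
    return (loA, loB), detail

def dtcwt_inverse(lows, detail):
    loA, loB = lows
    xA = haar_synthesis(loA, [c.real for c in detail])
    xB = haar_synthesis(loB, [c.imag for c in detail])
    xB = xB[-1:] + xB[:-1]  # undo the circular shift applied to tree B
    # Average the two reconstructions, as in the real DTCWT inverse [36].
    return [(a + b) / 2.0 for a, b in zip(xA, xB)]
```

Since each toy tree individually achieves perfect reconstruction, the averaged inverse recovers the input exactly, mirroring the property of the real transform.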
The DTCWT has been introduced in many image processing applications such as image denoising, classification, segmentation and sharpening, digital watermarking, texture analysis and synthesis, etc. In the field of watermarking, the near shift-invariance property of the DTCWT is particularly important since it helps the watermark resist geometric distortions. Also, the DTCWT offers powerful perceptual characteristics, as it exhibits better directional sensitivity in high-frequency subbands than the DWT, hence offering higher imperceptibility of embedded watermarks [28].
3 Proposed watermarking system
3.1 Proposed masking model
3.1.1 Spatial frequency sensitivity
It is known that the HVS is sensitive to patterns and textures, which can be perceived as spatial frequencies. Furthermore, this sensitivity has been shown to depend on the orientation of the texture. In particular, the HVS is more sensitive to vertical and horizontal lines and edges in an image than to those with a 45-degree orientation [10]. Normally, the spatial frequency response is described by the sensitivity to luminance contrast as a function of spatial frequency, and this is referred to as the Contrast Sensitivity Function (CSF). In the case of the DWT, a CSF is usually implemented by assigning a single value to each subband. This value represents a frequency weighting factor describing the average sensitivity of the HVS over the covered frequency range [32].
The values of SF [14]

λ    θ = ±15°   θ = ±45°   θ = ±75°
1     9.9702    15.7935     9.9702
2     4.1779     6.1508     4.1779
3     2.2117     3.0035     2.2117
4     1.4612     1.8484     1.4612
5     1.1440     1.3755     1.1440
6     1.0000     1.1652     1.0000
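For concreteness, the SF values in the table above can be held in a small lookup table indexed by decomposition level λ and orientation θ. How the weight enters the final mask is an assumption here: the sketch simply attenuates embedding strength in subbands where the HVS is more sensitive (larger SF), which is the usual direction of a CSF weighting.

```python
# SF(lambda, theta) values from Hill et al. [14]: one weight per DTCWT
# level (1..6) and orientation band (|theta| in {15, 45, 75} degrees).
SF = {
    1: {15: 9.9702, 45: 15.7935, 75: 9.9702},
    2: {15: 4.1779, 45: 6.1508, 75: 4.1779},
    3: {15: 2.2117, 45: 3.0035, 75: 2.2117},
    4: {15: 1.4612, 45: 1.8484, 75: 1.4612},
    5: {15: 1.1440, 45: 1.3755, 75: 1.1440},
    6: {15: 1.0000, 45: 1.1652, 75: 1.0000},
}

def frequency_weight(level, orientation_deg):
    # Hypothetical usage convention: larger SF means the eye is more
    # sensitive, so scale the allowed distortion down by 1/SF.
    return 1.0 / SF[level][abs(orientation_deg)]
```

With this convention, the ±45° bands at level 1 (SF = 15.7935) receive the smallest weight, i.e. the least watermark energy.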
3.1.2 Local brightness masking
3.1.3 Texture masking
3.2 Watermark embedding and detection
In detection theory, Kay [17] has proven that the Rao test has asymptotically optimal performance similar to that of the generalized likelihood ratio test (GLRT). In other words, under the assumption that the noise probability density function (pdf) is symmetric, the performance of the Rao-based detector is equivalent to that of a GLRT-based one designed with a priori knowledge of the noise parameters.
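For additive embedding in GGD-modeled host coefficients, the Rao statistic takes a self-normalized form in which the GGD scale factors cancel, following the general structure used in [33]. The sketch below is illustrative rather than a reproduction of the paper's equation (11): the exact constants may differ, and the χ²(1) null distribution used for the threshold is only asymptotic.

```python
import math
from statistics import NormalDist

def rao_statistic(x, w, beta):
    # Self-normalized Rao statistic for an additive watermark w in GGD host
    # data x with shape parameter beta (scale factors cancel in the ratio).
    num = sum(wi * math.copysign(abs(xi) ** (beta - 1.0), xi)
              for xi, wi in zip(x, w))
    den = sum((wi ** 2) * abs(xi) ** (2.0 * (beta - 1.0))
              for xi, wi in zip(x, w))
    return (num ** 2) / den

def rao_threshold(p_fa):
    # Under H0 the statistic is asymptotically chi-square with one degree of
    # freedom, so P(rho > tau) = 2 * (1 - Phi(sqrt(tau))).
    return NormalDist().inv_cdf(1.0 - p_fa / 2.0) ** 2
```

A quick sanity check with Laplacian host data (β = 1): a watermarked signal should push the statistic far above the threshold, while unwatermarked data should not.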
After inspecting the Rao detector structure, only one parameter (i.e., the shape parameter β) needs to be estimated directly from the watermarked coefficients. However, as mentioned in [22, 33], the detector presented by (11) is asymptotically optimal, which means that the host data set needs to be sufficiently large.
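One lightweight way to estimate the GGD shape parameter β from host coefficients is Mallat's moment-matching ratio, solved by bisection. This is a sketch of a standard alternative to the maximum-likelihood estimator discussed elsewhere in the paper, not the paper's own procedure.

```python
import math

def ggd_ratio(beta):
    # Moment ratio E|x| / sqrt(E[x^2]) of a zero-mean GGD with shape beta;
    # it is monotonically increasing in beta.
    return math.gamma(2.0 / beta) / math.sqrt(
        math.gamma(1.0 / beta) * math.gamma(3.0 / beta))

def estimate_shape(samples, lo=0.1, hi=5.0, iters=60):
    # Match the empirical moment ratio against ggd_ratio via bisection.
    n = len(samples)
    r = (sum(abs(s) for s in samples) / n) / math.sqrt(
        sum(s * s for s in samples) / n)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As a check, Gaussian data (β = 2) and Laplacian data (β = 1) should yield estimates near those values.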
4 Experimental results

Cheng et al. [8]: In their work, a perceptual model constrained approach to information hiding in the DWT and the DCT domains is proposed. In this paper, the DWTbased and the DCTbased models are referred to as Cheng (DWT) and Cheng (DCT), respectively.

Kwitt et al. [21]: In their work, a watermark detection structure has been proposed in the DTCWT based on the Raotest, where no perceptual model has been used. It is referred to as Kwitt (DTCWT) in this paper.

Asikuzzaman et al. [5]: Their work builds upon the idea published in [9], where a perceptual mask is used in the embedding phase and detection relies on an inverse mask to decode the watermark. Correlation is then used to verify the presence of the candidate watermark. The main difference from [9] is that in [5] the chrominance plane is used to enhance watermark imperceptibility and the watermark is embedded in high-frequency DTCWT coefficients. This system is referred to as Asikuzzaman (DTCWT).
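The correlation-based decision used by systems like [5] can be sketched as a normalized cross-correlation between the received coefficients and the candidate watermark, compared against a threshold. Details such as the inverse perceptual mask and chrominance handling are omitted; this shows only the decision rule.

```python
import math

def normalized_correlation(coeffs, watermark):
    # Normalized cross-correlation in [-1, 1]; presence is declared when
    # the value exceeds a chosen threshold.
    num = sum(c * w for c, w in zip(coeffs, watermark))
    den = math.sqrt(sum(c * c for c in coeffs) *
                    sum(w * w for w in watermark))
    return num / den if den else 0.0
```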
Four aspects are considered in our experiments: (i) the imperceptibility of the hidden watermark, (ii) the detection performance in the absence of attacks, (iii) the robustness of the watermark against common signal processing attacks, and (iv) the computational complexity of the embedding and detection processes.
4.1 Imperceptibility analysis
4.2 Detection performance
In order to evaluate the performance of the watermark detection, Receiver Operating Characteristic (ROC) curves were used. These curves represent the variation of the detection probability P_{Det} against the theoretical false-alarm probability P_{FA}^{*}. To obtain the ROC curves, the test images were watermarked with 10000 randomly generated watermarks. For each tested system, the strength of the watermark is set to obtain a PSNR value of ≈ 60 dB for Baboon and ≈ 65 dB for the other images.
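The equal error rate summarizing an ROC curve can be computed from empirical detector scores under H0 (no watermark) and H1 (watermarked) by sweeping the threshold. The min-max rule below is a standard discrete approximation of the operating point where the false-alarm rate equals the miss rate; it is a generic sketch, not the paper's evaluation code.

```python
def equal_error_rate(h0_scores, h1_scores):
    # Sweep the decision threshold over all observed scores and return the
    # smallest max(P_FA, P_miss): a discrete approximation of the EER.
    thresholds = sorted(set(h0_scores) | set(h1_scores))
    best = 1.0
    for t in thresholds:
        p_fa = sum(s >= t for s in h0_scores) / len(h0_scores)
        p_miss = sum(s < t for s in h1_scores) / len(h1_scores)
        best = min(best, max(p_fa, p_miss))
    return best
```

Well-separated score distributions yield an EER of 0, while heavy overlap pushes it toward 0.5 (chance level), matching the behavior seen in the table below.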
Values of the equal error rate (EER)

                           Grayscale images                                        Color images
Systems                    Lena    Baboon  Barbara Pepper  Airplane Cameraman      Lena    Barbara
Proposed                   0.0016  0.0100  0.0021  0.0017  0.0010   0.0010         0.0043  0.0008
Kwitt (DTCWT) [21]         0.0087  0.0363  0.0249  0.0123  0.0027   0.0022         0.0389  0.0118
Asikuzzaman (DTCWT) [5]    0.5030  0.5028  0.5016  0.5025  0.5018   0.5101         0.5009  0.4988
Cheng (DWT) [8]            0.0075  0.0260  0.0167  0.0149  0.0019   0.0031         0.0504  0.0364
Cheng (DCT) [8]            0.0017  0.1607  0.2051  0.0093  0.4036   0.0422         0.0836  0.1647
4.3 Robustness analysis
The robustness of the proposed scheme against some image processing techniques and geometric attacks is assessed. To this end, a set of 10000 randomly generated watermarks is embedded into each test image, and then each attack, with a fixed strength value, is applied to the watermarked images. In all tests, the embedding strength is set to obtain a PSNR of around 55 dB. In this section, only the results obtained on Lena (grayscale image) and Barbara (color image) are reported, because similar findings were reached on the remaining test images.
4.3.1 Robustness against image processing
4.3.2 Robustness against geometric attacks
4.4 Computational complexity
5 Conclusions
This paper proposes a blind additive image watermarking scheme in the DTCWT domain. In order to enhance imperceptibility, a new visual masking model exploiting the HVS characteristics has been used. The structure of the watermark detector is an adapted version of the Rao-test-based detector. The host data in which the watermark is embedded (i.e., the high-frequency DTCWT coefficients) is modeled by the generalized Gaussian distribution. Experimental results have shown that the proposed visual masking significantly enhances the performance of the system in terms of imperceptibility, detection accuracy and robustness to common attacks when compared with recent state-of-the-art techniques. Furthermore, we have found that the MLE of the GGD shape parameter does not provide good detection performance in most cases, and a fixed shape parameter can offer better results. In the future, it would be sensible to extend this work and use HVS-based masking models for multi-bit watermarking in the DTCWT domain. This would serve other practical applications of watermarking such as covert communication and source tracking. The optimization of the watermark embedding process would also be of interest to the authors.
References
1. Agarwal H, Atrey PK, Raman B (2015) Image watermarking in real oriented wavelet transform domain. Multimedia Tools and Applications 74:10883–10921
2. Albalawi U, Mohanty SP, Kougianos E (2016) A new region aware invisible robust blind watermarking approach. Multimedia Tools and Applications 1–35
3. Alkhathami M, Han F, van Schyndel R (2013) Fingerprint image watermarking approach using DTCWT without corrupting minutiae. In: 6th international congress on image and signal processing (CISP), pp 1717–1723
4. Asikuzzaman M, Alam MJ, Lambert AJ, Pickering MR (2012) A blind digital video watermarking scheme with enhanced robustness to geometric distortion. In: Proceedings of the international conference on digital image computing technology and applications, pp 1–8
5. Asikuzzaman M, Alam MJ, Lambert AJ, Pickering MR (2014) Imperceptible and robust blind video watermarking using chrominance embedding: a set of approaches in the DT CWT domain. IEEE Trans Inf Forensics Secur 9(9):1502–1517
6. Barni M, Bartolini F, Piva A (2001) Improved wavelet-based watermarking through pixel-wise masking. IEEE Trans Image Process 10(5):783–791
7. Benyoussef M, Mabtoul S, El Marraki M, Aboutadjine D (2014) Robust image watermarking scheme using visual cryptography in dual-tree complex wavelet domain. Journal of Theoretical and Applied Information Technology 60(2):372–379
8. Cheng Q, Huang TS (2001) An additive approach to transform-domain information hiding and optimum detection structure. IEEE Trans Multimedia 3(3):273–284
9. Coria LE, Pickering MR, Nasiopoulos P, Ward RK (2008) A video watermarking scheme based on the dual-tree complex wavelet transform. IEEE Trans Inf Forensics Secur 3(3):466–474
10. Cox IJ, Miller ML, Bloom JA, Fridrich J, Kalker T (2008) Digital watermarking and steganography, 2nd edn. Morgan Kaufmann Publishers, San Mateo
11. Do MN, Vetterli M (2002) Wavelet-based texture retrieval using generalized Gaussian density and Kullback-Leibler distance. IEEE Trans Image Process 11(2):146–158
12. Guo B, Li L, Pan JS, Yang L, Wu X (2008) Robust image watermarking using mean quantization in DTCWT domain. In: 8th international conference on intelligent systems design and applications (ISDA), pp 19–22
13. Hernandez JR, Amado M, Pérez-González F (2000) DCT-domain watermarking techniques for still images: detector performance analysis and a new structure. IEEE Trans Image Process 9(1):55–68
14. Hill P, Achim A, Al-Mulla ME, Bull D (2016) Contrast sensitivity of the wavelet, dual tree complex wavelet, curvelet and steerable pyramid transforms. IEEE Trans Image Process 25(6):2739–2751
15. Horng SJ, Rosiyadi D, Li T, Takao T, Guo M, Khan MK (2013) A blind image copyright protection scheme for e-government. J Vis Commun Image Represent 24(7):1099–1105
16. Horng SJ, Rosiyadi D, Fan P, Wang X, Khan MK (2014) An adaptive watermarking scheme for e-government document images. Multimedia Tools and Applications 72(3):3085–3103
17. Kay SM (1989) Asymptotically optimal detection in incompletely characterized non-Gaussian noise. IEEE Trans Acoust Speech Signal Process 37(5):627–633
18. Kay SM (1998) Fundamentals of statistical signal processing: detection theory, vol 2. Prentice Hall, Englewood Cliffs
19. Kingsbury NG (1998) The dual-tree complex wavelet transform: a new technique for shift invariance and directional filters. In: IEEE digital signal processing workshop, Bryce Canyon
20. Kingsbury NG (2001) Complex wavelets for shift invariant analysis and filtering of signals. Journal of Applied and Computational Harmonic Analysis 10(3):234–253
21. Kwitt R, Meerwald P, Uhl A (2009) Blind DT-CWT domain additive spread-spectrum watermark detection. In: Proceedings of the 16th international conference on digital signal processing, pp 1–8
22. Kwitt R, Meerwald P, Uhl A (2011) Lightweight detection of additive watermarking in the DWT-domain. IEEE Trans Image Process 20(2):474–484
23. Lewis AS, Knowles G (1992) Image compression using the 2-D wavelet transform. IEEE Trans Image Process 2(1):244–250
24. Lin WH, Horng SJ, Kao TW, Fan P, Lee CL, Pan Y (2008) An efficient watermarking method based on significant difference of wavelet coefficient quantization. IEEE Trans Multimedia 10(5):746–757
25. Lin WH, Horng SJ, Kao TW, Chen RJ, Chen YH, Lee CL, Terano T (2009) Image copyright protection with forward error correction. Expert Syst Appl 36(9):11888–11894
26. Lin WH, Wang YR, Horng SJ, Kao TW, Pan Y (2009) A blind watermarking method using maximum wavelet coefficient quantization. Expert Syst Appl 36(9):11509–11516
27. Liu J, She K (2010) Robust image watermarking using dual tree complex wavelet transform based on human visual system. In: Proceedings of the international conference on image analysis and signal processing (IASP), pp 675–679
28. Loo P (2002) Digital watermarking with complex wavelets. PhD thesis, University of Cambridge, United Kingdom
29. Loo P, Kingsbury N (2000) Digital watermarking with complex wavelets. In: Proceedings of the IEEE international conference on image processing, ICIP 2000, pp 29–32
30. Lu W, Sun W, Lu H (2009) Robust watermarking based on DWT and nonnegative matrix factorization. Comput Electr Eng 35(1):183–188
31. Mabtoul S, Ibn Elhadj E, Aboutadjine D (2008) A blind image watermarking algorithm based on dual tree complex wavelet transform. In: IEEE symposium on computers and communications (ISCC 2008), pp 1000–1004
32. Nadenau MJ, Reichel J, Kunt M (2003) Wavelet-based color image compression: exploiting the contrast sensitivity function. IEEE Trans Image Process 12(1):58–70
33. Nikolaidis A, Pitas I (2003) Asymptotically optimal detection for additive watermarking in the DCT and the DWT domains. IEEE Trans Image Process 12(5):563–571
34. Pickering M, Coria L, Nasiopoulos P (2007) A novel blind video watermarking scheme for access control using complex wavelets. In: International conference on consumer electronics, digest of technical papers, pp 1–2
35. Rosiyadi D, Horng SJ, Fan P, Wang X, Khan MK, Pan Y (2012) Copyright protection for e-government document images. IEEE MultiMedia 19(3):62–73
36. Selesnick IW, Baraniuk RG, Kingsbury NG (2005) The dual-tree complex wavelet transform: a coherent framework for multiscale signal and image processing. IEEE Signal Proc Mag 22(6):123–151
37. Tang LL, Huang CT, Pan JS, Liu CY (2015) Dual watermarking algorithm based on the fractional Fourier transform. Multimedia Tools and Applications 74:4397–4413
38. Voloshynovskiy S, Herrigel A, Baumgaertner N, Pun T (1999) A stochastic approach to content adaptive digital image watermarking. In: International workshop on information hiding, vol 1768, pp 211–236
39. Wang C, Wang X, Zhang C, Xi Z (2017) Geometric correction based color image watermarking using fuzzy least squares support vector machine and Bessel K form distribution. Signal Process 134:197–208
40. Wang XY, Yang HY, Wang AL, Zhang Y, Wang CP (2014) A new robust digital watermarking based on exponent moments invariants in nonsubsampled contourlet transform domain. Comput Electr Eng 40(3):942–955
41. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 14(4):600–612
42. Xie G, Shen H (2005) Toward improved wavelet-based watermarking using the pixel-wise masking model. In: IEEE international conference on image processing (ICIP 2005), pp 689–692
43. Yang H, Jiang X, Kot AC (2011) Embedding binary watermarks in dual tree complex wavelets domain for access control of digital images. Transactions on Data Hiding and Multimedia Security 6730:18–36
44. Zebbiche K, Khelifi F (2014) Efficient wavelet-based perceptual watermark masking for robust fingerprint image watermarking. IET Image Process 8(1):23–32
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.