
Digital Watermarking Method Warranting the Lower Limit of Image Quality of Watermarked Images

  • Motoi Iwata
  • Tomoo Kanaya
  • Akira Shiozaki
  • Akio Ogihara
Open Access
Research Article
Part of the following topical collections:
  1. Advanced Image Processing for Defense and Security Applications

Abstract

We propose a digital watermarking method warranting the lower limit of the image quality of watermarked images. The proposed method controls the degradation of a watermarked image by using a lower limit image, that is, the image of the worst quality that users can permit. The proposed method accepts any lower limit image and does not require it at extraction; therefore, lower limit images can be decided flexibly. In this paper, we introduce a 2-dimensional human visual MTF model as one example of obtaining lower limit images. We also use JPEG-compressed images of quality 75% and 50% as lower limit images. We investigate the performance of the proposed method by experiments. Moreover, we compare the proposed method using these three types of lower limit images with an existing method in view of the tradeoff between PSNR and the robustness against JPEG compression.

Keywords

Spatial Frequency; Color Space; Contrast Sensitivity; Watermark Image; Modulation Transfer Function

1. Introduction

Digital watermarking is a technique that embeds additional data into digital content so that the distortion caused by embedding is perceptually undetectable [1]. With general digital watermarking methods, the distortion of a watermarked image is fixed only after embedding. Some digital watermarking methods [2] limit the degradation of the image quality of watermarked images by exploiting the human visual system. However, the lower limit of the image quality of the watermarked images has not been made clear, and this obscurity hinders the practical use of digital watermarking.

The method proposed by Yoshiura and Echizen [2] decides the embedding strength by introducing a uniform color space so that the degradation of all regions in an image is equalized. However, the degradation caused by modification in a uniform color space also depends on the direction of the modification, as described in Section 2.

In this paper, we propose a digital watermarking method warranting the lower limit of the image quality of watermarked images. The proposed method controls the degradation of a watermarked image by using a lower limit image, that is, the image of the worst quality that users can permit. The proposed method accepts any lower limit image and does not require it at extraction; therefore, lower limit images can be decided flexibly. In this paper, we introduce a 2-dimensional human visual MTF model as one example of obtaining lower limit images. We also use JPEG-compressed images of quality 75% and 50% as lower limit images, since JPEG compression is a popular source of degraded images.

The rest of this paper consists of five sections. We describe our approach in Section 2 and introduce existing techniques in Section 3. Then we describe the details of the proposed method in Section 4, and show and discuss its performance in Section 5. Finally we conclude this paper in Section 6.

2. Our Approach

We assume that there is a range within which changes to pixel values are imperceptible. We call this range "perceptual capacity." Existing methods do not strictly confine their modifications of pixel values to the perceptual capacity. We therefore introduce a lower limit image, which is the image of the worst quality that users can permit, that is, an image that specifies the perceptual capacity. The contribution of introducing lower limit images is the separation of perceptual capacity from the watermarking procedure, which allows the two to be investigated independently.

The proposed method warrants the lower limit of the image quality of a watermarked image by moving the original image toward the corresponding lower limit image during embedding. Moreover, we introduce L*a*b* color space, one of the popular uniform color spaces, to equalize the degradation caused by embedding. The quality of a watermarked image then lies between that of the original image and that of the lower limit image in L*a*b* color space. The lower limit image can be decided flexibly because the proposed method does not require it at extraction.

In general, modifications of the same magnitude in a uniform color space yield perceptually the same degradation. However, the direction of the modification is important, too. We found this by comparing the degradation of modified images that approach human-visual-filtered images with that of modified images that move away from the filtered images, where the modification was done in L*a*b* color space. The human visual filter cuts off components that are redundant for visual sensation. Figure 1 shows the difference in quality caused by the direction of modification, where the human visual filter used here is the mathematical 2-dimensional human visual MTF model described in Section 3.2.2. As shown in Figure 1, the degradation of the modified image approaching the filtered image is less perceptible than that of the modified image moving away from it. We exploit this feature by employing images filtered by the mathematical 2-dimensional human visual MTF model as one type of lower limit image. We also use JPEG-compressed images of quality 75% and 50% as lower limit images, which are popular forms of degraded images. In other words, employing the MTF model is a theoretical approach to generating lower limit images, while using JPEG compression is a practical approach.
Figure 1

Difference by the direction of modification.

3. Existing Techniques

3.1. Color Spaces

In this section, we describe XYZ color space, L*a*b* color space, and opponent color space in Sections 3.1.1, 3.1.2, and 3.1.3, respectively.

3.1.1. XYZ Color Space

XYZ color space is a color space established by the CIE (Commission Internationale de l'Eclairage) in 1931. The transformation of sRGB color space into XYZ color space is as follows [3].

First we obtain gamma-transformed sRGB values by the following equations (shown for R; G and B are transformed in the same way):

R' = R / 12.92                   (R ≤ 0.04045),
R' = ((R + 0.055) / 1.055)^2.4   (R > 0.04045),

where R', G', and B' are the values in gamma-transformed sRGB color space, and R, G, and B are the values in sRGB color space normalized to [0, 1].

Then we obtain XYZ color space from gamma-transformed sRGB color space by the following equations:

X = 0.4124 R' + 0.3576 G' + 0.1805 B',
Y = 0.2126 R' + 0.7152 G' + 0.0722 B',
Z = 0.0193 R' + 0.1192 G' + 0.9505 B'.
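The transformation above can be sketched in Python; the matrix coefficients are the standard sRGB (D65) values, with Y scaled to the range 0–100:

```python
def srgb_to_xyz(r, g, b):
    """Convert 8-bit sRGB values to CIE XYZ (D65, Y scaled to 0..100)."""
    def linearize(c):
        c /= 255.0
        # inverse sRGB gamma: linear segment near black, power law above
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) * 100.0
    y = (0.2126 * rl + 0.7152 * gl + 0.0722 * bl) * 100.0
    z = (0.0193 * rl + 0.1192 * gl + 0.9505 * bl) * 100.0
    return x, y, z
```

For example, pure white (255, 255, 255) maps to the D65 white point, approximately (95.05, 100.0, 108.9).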

3.1.2. L*a*b* Color Space

L*a*b* color space is one of the uniform color spaces established by the CIE in 1976 [4]. In a uniform color space, the distances between colors are fixed based on the perceptual differences between the colors [3, 5, 6].

L*a*b* color space is obtained from XYZ color space by the following equations:

L* = 116 f(Y / Yn) − 16,
a* = 500 ( f(X / Xn) − f(Y / Yn) ),
b* = 200 ( f(Y / Yn) − f(Z / Zn) ),

f(t) = t^(1/3)                  (t > (6/29)^3),
f(t) = t / (3 (6/29)^2) + 4/29  (otherwise),

where Xn, Yn, and Zn are coefficients which depend upon the illuminant (for daylight illuminant D65, Xn = 95.047, Yn = 100.0, and Zn = 108.883).
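A direct transcription of these equations, defaulting to the D65 white point:

```python
def xyz_to_lab(x, y, z, xn=95.047, yn=100.0, zn=108.883):
    """Convert CIE XYZ (D65 white point by default) to CIE L*a*b*."""
    def f(t):
        # cube root above the CIE threshold, linear segment below it
        return t ** (1.0 / 3.0) if t > (6.0 / 29.0) ** 3 else \
            t / (3.0 * (6.0 / 29.0) ** 2) + 4.0 / 29.0
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    l_star = 116.0 * fy - 16.0
    a_star = 500.0 * (fx - fy)
    b_star = 200.0 * (fy - fz)
    return l_star, a_star, b_star
```

The white point itself maps to L* = 100 with a* = b* = 0, as expected for a neutral color.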

3.1.3. Opponent Color Space

Opponent color space is based on the input signals from the L, M, and S cones in the retina. It is obtained from XYZ color space by a linear transformation whose three output channels represent luminance and the opponent channels of red-green and yellow-blue, respectively.

3.2. Two-Dimensional Human Visual MTF Model

3.2.1. Modulation Transfer Function

Modulation Transfer Function (MTF) describes the relationship between spatial frequency and contrast sensitivity. Spatial frequency is a measure of how often a structure repeats per unit of distance. As shown in Figure 2, any pattern corresponds to a spatial frequency. On the other hand, contrast sensitivity is a measure of the ability to discern luminances of different levels in a static image. Contrast sensitivity depends on spatial frequency: for example, it tends to be high for medium spatial frequencies and low for high spatial frequencies.
Figure 2

Patterns with low spatial frequency and high spatial frequency.

Figure 3 shows the shape of human visual MTF for luminance. As shown in Figure 3, contrast sensitivity is numerically expressed by the MTF. In human visual MTF for luminance, contrast sensitivity is high for medium spatial frequencies and drops sharply for high spatial frequencies. It is known that the shape of human visual MTF for other color stimuli is similar to that for luminance.
Figure 3

Shape of human visual MTF for luminance.

3.2.2. Mathematical 2-Dimensional Human Visual MTF Model

Ishihara et al. [7, 8] and Miyake [9] revealed that human visual MTF depends on the directivity of spatial frequency and on the mean of the stimulus. Moreover, they proposed a mathematical 2-dimensional human visual MTF model for the tristimulus values in opponent color space.

Let u and v be the horizontal and vertical spatial frequencies, respectively, let f be the spatial frequency on the u-v plane, and let θ be the direction of f. Then the contrast sensitivity S(f, θ) obtained by the mathematical 2-dimensional human visual MTF model is defined as follows:

where the directional factor represents the ratio of diagonal contrast sensitivity to horizontal contrast sensitivity when the horizontal spatial frequency is equal to f, and the horizontal factor is defined as follows:

where its peak value represents the maximum of horizontal contrast sensitivity on human visual MTF for the given mean of the stimulus.

We define the contrast sensitivities for the red-green channel and the yellow-blue channel in the same way. We can obtain them from (5) using the parameters shown in Table 1, which are calculated by the following equations:

3.3. Filtering Based on Two-Dimensional Human Visual MTF Model

The filter of the 2-dimensional human visual MTF model cuts off imperceptible components from images. In this paper, only the red-green and yellow-blue channels are filtered, based on the characteristic that modification of luminance is more perceptible than modification of the red-green or yellow-blue channel.

Step 1.

An original image is transformed into opponent color space, and the values of the red-green and yellow-blue channels are taken at each pixel coordinate.

Step 2.

The red-green and yellow-blue channels are each transformed by the discrete Fourier transform (DFT).

Step 3.

The filtered discrete Fourier transform coefficients of the two channels are obtained by the following equations:

Step 4.

The filtered pixel values in opponent color space are obtained from the filtered coefficients by the inverse DFT. Then the lower limit image is obtained by transforming opponent color space into sRGB color space.
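The four steps above can be sketched with NumPy's FFT. The contrast-sensitivity weight `csf` here is a simplified low-pass stand-in, since the fitted parameters of the 2-dimensional human visual MTF model (Table 1) are not reproduced in this sketch:

```python
import numpy as np

def csf(u, v):
    # Stand-in contrast sensitivity weight (smooth roll-off with
    # spatial frequency); the paper uses the fitted 2-D human visual
    # MTF model of Ishihara et al. instead.
    return np.exp(-0.1 * np.hypot(u, v))

def mtf_filter(channel):
    """Attenuate imperceptible high-frequency components of one
    chrominance channel (red-green or yellow-blue) in the DFT domain."""
    n, m = channel.shape
    u = np.fft.fftfreq(n)[:, None] * n  # cycles per image height
    v = np.fft.fftfreq(m)[None, :] * m  # cycles per image width
    coeff = np.fft.fft2(channel)        # Step 2: DFT of the channel
    coeff *= csf(u, v)                  # Step 3: weight by sensitivity
    return np.fft.ifft2(coeff).real    # Step 4: inverse DFT
```

Because the weight equals 1 at zero frequency, the channel mean is preserved while higher-frequency detail is attenuated.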

4. Proposed Method

4.1. Embedding Procedure

Firstly we divide an original image and the corresponding lower limit image into blocks, and further divide the blocks into subblocks. The proposed method embeds one watermark bit into one block.

The embedding procedure for a block is as follows.

Step 1.

Each pixel value in the block of the original image is transformed into L*a*b* color space and regarded as a point in that space. The points of the corresponding block of the lower limit image are defined in the same manner.

Step 2.

For each pixel, the distance between the origin of L*a*b* color space and the point of the original image, and the distance between the origin and the point of the lower limit image, are obtained by the following equations:

Step 3.

The difference between the norms of each pair of corresponding pixels in the original image and the lower limit image is obtained by the following equation:
Moreover the sum of the positive values and the sum of the negative values of these differences in each subblock are obtained as follows:

Step 4.

Step 5.

The mean of the sums of all subblocks in the block is obtained by the following equation:

Step 6.

The quantized mean is obtained by the following equation:

where ⌊x⌋ denotes the largest integer not greater than x. The quantizer Q acts as the embedding strength.

Step 7.

The quantized mean will be modified so that its parity encodes the watermark bit (even for one bit value and odd for the other) by the following steps (Step 7~Step 9).

The watermarked value of the quantized mean is obtained as follows:
Moreover we obtain the quantity which must be added to the mean for embedding by the following equation:

Step 8.

We obtain the quantity which is added to each pixel value in the block for embedding as follows:

Step 9.

For each pixel, the watermarked point is obtained by moving the original point toward the corresponding point of the lower limit image, as shown in Figure 5, so that the required change of the norm is achieved:
where the ratio of the movement lies between 0 and 1 and satisfies the following equation:
whose terms are obtained by the following equation:

Step 10.

The watermarked points are transformed into sRGB color space, where rounding of real values to integers (up or down) is chosen so that the influence on the embedded information is minimized. Then we obtain the watermarked block.

We obtain the watermarked image after all watermark bits have been embedded.
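The core of Steps 5-9 is a parity-based quantization of the block mean, in the style of quantization index modulation. A minimal sketch, which ignores how the change is distributed over the pixels toward the lower limit image, and in which the parity convention (even for bit 0, odd for bit 1) is an assumption:

```python
def embed_mean(mean, q, bit):
    """Quantize the block mean with step q and force the parity of the
    quantized mean to encode the watermark bit.  The watermarked mean
    is placed at the center of the chosen bin, so it survives any
    perturbation smaller than q / 2."""
    qm = int(mean // q)   # quantized mean (floor)
    if qm % 2 != bit:     # adjust parity so that it encodes the bit
        qm += 1
    return (qm + 0.5) * q

def extract_bit(mean, q):
    """Recover the bit from the parity of the quantized mean."""
    return int(mean // q) % 2
```

In the actual method, the difference between the watermarked mean and the original mean is spread over the pixels of the block by moving each point toward the lower limit image in L*a*b* color space, which is what warrants the lower limit of quality.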

4.2. Extracting Procedure

Firstly we obtain the blocks and subblocks of a watermarked image in the same manner as in the embedding procedure.

The extracting procedure for a block is as follows.

Step 1.

The pixel values in the block are transformed into L*a*b* color space and regarded as points in that space.

Step 2.

The sum for each subblock is obtained in the same manner as (12).

Step 3.

The mean of the sums of all subblocks in the block is obtained in the same manner as (13).

Step 4.

The quantized mean is obtained in the same manner as (14). Then we extract the watermark bit from its parity as follows:

We obtain all the watermark bits after extracting for all blocks.

5. Experiments

5.1. Environments

Firstly we investigated the image quality of watermarked images and lower limit images. Then we confirmed that embedded watermark bits were perfectly extracted from watermarked images. Next we investigated the available range of the embedding strength Q, because the embedding strength should be decided so that the ratio in Step 9 can exist. Moreover we investigated the property of the proposed method when the embedding strength Q was variable for each block; the variable embedding strength was the maximum available value for each block. Finally we investigated the robustness against JPEG compression and compared the proposed method with an existing method in view of image quality and robustness.

As shown in Figure 6 we used twelve color images "aerial," "airplane," "balloon," "couple," "earth," "girl," "lena," "mandrill," "milkdrop," "parrots," "pepper," and "sailboat" as original images. They are standard images widely used in experiments. The size of all original images was 256 × 256 pixels. All the watermark bits were chosen so that the resulting watermarked images were degraded the most among those using any watermark bits. We used an embedding strength Q chosen so that the ratio in Step 9 in Section 4.1 could exist. The lower limit images were of three types: "MTF," which is described in Section 3.2, and "JPEG75" and "JPEG50," which are JPEG-compressed images of quality 75% and 50%, respectively. The quality 75% of JPEG compression is the standard quality.
Figure 6

Original images.

We used PSNR to evaluate image quality. PSNR was calculated by the following equation:

PSNR = 10 log10( 255^2 / MSE ) [dB],

where MSE is the mean squared error between corresponding pixels of the two images. We also used the mean structural similarity (MSSIM) index [10] to evaluate the similarity between watermarked images and lower limit images. The MSSIM index is obtained by calculating the mean of the SSIM indices of all windows on the images. The SSIM index between two windows x and y of the same size was calculated by the following equation:

SSIM(x, y) = ( (2 μx μy + C1)(2 σxy + C2) ) / ( (μx² + μy² + C1)(σx² + σy² + C2) ),

where μx and μy represent the means of x and y, σx² and σy² represent the variances of x and y, and σxy represents the covariance of x and y. The constants C1 and C2 are set to the default values C1 = (0.01 × 255)² and C2 = (0.03 × 255)².
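Both measures are straightforward to implement; here `ssim_window` scores one window pair, and MSSIM is its mean over all windows (the standard default constants are assumed):

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio [dB] between two images."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_window(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """SSIM index of one window pair; MSSIM averages this over windows."""
    x = x.astype(float)
    y = y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()  # covariance of the two windows
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical windows score exactly 1.0, and a uniform difference of one gray level between two 8-bit images gives a PSNR of about 48.13 dB.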

5.2. Results and Discussion

5.2.1. Image Quality

Figures 7~9 show the watermarked images using "MTF," "JPEG75," and "JPEG50" as the lower limit images, respectively. As shown in Figures 7~9, the degradation of all the watermarked images was imperceptible.
Figure 7

Watermarked images (MTF).

Figure 8

Watermarked images (JPEG75).

Figure 9

Watermarked images (JPEG50).

Table 2 shows the PSNRs of the watermarked images and the lower limit images against the original images. As shown in Table 2, the PSNRs of the watermarked images except for "milkdrop" and "sailboat" are the lowest when the type of the lower limit images is "MTF." The PSNRs of the watermarked images "milkdrop" and "sailboat" using "MTF" are higher than those using "JPEG50," although the PSNRs of the lower limit images of type "MTF" are less than those of type "JPEG50." This suggests that the arbitrariness of the type of lower limit images is useful. Although the PSNRs of the watermarked images "aerial" and "mandrill" using "MTF" were less than 37 [dB] and were relatively low, the degradation of these images was imperceptible because these images mainly consisted of texture-like or noisy regions as shown in Figure 7.
Table 2

PSNRs of watermarked images and lower limit images against original images [dB].

           |    Watermarked images    |    Lower limit images
           |  MTF    JPEG75   JPEG50  |  MTF    JPEG75   JPEG50
  aerial   | 34.9    37.5     35.8    | 26.1    28.3     26.9
  airplane | 39.5    45.8     43.6    | 25.7    30.2     28.5
  balloon  | 43.6    48.9     46.8    | 32.9    34.9     33.3
  couple   | 42.0    47.5     44.0    | 29.6    34.1     32.6
  earth    | 41.0    48.0     44.6    | 31.4    33.7     32.0
  girl     | 42.0    46.8     45.0    | 28.3    32.7     31.5
  lena     | 42.2    45.1     44.2    | 26.6    32.4     30.6
  mandrill | 32.4    39.0     37.8    | 21.9    27.2     25.4
  milkdrop | 43.6    44.1     42.4    | 30.7    32.3     30.8
  parrots  | 40.0    48.3     46.5    | 26.0    34.3     31.6
  pepper   | 37.7    41.1     39.7    | 25.16   28.8     27.4
  sailboat | 43.9    44.9     43.0    | 27.8    31.0     29.4

5.2.2. Validity of Lower Limit Images

Figure 10 shows the lower limit images of type "MTF." As shown in Figure 10, the degradation of these lower limit images appeared as an emphasis of color differences, for example, in the hair in "mandrill" or the outline of the parrots in "parrots." Such degradation tends to be imperceptible. Therefore the images filtered by the 2-dimensional human visual MTF model were appropriate as lower limit images in view of the direction of modification by embedding. However, they were slightly inappropriate in view of the strength of modification by embedding, because some degradation was perceptible as shown in Figure 10. Improving the decision of the embedding strength is therefore one of our future works.
Figure 10

Lower limit images (MTF).

5.2.3. Flexibility of Embedding Strength

Table 3 shows the minimum and maximum of the embedding strength Q. The minimum values of Q for "JPEG75" and "JPEG50" are similar to those for "MTF." The minimum of the embedding strength was fixed so that the embedded watermark could be perfectly extracted from the watermarked image. The maximum of the embedding strength was fixed so that the ratio in Step 9 could exist. As shown in Table 3, the range of available Q depended on the image. For "balloon" and "parrots," the flexibility of Q was low because the maximum of Q was equal to its minimum. Investigating the relationship between the range of available embedding strengths and the robustness against attacks is future work.
Table 3

The minimum and maximum of Q.

           |      MTF      |  JPEG75  |  JPEG50
           |  min    max   |   max    |   max
  aerial   |   8     52    |   146    |   181
  airplane |   8     11    |    18    |    35
  balloon  |   7      7    |    19    |    26
  couple   |   6     66    |    42    |    89
  earth    |   7     51    |    56    |    86
  girl     |   6     37    |    44    |    56
  lena     |  10     47    |    55    |    54
  mandrill |   6    101    |   136    |   154
  milkdrop |  10     44    |    78    |    89
  parrots  |   7      7    |    38    |    43
  pepper   |   8     81    |   123    |   162
  sailboat |  12     13    |    37    |    54

5.2.4. Performance Using the Maximum of Q of Each Block

We investigated the property of the proposed method when the maximum of Q of each block is used, that is, when the embedding strength Q is variable per block. The demerit of using the maximum of Q of each block is the increase in the quantity of data that must be saved for extraction. In the following, we call the methods using the same Q for all blocks and the maximum of Q of each block "sameQ" and "maxQ," respectively. Note that a high maximum of Q in Table 3 does not always cause a low PSNR of the corresponding "maxQ" watermarked image, because the PSNR depends not on the maximum of Q among all blocks but on the distribution of Q over the blocks.

Figures 11~13 show the watermarked images using "MTF," "JPEG75," and "JPEG50" as the lower limit image, respectively; the embedding strength of all these watermarked images is "maxQ." Table 4 shows the PSNRs of the watermarked images using "maxQ" as the embedding strength. The degradation of all the watermarked images using "JPEG75" and "JPEG50" was imperceptible. The degradation of the watermarked image "mandrill" using "MTF" was slightly perceptible as scattered green dots in the hair of the mandrill. Although the PSNR of "airplane" using "MTF" is under 30 [dB], the degradation of "airplane" was imperceptible because it was chromatic. On the other hand, although the degradation of "mandrill" was mainly chromatic noise on texture-like regions, it was slightly perceptible because the modification by embedding was large. We confirmed that the use of "MTF" caused not only the right direction of modification by embedding but also excessively large modification. However, we obtained practical results when we used "JPEG75" and "JPEG50" as the lower limit images.
Table 4

PSNRs of watermarked images using the maximum of Q of each block [dB].

           |  MTF    JPEG75   JPEG50
  aerial   | 30.0    34.4     32.8
  airplane | 28.6    35.1     34.9
  balloon  | 37.6    40.1     38.7
  couple   | 33.5    40.0     38.3
  earth    | 34.9    40.2     38.1
  girl     | 32.8    38.1     37.2
  lena     | 30.1    38.2     36.3
  mandrill | 25.0    33.4     31.5
  milkdrop | 34.9    37.7     36.5
  parrots  | 29.5    40.4     37.1
  pepper   | 29.9    34.1     31.8
  sailboat | 32.6    35.9     35.2

Figure 11

Watermarked images (MTF, maxQ).

5.2.5. Similarity between Watermarked Images and Lower Limit Images

Table 5 shows the MSSIMs between watermarked images and lower limit images. As shown in Table 5, we confirmed that all the watermarked images were similar to the lower limit images, because all the MSSIMs were high. It is natural that the MSSIMs of "maxQ" are larger than those of "sameQ," because using a larger Q yields watermarked images closer to the lower limit images. The reason why the MSSIMs of "maxQ" are not 1.0 is that there are some pixels whose modification quantity is equal to 0 in (18) or (19).
Table 5

MSSIM between watermarked images and lower limit images.

           |       MTF       |     JPEG50      |     JPEG75
           | sameQ    maxQ   | sameQ    maxQ   | sameQ    maxQ
  aerial   | 0.9954   0.9972 | 0.9415   0.9514 | 0.9640   0.9702
  airplane | 0.9952   0.9980 | 0.9551   0.9710 | 0.9720   0.9841
  balloon  | 0.9973   0.9986 | 0.9614   0.9765 | 0.9757   0.9855
  couple   | 0.9881   0.9940 | 0.9603   0.9698 | 0.9713   0.9806
  earth    | 0.9933   0.9965 | 0.9692   0.9776 | 0.9813   0.9868
  girl     | 0.9739   0.9872 | 0.9550   0.9703 | 0.9684   0.9807
  lena     | 0.9750   0.9912 | 0.9562   0.9703 | 0.9738   0.9814
  mandrill | 0.9811   0.9911 | 0.9211   0.9449 | 0.9588   0.9696
  milkdrop | 0.9700   0.9840 | 0.9551   0.9650 | 0.9704   0.9768
  parrots  | 0.9826   0.9933 | 0.9578   0.9733 | 0.9756   0.9831
  pepper   | 0.9603   0.9782 | 0.9629   0.9733 | 0.9765   0.9819
  sailboat | 0.9889   0.9953 | 0.9662   0.9788 | 0.9802   0.9882

5.2.6. Robustness against JPEG Compression

We define the extraction rate as the number of correctly extracted bits divided by the number of all embedded bits. Tables 6 and 7 show the extraction rates under JPEG compression of quality 75% and 90%, respectively.
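The extraction rate defined above is simply the bitwise agreement between the embedded and extracted sequences, expressed as a percentage:

```python
def extraction_rate(embedded, extracted):
    """Percentage of watermark bits recovered correctly."""
    correct = sum(int(a == b) for a, b in zip(embedded, extracted))
    return 100.0 * correct / len(embedded)
```

For example, recovering 3 of 4 embedded bits correctly gives an extraction rate of 75.0%.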
Table 6

Extraction rates [%] in JPEG compression of quality 75%.

           |          sameQ           |           maxQ
           |  MTF    JPEG75   JPEG50  |  MTF    JPEG75   JPEG50
  aerial   | 50.00   48.44    56.25   | 56.25   96.88    95.31
  airplane | 37.50   50.00    48.44   | 23.44   82.81    62.50
  balloon  | 53.13   50.00    65.63   | 34.36   87.50    90.63
  couple   | 45.31   48.44    57.81   | 56.25   93.75    82.81
  earth    | 57.81   46.88    53.13   | 45.31   93.75    87.50
  girl     | 53.13   48.44    46.88   | 56.25   93.75    89.06
  lena     | 40.63   51.56    50.00   | 82.81   92.19    84.38
  mandrill | 43.75   70.31    50.00   | 15.63   93.75    96.88
  milkdrop | 50.00   64.06    45.31   | 59.38   82.81    67.19
  parrots  | 54.69   59.38    45.31   | 48.44   81.25    93.75
  pepper   | 39.06   65.63    67.19   | 43.75   75.00    71.88
  sailboat | 42.19   40.63    46.88   | 35.98   89.06    59.38

Table 7

Extraction rates [%] in JPEG compression of quality 90%.

           |          sameQ           |           maxQ
           |  MTF    JPEG75   JPEG50  |  MTF     JPEG75   JPEG50
  aerial   | 89.06   100.00   100.00  | 100.00   100.00   100.00
  airplane | 57.81    53.13    71.88  |  93.75   100.00   100.00
  balloon  | 57.81    78.13    89.06  |  82.81    96.88    98.44
  couple   | 59.38    42.19    89.06  | 100.00    98.44   100.00
  earth    | 89.06    92.19    96.88  | 100.00   100.00   100.00
  girl     | 23.44    28.13    54.69  |  79.69   100.00   100.00
  lena     | 98.44    98.44    95.31  | 100.00   100.00   100.00
  mandrill | 70.31   100.00   100.00  | 100.00   100.00   100.00
  milkdrop | 87.50    96.88    98.44  | 100.00   100.00   100.00
  parrots  | 50.00    92.19    96.88  |  93.75    98.44    98.44
  pepper   | 95.31   100.00   100.00  | 100.00   100.00   100.00
  sailboat | 68.75    81.25    98.44  | 100.00   100.00   100.00

As shown in Table 6, the proposed method using "sameQ" had no robustness against JPEG compression of quality 75%. Using "maxQ," some extraction rates of "JPEG75" and "JPEG50" against JPEG compression of quality 75% were larger than 90%. It was noticeable that some extraction rates of "JPEG75" were larger than those of "JPEG50" although the PSNRs of "JPEG75" were larger than those of "JPEG50." The investigation of the relationship between lower limit images and robustness is one of our future works.

As shown in Table 7, the proposed method using "sameQ" had partial robustness against JPEG compression of quality 90%. On the other hand, almost all the extraction rates using "maxQ" were equal to 100%. Therefore the proposed method using "maxQ" had the robustness against JPEG compression of quality 90%.

5.2.7. Comparison with Existing Method

We used the existing method proposed by Yoshiura and Echizen [2] for comparison. Yoshiura's method uses the correlation of 2-dimensional random sequences, which is one of the popular watermarking procedures. Moreover, Yoshiura's method takes the human visual system into consideration by using L*u*v* color space, which is one of the uniform color spaces. Therefore Yoshiura's method is appropriate for the comparison.

Figure 14 shows the original image "mandrill" and the watermarked images of the existing method and the proposed methods using "sameQ-MTF" and "maxQ-JPEG50." The PSNRs of the watermarked images were approximately equalized as described in Figure 14. As shown in Figure 14, chromatic block noise was perceptible in the watermarked image of the existing method, while the degradation was imperceptible in the watermarked images of the proposed methods using "sameQ-MTF" and "maxQ-JPEG50," even though the PSNRs of the proposed methods were lower than the PSNR of the existing method. Figure 15 shows enlarged partial regions of the images in Figure 14, in which the degradation of each watermarked image can be observed in detail. The degradation of the existing method was chromatic block noise. The degradation of the proposed method using "sameQ-MTF" was strong chromatic edge enhancement. The degradation of the proposed method using "maxQ-JPEG50" was imperceptible even in the enlarged partial region; it did not appear as block noise because the locations of the pixels modified by embedding were scattered by (18) and (19).
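The PSNR figures used throughout this comparison follow the standard definition for 8-bit images. A minimal sketch in Python; representing the images as flat pixel sequences is an assumption for illustration only:

```python
import math

def psnr(original, watermarked, peak=255.0):
    """Peak signal-to-noise ratio [dB] between two equal-sized pixel sequences."""
    if len(original) != len(watermarked):
        raise ValueError("images must have the same number of pixels")
    mse = sum((a - b) ** 2 for a, b in zip(original, watermarked)) / len(original)
    if mse == 0.0:
        return float("inf")  # identical images: no distortion
    return 10.0 * math.log10(peak * peak / mse)

# A uniform distortion of 1 gray level per pixel gives MSE = 1,
# i.e. 10*log10(255^2) ~= 48.13 dB.
print(round(psnr([10, 20, 30, 40], [11, 21, 31, 41]), 2))  # 48.13
```

Note that, as the comparison in Figure 14 illustrates, equal PSNR does not imply equal perceived quality: the metric is blind to where and in which color channel the distortion occurs.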

Figures 16~19 show the comparison of Yoshiura's method and the proposed method using "MTF," "JPEG75," or "JPEG50" as the lower limit image and "sameQ" or "maxQ" as the embedding strength. The horizontal axis of the graphs in Figures 16~19 represents the PSNR [dB] of watermarked images, while the vertical axis represents the extraction rate [%]. In Figures 16~19, the performance of the proposed method is represented by a point for each condition, while that of the existing method is represented by a curve. We evaluated the superiority of the proposed method by checking whether the point of the proposed method was above the curve of the existing method. As shown in Figures 16 and 17, only the point corresponding to "maxQ-JPEG75" was above the curve of the existing method for all test images. Therefore the proposed method using "maxQ-JPEG75" was superior to the existing method for all test images in view of the robustness against JPEG compression of quality 75%. Comparing the parameters of the proposed method, in the results of "balloon," "mandrill," and "parrots," the point corresponding to "maxQ-JPEG50" was located to the upper left of "maxQ-JPEG75"; whether "maxQ-JPEG50" is preferable in these cases depends on the relative importance of the extraction rate and the PSNR. As shown in Figures 18 and 19, the points corresponding to "maxQ-JPEG75" and "maxQ-JPEG50" were above the curve of the existing method for all test images. Therefore the proposed method using "maxQ-JPEG75" or "maxQ-JPEG50" was superior to the existing method for all test images in view of the robustness against JPEG compression of quality 90%. Moreover, the extraction rates of "maxQ-JPEG75" and "maxQ-JPEG50" for all test images were over 95%, so the remaining errors could be recovered by using error correcting codes.
Comparing the parameters of the proposed method, the PSNRs of "maxQ-JPEG75" were higher than those of "maxQ-JPEG50" for all test images. From the above discussion, the performance of "maxQ-JPEG75" was the best overall because of the imperceptibility shown in Figure 12 and the robustness against JPEG compression.
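The "point above the curve" criterion used in Figures 16~19 can be made concrete as follows. This is a minimal Python sketch, assuming the existing method's tradeoff curve is given as (PSNR, extraction rate) pairs sorted by PSNR and interpolated linearly between measurements; the numbers below are hypothetical and not taken from the figures:

```python
def rate_on_curve(curve, x):
    """Extraction rate of the existing method at PSNR x, by linear
    interpolation between measured (psnr, rate) pairs sorted by psnr."""
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("PSNR outside the measured range of the curve")

def is_superior(point, curve):
    """True if the proposed method's (psnr, rate) point lies above the curve,
    i.e. it achieves a higher extraction rate at the same image quality."""
    psnr, rate = point
    return rate > rate_on_curve(curve, psnr)

# Illustrative tradeoff curve: higher PSNR (weaker embedding) -> lower rate.
existing = [(30.0, 100.0), (40.0, 80.0), (50.0, 40.0)]
print(is_superior((42.0, 95.0), existing))  # True: 95 > interpolated 72
```

Evaluating superiority pointwise like this is conservative: a point above the curve dominates the existing method at that PSNR, whereas points to the upper left of each other (as with "maxQ-JPEG50" versus "maxQ-JPEG75") remain incomparable without a weighting of quality against robustness.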
Figure 12

Watermarked images (JPEG75, maxQ).

Figure 13

Watermarked images (JPEG50, maxQ).

Figure 14

Comparison in the image quality.

Figure 15

Comparison in the image quality by enlarged partial regions.

Figure 16

Comparison in the robustness against JPEG compression of quality 75% (1).

Figure 17

Comparison in the robustness against JPEG compression of quality 75% (2).

Figure 18

Comparison in the robustness against JPEG compression of quality 90% (1).

Figure 19

Comparison in the robustness against JPEG compression of quality 90% (2).

6. Conclusion

We have proposed a watermarking method warranting the lower limit of the image quality of watermarked images. The proposed method warrants the lower limit of the image quality of watermarked images by introducing lower limit images, and it equalizes the degradation caused by embedding over the watermarked image by using the L*a*b* color space. We have investigated the image quality of watermarked images, the validity of the lower limit images filtered by the mathematical 2-dimensional human visual MTF model, the flexibility of the embedding strength, the performance using the maximum embedding strength of each block, the similarity between watermarked images and lower limit images, the robustness against JPEG compression, and the comparison with the existing method. Our future work is to investigate the relationship between lower limit images and the robustness against general image processing, and to improve the decision of the embedding strength for each block so as to improve the tradeoff between the PSNR and the extraction rate.

References

  1. Matsui K: Fundamentals of Digital Watermarking. Morikita Shuppan; 1998.
  2. Yoshiura H, Echizen I: Maintaining picture quality and improving robustness of color watermarking by using human vision models. IEICE Transactions on Information and Systems 2006, E89-D(1):256-270. doi:10.1093/ietisy/e89-d.1.256
  3. JIS Handbook 61: Color 2007. Japanese Standards Association; 2007.
  4. Wikipedia: Lab color space. November 2009, http://en.wikipedia.org/wiki/Lab_color_space
  5. Oyama T: Invitation to Visual Psychology. Saiensu-sha Co.; 2000.
  6. Colors & Dyeing Club in Nagoya Osaka. November 2009, http://www005.upp.so-net.ne.jp/fumoto/
  7. Ishihara T, Ohishi K, Tsumura N, Miyake Y: Dependence of directivity in spatial frequency response of the human eye (1): measurement of modulation transfer function. Journal of the Society of Photographic Science and Technology of Japan 2002, 65(2):121-127.
  8. Ishihara T, Ohishi K, Tsumura N, Miyake Y: Dependence of directivity in spatial frequency response of the human eye (2): mathematical modeling of modulation transfer function. Journal of the Society of Photographic Science and Technology of Japan 2002, 65(2):128-133.
  9. Miyake Y, Ishihara T, Ohishi K, Tsumura N: Measurement and modeling of the two dimensional MTF of human eye and its application for digital color reproduction. Proceedings of the 9th IS&T/SID Color Imaging Conference, 2001, Scottsdale, Ariz, USA, 153-157.
  10. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 2004, 13(4):600-612. doi:10.1109/TIP.2003.819861

Copyright information

© Motoi Iwata et al. 2010

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Authors and Affiliations

Motoi Iwata, Tomoo Kanaya, Akira Shiozaki, and Akio Ogihara
Graduate School of Engineering, Osaka Prefecture University, Sakai-shi, Japan
