3D color homography model for photo-realistic color transfer re-coding
Abstract
Color transfer is an image editing process that naturally transfers the color theme of a source image to a target image. In this paper, we propose a 3D color homography model which approximates a photo-realistic color transfer algorithm as a combination of a 3D perspective transform and a mean intensity mapping. A key advantage of our approach is that the re-coded color transfer algorithm is simple and accurate. Our evaluation demonstrates that our 3D color homography model delivers leading color transfer re-coding performance. In addition, we show that our model can be applied to color transfer artifact fixing, complex color transfer acceleration, and color-robust image stitching.
Keywords
Color transfer · Color grading · Color homography · Tone mapping
1 Introduction
One of the first photo-realistic color transfer methods was introduced by Reinhard et al. [23]. Their method proposed that the mean and variance of the source image, in a specially chosen color space, should be manipulated to match those of a target. More recent methods [1, 16, 19, 20, 21] may adopt more aggressive color transfers, e.g., forced color distribution matching [19, 20]. However, these aggressive changes often do not preserve the original intensity gradients, and new spatial artifacts may be introduced into an image (e.g., JPEG blocks become visible or false contouring appears). In addition, the complexity of a color transfer method usually leads to longer processing times. To address these issues, previous methods [11, 13, 18] were proposed to approximate the color change produced by a color transfer, such that an originally complicated color transfer can be re-formulated as a simpler and faster algorithm with an acceptable level of accuracy and some introduced artifacts.
In this paper, we propose a simple and general model for re-coding (approximating) an unknown photo-realistic color transfer which provides leading accuracy and decomposes the color transfer algorithm into meaningful parts. Our model extends a recent planar color homography color transfer model [11] from the original 2D planar domain to the 3D domain. In our improved model, we decompose an unknown color transfer into a 3D perspective color transform and a mean intensity mapping component. Based on [11], we make two new contributions: (1) a 3D color mapping model that better re-codes color change by relating two homogeneous color spaces and (2) a monotonic mean intensity mapping method that prevents artifacts without adding unwanted blur. Our experiments show significant improvements in color transfer re-coding accuracy. We demonstrate three applications of the proposed method: color transfer artifact fixing, color transfer acceleration, and color-robust image stitching.
Throughout the paper, we denote the source image by \(I_\mathrm{s}\) and the original color transfer result by \(I_\mathrm{t}\). Given \(I_\mathrm{s}\) and \(I_\mathrm{t}\), we re-code the color transfer with our color homography model which approximates the original color transfer from \(I_\mathrm{s}\) to \(I_\mathrm{t}\). Figure 1 shows the pipeline of our color transfer decomposition.
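To make the decomposition concrete, the perspective half of the model can be sketched as a \(4\times 4\) matrix acting on homogeneous RGB coordinates (the mean intensity mapping is a separate step). This is an illustrative Python/NumPy sketch; the matrix convention and function name are our assumptions, not code from the paper:

```python
import numpy as np

def apply_3d_homography(rgb, H):
    """Map an N x 3 array of RGB colors through a 4x4 perspective
    transform in homogeneous coordinates (a sketch of the 3D color
    mapping step; H is assumed to be estimated elsewhere)."""
    n = rgb.shape[0]
    hom = np.hstack([rgb, np.ones((n, 1))])  # lift RGB to homogeneous RGB1
    mapped = hom @ H                         # 4x4 perspective transform
    # perspective divide: normalize by the fourth (homogeneous) channel
    return mapped[:, :3] / mapped[:, 3:4]
```

When \(H\) is the identity, the mapping leaves colors unchanged; off-diagonal and last-column entries encode cross-channel mixing and perspective (intensity-dependent) effects, respectively.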
Our paper is organized as follows. We review the leading prior color transfer methods and the previous color transfer approximation methods in Sect. 2. Our color transfer decomposition model is described in Sect. 3. We present a color transfer re-coding method for two corresponding images in Sect. 4. In addition, we demonstrate its applications in Sect. 5. Finally, we conclude in Sect. 6.
2 Background
In this section, we briefly review the existing work on photo-realistic color transfer, the methods for re-coding such a color transfer, and the concept of Color Homography.
2.1 Photo-realistic color transfer
Example-based color transfer was first introduced by Reinhard et al. [23]. Their method aligns the color distributions of two images in a specially chosen color space via 3D scaling and shift. Pitie et al. [19, 20] proposed an iterative color transfer method that distorts the color distribution by random 3D rotation and per-channel histogram matching until the distributions of the two images are fully aligned. This method makes the output color distribution exactly the same as the target image’s color distribution. However, the method introduces spatial artifacts. By adding a gradient preservation constraint, these artifacts can be mitigated or removed at the cost of more blurry artifacts [20]. Pouli and Reinhard [21] adopted a progressive histogram matching in L*a*b* color space. Their method generates image histograms at different scales. From coarse to fine, the histogram is progressively reshaped to align the maxima and minima of the histogram, at each scale. Their algorithm also handles the difference in dynamic ranges between two images. Nguyen et al. [16] proposed an illuminant-aware and gamut-based color transfer. They first eliminate the color cast difference by a white-balancing operation for both images. A luminance alignment is later performed by histogram matching along the “gray” axis of RGB. They finally adopt a 3D convex hull mapping to limit the color-transferred RGBs to the gamut of the target RGBs. Other approaches (e.g., [1, 25, 28]) solve for several local color transfers rather than a single global color transfer. As most non-global color transfer methods are essentially a blend of several single color transfer steps, a global color transfer method is extendable for multi-transfer algorithms.
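As an illustration of the statistics-alignment idea that this line of work began with, here is a minimal sketch of Reinhard-style per-channel mean/variance matching. Note that the original method [23] operates in a specially chosen decorrelated color space, not directly on RGB as this simplified sketch does:

```python
import numpy as np

def mean_variance_transfer(source, target):
    """Per-channel mean/variance alignment in the spirit of Reinhard
    et al. [23]: scale source deviations to match the target's spread,
    then shift to the target's mean. Inputs are N x 3 RGB arrays."""
    s_mu, s_std = source.mean(axis=0), source.std(axis=0)
    t_mu, t_std = target.mean(axis=0), target.std(axis=0)
    # epsilon guards against division by zero for flat channels
    return (source - s_mu) * (t_std / (s_std + 1e-8)) + t_mu
```

The output's per-channel mean and standard deviation match the target's by construction, which is exactly the "3D scaling and shift" alignment described above.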
2.2 Photo-realistic color transfer re-coding
Various methods have been proposed for approximating an unknown photo-realistic color transfer for better speed and naturalness. Pitie et al. [18] proposed a color transfer approximation by a 3D similarity transform (\(\hbox {translation}+\hbox {rotation}+\hbox {scaling}\)) which implements a simplification of the earth mover’s distance. Restricting a color transfer to a similarity transform sacrifices some generality: the range of color changes it can account for is more limited. In addition, a color transfer looks satisfying only if the tonality looks natural, which is often not the case with the similarity transformation. Ilie and Welch [13] proposed a polynomial color transfer which introduces higher-degree terms of the RGB values. This encodes the non-linearity of a color transfer better than a simple \(3\times 3\) linear transform. However, the nonlinear polynomial terms may over-manipulate a color change and introduce spatial gradient artifacts. Similarly, this method does not address the tonal difference between the input and output images. Gong et al. [11] proposed a planar color homography model which re-codes a color transfer effect as a combination of a 2D perspective transform of chromaticity and shading adjustments. Compared with [18], it requires fewer parameters to represent a non-tonal color change. The model’s tonal adjustment also further improves color transfer re-coding accuracy. However, the assumption of a 2D chromaticity distortion limits the range of color transfers it can represent. Their [11] tonal adjustment (mean intensity-to-shading mapping) also does not preserve image gradients and the original color rank. Another important work is probabilistic moving least squares [12], which calculates a heavily parameterized transform of color space. Its accuracy is slightly better than [13]. However, due to its high complexity, it is unsuitable for real-time use.
In this paper, we only benchmark against the color transfer re-coding methods that offer real-time performance.
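For reference, the polynomial re-coding idea of Ilie and Welch [13] mentioned above can be sketched as a least-squares fit over expanded RGB terms. The particular second-degree basis below is an illustrative choice, not necessarily the exact term set used in [13]:

```python
import numpy as np

def poly_features(rgb):
    """Second-degree polynomial expansion of N x 3 RGB values
    (one plausible choice of terms; variants exist)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b, r * g, r * b, g * b,
                     r * r, g * g, b * b, np.ones_like(r)], axis=1)

def fit_poly_transfer(src, dst):
    """Least-squares fit of a polynomial color transform src -> dst."""
    M, *_ = np.linalg.lstsq(poly_features(src), dst, rcond=None)
    return M

def apply_poly_transfer(rgb, M):
    """Apply a fitted polynomial color transform."""
    return poly_features(rgb) @ M
```

Because the expansion contains the linear and constant terms, any affine color change is represented exactly; the quadratic terms add the extra (and potentially artifact-prone) flexibility discussed above.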
2.3 2D color homography
The 2D color homography model decomposes a color change into a 2D chromaticity distortion and a 1D tonal mapping, which successfully approximates a range of physical color changes. However, the limited degrees of freedom of a 2D chromaticity distortion may not accurately capture the more complicated color changes applied in photograph editing.
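In sketch form, the 2D model applies a \(3\times 3\) homography to chromaticity coordinates; the per-pixel brightness discarded by the chromaticity normalization is what the separate 1D tonal (shading) mapping must recover. The rgb-over-sum chromaticity convention below is an assumption for illustration:

```python
import numpy as np

def apply_2d_color_homography(rgb, H):
    """Sketch of the 2D color homography chromaticity step: a 3x3
    homography acts on RGB treated as homogeneous chromaticity
    coordinates. The per-pixel scale lost in the normalization is
    handled by a separate 1D tonal/shading map in the full model."""
    mapped = rgb @ H.T
    # homogeneous normalization to chromaticity (rows sum to 1) --
    # this divide is exactly where per-pixel brightness is discarded
    return mapped / mapped.sum(axis=1, keepdims=True)
```

The output always lies on the 2D chromaticity plane, which illustrates why this model has fewer degrees of freedom than a full 3D color mapping.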
3 3D color homography model for photo-realistic color transfer re-coding
4 Image color transfer re-coding
In this section, we describe the steps for decomposing a color transfer between two registered images into the 3D color homography model components.
4.1 Perspective color space mapping
4.2 Mean intensity mapping
4.3 Mean intensity mapping noise reduction
4.4 Results
In our experiments, we assume that we have an input image and an output produced by a color transfer algorithm. Because the input and output are matched (i.e., they are in perfect registration), we can apply Algorithm 1 directly.
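Algorithm 1 itself is not reproduced here, but its core fitting step, estimating a \(4\times 4\) homography from registered pixel pairs, can be sketched with alternating least squares in the spirit of [9, 11]. The alternation below (solve for the matrix, then re-estimate per-pixel homogeneous scales) is an illustrative sketch; the paper's exact algorithm and any robustness details may differ:

```python
import numpy as np

def apply_homography(rgb, H):
    """Map N x 3 RGB through a 4x4 homography (right-multiply convention)."""
    hom = np.hstack([rgb, np.ones((len(rgb), 1))]) @ H
    return hom[:, :3] / hom[:, 3:4]

def estimate_homography_als(src, dst, n_iter=50):
    """Alternating least-squares sketch for fitting a 4x4 color
    homography to registered pixel pairs: alternate between the
    matrix H and the per-pixel homogeneous scales d."""
    src_h = np.hstack([src, np.ones((len(src), 1))])
    dst_h = np.hstack([dst, np.ones((len(dst), 1))])
    d = np.ones((len(src), 1))  # per-pixel homogeneous scales
    for _ in range(n_iter):
        # fix the scales, solve for H in least squares
        H, *_ = np.linalg.lstsq(src_h, d * dst_h, rcond=None)
        pred = src_h @ H
        # fix H, re-estimate each pixel's best scale in closed form
        d = ((pred * dst_h).sum(axis=1) /
             (dst_h ** 2).sum(axis=1))[:, None]
    return H
```

Because registered pixels give dense correspondences, a few hundred randomly sampled pairs are typically enough to constrain the 15 free parameters of the homography.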
Mean errors between the original color transfer results of 4 popular color transfer methods [16, 20, 21, 23] and their approximations by the re-coding methods

| | Nguyen [16] | Pitie [20] | Pouli [21] | Reinhard [23] |
|---|---|---|---|---|
| PSNR (peak signal-to-noise ratio) | | | | |
| MK [18] | 23.24 | 22.76 | 22.41 | 25.21 |
| Poly [13] | 25.54 | 25.08 | 27.17 | 28.27 |
| 2D-H [11] | 24.59 | 25.19 | 27.22 | 28.24 |
| 3D-H | 27.34 | 26.65 | 27.55 | 30.00 |
| SSIM (structural similarity) | | | | |
| MK [18] | 0.88 | 0.85 | 0.81 | 0.85 |
| Poly [13] | 0.91 | 0.89 | 0.85 | 0.88 |
| 2D-H [11] | 0.86 | 0.86 | 0.90 | 0.92 |
| 3D-H | 0.93 | 0.90 | 0.89 | 0.93 |
Post hoc tests for one-way ANOVA on errors between the original color transfer result and its approximations

| Method A | Method B | p-value |
|---|---|---|
| PSNR (overall p-value \(<0.001\)) | | |
| MK [18] | Poly [13] | \(<0.001\) |
| MK [18] | 2D-H [11] | \(<0.001\) |
| MK [18] | 3D-H | \(<0.001\) |
| Poly [13] | 2D-H [11] | 0.95 |
| Poly [13] | 3D-H | \(<0.001\) |
| 2D-H [11] | 3D-H | \(<0.001\) |
| SSIM (overall p-value \(<0.001\)) | | |
| MK [18] | Poly [13] | \(<0.001\) |
| MK [18] | 2D-H [11] | \(<0.001\) |
| MK [18] | 3D-H | \(<0.001\) |
| Poly [13] | 2D-H [11] | 0.48 |
| Poly [13] | 3D-H | \(<0.001\) |
| 2D-H [11] | 3D-H | \(<0.001\) |
5 Applications
5.1 Color transfer acceleration
More recent color transfer methods usually produce higher-quality outputs, but at the cost of longer processing times. Among the fast methods that still produce high-quality images, Gharbi et al. [10] proposed a general image manipulation acceleration method, named transform recipe (TR), designed for cloud applications. Based on a downsampled pair of input and output images, their method approximates the image manipulation effect according to changes in luminance, chrominance, and stack levels. Another fast method by Chen et al. [3] approximates the effect of many general image manipulation procedures with convolutional neural networks (CNNs). While their approach significantly reduces the computational time for some complex operations, it requires a substantial number of training samples for each image manipulation. In this subsection, we demonstrate that our re-coding method can be applied as an alternative to accelerate a complex color transfer by approximating its effect at a lower scale. We approximate the color transfer in the following steps: (1) we supply a thumbnail image (\(40 \times 60\) in our experiment) to the original color transfer method and obtain a thumbnail output; (2) given the pair of lower-resolution input and output images, we estimate a color transfer model that approximates the color transfer effect; (3) we then process the higher-resolution input image with the estimated color transfer model and obtain a higher-resolution output that looks very close to the original higher-resolution color transfer result obtained without acceleration.
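The three steps above can be sketched as follows, with `slow_transfer` standing in for the original expensive algorithm and `fit_model`/`apply_model` for any re-coding model (all names are illustrative placeholders, and the nearest-neighbor thumbnail is a stand-in for proper resizing):

```python
import numpy as np

def downsample(img, shape):
    """Nearest-neighbor thumbnail of an H x W x 3 image (illustrative;
    a proper resize with prefiltering would be used in practice)."""
    ys = np.linspace(0, img.shape[0] - 1, shape[0]).astype(int)
    xs = np.linspace(0, img.shape[1] - 1, shape[1]).astype(int)
    return img[np.ix_(ys, xs)]

def accelerate_transfer(full_input, slow_transfer, fit_model, apply_model,
                        thumb_shape=(40, 60)):
    """Thumbnail-based color transfer acceleration, per the three steps
    in the text: run the expensive transfer once at low resolution, fit
    a re-coding model to the thumbnail pair, apply it at full size."""
    thumb = downsample(full_input, thumb_shape)            # step 1
    thumb_out = slow_transfer(thumb)                       # run slow method once
    model = fit_model(thumb.reshape(-1, 3),                # step 2
                      thumb_out.reshape(-1, 3))
    h, w, _ = full_input.shape                             # step 3
    return apply_model(full_input.reshape(-1, 3), model).reshape(h, w, 3)
```

The speed-up comes from running the expensive algorithm on only \(40\times 60\) pixels; the full-resolution pass is a cheap per-pixel model evaluation.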
In our experiment, we choose two computationally expensive methods [16, 20] as the inputs and compare our performance (MATLAB implementation) with the state-of-the-art method TR [10] (Python implementation). Figure 5 compares the original color transfer results with the accelerated results. The results indicate that our re-coding method can significantly reduce the computational time for these complicated color transfer methods (\(25\times \) to \(30\times \) faster, depending on the speed of the original algorithm and the input image resolution) while preserving color transfer fidelity. Compared with TR [10], our method produces output of similar quality for global color transfer approximation at a much lower computational cost (about \(10\times \) faster). Although TR [10] is less efficient, it supports a wider range of image manipulation accelerations, including non-global color changes.
5.2 Color transfer artifact reduction
5.3 Color-robust image stitching
The input images for image stitching are not always taken by the same camera or under the same illumination conditions. The camera’s automatic image processing pipeline also modifies the colors. Direct image stitching without color correction may therefore leave imperfections in the final blending result. Since the color change between images of different views is unknown but photo-realistic, our color transfer approximation model can be applied to address this color inconsistency. Figure 7 shows an example of color-robust image stitching using our color transfer re-coding method, where two input images, taken by two different cameras under different illuminations, are supplied for stitching. In our color-robust procedure, we first register the two images and find the overlapping pixels. With the per-pixel correspondences, we estimate a 3D color homography color transfer model that transfers the colors of the first image to those of the second. We then apply the estimated model to correct the first image. Finally, the corrected first input image and the original second image are supplied to the image stitching software AutoStitch [2]. Although the multi-band blending proposed in [2] provides a smooth transition at the stitching boundary, the color difference between the two halves remains noticeable (especially in the sky and tree colors) in the original stitching result. After our color alignment, the colors of the two halves look more homogeneous. We also compare our method with a local color correction method, gain compensation [2].
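The color alignment step of this procedure can be sketched as follows; registration and the stitcher itself are out of scope, and `fit_model`/`apply_model` are placeholders for any re-coding model such as the 3D homography:

```python
import numpy as np

def harmonize_for_stitching(img_a, overlap_a, overlap_b,
                            fit_model, apply_model):
    """Re-color image A to match image B before stitching, per the
    procedure in the text. overlap_a / overlap_b are N x 3 arrays of
    corresponding RGBs from the registered overlap region."""
    model = fit_model(overlap_a, overlap_b)   # estimate transfer on overlap
    h, w, _ = img_a.shape
    # apply the estimated transfer to the whole of image A
    return apply_model(img_a.reshape(-1, 3), model).reshape(h, w, 3)
```

The corrected image A and the unmodified image B would then be passed to the stitcher (AutoStitch in the paper's experiment).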
6 Conclusion
In this paper, we have shown that a global color transfer can be approximated by a combination of a 3D color space mapping and a mean intensity mapping. Our experiments show that the proposed model approximates many photo-realistic color transfer methods well, as well as unknown global color changes in images. We have demonstrated three applications: color transfer acceleration, color transfer artifact reduction, and color-robust image stitching.
Footnotes
- 1. The dataset will be made public for future comparisons, with a significantly larger size (200 color transfer images) so that the quality of color transfer re-coding can be thoroughly evaluated. Each color transfer image pair also comes with the color transfer results of 4 popular methods [16, 20, 21, 23].
Acknowledgements
This work was supported by Engineering and Physical Sciences Research Council (EPSRC) (EP/M001768/1). We also acknowledge the constructive suggestions from all reviewers.
References
- 1. An, X., Pellacini, F.: User-controllable color transfer. Comput. Graph. Forum 29(2), 263–271 (2010)
- 2. Brown, M., Lowe, D.G.: Automatic panoramic image stitching using invariant features. Int. J. Comput. Vis. 74(1), 59–73 (2007)
- 3. Chen, Q., Xu, J., Koltun, V.: Fast image processing with fully-convolutional networks. In: IEEE International Conference on Computer Vision (ICCV) (2017)
- 4. Coleman, T.F., Li, Y.: A reflective Newton method for minimizing a quadratic function subject to bounds on some of the variables. SIAM J. Optim. 6(4), 1040–1058 (1996)
- 5. Durand, F., Dorsey, J.: Fast bilateral filtering for the display of high-dynamic-range images. ACM Trans. Graph. 21, 257–266 (2002)
- 6. Fattal, R., Lischinski, D., Werman, M.: Gradient domain high dynamic range compression. ACM Trans. Graph. 21, 249–256 (2002)
- 7. Finlayson, G.D., Gong, H., Fisher, R.B.: Color homography color correction. In: Color and Imaging Conference. Society for Imaging Science and Technology (2016)
- 8. Finlayson, G.D., Gong, H., Fisher, R.B.: Color homography: theory and applications. IEEE Trans. Pattern Anal. Mach. Intell. (2017). To appear
- 9. Finlayson, G.D., Mohammadzadeh Darrodi, M., Mackiewicz, M.: The alternating least squares technique for non-uniform intensity color correction. Color Res. Appl. 40(3), 232–242 (2015)
- 10. Gharbi, M., Shih, Y., Chaurasia, G., Ragan-Kelley, J., Paris, S., Durand, F.: Transform recipes for efficient cloud photo enhancement. ACM Trans. Graph. 34(6), 228 (2015)
- 11. Gong, H., Finlayson, G.D., Fisher, R.B.: Recoding color transfer as a color homography. In: British Machine Vision Conference. BMVA (2016)
- 12. Hwang, Y., Lee, J.Y., Kweon, I.S., Kim, S.J.: Color transfer using probabilistic moving least squares. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 3342–3349 (2014)
- 13. Ilie, A., Welch, G.: Ensuring color consistency across multiple cameras. In: IEEE International Conference on Computer Vision, vol. 2, pp. 1268–1275 (2005)
- 14. Maloney, L.: Evaluation of linear models of surface spectral reflectance with small numbers of parameters. J. Opt. Soc. Am. A 3, 1673–1683 (1986)
- 15. Marimont, D., Wandell, B.: Linear models of surface and illuminant spectra. J. Opt. Soc. Am. A 9(11), 1905–1913 (1992)
- 16. Nguyen, R.M.H., Kim, S.J., Brown, M.S.: Illuminant aware gamut-based color transfer. Comput. Graph. Forum 33(7), 319–328 (2014)
- 17. Paris, S., Durand, F.: A fast approximation of the bilateral filter using a signal processing approach. In: European Conference on Computer Vision, pp. 568–580 (2006)
- 18. Pitié, F., Kokaram, A.: The linear Monge-Kantorovitch linear colour mapping for example-based colour transfer. In: European Conference on Visual Media Production (IET), pp. 1–9 (2007)
- 19. Pitié, F., Kokaram, A., Dahyot, R.: N-dimensional probability density function transfer and its application to color transfer. Int. Conf. Comput. Vis. 2, 1434–1439 (2005)
- 20. Pitié, F., Kokaram, A.C., Dahyot, R.: Automated colour grading using colour distribution transfer. Comput. Vis. Image Underst. 107(1–2), 123–137 (2007)
- 21. Pouli, T., Reinhard, E.: Progressive histogram reshaping for creative color transfer and tone reproduction. In: ACM International Symposium on Non-Photorealistic Animation and Rendering, pp. 81–90, New York, USA (2010)
- 22. Rabin, J., Delon, J., Gousseau, Y.: Removing artefacts from color and contrast modifications. IEEE Trans. Image Process. 20(11), 3073–3085 (2011)
- 23. Reinhard, E., Ashikhmin, M., Gooch, B., Shirley, P.: Color transfer between images. IEEE Comput. Graph. Appl. 21(5), 34–41 (2001)
- 24. Robertson, A.R.: The CIE 1976 color-difference formulae. Color Res. Appl. 2(1), 7–11 (1977)
- 25. Tai, Y.W., Jia, J., Tang, C.K.: Local color transfer via probabilistic segmentation by expectation-maximization. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 747–754 (2005)
- 26. Takane, Y., Young, F.W., De Leeuw, J.: Nonmetric individual differences multidimensional scaling: an alternating least squares method with optimal scaling features. Psychometrika 42(1), 7–67 (1977)
- 27. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)
- 28. Wu, F., Dong, W., Kong, Y., Mei, X., Paul, J.C., Zhang, X.: Content-based colour transfer. Comput. Graph. Forum 32(1), 190–203 (2013)
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.