
Multimedia Tools and Applications, Volume 75, Issue 11, pp 6431–6443

Interactive image recoloring by combining global and local optimization

  • Xujie Li
  • Hanli Zhao
  • Hui Huang
  • Zhongyi Hu
  • Lei Xiao

Abstract

We propose a novel interactive image recoloring method that combines global and local optimization. Our approach assumes that each pixel is a linear transform of its neighbors, which can lie in spatial or feature space. Correspondingly, a new framework combining global and local energy optimization is designed and derived. By taking advantage of both global and local color propagation, our approach requires only a few user scribbles to produce high-quality results. We show various experimental results and comparisons on image recoloring. With only a small amount of user interaction, our approach produces higher-quality results than state-of-the-art methods that consider only local or only global propagation.

Keywords

Image recoloring · Global optimization · Local optimization · Color propagation

1 Introduction

Image recoloring is the process of modifying and adjusting the color appearance of images [2, 7, 18, 24, 25]. In recent years, it has become one of the most popular photo editing tasks. Image recoloring techniques can be broadly classified into example-based and scribble-based techniques [1, 19]. Although the use of a reference image can save considerable labor, the quality of an example-based result depends heavily on the choice of the reference image. In particular, these techniques may produce incorrect results when the illumination conditions of the reference image differ from those of the target image. In comparison, scribble-based techniques can achieve a satisfactory result with a few user scribbles that indicate the desired colors. Therefore, we focus on scribble-based techniques in this paper.

Scribble-based image recoloring techniques propagate user-provided color scribbles to similar neighboring pixels. According to the distribution of neighbors, they can be divided into local and global image recoloring methods. Local methods provide the user with good local control, but they perform poorly when recoloring regions relatively far from the provided color constraints, so considerable user interaction is needed to achieve a satisfactory result. In comparison, global methods can propagate color cues in a global manner, which reduces the amount of necessary user input, but they lack local or direct selection control: they suffer from mixing colors when similar neighboring pixels are scribbled with two different colors.

Previous local and global approaches thus each have their own limitations. Therefore, in order to reduce the amount of necessary user input while producing high-quality results, we develop a novel interactive image recoloring method that combines global and local optimization. This simple combination generates surprisingly good results, as demonstrated by various experiments.

Our main contribution is to combine global and local optimization over related pixels according to the user's interactive scribbles. Our approach assumes that each pixel is a linear transform of its neighbors, which can lie in spatial or feature space. The distribution of neighbors is determined by the user's interactive scribbles. To our knowledge, this technique has seldom been reported in previous research.

The remainder of this paper is organized as follows. Section 2 gives a brief survey of related work. In Section 3, we introduce the image recoloring framework of our algorithm, including the energy optimization issues and implementation details. Applications and results are discussed in Section 4. Conclusions and future work are given in Section 5.

2 Related work

The image recoloring techniques are related to image editing such as colorization, matting and tonal editing, etc. According to the distribution of neighbors, existing work on scribble-based recoloring techniques can be roughly divided into local image recoloring methods and global image recoloring methods.

The classical local image recoloring method was proposed by Levin et al. [16]. They formulate image recoloring as a constrained quadratic optimization problem under the assumption that spatial neighbors with similar intensities should have similar colors after recoloring. This method can achieve satisfactory results, but it requires considerable user interaction. Yatziv et al. [26] and Criminisi et al. [8] both produce high-quality recoloring results by chrominance blending using geodesic distance. Fattal [9] proposes a family of second-generation wavelets constructed using a robust data-prediction lifting scheme and applies it to image recoloring. Bhat et al. [3] present an optimization framework for exploring gradient-domain solutions to image recoloring. Farbman et al. [10] propose replacing Euclidean distances with diffusion distances for calculating the affinity among pixels in image editing. Approaches based on spatial-intensity continuity require a large number of user inputs for disjoint regions with similar texture. In order to propagate color in highly textured and fragmented regions, Qu et al. [27] and Sheng et al. [22, 23] employ texture continuity to recolor pattern-intensive manga and natural images, respectively.

Recently, global image recoloring methods have also been studied in the literature. Global methods can propagate color cues relatively far from the provided color constraints, which is complementary to local methods. An and Pellacini [1] develop a general and robust framework for image editing by efficiently approximating the all-pairs affinity matrix. Chen et al. [4] and Lee and Wu [15] define a new Laplacian based on the nonlocal principle for image matting. Musialski et al. [20] and Chen et al. [5] propose novel edit propagation algorithms for interactive image and video manipulation. These approaches use locally linear embedding (LLE) to represent each pixel as a linear combination of its neighbors in a feature space.

Recently, Chen et al. [6] proposed a novel alpha matting method with local and nonlocal smooth priors. In our previous research [13], we proposed a scribble-based colorization method that uses neighbors in a feature space combining color and spatial coordinates. However, the distribution of neighbors is restricted by the weight given to the image coordinates. On the one hand, it is usually unclear which matters more, local or global optimization, so it is nontrivial to set an appropriate range parameter restricting the spatial range of the nearest-neighbor search. On the other hand, the same neighbor-selection strategy is applied to the whole image, so the method cannot achieve global color propagation and provide good local control at the same time.

In contrast to all these techniques, we develop a novel interactive image recoloring method that combines global and local optimization. The distribution of neighbors is determined by the user's interactive scribbles: neighbors in spatial space yield local optimization of color propagation, while neighbors in nonlocal feature space yield global optimization. Because our approach applies global or local color propagation to different image regions according to the user interaction, accurate local control can be achieved by our local linear model, while our global model can propagate color cues in a global manner. With only a small amount of user interaction, our approach produces higher-quality results than approaches that consider only local or only global propagation.

3 Algorithm

The general workflow of our image recoloring framework is illustrated in Fig. 1. Our approach consists of four steps. (1) First, the user's interactive scribbles are added. We provide three types of tools with which the user can specify colors. The first tool is a color brush, which indicates the desired color after recoloring. The second tool is a color-keep brush: regions of the image marked with it remain unchanged. The third tool is a boundary brush, which is used only for extracting closed boundaries; it is an auxiliary tool that is not required in the recoloring process. (2) Next, closed boundaries are extracted according to the user scribbles. (3) Then, the image is recolored by global and local color propagation. (4) Finally, the recolored regions are combined to obtain the final result.
Fig. 1

The general workflow of the proposed image recoloring framework

The key assumption of our algorithm is that each pixel color F is a linear transform of its neighbors in YUV color space.
$$ F_{i}=\sum\limits_{j\in N_{i}}F_{j} $$
(1)
The neighbors N i can lie in spatial or feature space. The quality of color propagation depends primarily on the correctness of the neighbor selection. Neighbors in spatial space yield the local optimization of color propagation, while neighbors in nonlocal feature space yield the global optimization. Figure 2 shows nonlocal neighbors and spatial neighbors. It can be seen that Levin et al.'s approach [16] typically uses windows of 3 × 3 spatial connectivity, whereas Musialski et al.'s approach [20] searches for nonlocal nearest neighbors in the RGB feature space, which exploits long-range relationships among pixels. In order to enable both long-range color propagation and local control, we combine nonlocal neighbors and spatial neighbors through user interaction. In this paper, we formalize image recoloring as a combined local and global optimization problem.
Fig. 2

Comparison of neighbor selection at pixel (60, 30) (pink point) and pixel (100, 91) (purple point) with previous work, for an image of size 160 × 200

3.1 Local optimization

Levin et al. [16] applied the model (1) to recolor images using local spatial neighbors. However, they use only intensity information to propagate color cues, so more user interaction is required to recolor images. Our local optimization model is inspired by the local linear models of He et al. [11] and Levin et al. [17]. It also has a theoretical connection with the matting Laplacian matrix [12]. Here, a variation of the linear model (1) is introduced.
$$ F_{i}=\sum\limits_{c}{\alpha^{c}_{j}}{I_{i}^{c}}+\beta_{j} ,\,\forall i\in N_{j} $$
(2)
where \(({\alpha^{c}_{j}}, \beta_{j})\) are linear coefficients assumed to be constant in N j , N j is the set of pixels in a small image window around pixel j, I is the input image, and c indexes the color channels.
Using the linear model (2), we define the following cost function in YUV color space:
$$ J(F,\alpha,\beta)=\sum\limits_{j\in I}\left(\sum\limits_{i\in N_{j}}\left(\sum\limits_{c}{{\alpha^{c}_{j}}{I_{i}^{c}}}+\beta_{j}-F_{i}\right)^{2}+\varepsilon\sum\limits_{c} {\left({\alpha_{j}^{c}}\right)}^{2}\right) $$
(3)
where N j contains the local spatial neighbors in a small window around pixel j; we typically use windows of 3 × 3 pixels for all examples. ε is a regularizing constant chosen small so that it has an influence only in ambiguous cases (ε = 10−6 in our implementation).
The linear coefficients (α, β) can be estimated by minimizing the cost function
$$ J(F)=\underset{\alpha ,\beta }{\mathop{\min }}J(F,\alpha,\beta) $$
(4)
Rewriting (3) using matrix notation we obtain
$$ J(F,\alpha,\beta)=\sum\limits_{k}{\left\|H_{k}\left[ \begin{array}{l} \alpha_{k} \\ \beta_{k} \end{array} \right]-\overline{F_{k}}\right\|}^{2} $$
(5)
where H k is a (|N k | + 1) × 2 matrix whose rows are of the form [I i , 1] and whose last row is \([\sqrt {\varepsilon } , 0]\). \(\overline {F_{k}}\) is a (|N k | + 1) × 1 vector whose first |N k | entries are the values F i for i ∈ N k and whose last entry is 0. |N k | is the number of neighborhood pixels.
The above energy is quadratic in the variables (α, β). Taking its first derivative, we get
$$\begin{array}{@{}rcl@{}} \frac{\partial J(F,r)}{\partial r}&=&\frac{\partial \sum\limits_{k}{{\|H_{k}r-\overline{F_{k}}\|}^{2}}}{\partial r} =\frac{\partial \sum\limits_{k}{(H_{k}r-\overline{F_{k}})^{T}(H_{k}r-\overline{F_{k}})}}{\partial r} \\ &=&\sum\limits_{k}{2{H_{k}^{T}}H_{k}r-2{H_{k}^{T}}\overline{F_{k}}} \end{array} $$
(6)
where r = [α k , β k ] T . Setting ∂J(F, r)/∂r = 0, we obtain \([\alpha _{k}, \beta _{k}]^{T}=({H_{k}^{T}}H_{k})^{-1}{H_{k}^{T}}\overline {F_{k}}\). Substituting this solution into (4) yields
$$ J(F)=\sum\limits_{k}{{\left\|(H_{k}({H_{k}^{T}}H_{k})^{-1}{H_{k}^{T}}-I)\overline{F_{k}}\right\|}^{2}}=\sum\limits_{k}{\overline{{F_{k}^{T}}}\overline{{H_{k}^{T}}}\overline{H_{k}}\overline{F_{k}}} $$
(7)
where \(\overline {{H_{k}^{T}}}=H_{k}({H_{k}^{T}}H_{k})^{-1}{H_{k}^{T}}-I\). We define the matting Laplacian matrix \(L_{L}=\sum \limits _{k}{\overline {{H_{k}^{T}}}\overline {H_{k}}}\). Element (i, j) of L L is:
$$ \sum\limits_{k|(i,j)\in N_{k}}{\left(\delta_{ij}-\frac{1}{|N_{k}|}\left(1+(I_{i}-\mu_{k})^{T}\left(\sum\nolimits_{k}+\frac{\varepsilon}{|N_{k}|} I_{3}\right)^{-1}(I_{j}-\mu_{k})\right)\right)} $$
(8)
where δ i j is the Kronecker delta (1 if i = j and 0 otherwise), μ k and \({\sum }_{k}\) are the mean and covariance matrix of the pixel colors in window k, and I 3 is the 3 × 3 identity matrix.
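As an illustration, the per-window contribution of a single pair (i, j) in (8) can be computed directly. The following is a minimal sketch; the function name and interface are our own, and `eps` and the window size are illustrative:

```python
import numpy as np

def matting_laplacian_entry(I_i, I_j, mu_k, Sigma_k, n_k, eps=1e-6, same_pixel=False):
    """Contribution of one window k to element (i, j) of L_L, per Eq. (8).

    I_i, I_j : 3-vectors, colors of pixels i and j
    mu_k     : 3-vector, mean color of window k
    Sigma_k  : 3x3 covariance of the colors in window k
    n_k      : number of pixels in the window (|N_k|, e.g. 9 for a 3x3 window)
    """
    delta = 1.0 if same_pixel else 0.0          # Kronecker delta
    reg = Sigma_k + (eps / n_k) * np.eye(3)     # regularized covariance
    quad = (I_i - mu_k) @ np.linalg.inv(reg) @ (I_j - mu_k)
    return delta - (1.0 / n_k) * (1.0 + quad)
```

Summing this quantity over all windows containing both i and j gives the (i, j) element of L L.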
Combining (7) and (8), we can obtain
$$ J(F)=\underset{\alpha ,\beta }{\mathop{\min }}J(F,\alpha,\beta)={F^{T}}{L_{L}}F $$
(9)
We combine the local linear model with the user-specified constraints to obtain the following optimization:
$$ J(F)=\lambda_{L}(F-G)^{T}{D_{S}}(F-G)+{F^{T}}{L_{L}}F $$
(10)
The parameter λ L balances the two terms. D S is a diagonal matrix whose diagonal elements are one for constrained pixels and zero for all others. G is the vector containing the specified color values for the scribbled pixels and zero for all others. The first term ensures that the final result is close to the user-specified values G; the second term maintains local smoothness. The energy function J(F) in (10) is quadratic in the color component F, so the global minimum is obtained by setting the partial derivatives with respect to F to zero. This amounts to solving the following sparse linear system:
$$ (L_{L}+{\lambda_{L}}{D_{S}})F={\lambda_{L}}{D_{S}}G $$
(11)
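For illustration, the sparse system (11) can be solved per chrominance channel with an off-the-shelf solver. This is a sketch, not the paper's implementation (which uses Gauss-Seidel); the function name and the λ value are our own, and SciPy's `spsolve` stands in for brevity:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_local(L_L, scribbled, G, lam=100.0):
    """Solve (L_L + lam * D_S) F = lam * D_S G for one color channel, per Eq. (11).

    L_L       : (N, N) sparse matting Laplacian
    scribbled : boolean array of length N, True at scribbled (constrained) pixels
    G         : length-N array of user-specified values (0 at unconstrained pixels)
    """
    D_S = sp.diags(scribbled.astype(float))   # diagonal constraint indicator
    A = (L_L + lam * D_S).tocsr()
    return spsolve(A, lam * D_S @ G)
```

On a tiny 3-pixel chain Laplacian with pixel 0 scribbled to value 1, the smoothness term propagates the scribble so the solution is constant.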

3.2 Global optimization

This study extends our previous research on colorizing images using nonlocal neighbors [13]. A new feature space is constructed. First, the K nonlocal nearest neighbors in this feature space are searched. For image recoloring, the feature vector X(i) at a given pixel i, which includes the luminance channel Y and the chrominance channels U and V, is defined as
$$ X(i)=(Y(i),U(i),V(i)) $$
(12)
Each pixel finds its neighbors in the YUV feature space. Searching the K nearest neighbors is made easy by using the FLANN or ANN library [21], which has proven very efficient in practice. According to the linear model (1), we minimize the difference between the color component F i at pixel i and the weighted average of the colors at its nonlocal neighboring pixels:
$$ E(F)=\sum\limits_{i\in I}{\left(F_{i}-\sum\limits_{j\in N_{i}}{\alpha_{ij}F_{j}}\right)^{2}}+\lambda_{G}\sum\limits_{i\in S}(F_{i}-G_{i})^{2} $$
(13)
where i and j are pixel indices. Note that in (13), N i denotes the nonlocal nearest neighbors of pixel i in the feature space of image I. The subset of pixels S contains the pixels with known color values from user scribbles. G i is the vector containing the specified color values for the scribbled pixels and zero for all others. λ G balances the two terms.
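The nearest-neighbor search described above can be sketched with a k-d tree; this is an illustrative stand-in for the FLANN/ANN libraries, with a function name and interface of our own:

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_feature_space(yuv, K=10):
    """Find each pixel's K nearest neighbors in the YUV feature space (Eq. 12).

    yuv : (H, W, 3) image in YUV. Returns an (H*W, K) array of flat pixel
    indices of the nonlocal neighbors of each pixel.
    """
    X = yuv.reshape(-1, 3).astype(np.float64)   # one feature vector per pixel
    tree = cKDTree(X)
    # query K+1 neighbors and drop the first column:
    # each pixel is its own nearest neighbor at distance 0
    _, idx = tree.query(X, k=K + 1)
    return idx[:, 1:]
```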
The weighting function for soft segmentation is introduced.
$$ \alpha_{ij}=1-\left(\frac{\| X(i)-X(j)\|}{Y}\right) $$
(14)
where Y is the least upper bound of ∥X(i) − X(j)∥, which ensures α i j ∈ [0, 1].
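A sketch of the weight computation in (14), taking Y as the maximum observed neighbor distance (a valid upper bound). The final row normalization is our assumption, made so that each pixel's weights form the convex combination implied by the linear model (1):

```python
import numpy as np

def affinity_weights(X, idx):
    """Weights alpha_ij of Eq. (14) for each pixel and its K neighbors.

    X   : (N, 3) feature vectors (one per pixel)
    idx : (N, K) neighbor indices from the nearest-neighbor search
    """
    d = np.linalg.norm(X[:, None, :] - X[idx], axis=2)   # (N, K) distances
    Y = d.max() if d.max() > 0 else 1.0                  # upper bound on distances
    alpha = 1.0 - d / Y                                  # Eq. (14), in [0, 1]
    # normalize rows to sum to 1 (our assumption, see lead-in)
    return alpha / np.maximum(alpha.sum(axis=1, keepdims=True), 1e-12)
```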
The energy function E(F) in (13) can be rewritten using the matrix notation as the following:
$$\begin{array}{@{}rcl@{}} E(F)&=&(F-AF)^{T}(F-AF)+\lambda_{G}(F-G)^{T}D_{S}(F-G)\\ &=&F^{T}(I-A)^{T}(I-A)F+\lambda_{G}(F-G)^{T}D_{S}(F-G) \end{array} $$
(15)
Here, D S is a diagonal matrix whose diagonal entries are 1 if i ∈ S and 0 otherwise. I represents the N × N identity matrix, and A is the sparse matrix of weights α i j . We define the global Laplacian L G = (I − A) T (I − A).
The energy function E(F) in (15) is quadratic in the color component F, so the global minimum min {E(F)} is obtained by setting the partial derivatives with respect to F to zero. This amounts to solving the following sparse linear system:
$$ (L_{G}+{\lambda_{G}}{D_{S}})F={\lambda_{G}}{D_{S}}G $$
(16)
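Assembling L G from the neighbor weights and solving (16) might look as follows. This is a sketch under our own naming; the λ G value is illustrative, and a direct sparse solver stands in for the iterative solver used in the paper:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_global(alpha, idx, scribbled, G, lam=1.0):
    """Build L_G = (I - A)^T (I - A) and solve (L_G + lam*D_S) F = lam*D_S G (Eq. 16).

    alpha     : (N, K) neighbor weights alpha_ij
    idx       : (N, K) neighbor indices
    scribbled : boolean array, True at scribbled pixels
    G         : length-N array of user-specified values
    """
    N, K = alpha.shape
    rows = np.repeat(np.arange(N), K)
    A = sp.csr_matrix((alpha.ravel(), (rows, idx.ravel())), shape=(N, N))
    IA = sp.identity(N) - A
    L_G = IA.T @ IA                              # global Laplacian
    D_S = sp.diags(scribbled.astype(float))      # constraint indicator
    return spsolve((L_G + lam * D_S).tocsr(), lam * D_S @ G)
```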

It is notable that (11) and (16) are similar in form. However, the main distinction between the global and local optimization is the distribution of neighbors. The user can use the brush tools to explicitly select whether local or global optimization is applied to each image region; the neighbor-selection strategy is therefore explicit for each region. The local optimization solver (11) and the global optimization solver (16) are then applied to each region separately. Both (11) and (16) are large sparse systems. Finally, in our implementation we use the Gauss-Seidel method to iteratively solve these systems and obtain the optimal solutions.
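A plain Gauss-Seidel sweep for such a system can be sketched as follows (a dense matrix is shown for clarity; a real implementation would exploit the sparsity of (11) and (16), visiting only each pixel's few neighbors):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, iters=200):
    """Gauss-Seidel iteration for A x = b.

    Each sweep updates x[i] in place using the latest values of the
    other unknowns, which converges for diagonally dominant or
    symmetric positive-definite systems such as (11) and (16).
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(iters):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] = (b[i] - s) / A[i, i]
    return x
```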

4 Applications and results

Figure 3 shows image recoloring results with various neighborhood sizes. When the neighborhood size K is set too small (K = 2), some local areas are not recolored. However, as K becomes large, the quality of the results is almost unaffected. A typical neighborhood size K is from 5 to 15. It can be seen that the quality of the results is insensitive to the neighborhood size within this range. Therefore, we fix K = 10 in our experiments.
Fig. 3

The image recoloring results with various neighborhood sizes

Figure 4 shows a comparison of image recoloring with the color replacement tool in Photoshop CS5. Our approach produces a high-quality result, as shown on the right. In comparison, the color replacement tool in Photoshop produces the unsatisfactory result in the middle; much more user interaction would be required to achieve a result of similar quality to ours. Our interactive approach combining global and local optimization is thus more competitive than the color replacement tool in Photoshop.
Fig. 4

Comparison on image recoloring. Left: the input image with user scribbles. Middle: the result from Photoshop CS5 (Replace Color). Right: our result

Figure 5 compares the amount of user strokes required by different local recoloring approaches to achieve results of similar quality to ours. Because our approach considers both global and local color propagation, it requires only a few user scribbles to produce high-quality results. Local recoloring approaches, in contrast, only propagate colors to spatial neighbors within a small local window (3 × 3), and therefore need considerably more user interaction to achieve comparable results.
Fig. 5

Comparisons of the amount of user strokes for different local recoloring approaches

In Fig. 6, we compare our results with previous work. Musialski et al.'s [20] global approach suffers from mixing the two scribble colors no matter how the scribbles are placed; adding more scribbles worsens this problem owing to the inherent global influence. The approach of Li et al. [13] uses nonlocal neighbors in a feature space combining color and spatial coordinates. However, it is nontrivial to set an appropriate range parameter, which restricts the spatial range of the nearest-neighbor search, and the parameter is fixed for every pixel. The method of Li et al. therefore balances global and local color propagation by adjusting the spatial range parameter. Although it can propagate color relatively far from the provided color constraints, it cannot achieve global color propagation and good local control at the same time. Our approach achieves pleasing recoloring results with only a small amount of user interaction: accurate local control is achieved by our local linear model, while our global model propagates color cues in a global manner. Note that such capabilities are unattainable for approaches that consider only local or only global propagation. It can also be seen that the local recoloring approach of Levin et al. works well for pixels near the scribbles, but performs poorly when recoloring regions relatively far from the provided color constraints; additional color scribbles must be added to improve the quality of the recolored result. Compared to Musialski et al.'s and Levin et al.'s methods, our method produces a pleasing visual effect with both local and global control. Although there is no standardized benchmark database for evaluating the quality of image recoloring, the results are fortunately easy to evaluate visually.
In order to perform a quantitative comparison, simple synthetic images with known ground truth are used, as shown in Fig. 7. Table 1 shows the MSE (mean squared error) of the recolored results in Fig. 7. It can be seen that our image recoloring method outperforms these state-of-the-art methods.
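The MSE used in Table 1 is the standard mean squared error; a minimal sketch:

```python
import numpy as np

def mse(recolored, ground_truth):
    """Mean squared error between a recolored image and its ground truth."""
    diff = recolored.astype(np.float64) - ground_truth.astype(np.float64)
    return float(np.mean(diff ** 2))
```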
Fig. 6

Comparisons of image recoloring with previous work

Fig. 7

Quantitative comparison of image recoloring results

Table 1

MSE (mean squared error) of the recolored results in Fig. 7

                 Ours       Levin's    Musialski's   Li's
                 algorithm  algorithm  algorithm     algorithm
MSE (10−4)       3.3        65         8.9           7.5

Figure 8 shows an example of selective image decoloring compared to previous approaches. Note that Musialski et al.'s global approach fails to produce visually plausible results no matter how the scribbles are placed.
Fig. 8

Comparisons of selective image decoloring

Examples of progressive image color editing with our approach are shown in Fig. 9, demonstrating that our approach is a simple yet effective image recoloring method; traditional recoloring methods find it comparatively difficult to produce similar results with only a small amount of user interaction. A limitation of our approach is that the quality of recoloring relies on the user scribbles. As shown in Fig. 10, incomplete scribbles may incur color bleeding. In order to produce a visually plausible result, more scribbles need to be drawn, as shown in Fig. 10, which may be tedious when the image is complex and contains many colors.
Fig. 9

Examples of progressive image color editing with our approach

Fig. 10

Examples of image recoloring with incomplete scribbles

5 Conclusions

We have presented a novel interactive image recoloring framework that requires only a small amount of user interaction. Experiments have demonstrated that our approach produces higher-quality results than approaches that consider only local or only global propagation, while using only a small amount of user interaction. As our method can be parallelized, we plan to further accelerate the algorithm by employing GPU implementations of KNN search and PCG (preconditioned conjugate gradient) [14] to solve the linear systems, aiming at near real-time processing in the future.


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. 61100146), the Zhejiang Provincial Natural Science Foundation of China (Grant Nos. LQ14F020006, LQ12F02010, LY12F02015 and LY12F02014), and the Science and Technology Plan Program of Wenzhou, China (Grant Nos. G20130017 and S20100053).

References

  1. An X, Pellacini F (2008) AppProp: All-pairs appearance-space edit propagation. ACM Trans Graph 27(3):40:1–40:9
  2. Beigpour S, van de Weijer J (2011) Object recoloring based on intrinsic image estimation. In: Proceedings of the 2011 International Conference on Computer Vision, ICCV '11, pp 327–334. IEEE Computer Society, Washington, DC, USA
  3. Bhat P, Lawrence Zitnick C, Cohen M, Curless B (2010) GradientShop: A gradient-domain optimization framework for image and video filtering. ACM Trans Graph 29(2):10:1–10:14
  4. Chen Q, Li D, Tang C-K (2013) KNN matting. IEEE Trans Pattern Anal Mach Intell 35(9):2175–2188
  5. Chen X, Zou D, Zhao Q, Tan P (2012) Manifold preserving edit propagation. ACM Trans Graph 31(6):132:1–132:7
  6. Chen X, Zou D, Zhou SZ, Zhao Q, Tan P (2013) Image matting with local and nonlocal smooth priors. In: Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, CVPR '13, pp 1902–1907. IEEE Computer Society, Washington, DC, USA
  7. Cohen-Or D, Sorkine O, Gal R, Leyvand T, Xu Y-Q (2006) Color harmonization. ACM Trans Graph 25(3):624–630
  8. Criminisi A, Sharp T, Rother C, Pérez P (2010) Geodesic image and video editing. ACM Trans Graph 29(5):134:1–134:15
  9. Fattal R (2009) Edge-avoiding wavelets and their applications. ACM Trans Graph 28(3):22:1–22:10
  10. Farbman Z, Fattal R, Lischinski D (2010) Diffusion maps for edge-aware image editing. ACM Trans Graph 29(6):145:1–145:10
  11. He K, Sun J, Tang X (2010) Guided image filtering. In: Proceedings of the 11th European Conference on Computer Vision: Part I, ECCV '10, pp 1–14. Springer-Verlag, Berlin, Heidelberg
  12. Hsu E, Mertens T, Paris S, Avidan S, Durand F (2008) Light mixture estimation for spatially varying white balance. ACM Trans Graph 27(3):70:1–70:7
  13. Huang H, Li X, Zhao H, Nie G, Hu Z, Xiao L (2014) Manifold-preserving image colorization with nonlocal estimation. Multimedia Tools and Applications:1–14
  14. Krishnan D, Szeliski R (2011) Multigrid and multilevel preconditioners for computational photography. ACM Trans Graph 30(6):177:1–177:10
  15. Lee P, Wu Y (2011) Nonlocal matting. In: Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, CVPR '11, pp 2193–2200. IEEE Computer Society, Washington, DC, USA
  16. Levin A, Lischinski D, Weiss Y (2004) Colorization using optimization. ACM Trans Graph 23(3):689–694
  17. Levin A, Rav-Acha A, Lischinski D (2008) Spectral matting. IEEE Trans Pattern Anal Mach Intell 30(10):1699–1712
  18. Lin S, Ritchie D, Fisher M, Hanrahan P (2013) Probabilistic color-by-numbers: Suggesting pattern colorizations using factor graphs. ACM Trans Graph 32(4):37:1–37:12
  19. Liu X, Wan L, Qu Y, Wong T-T, Lin S, Leung C-S, Heng P-A (2008) Intrinsic colorization. ACM Trans Graph 27(5):152:1–152:9
  20. Musialski P, Cui M, Ye J, Razdan A, Wonka P (2013) A framework for interactive image color editing. Vis Comput 29(11):1173–1186
  21. Olonetsky I, Avidan S (2012) TreeCANN - k-d tree coherence approximate nearest neighbor algorithm. In: Proceedings of the 12th European Conference on Computer Vision - Volume Part IV, ECCV '12, pp 602–615. Springer-Verlag, Berlin, Heidelberg
  22. Sheng B, Sun H, Magnor M, Li P (2014) Video colorization using parallel optimization in feature space. IEEE Transactions on Circuits and Systems for Video Technology 24(3):407–417
  23. Sheng B, Sun H, Chen S, Liu X, Wu E (2011) Colorization using the rotation-invariant feature space. IEEE Comput Graph Appl 31(2):24–35
  24. Seo S, Park Y, Ostromoukhov V (2013) Image recoloring using linear template mapping. Multimedia Tools Appl 64(2):293–308
  25. Wang B, Yu Y, Wong T-T, Chen C, Xu Y-Q (2010) Data-driven image color theme enhancement. ACM Trans Graph 29(6):146:1–146:10
  26. Yatziv L, Sapiro G (2006) Fast image and video colorization using chrominance blending. Trans Img Proc 15(5):1120–1129
  27. Qu Y, Wong T-T, Heng P-A (2006) Manga colorization. ACM Trans Graph 25(3):1214–1220

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  • Xujie Li 1
  • Hanli Zhao 1
  • Hui Huang 1
  • Zhongyi Hu 1
  • Lei Xiao 1

  1. Intelligent Information Systems Institute, Wenzhou University, Wenzhou, China
