An analysis of single image defogging methods using a color ellipsoid framework
Abstract
The goal of this article is to explain how several single image defogging methods work using a color ellipsoid framework. The foundation of the framework is the atmospheric dichromatic model, which is analogous to the reflectance dichromatic model. A key step in single image defogging is the ability to estimate relative depth; therefore, properties of the color ellipsoids are tied to depth cues within an image. This framework is then extended using a Gaussian mixture model to account for multiple mixtures, which provides intuition for more complex observation windows, such as windows spanning depth discontinuities, a common problem in single image defogging. A few single image defogging methods are analyzed within this framework and shown, perhaps surprisingly, to share a common approach: the use of a dark prior. A new single image defogging method based on the color ellipsoid framework is introduced and compared to existing methods.
Keywords
Gaussian mixture model, median operator, sample window, mixture weight, depth discontinuity

1 Introduction
The phrase single image defogging is used to describe any method that removes atmospheric scattering (e.g., fog) from a single image. In general, the act of removing fog from an image increases the contrast. Thus, single image defogging is a special subset of contrast restoration techniques.
In this article, we refer to fog as the homogeneous scattering medium made up of molecules large enough to equally scatter all wavelengths as described in [1]. Thus, the fog we are referring to is evenly distributed and colorless.
The process of removing fog from an image (defogging) requires knowledge of the physical characteristics of the scene. One of these characteristics is the depth of the scene, measured from the camera sensor to the objects in the scene. If scene depth is known, then the problem of removing fog becomes much easier. Ideally, given a single image, two images are obtained: a scene depth image and a contrast restored image.
The essential problem that must be solved in most single image defogging methods is scene depth estimation. This is equivalent to converting a two-dimensional image into a three-dimensional one using only a single image as input. Estimating scene depth for the purpose of defogging is not trivial and requires prior knowledge such as depth cues from fog or atmospheric scattering.
$$t_i(\lambda) = e^{-\beta_i(\lambda)\, d_i} \quad (1)$$
where at pixel location i, the transmission t_{ i } is a function of the scattering β_{ i }(λ) and distance d_{ i }. The term λ is the specific wavelength.
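As a quick numerical illustration of this exponential decay, the sketch below uses a hypothetical scattering coefficient and a few hypothetical depths (both assumed, not taken from the article):

```python
import numpy as np

# Transmission t = exp(-beta * d) from (1): decays toward 0 as depth grows.
# beta and d are illustrative assumptions, not values from the article.
beta = 0.05                       # scattering coefficient per meter
d = np.array([0.0, 10.0, 50.0])   # distances in meters
t = np.exp(-beta * d)             # transmission for each depth
```

At d = 0 the transmission is exactly 1; deeper pixels transmit less, which is the depth cue that defogging methods exploit.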
Even though depth from scattering is a well-known phenomenon, single image defogging is relatively new, and a growing number of methods exist. The first methods to achieve single image defogging were presented by Tan [7] and Fattal [8]. Both authors introduced unique methods that remove fog from a single image by inferring the transmission image or map. Soon afterwards, another unique method called the dark channel prior (DCP) by He et al. [9] supported the ability to infer a raw estimate of t using a single image with fog present. The DCP method has also influenced many more single image defogging methods (see [10, 11, 12, 13, 14, 15, 16]). Within the same time frame, Tarel and Hautière [17] introduced a fast single image defogging method that also estimates the transmission map.
$$\hat{t}_i = 1 - w\,\theta_i \quad (2)$$
where w is a scaling term, and θ is a ‘dark prior’. The DCP method by He et al. [9] was the first to explicitly use (2); however, we demonstrate that this is the prototype used also by other methods regardless of their approach. We find that the dark prior is dependent on properties from the proposed color ellipsoid framework. The following single image defogging methods are analyzed within the framework: Fattal [8], He et al. [9], Tarel and Hautière [17], and Gibson et al. [16].
The second key message of this article is a new single image defogging method. This method is developed using a lemma from the color ellipsoid framework and also estimates the transmission with the same prototype in (2).
There are eight sections in this article, including this one. Section 2 presents a detailed description of the atmospheric dichromatic model. Section 3 introduces the color ellipsoid framework. Section 4 analyzes the framework when fog is present, and our new defogging method is introduced in Section 5. We then unify four different single image defogging methods using the color ellipsoid model in Section 6. The discussion and conclusion are provided in Sections 7 and 8, respectively.
2 Atmospheric dichromatic model
The atmospheric dichromatic model
$$\tilde{x}_i(\lambda) = t_i(\lambda)\,x_i(\lambda) + (1 - t_i(\lambda))\,a(\lambda) \quad (3)$$
is commonly used in single image defogging methods for characterizing the intensity of a foggy pixel.
In comparison to the dichromatic reflectance model [19], the diffuse and specular surface reflections are analogous to the direct transmission, t_{ i }(λ)x_{ i }(λ), and atmospheric veiling, (1 − t_{ i }(λ))a(λ), respectively. Atmospheric scattering causes the apparent radiance to have two chromatic artifacts: particles in the air both attenuate the direct transmission and add light induced by a diffuse light source.
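A minimal sketch of the dichromatic model (3), with illustrative (assumed) values for the fog-free radiance, airlight, and transmission:

```python
import numpy as np

# Atmospheric dichromatic model (3): a foggy pixel is a convex
# combination of the direct transmission t*x and the veiling (1-t)*a.
# All values below are illustrative assumptions.
x = np.array([0.2, 0.5, 0.3])   # true (fog-free) RGB radiance
a = np.array([0.9, 0.9, 0.9])   # airlight color (near-white fog)
t = 0.6                          # transmission at this pixel
x_foggy = t * x + (1.0 - t) * a  # observed foggy pixel
```

As t shrinks (deeper scene points), the observed color slides along the segment toward the airlight a.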
For obtaining a defogged image, the goal is to estimate the p-channel color image $(x(0), x(1), \dots, x(p-1))^T = \mathbf{x} \in \mathbb{R}^p$ using the dichromatic model (3). For most cases, p = 3 for color images. However, the problem with (3) is that it is underconstrained, with one equation and four unknowns for each color channel. Note that there are two unknowns contained within the transmission, t(λ), in (1).
The first unknown is the desired defogged image x. The second unknown variable is the airlight color, $(a(0), \dots, a(p-1))^T = \mathbf{a} \in \mathbb{R}^p$. This is the color and intensity observed from a target when the distance is infinite. A good example is the color of the horizon on a foggy or hazy day.
The third and fourth unknowns are from the transmission introduced in (1). The transmission, $t_i(\lambda) \in \mathbb{R}$, is an exponentially decaying function of the scattering, β_{ i }(λ), and distance d_{ i }.
Assuming the fog is colorless, so that the scattering, and hence the transmission, is independent of wavelength (t_{ i }(λ) = t_{ i }), brings the unknown count down to a total of two for gray scale or four for red-green-blue (RGB) color, excluding the estimation of x. For gray scale, the transmission t is the first unknown and the airlight a is the second. For color (p = 3), the transmission t is one unknown and the airlight a contributes three.
The single image defogging problem is composed of two estimations using only the input image $\tilde{\mathbf{x}}$: the first is to estimate the airlight a, and the second is to estimate the transmission t.
There exist several methods for estimating a [7, 9, 18]. In this article, we will assume that the airlight has been estimated accurately in order to focus the analysis on how transmission is estimated (with possible need for refinement). Therefore, the key problem in single image defogging is estimating transmission given a foggy image.
3 Color ellipsoid framework without fog
The general color ellipsoid model and its application to single image defogging were introduced by Gibson and Nguyen in [20] and [21]. This work is reproduced here to facilitate the development of additional properties of the model in this article.
The motivation for approximating a color cluster with an ellipsoid is attributed to the color line model in [22] which is heavily dependent on the work from [23]. The color line model exploits the complex structure of RGB histograms in natural images. This line is actually an approximation of an elongated cluster where Omer and Werman [22] model the cluster with a skeleton and a 2D Gaussian neighborhood. Likewise, truncated cylinders are used in [23].
We continue the thought presented by Omer and Werman [22] that subsets of these clusters are ellipsoidal in shape. We accomplish this by instead generating an RGB histogram using color pixels sampled from a small window within the image.
The sample covariance admits the eigendecomposition $\mathit{\Sigma}_i = \mathbf{V}_i \mathbf{D}_i \mathbf{V}_i^T$, with the eigenvalues in $\mathbf{D}_i = \mathrm{diag}(\sigma_{i,1}^2, \dots, \sigma_{i,3}^2)$ sorted in decreasing order.
The color ellipsoid $\mathcal{E}_c(\mu_i, \mathit{\Sigma}_i)$ is parameterized by the sample mean μ_{ i } and sample covariance Σ_{ i }. We will drop the parameters for clarity so that $\mathcal{E}_c(\mu_i, \mathit{\Sigma}_i) = \mathcal{E}_c$.
It is common to assume that the distribution of the color values sampled within Ω_{ i } is normally distributed or can be modeled with an ellipsoid. The distribution for the tristimulus values of color textures was assumed to be normally distributed by Tan [24]. Even though Devaux et al. [25] do not state that the sample points are normally distributed, they model the color textures with a three-dimensional ellipsoid using the Karhunen-Loève transform. Kuo and Chang [26] sample the entire image and characterize the distribution as a mixture of Gaussians with K clusters.
In Figure 1c, we approximated color ellipsoids to each cluster using principal component analysis, where the sample mean and sample covariances were used. In Figure 1b,c, the upper cluster is from the road and the lower cluster is from the tree trunk. Approximating the RGB clusters with an ellipsoidal shape does well in characterizing the threedimensional density of the cluster of points.
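The fitting step described above can be sketched with NumPy; the random patch below stands in for color pixels sampled from a real window Ω_{ i }:

```python
import numpy as np

# Fit a color ellipsoid to an RGB patch: sample mean, sample covariance,
# and an eigendecomposition with eigenvalues sorted in decreasing order.
# The synthetic patch is a stand-in for pixels from a real sample window.
rng = np.random.default_rng(0)
patch = rng.normal([0.4, 0.3, 0.2], [0.08, 0.02, 0.01], size=(121, 3))

mu = patch.mean(axis=0)                  # ellipsoid centroid
sigma = np.cov(patch, rowvar=False)      # 3x3 sample covariance
evals, evecs = np.linalg.eigh(sigma)     # eigh returns ascending order
order = np.argsort(evals)[::-1]          # re-sort decreasing
evals, evecs = evals[order], evecs[:, order]
# The leading eigenvector gives the dominant axis of the color cluster.
```

The principal axes and eigenvalues are exactly the quantities the framework uses to describe the ellipsoid's orientation and size.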
4 Color ellipsoid framework with fog
4.1 General properties
We derive in this section the constraints for color ellipsoids when fog is present. We first simplify the derivation by assuming that the surface of the radiant object within the sample window is flat with respect to the observation angle so that the transmission t_{ i } is the same within Ω_{ i } (t_{ i } = t).
Note that the transmission is the same within the patch because it is assumed that the depth is flat.
The RGB histogram of the surface and a foggy version of the surface should exhibit two main differences. The first is that the RGB cluster will translate along the convex set between μ_{ i } and a according to (10). Second, with 0≤t_{ i } ≤ 1, the size of the cluster will become smaller when fog is present according to (11). In this article, we present the following new lemmas.
Lemma 1.
If the scene is viewed through fog (β > 0) at a nonzero distance (d > 0), then 0 ≤ t < 1.
Proof.
Let β > 0 since the scene is viewed within the fog. Then, t = e^{−βd} = 1 holds if and only if d = 0. However, in real-world images, the distance to the camera is never zero (d > 0); therefore 0 ≤ t < 1. □
Lemma 2.
If the parameters μ and $\tilde{\mu}$ are formed according to (10), and $\|\mu\|^2, \|\tilde{\mu}\|^2, \|\mathbf{a}\|^2 \in \mathbb{R}_{>0}$, then the centroid of $\mathcal{E}_c$ is closer to the origin than the centroid of $\tilde{\mathcal{E}}_c$.
Proof.
□
Lemma 3.
The volume of the color ellipsoid $\mathcal{E}_c$ is larger than that of the foggy color ellipsoid $\tilde{\mathcal{E}}_c$.
Proof.
□
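The claim is consistent with the following short calculation, assuming the foggy covariance scales as $\tilde{\mathit{\Sigma}} = t^2 \mathit{\Sigma}$ (the scaling implied by (11)) together with $0 \le t < 1$ from Lemma 1:

```latex
\mathrm{vol}(\mathcal{E}) \propto \sqrt{\det \mathit{\Sigma}}, \qquad
\det \tilde{\mathit{\Sigma}} = \det\!\left(t^{2} \mathit{\Sigma}\right) = t^{2p} \det \mathit{\Sigma}
\;\Rightarrow\;
\mathrm{vol}(\tilde{\mathcal{E}}_c) = t^{p}\, \mathrm{vol}(\mathcal{E}_c) < \mathrm{vol}(\mathcal{E}_c).
```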
4.2 Color ellipsoid model with depth discontinuity
We have assumed in the previous section that the transmission within a sample window is constant. However, this is not always true. For example, the sample window may be centered on a depth discontinuity (e.g., edge of a building).
If depth discontinuities are not accounted for in transmission estimation, then undesired artifacts will be present in the contrast restored image. These artifacts are discussed in more detail in [9, 16, 17]. In summary, they appear as halos at depth edges.
To account for the possibility that the sample window is over a depth discontinuity, we characterize the pixels observed within Ω as a Gaussian mixture model [27]. The sample window may cover K different types of objects. This yields K clusters in the RGB histogram.
The parameter vector $\Theta_K = (\tilde{\mu}_1, \dots, \tilde{\mu}_K, \tilde{\mathit{\Sigma}}_1, \dots, \tilde{\mathit{\Sigma}}_K)$ is a collection of the K Gaussian mean and covariance parameters defined by Equations 10 and 11, respectively. The mixture weight π_{i,g} is $|\Omega_{i,g}|/|\Omega_i|$ with $\sum_{g=1}^{K} \pi_{i,g} = 1$.
The resulting mixture density has a shape influenced by the mixture weights.
respectively. The position $\tilde{\mu}$ of the ellipsoid is no longer influenced by the transmission alone; the mixture weight also shifts the sample mean. Therefore, an ambiguity exists because the mixture weight and transmission appear as the product π_{1}t_{1}. In order to use the sample mean to estimate the transmission value, the mixture weight must be considered.
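To make the mixture weights concrete, the sketch below counts pixels in a hypothetical window that straddles two objects (the split is an illustrative assumption):

```python
import numpy as np

# Mixture weights at a depth discontinuity: if a sample window Omega
# straddles K = 2 objects, then pi_g = |Omega_g| / |Omega|.
# An 11x11 window half on a near object, half on a far one.
window = np.zeros((11, 11), dtype=int)
window[:, 6:] = 1                  # columns 6..10 belong to object 2

counts = np.bincount(window.ravel(), minlength=2)
pi = counts / window.size          # mixture weights, sum to 1
```

The ambiguity discussed above is that only the product of such a weight and the transmission is observable from the sample mean.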
5 Proposed ellipsoid prior method
Part of our key message in unifying existing defogging methods is that the transmission can be estimated using parameters from $\tilde{\mathcal{E}}_c$. As an introduction to this unification, we will use Lemma 2 to derive a new unique dark prior.
Similar to the nomenclature in [9], let the centroid prior, θ_{ C }, be the dark prior using Lemma 2.
where c is the color channel.
with t_{0} set to a low value for numerical conditioning (t_{0} = 0.001) (see the work in [9] for the recovery method and [17] for additional gamma corrections). For generating the defogged image using the centroid prior, $\hat{\mathbf{x}}_C$, a gamma value of 1/2 was used for the examples in this article, e.g., $\hat{\mathbf{x}}_C^{1/2}$. The complete algorithm for the ellipsoid prior defogging method is given in Algorithm 1.
Algorithm 1 The ellipsoid prior defogging algorithm.
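The recovery step described above can be sketched as follows; the observed pixel, airlight, and transmission values are illustrative assumptions:

```python
import numpy as np

# Recover a defogged pixel from the dichromatic model:
# x_hat = (x_tilde - a) / max(t, t0) + a, with t0 a small floor for
# numerical conditioning, followed by a gamma of 1/2 as in the article.
t0 = 0.001
x_tilde = np.array([0.48, 0.66, 0.54])   # observed foggy pixel
a = np.array([0.9, 0.9, 0.9])            # estimated airlight
t_hat = 0.6                              # estimated transmission

x_hat = (x_tilde - a) / max(t_hat, t0) + a
x_out = np.clip(x_hat, 0.0, 1.0) ** 0.5  # gamma correction (1/2)
```

The floor t_{0} prevents division blow-up at pixels where the estimated transmission approaches zero (very deep scene points).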
The transmission estimate in (26) is of the same prototype form in (2). Deriving a transmission estimate based on Lemma 2 results in creating a centroid prior that is a function of the ellipsoid parameters. In Section 6, we will show that other single image defogging methods also use the prototype in (2) where a dark prior is used. We will also show that the dark prior is a function of the color ellipsoid properties.
6 Unification of single image defogging methods
The color ellipsoid framework will now be used to analyze how four single image defogging methods (Fattal [8], He [9], Gibson [16], and Tarel [17]) estimate the transmission using properties of the color ellipsoids.
6.1 Dark channel prior
In [20], the dark channel prior (DCP) method [9] was explained using a minimum volume ellipsoid which we will reproduce here for completeness.
$$\hat{t}_{D,i} = 1 - w\,\min_{j \in \Omega_i}\left(\min_{c \in \{r,g,b\}} \frac{\tilde{x}_j(c)}{a(c)}\right) \quad (31)$$
with w = 0.95 for most scenes. This DCP transmission estimate in (31) is of the same form as (2).
It was observed experimentally by He et al. [9] that the DCP of non-foggy outdoor natural scenes had 90% of the pixels below a tenth of the maximum possible value, hence the dark nomenclature in DCP. The estimate $\hat{t}_D$ is constructed under the assumption that some pixel within the sample region centered at i was originally black. This is a strong assumption, and there must be more to why this initial estimate works.
with equivalence when z_{ c } ∈ Ω_{ i } since a point from the set Ω_{ i } is selected instead of the estimated shell of the ellipsoid.
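The raw DCP estimate can be sketched as follows; the minimum filter is written as a plain loop, and the tiny image and values are illustrative, not from the article:

```python
import numpy as np

# Dark channel prior: theta_D at pixel i is the minimum over color
# channels and a window Omega_i of x_tilde/a; the raw transmission
# is then t_hat = 1 - w * theta_D with w = 0.95.
def dark_channel(img, a, radius):
    norm = img / a                          # per-channel normalization
    dark = norm.min(axis=2)                 # min over color channels
    h, w = dark.shape
    out = np.empty_like(dark)
    for y in range(h):                      # min filter over Omega_i
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = dark[y0:y1, x0:x1].min()
    return out

img = np.full((5, 5, 3), 0.8)               # bright foggy patch
img[2, 2] = (0.1, 0.2, 0.3)                 # one dark pixel
a = np.array([0.9, 0.9, 0.9])
t_hat = 1.0 - 0.95 * dark_channel(img, a, radius=1)
```

Windows containing the dark pixel yield a small prior and hence a large transmission estimate, matching the assumption that some pixel in each region was originally black.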
The right hand side of (36) was chosen by He et al. [9] to regularize the matting based on the DCP and to enforce smoothing weighted by λ.
and D and I_{3 × 3} in (38) are influenced by the properties of the color ellipsoid (μ_{ k } and Σ_{ k }) within the window Ω_{ k }. The ability to preserve depth discontinuity edges is afforded by the affinity matrix W, which is effective in preserving edges and discontinuities because of its locally adaptive nature [28].
The DCP method estimates the transmission with the prototype in (2), just like the centroid prior. Additionally, the properties of the color ellipsoids play a key role in the DCP for initial estimation and Laplacian matting for refinement.
6.2 Fattal prior
The single image defogging method by Fattal [8] is a unique method that at first does not appear to be using the prototype in (2). However, we show that Fattal’s method does indeed indirectly develop a dark prior and estimates the transmission with the same prototype in (2).
Fattal developed a way to create a raw estimate of the transmission and then employed a refinement step to improve the transmission estimate. We will first investigate how the raw transmission estimate is constructed.
when the albedo r is constant.
with $\|\mathbf{a}\| = \|\mathbf{a}^{\perp}\|$ and $\langle \mathbf{a}, \mathbf{a}^{\perp} \rangle = 0$.
The term $\|\mathbf{r}_{a^{\perp}}\|$ is the magnitude of the residual albedo projected onto $\mathbf{a}^{\perp}$.
we see yet another prior, the Fattal prior θ_{ F }. The Fattal prior should behave similarly to the DCP (θ_{ D }) and centroid prior (θ_{ C }) since it is also used to estimate the transmission. The term θ_{ F } should match the intuition that it becomes darker (close to zero) when radiant objects are closer to the camera when fog is present.
The Fattal prior utilizes Lemma 2. Note from (4) that as the transmission increases, t → 1, the foggy pixel moves farther away from the airlight vector a while staying on the segment between a and $\tilde{\mathbf{x}}$. This causes more energy to go to the residual, $x_{a^{\perp}}$, and less to x_{ a }. Therefore, according to (46), the Fattal prior decreases, or becomes darker, θ_{ F } → 0, as the transmission increases regardless of the value of η.
The Fattal prior also utilizes Lemma 3. To observe this, we analyze the weight factor, η, in (46) which is a measure of ambiguity. It increases as the albedo color becomes parallel with the airlight or becomes more ambiguous. A low η value means that it is not known whether the pixel is covered by fog or if it is truly the same color as the airlight, but not covered by fog.
Since η is measured using a sample region Ω, we employ the color ellipsoid framework to show that the θ_{ F } is dependent on the color ellipsoid.
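The decomposition into an airlight component and an orthogonal residual, which underlies both θ_{ F } and η, can be sketched as follows (the pixel and airlight values are illustrative assumptions):

```python
import numpy as np

# Decompose a foggy pixel into its component along the airlight a
# and the residual on a's orthogonal complement, in the spirit of
# Fattal's analysis. Values are illustrative.
a = np.array([0.9, 0.9, 0.9])
x_tilde = np.array([0.48, 0.66, 0.54])

a_unit = a / np.linalg.norm(a)
x_a = x_tilde @ a_unit                    # scalar component along airlight
x_perp = x_tilde - x_a * a_unit           # residual, orthogonal to a
```

More fog (smaller t) pushes energy into the airlight component and drains the residual, which is exactly the depth cue the Fattal prior reads off.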
where $\hat{t}_{FS}$ is the refinement of $\hat{t}_F$, and $\mathcal{G}$ is the set of pixels in $\hat{t}_F$ that are reliable. The transmission variance σ_{ t } is discussed in detail in [8] and is measured based on the noise in the image. The smoothing is controlled by the variance value $\sigma_s^2$.
The statistical prior on the right-hand side of (57) not only enforces smoothness but also that the variation in the edges in transmission matches the edges in the original image projected onto airlight. Therefore, if there is a depth discontinuity, the variation will be large in $(\tilde{x}_{a,i} - \tilde{x}_{a,j})^2$, enforcing $\hat{t}_{FS}$ to preserve depth discontinuity edges.
6.3 Tarel prior
In this section, we will explore the single image fog removal method presented by Tarel and Hautière [17] and relate their intuition to the properties of the color ellipsoids for foggy images. For this analysis, we will make the same assumption that Tarel makes: the foggy image, $\tilde{\mathbf{x}}$, has been white balanced such that the airlight component is pure white, a = (1,1,1)^{ T }.
(with a_{ s } = 1) which is a linear function of the transmission. Similar to the DCP, we call this term, θ_{ T }, the Tarel prior. We show that this prior is also dependent on the color ellipsoid properties.
The intuition in using the image whiteness is similar to the first step used in He’s method to obtain the DCP (30). The set of values w_{ i } within Ω_{ i } are the minimum distances from the points in the RGB cluster to either the RG, GB, or RB planes. The atmospheric veiling is estimated by taking the local average of w, μ_{ w }, and subtracting from it the local standard deviation of w, σ_{ w }.
6.3.1 Analysis without median operator
where we assume, just as Tarel does, that the airlight is pure white with a magnitude of 1 for each color channel. If the color in the patch is pure white, μ_{w,i} becomes 1, hence the name image of whiteness. Moreover, if the color within Ω_{ i } has at least one color component that is zero, then the local mean is only dependent on the atmospheric veiling, μ_{w,i} = 1 − t.
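The whiteness image and its local statistics can be sketched as follows; the gray patch with one colored pixel is an illustrative assumption:

```python
import numpy as np

# Tarel's image of whiteness for a white-balanced image (a = (1,1,1)^T):
# w_i is the minimum color component at pixel i. The local mean mu_w
# and standard deviation sigma_w over a patch feed the veiling estimate.
img = np.full((7, 7, 3), 0.8)             # illustrative gray patch
img[3, 3] = (0.3, 0.6, 0.9)               # one colored pixel

w = img.min(axis=2)                       # whiteness image
mu_w = w.mean()                           # local average of w
sigma_w = w.std()                         # local standard deviation of w
```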
Using the approximation in (63), it can be shown that θ_{ T } is dependent on the position and shape of the color ellipsoid. There are four different clusters in Figure 8 from different sample patches, three of which have their true μ_{w,i} indicated. One can see that these local averages of the image whiteness are essentially the minimum component value of each cluster centroid, given that the orientation of the cluster is aligned to the gray color line. Assuming that the orientation is along the gray color line is not too strong an assumption, since the image has been white-balanced and the dominant orientation is along the gray color line due to shading or airlight influence. The fourth cluster, indicated with a dashed blue ellipse, is an example where this approximation is not valid due to the position and orientation of the cluster points.
Up to this point, the Tarel prior θ_{ T } is not a function of the mixture weights within the sample patch Ω_{ i } and thus will cause undesirable halo artifacts when removing fog from the image.
6.3.2 Analysis with median operator
The sample patch Ω_{ i } is chosen to be large (41×41) to enforce smoothness in θ_{ T }. Since the median operator preserves edges well [17], halo artifacts at depth edges are limited.
with |Ω_{ i }| odd. In addition to θ_{ T } being dependent on the size and position of the color ellipsoid from the sample patch Ω_{ i }, we also show in (67) that the mixture weights are employed by Tarel to infer the atmospheric veiling.
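A sketch in the spirit of this median-based veiling estimate, applied to a hypothetical set of whiteness values straddling two depths:

```python
import numpy as np

# Median-based atmospheric veiling in the spirit of Tarel's estimator:
# v = med(w) - med(|w - med(w)|) over the patch. The whiteness values
# below (an odd-sized patch spanning two depths) are illustrative.
w = np.array([0.1, 0.2, 0.2, 0.3, 0.8, 0.8, 0.9])

m = np.median(w)                 # edge-preserving local level
dev = np.median(np.abs(w - m))   # median absolute deviation
v = m - dev                      # raw veiling estimate
```

Unlike a plain mean, the median ignores the minority population in the patch, which is why the estimate respects depth edges.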
This is essentially a hybrid of both the DCP θ_{ D } and the Tarel prior θ_{ T } because of the use of the median operator. In the same fashion as the previous analysis for the DCP and Tarel priors, the MDCP is also a function of the color ellipsoid properties. It also accounts for depth discontinuities by being dependent on the mixture weights π_{ g }.
7 Discussion
We have found that we can unify single image defogging methods. The unification is that all of these single image defogging methods use the prototype in (2) to estimate transmission using a dark prior. Additionally, each of these dark priors uses properties of the color ellipsoids with respect to Lemmas 2 and 3.
Summary of dark prior methods
Name | Dark prior | Estimate | Refinement step
Centroid prior | θ_{ C } | $(\mathbf{a}^T\tilde{\mu} - \|\tilde{\mu}\|_2^2)\,/\,(\|\mathbf{a}\|_2^2 - \mathbf{a}^T\tilde{\mu})$ | Median operator to estimate $\tilde{\mu}$
DCP | θ_{ D } | $\min_{j\in\Omega_i}\left(\min_{c\in\{r,g,b\}} \tilde{x}_j(c)/a(c)\right)$ | Spectral matting
Fattal prior | θ_{ F } | $\frac{1}{\|\mathbf{a}\|}(\tilde{x}_{a,i} - \eta_i\,\tilde{x}_{a^{\perp},i})$ | Gauss-Markov random field model
Tarel prior | θ_{ T } | $\mathrm{med}_{j\in\Omega_i} w_j - \mathrm{med}_{j\in\Omega_i}\lvert w_j - \mathrm{med}_{k\in\Omega_i} w_k\rvert$ | None
MDCP | θ_{ M } | $\mathrm{med}_{j\in\Omega_i}\left(\min_{c\in\{r,g,b\}} \tilde{x}_j(c)/a(c)\right)$ | None
8 Conclusion
The development of the color ellipsoid framework is a contribution to the field of work in single image defogging because it brings a richer understanding to the problem of estimating the transmission. This article provides the tools necessary to clearly understand how transmission is estimated from a single foggy day image. We have introduced a new method that is visually more aggressive in removing fog which affords an image that is richer in color.
Future work will include the color ellipsoid framework in the development of a contrast enhancement metric. Additionally, the ambiguity problem when estimating the transmission will be addressed using the orientation of the color ellipsoid to develop a more accurate transmission mapping with respect to the depth of the scene.
We present a new way to model single image defogging methods using a color ellipsoid framework. Our discoveries are as follows:
- We have discovered how depth cues from fog can be inferred using the color ellipsoid framework.
- We unify single image defogging methods using the color ellipsoid framework.
- A Gaussian mixture model is crucial for representing depth discontinuities, a common issue in removing fog from natural scenes.
- We discover that the ambiguity in measuring depth from fog is associated with the color ellipsoid orientation and shape.
- A new defogging method is presented which is effective in contrast enhancement and based on the color ellipsoid properties.
This article is a contribution to the image processing community by providing strong intuition in single image defogging, particularly estimating depth from fog. This is useful in contrast enhancement, surveillance, tracking, and robotic applications.
Acknowledgments
This work is supported in part by the Space and Naval Warfare Systems Center Pacific (SSC Pacific) and by NSF under grant CCF-1065305.
References
1. Narasimhan SG, Nayar SK: Contrast restoration of weather degraded images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25(6):713–724.
2. Summers D: Contrapposto: style and meaning in renaissance art. Art Bull. 1977, 59(3):336–361.
3. Koschmieder H: Luftlicht und Sichtweite. Naturwissenschaften 1938, 26(32):521–528.
4. Middleton WEK: Vision Through the Atmosphere. Ontario: University of Toronto Press; 1952.
5. Duntley SQ, Boileau AR, Preisendorfer RW: Image transmission by the troposphere I. JOSA 1957, 47(6):499–506.
6. McCartney EJ: Optics of the Atmosphere: Scattering by Molecules and Particles. New York: Wiley; 1976.
7. Tan RT: Visibility in bad weather from a single image. In IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE; 2008:1–8.
8. Fattal R: Single image dehazing. ACM Trans. Graph. 2008, 27:72.
9. He K, Sun J, Tang X: Single image haze removal using dark channel prior. In IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE; 2009:1956–1963.
10. Kratz L, Nishino K: Factorizing scene albedo and depth from a single foggy image. In IEEE 12th International Conference on Computer Vision. Piscataway: IEEE; 2009:1701–1708.
11. Fang F, Li F, Yang X, Shen C, Zhang G: Single image dehazing and denoising with variational method. In International Conference on Image Analysis and Signal Processing (IASP). Piscataway: IEEE; 2010:219–222.
12. Chao L, Wang M: Removal of water scattering. IEEE Comput. Eng. Technol. (ICCET) 2010, 2:V2-35.
13. Yoon I, Jeon J, Lee J, Paik J: Weighted image defogging method using statistical RGB channel feature extraction. In International SoC Design Conference (ISOCC). Piscataway: IEEE; 2010:34–35.
14. Yu J, Xiao C, Li D: Physics-based fast single image fog removal. In IEEE International Conference on Signal Processing (ICSP’10). Piscataway: IEEE; 2010:1048–1052.
15. Zou C, Chen J: Recovering depth from a single image using dark channel prior. In 11th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2010). Piscataway: IEEE; 2010:93–96.
16. Gibson KB, Vo DT, Nguyen TQ: An investigation of dehazing effects on image and video coding. IEEE Trans. Image Process. 2012, 21(2):662–673.
17. Tarel JP, Hautière N: Fast visibility restoration from a single color or gray level image. In IEEE 12th International Conference on Computer Vision. Piscataway: IEEE; 2009:2201–2208.
18. Narasimhan SG, Nayar SK: Vision and the atmosphere. Int. J. Comput. Vis. 2002, 48(3):233–254.
19. Shafer SA: Using color to separate reflection components. Color Res. Appl. 1985, 10(4):210–218.
20. Gibson KB, Nguyen TQ: On the effectiveness of the dark channel prior for single image dehazing by approximating with minimum volume ellipsoids. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Piscataway: IEEE; 2011:1253–1256.
21. Gibson KB, Nguyen TQ: Hazy image modeling using color ellipsoids. In 18th IEEE International Conference on Image Processing (ICIP). Piscataway: IEEE; 2011:1861–1864.
22. Omer I, Werman M: Color lines: image specific color representation. In Proc. 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2004, 2:II-946.
23. Klinker GJ, Shafer SA, Kanade T: A physical approach to color image understanding. Int. J. Comput. Vis. 1990, 4:7–38.
24. Tan T, Kittler J: Colour texture analysis using colour histogram. IEE Proc. Vis. Image Signal Process. 1994, 141:403–412.
25. Devaux J, Gouton P, Truchetet F: Karhunen-Loève transform applied to region-based segmentation of color aerial images. Opt. Eng. 2001, 40(7):1302–1308.
26. Kuo WJ, Chang RF: Approximating the statistical distribution of color histogram for content-based image retrieval. In Proc. 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP’00) 2000, 6:2007–2010.
27. Dharanipragada S, Visweswariah K: Gaussian mixture models with covariances or precisions in shared multiple subspaces. IEEE Trans. Audio, Speech, Lang. Process. 2006, 14(4):1255–1266.
28. Levin A, Lischinski D, Weiss Y: A closed-form solution to natural image matting. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30(2):228–242.
Copyright information
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.