
Cloud Detection Method in GaoFen-2 Multi-spectral Imagery

  • Zhaocong Wu
  • Lin He
  • Yi Zhang
  • Jun Li
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 657)

Abstract

Cloud cover is one of the major factors limiting the application of GaoFen-2 imagery, and cloud detection in GaoFen-2 imagery is fairly difficult due to the lack of sufficient infrared bands. This paper presents a cloud detection method for GaoFen-2 multi-spectral imagery based on a radiation transmission model. The scattering coefficient of the remote sensing image is estimated via radiation transmission, and the cloud mask is then obtained by combining geometric and texture features of high-resolution remote sensing images. Experiments on GaoFen-2 multi-spectral images show that the accuracy of cloud detection is above 94.70%. The proposed method effectively reduces the influence of highlighted buildings during cloud detection and achieves high accuracy for GaoFen-2 cloud detection with few bands. In addition, it provides an alternative way to distinguish thick and thin clouds for quantitative research on optical satellite imagery.

Keywords

Cloud detection · GaoFen-2 · Radiation transmission · Thick cloud · Thin cloud

1 Introduction

Optical sensing satellites are usually subject to clouds. Thin clouds alter the true land surface brightness, and thick clouds completely obscure the ground, which causes problems for applications of optical remote sensing imagery such as land cover classification [1] and land-use change [2], and especially for quantitative studies such as vegetation monitoring [3] and water detection [4]. The distribution, thickness, and amount of cloud all restrict the use of satellite data, the accuracy of experiments, and the reliability of conclusions in earth observation research.

With the development of satellite sensors, various cloud detection methods have been developed for optical remote sensing images, such as cloud detection algorithms for NOAA advanced very high-resolution radiometer (AVHRR) satellite imagery [5]; for Terra/Aqua MODIS images [6, 7, 8]; for Landsat Thematic Mapper (TM)/Enhanced Thematic Mapper (ETM)+ images [9, 10, 11]; for SPOT-5 high-resolution geometrical (HRG) imagery [12]; and for GaoFen-1 wide field of view (WFV) imagery [13].

Cloud detection methods can be divided into two categories: single-scene-based methods and multi-scene-based methods [14]. Traditional thresholding is generally employed in single-scene-based cloud detection [15] and can yield pleasing results [16]. However, it is an empirical algorithm in which many coefficients must be specified, so low adaptability and universality are inevitable. In recent years, cloud detection schemes based on machine learning have performed well, such as the Markov random field (MRF) framework [17], support vector machines (SVM) [18], and deep learning [19]. However, the computational complexity and running time of machine learning approaches are inevitably high.

Multi-scene-based cloud detection methods usually compare time-series images acquired over short periods [20]. Under the hypothesis that surface features are invariant, change detection determines whether a pixel is cloud according to pixel-to-pixel contrast of spectral values and correlation parameters between multi-temporal remote sensing images. Multi-scene-based methods can also achieve great performance [21], but they depend closely on the reference image used as ground truth.

Existing cloud detection approaches take cloud as the detection object and extract clouds using spectral parameters and texture features, such as the top-of-atmosphere (TOA) reflectance of the Landsat OLI cirrus band [22] and the local binary pattern (LBP) [13]. The method proposed in this paper instead considers the radiation features of the full scene: clouds obscure the true ground and thus reduce the radiation transmission of surface features. Based on the principle and methodology of image dehazing, the radiation transmission and scattering coefficient values are first estimated. A clustering method then extracts the initial cloud pixels. Finally, morphological processing and texture feature analysis are combined to remove noise, yielding the final cloud mask.

The remainder of this paper is organized as follows: the proposed approach is presented in Sect. 2, with the cloud detection process in Sect. 2.1 and the error removal procedure in Sect. 2.2. The experimental results, analysis, and evaluation are presented in Sect. 3. Finally, the conclusion is given in Sect. 4.

2 Methodology

The radiation transmission of a remote sensing image directly highlights the difference between clouds and surface features. The proposed method first estimates the radiation transmission and scattering coefficient of each pixel; a potential cloud mask is then acquired by clustering the scattering coefficient map. The dark channel prior method is first introduced to estimate the radiation transmission; the k-means algorithm then clusters the scattering coefficient map converted from the transmission, and selected categories form the rough cloud mask. Spectral features combined with geometric and texture features are finally employed to filter misclassified pixels (Fig. 1).
Fig. 1

Flow diagram of cloud detection approach

2.1 Cloud Detection

2.1.1 Estimating Radiation Transmission Map Using Dark Channel Prior Method

Due to absorption and scattering by the medium, electromagnetic waves are attenuated along the path, and the surviving light is described by the transmission [23]. The observed intensity I(λ), which is band related, measured by the sensor contains the reflected radiance of the object and the scattered airlight on the path from a surface point to the sensor, denoted Io(λ) and Ip(λ), respectively.
$$I(\lambda ) = I_{o} (\lambda ) + I_{p} (\lambda )$$
(1)
The radiation transmission t(λ), namely the transmission, expresses visibility in the daytime and is defined via Koschmieder’s law as follows:
$$t(\lambda ) = {\text{e}}^{ - \beta (\lambda )d}$$
(2)
where β(λ) is the scattering coefficient of atmosphere and d is the scene depth. The reflectance intensity of the object on the path from a surface point to sensor is given by
$$I_{o} (\lambda ) = J(\lambda )t(\lambda )$$
(3)
where J(λ) is the inherent object radiance. The scattered airlight is integrated over the path of length d:
$$I_{p} (\lambda ) = \int\limits_{0}^{d} {J(\lambda )B(\lambda ,\theta )t(\lambda )\,{\text{d}}s}$$
(4)
where s is the position along the path, θ is the angle between the incident and reflected light, and B(λ, θ) is the angular scattering function describing the scattering differences of points on the path. Based on four basic assumptions, and labeling the global atmospheric light as A, a simpler observed-intensity model is given by [24]:
$$I(\lambda ) = J(\lambda )t(\lambda ) + A(1 - t(\lambda ))$$
(5)
For images with three bands, applying the patch-wise and band-wise minimum operation to both sides of (5), normalized by A, relates the observed intensity to the position-related radiation transmission \(\tilde{t}\left( x \right)\) [25]:
$$\mathop {\hbox{min} }\limits_{y \in \varOmega (x)} \left( {\mathop {\hbox{min} }\limits_{{c \in \{ r,g,b\} }} \frac{{I^{c} \left( y \right)}}{{A^{c} }}} \right) = \tilde{t}\left( x \right)\mathop {\hbox{min} }\limits_{y \in \varOmega (x)} \left( {\mathop {\hbox{min} }\limits_{{c \in \{ r,g,b\} }} \frac{{J^{c} \left( y \right)}}{{A^{c} }}} \right) + 1 - \tilde{t}\left( x \right)$$
(6)
where Jc is a color channel of J and Ω(x) is a local patch centered at x. The dark channel prior makes the basic assumption that in most non-sky patches, at least one color channel has some pixels whose intensity is close to zero; thus the minimum intensity in a local patch approaches zero. For an arbitrary image J, the dark channel Jdark is given by:
$$J^{\text{dark}} \left( x \right) = \mathop {\hbox{min} }\limits_{y \in \varOmega (x)} \left( {\mathop {\hbox{min} }\limits_{{c \in \{ r,g,b\} }} J^{c} \left( y \right)} \right)$$
(7)
In non-sky patches, the dark channel Jdark is low and tends to zero, so the term involving J vanishes and the estimate of the radiation transmission \(\tilde{t}\left( x \right)\) simplifies to:
$$\tilde{t}\left( x \right) = 1 - \mathop {\hbox{min} }\limits_{y \in \varOmega (x)} \left( {\mathop {\hbox{min} }\limits_{{c \in \{ r,g,b\} }} \frac{{I^{c} \left( y \right)}}{{A^{c} }}} \right)$$
(8)
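The dark-channel estimate of Eq. (8) reduces to a band-wise minimum followed by a local patch minimum. The NumPy sketch below shows one way this might be computed; the patch size, the loop-based window minimum, and the per-band atmospheric light `airlight` are illustrative choices not specified by the paper (He et al. [25] additionally keep a small haze factor ω, omitted here).

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over bands, then minimum over a local patch.

    img: H x W x C array of band intensities scaled to [0, 1].
    """
    band_min = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(band_min, pad, mode="edge")
    h, w = band_min.shape
    out = np.empty_like(band_min)
    for i in range(h):          # simple sliding-window minimum
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(img, airlight, patch=15):
    """Eq. (8): t~(x) = 1 - min over patch and bands of I_c / A_c."""
    return 1.0 - dark_channel(img / airlight, patch)
```

Over clear ground the normalized dark channel stays near zero and the estimated transmission stays near 1; over cloud it rises and the estimated transmission drops.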

In image dehazing studies, the transmission is an important feature. The scene depth is the distance from an object to the camera. The haze in the scene is generally assumed to cover it uniformly, so the scattering coefficient can be taken as a constant: the greater the scene depth, the thicker the haze, and vice versa. Therefore, the estimate of scene depth can be used to restore haze-free images.

Unlike close-range photogrammetry, the scene depth of remote sensing images can be considered constant, since the distance between the sensor and the ground is much larger than the height differences among surface objects. The presence of clouds changes the scattering coefficient of light on the path: thinner cloud and haze correspond to lower scattering coefficients, and vice versa. From Koschmieder's law, the scattering coefficient is calculated as:
$$\beta (x) = \frac{{ - \ln \left( {\tilde{t}(x)} \right)}}{d}$$
(9)
A parameter linear in the scattering coefficient β(x) is employed and denoted β(x)′:
$$\beta (x)' = - \ln \left( {\tilde{t}(x)} \right)$$
(10)
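Converting the transmission map into the linear scattering-coefficient parameter of Eq. (10) is a single element-wise operation; the clipping below is an added numerical guard against log(0), not part of the paper.

```python
import numpy as np

def scattering_coefficient_map(transmission, eps=1e-6):
    """Eq. (10): beta'(x) = -ln(t~(x)). Thicker cloud -> lower t~ -> larger beta'."""
    return -np.log(np.clip(transmission, eps, 1.0))
```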

2.1.2 Generating a Rough Cloud Mask Using K-means Clustering Algorithm

This paper uses the k-means clustering algorithm instead of a thresholding approach to classify pixels of the scattering coefficient map and extract clouds. Clustering methods can be divided into hierarchical and partitional algorithms. K-means is one of the most classical partitional algorithms and was proposed independently in several scientific fields; its simplicity and efficiency have kept it widely used for over 60 years [26]. The point assignments and the centroid of each cluster are updated based on Euclidean distance until convergence to a local optimum [27]. The k-means algorithm allocates each point of the given data set to a cluster so as to satisfy the following condition:
$$\sum\limits_{i = 1}^{c} {\sum\limits_{{k \in A_{i} }} { | |x_{k} - v_{i} ||_{2} } } \to \hbox{min}$$
(11)
where Ai and vi denote the data set and the mean of the points of cluster i, respectively, and || ||2 denotes the Euclidean norm. Initial cluster centers are randomly selected from the data set, and the remaining points are assigned according to minimal distance.
The classified image yields multiple categories A1, A2, …, An, which are further merged into two classes: cloud and non-cloud. The attribution of a "boundary category" Ai, with mean vi, is important: Ai is grouped with Ai−1 rather than Ai+1 if vi is closer to vi−1, or if vi is closer to vi+1 but not significantly so. The parameters p1 and p2 are decided empirically:
$$\left( {{\text{Diff}}_{{v_{i + 1} ,v_{i} }} - {\text{Diff}}_{{v_{i} ,v_{i - 1} }} } \right) > p_{1} \;{\text{or}}\;0 < \left( {{\text{Diff}}_{{v_{i} ,v_{i - 1} }} - {\text{Diff}}_{{v_{i + 1} ,v_{i} }} } \right) < p_{2}$$
(12)
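A minimal sketch of this step: a 1-D k-means over the flattened scattering-coefficient map, plus the boundary-category rule of Eq. (12) written as a predicate. The number of clusters, the random initialization, and the iteration count are illustrative choices the paper leaves open; p1 and p2 are the paper's empirical parameters.

```python
import numpy as np

def kmeans_1d(values, k, iters=50, seed=0):
    """Lloyd-style k-means on a 1-D array; clusters are relabeled so that
    label 0 has the lowest mean (low scattering coefficient = clear sky)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    order = np.argsort(centers)
    remap = np.empty(k, dtype=int)
    remap[order] = np.arange(k)
    return remap[labels], centers[order]

def boundary_joins_lower(v_prev, v_i, v_next, p1, p2):
    """Eq. (12): A_i merges with A_{i-1} if v_i is much closer to v_{i-1}
    (first branch), or closer to v_{i+1} but not significantly so (second)."""
    d_up, d_down = v_next - v_i, v_i - v_prev
    return (d_up - d_down) > p1 or 0 < (d_down - d_up) < p2
```

In practice the clustering would run on the modified scattering-coefficient map of Sect. 2.2.1, and the predicate decides which side of the cloud/non-cloud split each boundary category joins.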

2.2 Error Removal

2.2.1 Scattering Coefficient Modification Using Spectral Features

The TOA reflectance ρλ is calculated with parameters from the extraterrestrial band solar radiation of the GaoFen-2 satellite and the 2016 absolute radiometric calibration coefficients for domestic land observation satellites from the China Centre for Resources Satellite Data and Application:
$$\rho_{\lambda } = \frac{{\pi L_{\lambda } d^{2} }}{{{\text{ESUN}}_{\lambda } \cos \theta_{\text{s}} }}$$
(13)
where d is the Earth–Sun distance in astronomical units, ESUNλ is the solar irradiance, θs is the solar zenith angle, and Lλ is the TOA radiance.
Water and whiteness masks are generated using spectral features; spectral tests for water and whiteness are widely implemented and have proved necessary [9, 11, 13]. The water test removes error pixels caused by water. This paper uses the normalized difference water index (NDWI) to extract water:
$${\text{Water Test}} = {\text{NDWI}} > i_{2}$$
(14)
NDWI is a valid parameter based on the spectral analysis of water and can help to delineate the features of water rapidly and efficiently [28].
$${\text{NDWI}} = \left( {\rho_{\text{g}} - \rho_{\text{nir}} } \right)/\left( {\rho_{\text{g}} + \rho_{\text{nir}} } \right)$$
(15)
The whiteness test detects bright pixels with high TOA reflectance either on a single visible band or on the "MeanWhite" value [9] defined for bright white artificial structures. The TOA reflectances of the visible bands are denoted ρr, ρg, ρb for the red, green, and blue bands, respectively. The purpose is to detect surface objects with high brightness in any single band.
$${\text{Whiteness Test}} = {\text{MeanWhite}} > p_{3} \;{\text{or}}\;\rho_{\text{r}} > p_{4} \;{\text{or}}\;\rho_{\text{g}} > p_{4} \;{\text{or}}\;\rho_{\text{b}} > p_{4}$$
(16)
where
$${\text{MeanWhite}} = \left( {\rho_{\text{r}} + \rho_{\text{g}} + \rho_{\text{b}} } \right) /3$$
(17)
The radiation transmission of bright surface objects should be similar to that of other landscapes and larger than that of clouds. However, the transmission of bright surface objects is estimated wrongly because their spectral values in the visible bands are similar to those of clouds. The whiteness test is therefore used to modify the radiation transmission and scattering coefficient values before clustering, reducing the influence of bright surface objects to a certain extent:
$${\text{Modified Transmission}} = {\text{Transmission/}}(1 - {\text{Whiteness Test}}*i_{5} )$$
(18)
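The spectral tests reduce to element-wise array operations. A sketch in NumPy, assuming TOA reflectance arrays on a common [0, 1] scale; the thresholds p3, p4 and the factor i5 are the paper's empirical parameters, and the divide-by-zero guard in the NDWI is an addition.

```python
import numpy as np

def ndwi(rho_g, rho_nir):
    """Eq. (15): NDWI = (rho_g - rho_nir) / (rho_g + rho_nir)."""
    return (rho_g - rho_nir) / (rho_g + rho_nir + 1e-12)  # guard divide-by-zero

def whiteness_test(rho_r, rho_g, rho_b, p3, p4):
    """Eqs. (16)-(17): flag pixels bright on average or on any visible band."""
    mean_white = (rho_r + rho_g + rho_b) / 3.0
    return (mean_white > p3) | (rho_r > p4) | (rho_g > p4) | (rho_b > p4)

def modify_transmission(transmission, white_mask, i5):
    """Eq. (18): raise the (wrongly low) transmission of flagged bright
    pixels so they are less likely to cluster with cloud. i5 is empirical."""
    return transmission / (1.0 - white_mask.astype(float) * i5)
```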

After these steps, a rough cloud mask based on spectral information is achieved: a pixel is labeled cloud if it belongs to the selected clustering categories and is not water.

2.2.2 Error Removal Using Geometric and Texture Features

Moderate-to-high spatial resolution remote sensing images provide a great amount of land surface detail [29]. Rich geographic objects increase the complexity of texture and thus generate many trivial errors. GaoFen-2 imagery lacks sufficient spectral information, but these errors are distinguishable from cloud patches by their geometric and texture features, so it is necessary to consider geometric morphology and texture information.

After the spectral tests, the rough cloud mask is further processed geometrically. Assuming there are n cloud regions, detection errors caused by small ground objects can be removed by testing the area of each cloud and non-cloud region:
$${\text{Area }}{\kern 1pt} {\text{Test}} = {\text{RoughMask}}_{1 \sim n} > p_{6}$$
(19)
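The area test amounts to connected-component filtering of the rough mask. A sketch using `scipy.ndimage` (the paper does not name an implementation; `min_area` plays the role of the threshold p6):

```python
import numpy as np
from scipy import ndimage

def area_test(rough_mask, min_area):
    """Eq. (19): keep only connected cloud regions of at least min_area pixels."""
    labeled, n = ndimage.label(rough_mask)                  # labels 1..n
    sizes = ndimage.sum(rough_mask, labeled, index=np.arange(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)                      # index 0 = background
    keep[1:] = sizes >= min_area
    return keep[labeled]
```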

Performing geometric detection before texture detection improves efficiency: object-oriented texture feature calculation is time-consuming, and geometric detection significantly reduces the number of patches.

Texture features differ greatly between cloudy and non-cloudy regions in moderate-to-high-resolution remote sensing images. This paper applies gray-level and gradient information: the gray-level gradient co-occurrence matrix (GLGCM) is utilized to extract texture features comprehensively.

Experiments show that, compared with other statistical vectors, the inhomogeneity of the gray-level distribution Ggraylevel and of the gradient distribution Ggradient are the most significant characteristics of cloud. Because these two statistics differ by orders of magnitude between non-cloudy and cloudy regions, the results can easily be separated by thresholding; the patches are tested using this texture information, named the "Texture Test." Clustering a cloud-free image can still generate a pseudo "cloud region" that is merely brighter than its surroundings; such regions have high texture complexity and can be excluded by setting an upper bound:
$${\text{Texture Test}} = p_{8} < G_{\text{graylevel}} < p_{9} \;{\text{or}}\;p_{10} < G_{\text{gradient}} < p_{11}$$
(20)
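The paper does not give the GLGCM formulas, so the sketch below should be read as an illustration rather than the authors' exact procedure: gray values and gradient magnitudes are quantized into a joint histogram, and the two nonuniformity statistics are computed with one common convention (squared row/column sums normalized by the total count). Homogeneous cloud interiors concentrate the histogram and score high; textured built-up areas spread it and score low.

```python
import numpy as np

def glgcm_inhomogeneity(gray, levels=16):
    """Gray-level and gradient distribution nonuniformity from a quantized
    gray-level-gradient co-occurrence matrix (one common definition)."""
    gy, gx = np.gradient(gray.astype(float))
    grad = np.hypot(gx, gy)
    def quantize(a):
        scaled = (a - a.min()) / (np.ptp(a) + 1e-12) * levels
        return np.minimum(scaled, levels - 1).astype(int)
    h = np.zeros((levels, levels))
    np.add.at(h, (quantize(gray), quantize(grad)), 1)   # joint histogram
    total = h.sum()
    g_graylevel = (h.sum(axis=1) ** 2).sum() / total    # gray nonuniformity
    g_gradient = (h.sum(axis=0) ** 2).sum() / total     # gradient nonuniformity
    return g_graylevel, g_gradient
```

The two returned statistics would then be compared against the bounds p8–p11 of the Texture Test.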

3 Experiments and Discussion

3.1 Data Set and Experimental Setup

To quantitatively evaluate the performance of the proposed method, ten GaoFen-2 high-resolution images were selected as experimental data, produced after relative radiometric correction and systematic geometric correction. The selected images, acquired from May 2015 to October 2018, have 7411 × 7025 pixels with four multi-spectral bands (blue, green, red, and near-infrared) at 4 m/pixel.

Experiments on the GaoFen-2 images are designed to demonstrate the feasibility and advantages of the method for cloud detection. Reference clouds were identified manually, and the precision rate (PR), recall rate (RR), and F-measure were taken as metrics:
$${\text{PR}} = \frac{{{\text{CTC}}}}{{{\text{TC}}}}$$
(21)
$${\text{RR}} = \frac{{{\text{CTC}}}}{{{\text{RC}}}}$$
(22)
$$F = \frac{{\left( {1 + \beta^{2} } \right) \cdot \left( {{\text{PR}} \cdot {\text{RR}}} \right)}}{{\left( {\beta^{2} \cdot {\text{PR}} + {\text{RR}}} \right)}}$$
(23)
where TC and CTC denote the numbers of detected cloud pixels and correctly detected cloud pixels, respectively, and RC is the number of cloud pixels in the reference map. The weights of precision and recall can be adjusted via the β index, taken as 0.5 in this paper. The range of the F-measure is [0, 1], and higher values reflect higher similarity.
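The three metrics can be computed directly from boolean masks. In the sketch below, recall is taken as correctly detected cloud over reference cloud (CTC/RC), the usual definition:

```python
import numpy as np

def cloud_metrics(pred, ref, beta=0.5):
    """Precision, recall, and F-measure per Eqs. (21)-(23) for boolean masks."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    ctc = np.logical_and(pred, ref).sum()   # correctly detected cloud pixels
    tc, rc = pred.sum(), ref.sum()          # detected / reference cloud pixels
    pr, rr = ctc / tc, ctc / rc
    f = (1 + beta ** 2) * pr * rr / (beta ** 2 * pr + rr)
    return pr, rr, f
```

With β = 0.5 as in the paper, the F-measure weights precision more heavily than recall.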

3.2 Experiments and Evaluation

Tests and parameter selection were carried out on local image patches. To test the effectiveness of the algorithm, we selected GaoFen-2 images that cover urban and rural areas and include thin and thick clouds with obvious or fuzzy boundaries. The cloud morphologies include planar thin clouds, massive thin clouds, planar thick clouds, massive thick clouds, and others.

Clustering divides the modified scattering coefficient maps into different layers; by choosing layers, cloud regions are determined. After the geometry and texture tests, the proposed method effectively eliminates highlighted surface objects such as roads (first row in Fig. 2) and buildings (third row in Fig. 2). Through clustering analysis, thin cloud can be distinguished from bright homogeneous surfaces (second and fifth rows in Fig. 2). Because bright areas are considered, errors caused by snow can also be removed (fourth row in Fig. 2).
Fig. 2

The results of each step. Columns 1–6 are GaoFen-2 images, scattering coefficient maps, clustering maps, rough masks, results after geometric processing, and cloud masks obtained after texture test

Ten GaoFen-2 images were used to evaluate cloud detection on panoramic high-resolution imagery. Reference cloud regions were identified and labeled manually, aided by spectral feature extraction. Original images, reference images, and cloud detection results are shown in Fig. 3. The algorithm detects both thin and thick clouds on GaoFen-2 images. Thin clouds (a, b) are easily extracted by the proposed method (average F-measure 96.94%), even when the brightness characteristics of the thin clouds are not notable (c). For small thick clouds (d–f) and large bright thick clouds (g–j), the algorithm also obtains satisfying results (average F-measures of 92.86% and 96.59%, respectively).
Fig. 3

GF-2 images (a1–j1), reference masks (a2–j2), and cloud masks (a3–j3)

In this paper, the size of a detected region is one of the criteria used to remove highlighted surface objects, which introduces a trade-off. If the geometric threshold is large, the algorithm eliminates more highlighted surface areas but also some small bright clouds; conversely, it cannot eliminate some highlighted artificial buildings (c in Fig. 3), which increases the errors (F-measure 88.16%). The precision, recall, F-measure, and cloud coverage are shown in Table 1.
Table 1

Precision, recall, F-measure, and cloud coverages of ten cloud masks

     Precision   Recall   F-measure   Cloud cover
a    0.9998      0.9208   0.9829      0.5397
b    0.9457      0.9989   0.9559      0.4775
c    0.8627      0.9661   0.8816      0.3433
d    1.0000      0.6695   0.9102      0.0315
e    0.9964      0.7270   0.9277      0.0413
f    0.9674      0.8778   0.9481      0.8120
g    1.0000      0.7129   0.9255      0.5714
h    0.9995      0.8907   0.9757      0.7631
i    0.9993      0.9309   0.9848      0.7655
j    0.9988      0.9018   0.9778      0.9306

4 Summary

In this paper, the presented cloud detection method for GF-2 multi-spectral imagery estimates the radiation transmission and scattering coefficient values based on the methodology of image dehazing, and combines clustering, morphological processing, and texture feature analysis to achieve a cloud mask. The effectiveness of the algorithm, and its superiority in detecting thin clouds, are demonstrated by the experiments and evaluation. Improving the accuracy of the algorithm on small bright areas and quantitatively distinguishing clouds of different thickness will be our future research.

References

  1. Chen B, Huang B, Xu B (2017) Multi-source remotely sensed data fusion for improving land cover classification. ISPRS J Photogramm 124:27–39
  2. Pellikka PKE, Heikinheimo V, Hietanen J, Schäfer E, Siljander M, Heiskanen J (2018) Impact of land cover change on aboveground carbon stocks in Afromontane landscape in Kenya. Appl Geogr 94:178–189
  3. Romijn E, Herold M, Kooistra L, Murdiyarso D, Verchot L (2012) Assessing capacities of non-annex I countries for national forest monitoring in the context of REDD+. Environ Sci Policy 19–20:33–48
  4. Duan H, Cao Z, Shen M, Liu D, Xiao Q (2019) Detection of illicit sand mining and the associated environmental effects in China's fourth largest freshwater lake using daytime and nighttime satellite images. Sci Total Environ 647:606–618
  5. Turner J, Marshall GJ, Ladkin RS (2001) An operational, real-time cloud detection scheme for use in the Antarctic based on AVHRR data
  6. Zhang X, Tan S, Shi G, Wang H (2019) Improvement of MODIS cloud mask over severe polluted eastern China. Sci Total Environ 654:345–355
  7. Tang H, Yu K, Hagolle O, Jiang K, Geng X, Zhao Y (2013) A cloud detection method based on a time series of MODIS surface reflectance images. Int J Digit Earth 6:157–171
  8. Jouybari Moghaddam Y, Aghamohamadnia M (2013) A novel method for cloud detection in MODIS imagery
  9. Zhu Z, Woodcock CE (2012) Object-based cloud and cloud shadow detection in Landsat imagery. Remote Sens Environ 118:83–94
  10. Huang C, Thomas N, Goward SN, Masek JG, Zhu Z, Townshend JRG et al (2010) Automated masking of cloud and cloud shadow for forest change analysis using Landsat images. Int J Remote Sens 31:5449–5464
  11. Richard RI, John LB, Samuel NG, Terry A (2006) Characterization of the Landsat-7 ETM+ automated cloud-cover assessment (ACCA) algorithm
  12. Fisher A (2014) Cloud and cloud-shadow detection in SPOT5 HRG imagery with automated morphological feature extraction. Remote Sens-Basel 6:776–800
  13. Li Z, Shen H, Li H, Xia G, Gamba P, Zhang L (2017) Multi-feature combined cloud and cloud shadow detection in GaoFen-1 wide field of view imagery. Remote Sens Environ 191:342–358
  14. Zhai H, Zhang H, Zhang L, Li P (2018) Cloud/shadow detection based on spectral indices for multi/hyperspectral optical remote sensing imagery. ISPRS J Photogramm 144:235–253
  15. Zhu X, Helmer EH (2018) An automatic method for screening clouds and cloud shadows in optical satellite image time series in cloudy regions. Remote Sens Environ 214:135–153
  16. Foga S, Scaramuzza PL, Guo S, Zhu Z, Dilley RD, Beckmann T et al (2017) Cloud detection algorithm comparison and validation for operational Landsat data products. Remote Sens Environ 194:379–390
  17. Le Hégarat-Mascle S, André C (2009) Use of Markov random fields for automatic cloud/shadow detection on high resolution optical images. ISPRS J Photogramm 64:351–366
  18. Ishida H, Oishi Y, Morita K, Moriwaki K, Nakajima TY (2018) Development of a support vector machine based cloud detection method for MODIS with the adjustability to various conditions. Remote Sens Environ 205:390–407
  19. Xie F, Shi M, Shi Z, Yin J, Zhao D (2017) Multilevel cloud detection in remote sensing images based on deep learning. IEEE J-Stars 10:3631–3640
  20. Champion N (2012) Automatic cloud detection from multi-temporal satellite images: towards the use of Pléiades time series
  21. Lin C, Lin B, Lee K, Chen Y (2015) Radiometric normalization and cloud detection of optical satellite images using invariant pixels. ISPRS J Photogramm 106:107–117
  22. Zhu Z, Wang S, Woodcock CE (2015) Improvement and expansion of the Fmask algorithm: cloud, cloud shadow, and snow detection for Landsats 4–7, 8, and Sentinel 2 images. Remote Sens Environ 159:269–277
  23. Fattal R (2008) Single image dehazing. ACM Trans Graph 27:1–9
  24. Fabio C, Eric K (1997) Depth from scattering
  25. He K, Sun J, Tang X (2011) Single image haze removal using dark channel prior. IEEE Trans Pattern Anal Mach Intell 33:2341–2353
  26. Jain AK (2010) Data clustering: 50 years beyond K-means. Pattern Recogn Lett 31:651–666
  27. Zahra S, Ghazanfar MA, Khalid A, Azam MA, Naeem U, Prugel-Bennett A (2015) Novel centroid selection approaches for K means-clustering based recommender systems. Inform Sci 320:156–189
  28. McFeeters SK (1996) The use of the normalized difference water index (NDWI) in the delineation of open water features. Int J Remote Sens 17:1425–1432
  29. Zhang X, Xiao P, Feng X, Yuan M (2017) Separate segmentation of multi-temporal high-resolution remote sensing images for object-based change detection in urban area. Remote Sens Environ 201:243–255

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. School of Remote Sensing Information Engineering, Wuhan, China
