
How to Specify an Electro-optical Earth Observation Camera? A Review of the Terminologies Used and its Interpretation

Abstract

Since the launch of Landsat-1 in 1972, several earth observation satellites have been launched by government and private agencies. Selecting the optimum data for a specific application and combining data from different sources require good clarity and understanding of the various specifications provided by the sensor manufacturers. Many of the sensor parameters are not unambiguously defined in the product data-sheet. This paper reviews how the specifications adopted to describe various performance parameters of earth observation electro-optical cameras are understood, and addresses the need for consistency in defining them. The paper also proposes a figure-of-merit to compare various cameras for their target discrimination capability. A possible standard set of specifications to be provided by all camera developers is suggested, which would benefit both developers and users of such instruments.

Introduction

The launch of LANDSAT-1 in 1972 provided a unique opportunity to the scientific community to use space-borne multispectral data for monitoring and management of earth resources and environment. As users became aware of the potential of earth observation data, both as a scientific tool to understand the earth system and for practical applications in various disciplines, they started demanding improved data quality. While the earlier data providers were national government agencies, we now have a large number of private agencies competing with each other to meet user demands. Though one has a choice of data from various sources, the nuances of earth observation camera specifications are not well understood by end users, which makes it difficult for them to choose the best/optimum sensor for a specific application. Some of the issues are: are the terminologies used unambiguously defined; is there standardization in the test procedures so that the values provided by different sensor manufacturers can be compared; can we define a set of parameters to be provided by all camera manufacturers; and so on. These questions are more important now than a few decades back, since there are now several earth observation cameras designed and developed by different manufacturers. Joseph (2000) brought out the need for standardization of the definitions of electro-optical sensor specifications. Here, we elaborate some of the concerns pointed out earlier and pose some additional issues of concern from an end user's perspective. These issues are discussed under four domains: spatial, spectral, radiometric and temporal. The discussion is limited to electro-optical earth observation cameras. Since most modern sensors use an array of discrete detector elements, the explanation presented in the paper assumes such a configuration.

Spatial Domain

Parameters of relevance here are spatial resolution, swath (or the area covered in a frame), and, at the product level, geometric fidelity, i.e., faithful reproduction of the relative dimensions on the ground in the imagery. Of the three parameters mentioned above, spatial resolution is talked about most, but understood least for what it implies to an end user. There are two ways in which spatial resolution is represented: (i) Instantaneous Field of View (IFOV) and (ii) Instantaneous Geometric Field of View (IGFOV).

(i) Instantaneous Field of View (IFOV)

    The IFOV represents the cone angle from which one discrete detector element of the focal plane array receives the radiation from the ground (Fig. 1). Assuming a square detector element of size a × a, the IFOV β is given by

    Fig. 1 Schematics showing IFOV, IGFOV, FOV and swath

    $$\beta = \frac{a}{f}$$
    (1)

    where f is the effective focal length of the optical system; a and f are expressed in the same unit, say, mm. The IFOV is instrument-specific and independent of platform height.

(ii) Instantaneous Geometric Field of View (IGFOV)

    What is of interest to the user is how much area on the ground is seen by a single detector element. This is given by the projection of the detector on to the ground through the optical system. This is referred to as Instantaneous Geometric Field of View (IGFOV) or Geometric Instantaneous Field of View (GIFOV).

    $${\text{IGFOV}} = \beta h$$
    (2)

    where h is the platform height.
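As a quick numerical illustration of Eqs. (1) and (2), the short sketch below computes the IFOV and IGFOV; the detector size, focal length and orbit height are assumed, illustrative numbers only, not taken from any specific sensor's data-sheet.

```python
def ifov_rad(detector_size_mm: float, focal_length_mm: float) -> float:
    """IFOV (Eq. 1): detector size divided by effective focal length, in radians."""
    return detector_size_mm / focal_length_mm

def igfov_m(ifov: float, platform_height_km: float) -> float:
    """IGFOV (Eq. 2): IFOV projected onto the ground from the platform height, in metres."""
    return ifov * platform_height_km * 1000.0

# Illustrative (assumed) values: 13 um detector, 200 mm focal length, 800 km orbit.
beta = ifov_rad(detector_size_mm=0.013, focal_length_mm=200.0)
print(f"IFOV  = {beta * 1e6:.1f} microradians")
print(f"IGFOV = {igfov_m(beta, platform_height_km=800.0):.1f} m")
```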

Instrument manufacturers usually specify IGFOV as spatial resolution. That is, when electro-optical sensors with discrete detector arrays at the focal plane are used for generating imagery (such as LANDSAT MSS, SPOT HRV and IRS LISS), the spatial resolution stated represents the projection of the detector element onto the ground through the imaging optics. Therefore, the spatial resolution of 73 m stated for the IRS LISS-1 camera only means that the projection of one CCD element onto the ground through the imaging optics from the satellite orbit is 73 m, that is, the 'footprint' of the CCD detector element on the ground. This footprint depends on the IFOV of the camera and the platform height. In the example considered, for the LISS-1 camera, 73 m × 73 m is the smallest area on the ground from which the radiance is recorded as a separate unit. In remote sensing, we are interested in distinguishing different targets of finite size in the scene being imaged. From a user's viewpoint, spatial resolution implies the sensor's ability to image (record) closely spaced objects which have different reflectance/emittance as separate entities in the image (data), as they are in the scene. When one says the resolution of the IRS LISS-1 camera is 73 m, one intuitively assumes that the dimension of the smallest object one can detect in LISS-1 data is 73 m. However, it does not guarantee that all footprints of 73 m × 73 m dimension are discernible in IRS LISS-1 data. It is also true that one may be able to detect a high-contrast object that is smaller than the IGFOV; one can see roads and even railway tracks in LISS-1 imagery. (In such situations, though the object is detected, it is identified by context: the linear structure and the type of curvature encountered can distinguish whether it is a road or a railway track.) Thus, IFOV/IGFOV alone does not adequately define the response of an imaging system to discriminate different objects. Since understanding this limitation is very important from a user's viewpoint, we shall now discuss the concern in using only IGFOV as a measure of spatial resolution. The basic concept is similar to how we distinguish objects from their surroundings.

Modulation Transfer Function (MTF)

In visual perception, we can distinguish two objects only if there is a brightness/color difference between them, which is referred to as contrast (Fig. 2). The greater the contrast, the easier it is to distinguish them; that is why a white ball on a lawn can be discriminated more easily than a green ball in the same place. There has to be a minimum contrast at the eye of a given observer, below which an object cannot be detected. (There is a similar concept in digital analysis also, which is discussed in a later section.) This lower limit of contrast is called the contrast threshold. To compare targets with different illumination levels, the contrast is normalized with the total illumination, and the resulting quantity is called the contrast modulation CM.

$$C_{\text{M}} = \frac{L_{\max} - L_{\min}}{L_{\max} + L_{\min}}$$
(3)
Fig. 2 Concept of contrast ratio: (a) a black and white target, and (b) the intensity distribution

The contrast modulation is always positive with values ranging between zero and one. When a target is imaged by a camera, the instrument further reduces the contrast so that the contrast in the image plane is less than what is in the object space. This reduction in contrast from object space to image space is represented by Modulation Transfer Function (MTF). That is

$${\text{MTF}} = \frac{{{\text{contrast}}\;{\text{modulation}}\;{\text{in}}\;{\text{image}}\;{\text{space}}}}{{{\text{contrast}}\;{\text{modulation}}\;{\text{in}}\;{\text{object}}\;{\text{space}}}}$$
(4)

The term MTF strictly applies to a sinusoidal input target. When a 'square wave' target is used, as shown in Fig. 2, the term used is Contrast Transfer Function (CTF). However, for our discussion the term MTF is used irrespective of the nature of the target. Many factors contribute to the MTF of the imaging system, such as optics quality, detector performance and platform stability. In addition, scattering in the atmosphere increases the background radiance, thereby reducing the contrast. Thus, an object which is resolved in imagery taken in a 'clear' atmosphere may not be resolved if taken in a hazy atmosphere, due to the reduction in contrast. A typical MTF curve is given in Fig. 3. The MTF is usually plotted against spatial frequency. In an image, spatial frequency describes the variation of the brightness (digital number, DN) values over a given distance in the image. It is expressed as line pairs/mm (lp/mm) when referred to the focal plane. When the reflectance value varies smoothly over a distance (length), i.e., there are very few changes in brightness value over a given area in the image, the image has low-frequency content, whereas when one encounters frequent/sudden changes in the reflectance value, the feature can be considered as having a high-frequency component. A large mono-cropped area is an example of low spatial frequency. Many natural and man-made features in imagery, such as geological faults, land–water boundaries, roads and urban areas, have high-frequency components. The MTF acts as a low-pass filter on the scene's spatial frequency content, passing low frequencies and attenuating high frequencies (Boreman 2001). The effect of this is to reduce the edge sharpness of the imagery.
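To make Eqs. (3) and (4) concrete, the sketch below computes the contrast modulation of a bar target in object space and in image space and derives the MTF as their ratio; the radiance values are illustrative assumptions, not measurements from any particular sensor.

```python
def contrast_modulation(l_max: float, l_min: float) -> float:
    """Contrast modulation C_M (Eq. 3): (Lmax - Lmin) / (Lmax + Lmin)."""
    return (l_max - l_min) / (l_max + l_min)

# Assumed radiances of a bright/dark bar target in object space (arbitrary units).
cm_object = contrast_modulation(l_max=100.0, l_min=20.0)   # ~0.667
# The camera blurs the target, so recorded bright bars are dimmer and dark bars
# brighter; these image-space values are again purely illustrative.
cm_image = contrast_modulation(l_max=80.0, l_min=40.0)     # ~0.333

mtf = cm_image / cm_object                                  # Eq. (4)
print(f"Object-space C_M = {cm_object:.3f}")
print(f"Image-space  C_M = {cm_image:.3f}")
print(f"MTF at this spatial frequency = {mtf:.2f}")
```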

Fig. 3 MTF plotted against spatial frequency. The shape varies depending on the aberrations in the imaging chain; however, the general trend remains the same: as the spatial frequency increases, the MTF decreases

Since the effect of MTF is to reduce the contrast in the image plane, it is a critical parameter in determining the ability to detect and identify objects from the image. Is it justifiable to specify the sensor quality just in terms of geometric projection of the detector without any consideration of the contrast reduction it produces? How does one judge image quality of different sensors with the same IGFOV, but different MTFs? In summary, spatial resolution—IGFOV—alone as stated by the sensor manufacturer does not reflect the ability of a camera to discriminate objects of different dimensions. One of the additional parameters required is MTF, and other parameters of importance for target discrimination are discussed in a later section.

Ground Sampling Distance (GSD)

Another terminology used by sensor manufacturers is 'Ground Sampling Distance' (GSD). Most remote sensing systems use the GSD as the measure of spatial resolution (Driggers 2003). To understand GSD, it is necessary to know how the data are collected. Let us consider a push-broom camera. In the across-track direction, data are sampled as per the CCD element size. In the along-track direction, the radiation received is normally integrated for a duration equal to the time taken by the satellite to move through one IGFOV, which is called the integration time. (In principle, the data can be generated by sampling at specified ground distances smaller than the IGFOV; however, this is not the usual mode.) Thus, in the output medium, each pixel corresponds to one IGFOV. However, it is possible to resample (interpolate) the data so that a pixel on the output medium covers less than the IGFOV. What the manufacturer gives as GSD is the distance between pixel centers measured on the ground after resampling the original data to a value lower than the recorded IGFOV. Thus, GSD refers only to the detector sampling projected on the ground, without considering the effects the optical system may have on the spatial resolution (Driggers 2003). For example, in an image with a 5 m GSD, the distance between two consecutive pixel centers measured on the ground is 5 m. But this is not necessarily the IGFOV. For instance, a camera with an IGFOV of 6.25 m collects four samples over a distance of 25 m, but if the GSD specified in the data product is 5 m, over the same distance we have five values. Can a 5 m IGFOV sensor at 1 m GSD have the same performance as a 1 m IGFOV sensor? What is the maximum allowable ratio of IGFOV over GSD? These are issues not well addressed either by sensor designers or users. From image simulations, Fiete (2007) has shown that different systems, all designed to capture images at the same GSD, show clear image quality differences between the systems: “If the image quality requirement was stated in GSD alone, then all of these systems would meet the same image quality.”
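The arithmetic behind the 6.25 m versus 5 m example above can be sketched as follows; the radiance profile values are assumed purely for illustration.

```python
import numpy as np

igfov = 6.25   # native sample spacing on the ground (m)
gsd = 5.0      # product pixel spacing after resampling (m)
length = 25.0  # ground distance considered (m)

# Native sample positions (one value per IGFOV) and an assumed radiance profile.
native_x = np.arange(0.0, length, igfov)             # 4 samples: 0, 6.25, 12.5, 18.75
native_radiance = np.array([40.0, 42.0, 80.0, 78.0])

# Resampled (interpolated) pixel positions at the product GSD.
product_x = np.arange(0.0, length, gsd)               # 5 samples: 0, 5, 10, 15, 20
product_radiance = np.interp(product_x, native_x, native_radiance)

print(f"Native samples over {length} m : {len(native_x)}")
print(f"Product pixels over {length} m : {len(product_x)}")
print("Interpolated product values     :", np.round(product_radiance, 1))
```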

MTF and Radiometry

MTF also affects the radiometric fidelity, that is, how well the image preserves the radiance distribution compared to what is in the scene imaged. From a radiometric consideration, the effect of MTF is to 'spill' energy to adjacent targets. If a black and white target is considered, as in Fig. 2, due to the spillover of radiance, MTF makes the black strip 'less black' and the white strip 'less white.' How the radiance of a target recorded by a remote sensing instrument depends on the surrounding objects was pointed out by Norwood (1974). If the target under consideration is surrounded by objects with higher radiance, then the measured radiance of the target is higher than the actual target radiance. Here, the higher radiance from the surroundings 'spills over' to the target under consideration. The reverse can happen if the surroundings have a lower value compared to the target radiance. Thus, the remote sensor measures an apparent radiance depending on the surroundings. This can affect the classification accuracy. To compare this effect for different sensors, Joseph (2000) introduced the term Radiometrically Accurate IFOV (RAIFOV): the IFOV at which the MTF is 0.95. The radiometrically accurate minimum target size is given by the RAIFOV projected onto the ground; targets with dimensions larger than this can be considered to have minimal radiometric error introduced due to MTF. Since we are concerned with comparing different imaging systems, we can consider RAIFOV as a parameter to determine how large a target must be to ensure that the radiometric errors caused by MTF are less than a specified value.
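A minimal sketch of how the RAIFOV could be estimated from a measured MTF curve: the spatial frequency at which the MTF drops to 0.95 is found by interpolation and then converted to a ground dimension. The MTF samples below are an assumed, illustrative curve, not data for any real camera, and the conversion of spatial frequency to target size (one cycle spanning two ground elements) is likewise an assumption of this sketch.

```python
import numpy as np

# Assumed MTF samples versus spatial frequency (cycles per metre on the ground).
freq = np.array([0.0, 0.005, 0.01, 0.02, 0.04, 0.08])   # cycles/m
mtf = np.array([1.0, 0.99, 0.97, 0.90, 0.70, 0.35])

# MTF decreases with frequency, so flip both arrays for np.interp (needs ascending x).
freq_at_095 = np.interp(0.95, mtf[::-1], freq[::-1])

# Assume one cycle spans two ground elements, so the element size is 1/(2f).
raifov_ground = 1.0 / (2.0 * freq_at_095)
print(f"Spatial frequency where MTF = 0.95 : {freq_at_095:.4f} cycles/m")
print(f"Radiometrically accurate target size ~ {raifov_ground:.1f} m")
```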

To make an overall assessment of the sensor's target discrimination capability, the shape of the system MTF curve also matters. Figure 4 gives the MTF curves for two imaging systems having different aberrations. Though both have the same MTF at the IGFOV, at lower spatial frequencies (that is, for targets larger than the IGFOV) 'A' can discriminate lower-contrast objects better than 'B.' While the MTF curve shape is not a convenient way to specify the system MTF, the value at twice the IGFOV can be an additional parameter specified as part of the system performance parameters.

Fig. 4 Effect of the shape of the MTF curve on image radiometry. Curves A and B are for two optical systems having different aberrations. Though both have the same MTF at the IGFOV, for targets with dimensions larger than the IGFOV 'A' can discriminate lower-contrast objects better than 'B'

Radiometric Domain

The earth observation camera has to distinguish various objects based on the radiance differences between the objects in the scene being imaged. The smallest radiance difference that can be distinguished by the sensor is referred to as the radiometric resolution. At the sensor level, it is represented as the noise equivalent radiance change, NEΔL. This is defined as the change in the input radiance which gives a signal output of the sensor equal to the root mean square (RMS) noise at that signal level. (This is also represented in terms of reflectance change, NEΔρ, or temperature change, NEΔT.) Radiometric resolution represents the smallest change in radiance an imaging camera can measure; the finer the radiometric resolution of a sensor (the lower the number), the more sensitive it is to small differences in reflected or emitted energy. It depends on several parameters such as the signal-to-noise ratio (S/N), the saturation radiance setting and the number of quantization bits.

In the literature, it is observed that the radiometric resolution is attributed to the number of digitization levels (Tedesco 2014; Gibson 2000), giving the impression that the higher the number of bits, the better the radiometric resolution. In other words, such a statement implies that a system with a higher number of bits can detect a smaller change in radiance entering the camera. However, this is not correct, since the number of quantization bits alone cannot represent radiometric resolution. To elucidate the point, let us go to the basics. Consider a camera generating an output voltage proportional to the input radiance, giving a maximum (at the saturation radiance) output analog signal of 10 V. Suppose this is digitized to 10 bits (1024 levels); each step (sampling interval) is then about 0.01 V (10/1024). Therefore, the sensor can theoretically detect a minimum change of 0.01 V. In the parlance of metrology, this is the least count of the instrument. In a hypothetical situation where the instrument has no source of error (that is, the instrument has no noise), one can of course measure a change of one DN value, that is, 0.01 V. In such a case, the number of digitization bits reflects the radiometric resolution. However, any electronic instrument has a certain noise associated with it. The imaging camera output also has noise in addition to the signal we are interested in. Noise sources include the detectors and the electronics, including digitization (Morfitt et al. 2015). More information about the types of noise sources in a digital remote sensing system can be found in Fiete and Tantalo (2001). In the above sensor, let us assume a noise of 0.04 V. Therefore, even though the camera has a theoretical least count of 0.01 V, the noise does not permit detection of a 0.01 V change. Another way to understand the effect of noise is as follows. The 0.04 V noise corresponds to four counts (steps), equal to 2 bits (2² = 4); therefore, the 10 bits of resolution specified for the measurement system are diminished by two bits, so the A/D converter actually resolves only 8 bits, not 10 bits. Therefore, depending on the noise level, the sensor actually distinguishes fewer levels than the number of bits to which the data are digitized. Hence, attributing the digitization bits to radiometric resolution without consideration of the system noise is not correct. Though the quantization error in the measured radiance is larger when fewer bits are used, this uncertainty is much less compared to the error produced by other factors. However, a larger number of bits has the advantage of a large dynamic range, which helps if one wants to measure radiance from the sea surface to snow. Joseph (2015) has shown that if the S/N and saturation setting are properly chosen, in principle, for a specific radiance, a 6-bit digitization system can have a better radiometric resolution than a 7-bit system. Therefore, the manufacturer should specify the number of bits and the noise equivalent radiance (NEΔL) or the S/N at the reference radiance.
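The worked example in the preceding paragraph can be reproduced with a few lines of arithmetic; the 10 V full scale, 10-bit digitization and 0.04 V RMS noise are the assumed values used in the text.

```python
import math

full_scale_v = 10.0      # saturation output of the analog chain (V)
n_bits = 10              # digitization bits
noise_rms_v = 0.04       # assumed RMS noise referred to the A/D input (V)

lsb_v = full_scale_v / 2**n_bits          # size of one count (least count), ~0.0098 V
noise_counts = noise_rms_v / lsb_v        # noise expressed in counts, ~4.1
effective_bits = n_bits - math.log2(noise_counts)

print(f"Least count          : {lsb_v * 1000:.1f} mV")
print(f"Noise in counts      : {noise_counts:.1f}")
print(f"Effective resolution : {effective_bits:.1f} bits (out of {n_bits})")
```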

Other aspects in the radiometric domain include radiometric fidelity and calibration accuracy. As mentioned earlier, radiometric fidelity is the extent to which the image preserves the relative or absolute energy distribution in the scene (Leachtenauer and Driggers 2001). The radiometric fidelity can be affected by the lens behavior and the responsivity variation of the focal plane array. These are usually calibrated in the laboratory, and corrections are applied during product generation.

In many classification algorithms, the radiance value of the DN (in radiometric units, say mW/cm²/sr/μm) is not required. However, accurate radiometric calibration is critical for all quantitative applications of remotely sensed satellite data (Bhatt et al. 2018), such as extraction of the amount of chlorophyll in ocean remote sensing. The instrument is calibrated in the laboratory to generate a transfer function between the raw counts (DN) recorded by the imaging system and the radiance arriving at the imaging sensor. Generally, the accuracy of calibration is not made available as part of the specification of the camera.

Spectral Domain

In the realm of spectral information, parameters of importance include the number of spectral bands, the position of the band in the electromagnetic spectrum represented by the central wavelength (λc), and the bandwidth (Δλ), the wavelength interval in the electromagnetic spectrum within which the measurement is made. These are application driven, and the user chooses them depending on the specific problem to be studied. However, the definition of λc and Δλ is not straightforward. The bandwidth is defined by lower (λ1) and upper (λ2) cutoff wavelengths. The spectral resolution Δλ is given by (λ2 − λ1). Locating λ1 and λ2 in the spectral response of a practical system is difficult. The bandwidth is usually defined as the width at 50% of the maximum transmission of the filter, referred to as the full width at half maximum (FWHM) (Fig. 5a). This is strictly valid only if the spectral response curve is Gaussian in shape or close to it. However, in a practical bandpass filter, the spectral response is not that smooth and can have some 'ringing' (Fig. 5b), and assigning the peak value is not straightforward. Also, when the response is skewed, assigning the FWHM as the bandwidth may not be appropriate. Palmer (1984) suggested a technique called the 'moments' method to compute λc and ∆λ, which avoids the above problem of identifying the peak value. Following Palmer (1984), the central wavelength λc and ∆λ are given as

$$\lambda_{\text{c}} = \frac{\int_{\lambda_{\min}}^{\lambda_{\max}} \lambda \, R\left( \lambda \right){\text{d}}\lambda}{\int_{\lambda_{\min}}^{\lambda_{\max}} R\left( \lambda \right){\text{d}}\lambda}$$
(5)
$$\lambda_{1} = \lambda_{\text{c}} - \sqrt{3}\,\sigma \quad {\text{and}} \quad \lambda_{2} = \lambda_{\text{c}} + \sqrt{3}\,\sigma$$
(6)
$$\Delta \lambda = 2\sqrt{3}\,\sigma$$
(7)

where σ is given by

$$\sigma^{2} = \frac{\int_{\lambda_{\min}}^{\lambda_{\max}} \lambda^{2} R\left( \lambda \right){\text{d}}\lambda}{\int_{\lambda_{\min}}^{\lambda_{\max}} R\left( \lambda \right){\text{d}}\lambda} - \lambda_{\text{c}}^{2}$$
(8)

where λmin and λmax are the minimum and maximum wavelengths beyond which the spectral response is zero. The merit of this method is that the values do not depend on the shape of the spectral response. Pandya et al. (2013) have used this technique to compare the spectral characteristics of the sensors onboard the Resourcesat-1 and Resourcesat-2 satellites. Their study shows that the central wavelengths obtained by the two methods are very close, but the bandwidth obtained by the moments method is lower than that obtained by the conventional FWHM method by about 5 to 13%. It is suggested that the moments method be adopted by all sensor manufacturers so that inter-comparison is meaningful.
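A minimal sketch of the moments method (Eqs. 5 to 8), assuming the relative spectral response R(λ) is available as samples on a uniform wavelength grid so the integrals reduce to sums; the triangular response used here is purely illustrative.

```python
import numpy as np

def moments_band(wavelength_nm, response):
    """Central wavelength and bandwidth by the moments method (Eqs. 5-8).

    Assumes a uniform wavelength grid, so plain sums can replace the integrals
    in the ratios.
    """
    norm = response.sum()
    lam_c = (wavelength_nm * response).sum() / norm                          # Eq. (5)
    var = (wavelength_nm**2 * response).sum() / norm - lam_c**2              # Eq. (8)
    sigma = np.sqrt(var)
    delta_lam = 2.0 * np.sqrt(3.0) * sigma                                   # Eq. (7)
    lam1, lam2 = lam_c - np.sqrt(3.0) * sigma, lam_c + np.sqrt(3.0) * sigma  # Eq. (6)
    return lam_c, delta_lam, lam1, lam2

# Illustrative (assumed) triangular response centred near 550 nm.
wl = np.linspace(500.0, 600.0, 201)
resp = np.clip(1.0 - np.abs(wl - 550.0) / 30.0, 0.0, None)

lam_c, dlam, lam1, lam2 = moments_band(wl, resp)
print(f"lambda_c = {lam_c:.1f} nm, delta_lambda = {dlam:.1f} nm "
      f"(lambda_1 = {lam1:.1f} nm, lambda_2 = {lam2:.1f} nm)")
```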

Fig. 5 Spectral response curves: (a) explains the definition of FWHM and (b) a practical filter spectral response (Resourcesat-1 AWiFS B2)

Depending on the filter characteristics, the response of any band includes a contribution from outside the intended band, which is referred to as the out-of-band (OOB) response. The out-of-band response is another parameter which is not addressed properly. For instruments using narrow spectral bandwidths, such as SeaWiFS, the out-of-band contribution could affect the accuracy of the derived products (Cui et al. 2018).

The focal plane arrangement of the sensor can also affect the spectral characteristics of the multispectral data collected. With the advent of nano-satellites, all components of the satellite have to be optimized for low volume and weight. For the imaging instrument, to make a compact system, a frame CCD equipped with a Bayer mask filter is used in the focal plane to separate the colors (Syafrudin et al. 2013; Triana et al. 2015; Hein 2017; Dove constellation 2019). It is important to know how this differs from the conventional imaging focal plane arrangement. To obtain multispectral data, the normal arrangement is to use an independent detector system for each spectral band. This is achieved in different configurations, such as using independent collecting optics for each band (as in IRS LISS), using a single collecting optics and suitably splitting the focal plane (as in SPOT), or displacing the detector arrays in the focal plane (as in Landsat MSS). In front of each detector array, an appropriate filter is used to select the spectral band. In many earth observation nano-satellites, to miniaturize the system, a single CCD/CMOS area array is used as the detector with a Bayer mask filter placed in front, an arrangement first introduced for visual imaging. In a Bayer mask filter, each square of four pixels is filtered to one red, one blue and two green colors (Fig. 6). It may be mentioned that many consumer color cameras use similar systems. They are designed to create a visually pleasing image and are not intended to be a radiance measurement tool. When a Bayer mask filter is used over a CCD area array, 50% of the pixels are covered by the green filter, 25% by the blue and the remaining 25% by the red. Therefore, a pixel under the green filter does not have the information of the other two colors; the same is the case with the blue and red channels. Thus, for any pixel, only one color's information is available. Therefore, for example, to generate a full green image, for those pixels which did not receive green data the value has to be interpolated using the neighborhood information. This process of interpolating the missing color data is called de-mosaicing (Teubner and Brückner 2019); a minimal sketch of such interpolation is given after Fig. 6. Though finally we get the full image in blue, green and red, its limitations should be understood. Does the interpolated pixel retain the relative radiometric values of the target imaged? How does this affect classification accuracy? These issues need to be understood by the end user.

Fig. 6 Schematics showing Bayer mask filter functioning. In (a) and (b) the colors represent those pixels which respond to that color only; (c) red, green and blue bands after filling the missing color information by interpolation of data from the neighborhood pixels (color figure online)
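The following is a minimal sketch of bilinear de-mosaicing of the green channel, assuming an RGGB Bayer layout; real camera pipelines use more elaborate (often proprietary) algorithms that also exploit the red and blue neighborhoods. The interpolated values are estimates rather than measured radiances, which is the origin of the radiometric concern raised above.

```python
import numpy as np

def demosaic_green(bayer: np.ndarray) -> np.ndarray:
    """Fill the green channel of an RGGB Bayer mosaic by bilinear interpolation.

    At green pixel sites the measured value is kept; at red/blue sites the
    missing green value is estimated as the mean of the (up to four)
    horizontally/vertically adjacent green samples.
    """
    h, w = bayer.shape
    # For an RGGB layout, green samples sit where (row + column) is odd.
    green_mask = (np.indices((h, w)).sum(axis=0) % 2) == 1
    green = np.where(green_mask, bayer, 0.0).astype(float)

    padded_val = np.pad(green, 1, mode="edge")
    padded_msk = np.pad(green_mask.astype(float), 1, mode="edge")
    neighbor_sum = (padded_val[:-2, 1:-1] + padded_val[2:, 1:-1]
                    + padded_val[1:-1, :-2] + padded_val[1:-1, 2:])
    neighbor_cnt = (padded_msk[:-2, 1:-1] + padded_msk[2:, 1:-1]
                    + padded_msk[1:-1, :-2] + padded_msk[1:-1, 2:])
    interpolated = neighbor_sum / np.maximum(neighbor_cnt, 1.0)

    return np.where(green_mask, green, interpolated)

# Tiny assumed 4 x 4 mosaic (arbitrary counts) just to show the mechanics.
mosaic = np.arange(16, dtype=float).reshape(4, 4)
print(demosaic_green(mosaic))
```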

Another issue with the use of Bayer filter-based imagery is the spectral response of the filters. Though the responses are centered at the red (600 nm), green (550 nm) and blue (450 nm) wavelengths, they are very broad and overlapping (Fig. 7) compared to the well-defined spectral responses of cameras using interference filters, as in satellites such as IRS, SPOT and Landsat (Johnson et al. 2018). The effect on classification accuracy of out-of-band signals, which produce 'cross-talk' between the bands, should be understood. With the advent of nano-satellites, we can have a flock of satellites, thereby improving temporal resolution. Users need to make their own assessment of the limitations of such systems for their specific application while taking advantage of the frequent revisit capability.

Fig. 7 Typical Bayer filter spectral response. (Adapted from https://www.dxomark.com/glossary/color-depth/)

Temporal Resolution

One important aspect of satellite-based remote sensing is its ability to make repeat observations of a target with near-identical observation geometry. This is possible since, in an 'orbit cycle' of a fixed number of days, the satellite retraces its path, passing over the same point on the Earth's surface directly below the satellite. The interval of time required for the satellite to complete its orbit cycle varies with each satellite, depending on the swath of the imagery collected. For example, Landsat-1 with a swath of 185 km has a repeat cycle of 18 days, while the SPOT-1 satellite with a swath of 117 km (for the two HRVs combined) has a repeat cycle of 26 days. How frequently a target can be observed with the same image acquisition conditions (look angle, sun illumination) is called the temporal resolution; the temporal resolution of Landsat-1 is 18 days. With the advancement of sensor and satellite technology, one can revisit a target by manipulating the sensor view direction or by changing the satellite orientation. Such a capability can reduce the time to image a target compared to the temporal resolution. For example, SPOT-1 HRV has a temporal resolution of 26 days, but with its oblique viewing capability the revisit frequency at equatorial regions can be as short as 3 days (Baghdadi and Zribi 2016). How often a target can be visited using such techniques is referred to as the revisit time. In this case, though the imaged area is the same, the acquisition conditions are not. This aspect should be considered when multi-temporal data are used for classification.

Figure of Merit

It is useful to establish a 'figure of merit' for the imaging system so that users can make an informed choice of the data that best meet their requirements. It is traditional to use GSD as the only figure of merit to convey the image quality of a remote sensing system (Driggers 2003; Fiete 2007). From what is discussed in this paper, one can understand that a single parameter like GSD alone does not tell us the real capability of an earth observation system. We shall now define a figure of merit for a fair comparison of the ability of various earth observation systems operating in the visible–IR region to separate different classes in the imaged scene. Users expect the imaging system to discriminate objects that are as small as possible and have the least difference in radiance emanating from them, that is, target discrimination capability. The ability to distinguish two targets depends on the object contrast, the MTF and the noise equivalent radiance. In order that the contrast in the object space is minimally reduced by the recording system, we should choose a system that gives the highest MTF. One would also like a system with a high radiometric resolution (the lowest value of NEΔL), so that as small a change in radiance as possible can be recorded. Therefore, we define the figure of merit (FOM) of a camera for feature separation as the ratio of the MTF at the IGFOV to the noise equivalent radiance at a defined reference radiance level.

$${\text{FOM}} = \frac{{{\text{MTF}}_{\text{IGFOV}} }}{{{\text{NE}}\Delta L_{\text{Ref}} }}$$
(9)

Since S/N can be dependent on the input radiance, it is necessary to specify a reference level at which NE∆L is expressed. Reference radiance at 90% and 10% of saturation value could be adopted. For sensors operating in the thermal band, NEΔT is generally expressed at 300 K.

To make FOM independent of units, we may use the value of signal-to-noise ratio (SNR) at the reference radiance instead of radiometric resolution. Since the system with the highest SNR has a better performance, the FOM can be rewritten as

$${\text{FOM}} = \left[ {{\text{MTF}}_{\text{IGFOV}} } \right] \times \left[ {{\text{SNR}}_{\text{Ref}} } \right]$$
(10)
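A small sketch of how the two forms of the figure of merit (Eqs. 9 and 10) might be computed; the MTF, NEΔL and SNR numbers below are purely assumed examples for hypothetical cameras.

```python
def fom_radiance(mtf_at_igfov: float, ne_delta_l_ref: float) -> float:
    """FOM as MTF at IGFOV divided by NE-delta-L at the reference radiance (Eq. 9)."""
    return mtf_at_igfov / ne_delta_l_ref

def fom_unitless(mtf_at_igfov: float, snr_ref: float) -> float:
    """Unit-independent FOM as MTF at IGFOV times SNR at the reference radiance (Eq. 10)."""
    return mtf_at_igfov * snr_ref

# Assumed example values; NE-delta-L in the same radiance units as the data-sheet.
print("FOM (Eq. 9),  camera 1:", fom_radiance(mtf_at_igfov=0.25, ne_delta_l_ref=0.005))
print("FOM (Eq. 10), camera 1:", fom_unitless(mtf_at_igfov=0.25, snr_ref=200.0))
print("FOM (Eq. 10), camera 2:", fom_unitless(mtf_at_igfov=0.35, snr_ref=120.0))
```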

Recommended Set of Specifications

Since earth observation data are now available from many sources, it is necessary to make the user community knowledgeable about the capability of the data so that they can make a proper decision on which data suit their specific application best. To assess the quality of an image, a set of instrument parameters is required. It is recommended that all data providers agree to provide a minimum set of specifications for their instruments. Joseph (2015) has recommended a set of instrument parameters that should be specified for every earth observation camera. A modified version is given below:

A. Spatial domain

(1) IFOV/IGFOV; (2) GSD (at product level); (3) RAIFOV; (4) FOV/Swath; (5) MTF at IFOV; (6) MTF at twice IFOV

B. Spectral domain

(1) Central wavelength; (2) Bandwidth (using the moments method); (3) Out-of-band contribution

C. Radiance domain

(1) Saturation radiance (SR); (2) S/N (a) at 90% SR and (b) at 10% SR; (3) Number of digitization bits; (4) Calibration accuracy

D. Temporal domain (at spacecraft level)

(1) Temporal resolution; (2) Revisit capability

E. Data integrity

Compression details

Most of the above parameters are evaluated in the laboratory as part of the characterization and qualification of the camera system. It is also necessary to have a broad agreement among the manufacturers of the earth observation camera system regarding the procedures to be adopted to measure each of the above parameters. This will make the comparison between various sensors more meaningful.
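As an illustration of how such a minimum specification set could be captured as machine-readable metadata alongside the data product, the sketch below defines a hypothetical record with the fields listed above; the field names and example values are assumptions for illustration, not an established standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CameraSpecification:
    """Hypothetical container for the recommended minimum set of specifications."""
    # Spatial domain
    igfov_m: float
    gsd_m: float
    raifov_m: float
    swath_km: float
    mtf_at_ifov: float
    mtf_at_twice_ifov: float
    # Spectral domain (per band, moments method)
    central_wavelength_nm: List[float] = field(default_factory=list)
    bandwidth_nm: List[float] = field(default_factory=list)
    out_of_band_fraction: List[float] = field(default_factory=list)
    # Radiance domain
    saturation_radiance: float = 0.0
    snr_at_90pct_sr: float = 0.0
    snr_at_10pct_sr: float = 0.0
    digitization_bits: int = 0
    calibration_accuracy_pct: float = 0.0
    # Temporal domain and data integrity
    temporal_resolution_days: float = 0.0
    revisit_days: float = 0.0
    compression: str = "none"

# Illustrative, entirely fictitious example entry.
spec = CameraSpecification(
    igfov_m=10.0, gsd_m=10.0, raifov_m=25.0, swath_km=120.0,
    mtf_at_ifov=0.25, mtf_at_twice_ifov=0.45,
    central_wavelength_nm=[560.0, 660.0, 830.0],
    bandwidth_nm=[60.0, 60.0, 100.0],
    out_of_band_fraction=[0.02, 0.02, 0.03],
    saturation_radiance=55.0, snr_at_90pct_sr=300.0, snr_at_10pct_sr=90.0,
    digitization_bits=10, calibration_accuracy_pct=5.0,
    temporal_resolution_days=24.0, revisit_days=5.0,
    compression="lossless")
print(spec.mtf_at_ifov * spec.snr_at_90pct_sr)  # FOM of Eq. (10) at 90% SR
```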

Data Product-Level Information

What we have discussed so far is the performance at the instrument level. However, the end user deals only with the data product supplied to him. The data from a satellite-borne instrument suffer degradation during the process of imaging due to various factors such as jitter and drift of the satellite, the intervening atmosphere, degradation of any subsystem in the course of operation, and data transmission-related issues, resulting in loss of image quality. The user is concerned with the quality of the data product received for a particular application. Several quality metrics have been developed to assess satellite data products (Qian 2013). While these are useful indicators for comparison of different systems, the end user is mainly concerned with (i) the smallest object that can be discerned with confidence, (ii) the radiometric fidelity, to ensure classification accuracy, and (iii) the geometric fidelity, to allow proper mensuration. The received data undergo different levels of processing (Joseph and Jeganathan 2018) before being supplied to the users. To generate certain higher-level products, one needs information on the in-orbit performance of the imaging system. There are several methodologies developed for in-flight characterization of the camera, such as MTF evaluation (Leger et al. 2004; Blanc and Wald 2009), radiometric calibration (Zhou et al. 2015; Pagnutti et al. 2003) and noise evaluation (Reulke and Weichelt 2012; Ren et al. 2014). Providing the figure of merit discussed in the "Figure of Merit" section at the product level will be very useful for assessing similar data from different data providers for a specific application. The set of specifications mentioned in the "Recommended Set of Specifications" section should also be made available at the product level.

Conclusions

This paper brings out the need for consistency in terminology, definitions of terms and measurement techniques for Earth-observing sensors. Various electro-optical sensor specifications are reviewed, and their implications are addressed from the end user's perspective. The paper also proposes a figure-of-merit to compare various cameras for their target discrimination capability. Since the user deals with data products, it is suggested that this information also be provided at the product level. A set of specifications is recommended to be provided by all camera developers, which would benefit both manufacturers and users of such instruments. It is suggested to have a broad agreement among the manufacturers of earth observation cameras regarding the procedures to be adopted to measure each of the specified sensor parameters. This will make the comparison between various sensors more meaningful.

References

  1. Baghdadi, N., & Zribi, M. (2016). Optical remote sensing of land surfaces: Techniques and methods. Amsterdam: ISTE Press - Elsevier.

  2. Bhatt, R., Doelling, D., Haney, C., Scarino, B., & Gopalan, A. (2018). Consideration of radiometric quantization error in satellite sensor cross-calibration. Remote Sensing,10(7), 1131.

  3. Blanc, P., & Wald, L. (2009). A review of earth-viewing methods for in-flight assessment of modulation transfer function and noise of optical space borne sensors. HAL Id: hal-00745076 https://hal-mines-paristech.archives-ouvertes.fr/hal-00745076. Accessed 14 October 2019.

  4. Boreman, G. D. (2001). Modulation transfer function in optical and electro-optical systems (Vol. 4). Bellingham, WA: SPIE Press.

  5. Cui, T., Ding, J., Jia, F., Mu, B., Liu, R., Xu, P., et al. (2018). Out-of-band response for the coastal zone imager (CZI) Onboard China’s ocean color satellite HY-1C: Effect on the observation just above the sea surface. Sensors,18(9), 3067.

  6. Driggers, R. G. (Ed.). (2003). Encyclopedia of optical engineering: Las-Pho, pages 1025–2048 (Vol. 2). Boca Raton: CRC Press.

  7. Dove Constellation. (2019). Dove satellite constellation. Retrieved September 12, 2019, from https://www.satimagingcorp.com/satellite-sensors/other-satellite-sensors/dove-3m/.

  8. Fiete, R. D. (2007). Image chain analysis for space imaging systems. Journal of Imaging Science and Technology,51(2), 103–109.

  9. Fiete, R. D., & Tantalo, T. A. (2001). Comparison of SNR image quality metrics for remote sensing systems. Optical Engineering, 40, 574–585.

  10. Gibson, P. (2000). Introductory remote sensing principles and concepts. Abingdon: Routledge.

  11. Hein, A. G. A. G. I. (2017). A systems analysis of cubesat constellations with distributed sensors (Doctoral dissertation, Massachusetts Institute of Technology).

  12. Johnson, B. R., McGlinchy, J., Cattau, M., Joseph, M., & Scholl, V. (2018). Harnessing commercial satellite technologies to monitor our forests. In W. Gao, et al. (Eds.), Remote sensing and modeling of ecosystems for sustainability XV (Vol. 10767). Bellingham: SPIE - International Society for Optics and Photonics.

  13. Joseph, G. (2000). How well do we understand Earth observation electro-optical sensor parameters? ISPRS Journal of Photogrammetry and Remote Sensing,55(1), 9–12.

  14. Joseph, G. (2015). Building earth observation cameras. Boca Raton: CRC Press.

  15. Joseph, G., & Jeganathan, C. (2018). Fundamentals of remote sensing. Hyderabad: Universities Press (India) Private Limited.

  16. Leachtenauer, J. C., & Driggers, R. G. (2001). Surveillance and reconnaissance imaging systems: modeling and performance prediction. Artech House

  17. Leger, D., Viallefont, F., Deliot, P., & Valorge, C. (2004). On-orbit MTF assessment of satellite cameras (pp. 67–76). London: Taylor & Francis Group.

  18. Morfitt, R., Barsi, J., Levy, R., Markham, B., Micijevic, E., Ong, L., et al. (2015). Landsat-8 Operational Land Imager (OLI) radiometric performance on-orbit. Remote Sensing,7(2), 2208–2237.

  19. Norwood, V. T. (1974). Balance between resolution and signal-to-noise ratio in scanner design for earth resources systems. Proceedings of SPIE,51, 37–42.

  20. Pagnutti, M., Ryan, R. E., Kelly, M., Holekamp, K., Zanoni, V., Thome, K., et al. (2003). Radiometric characterization of IKONOS multispectral imagery. Remote Sensing of Environment,88(1–2), 53–68.

  21. Palmer, J. M. (1984). Effective bandwidths for LANDSAT-4 and LANDSAT-D’ multispectral scanner and thematic mapper subsystems. IEEE Transactions on Geoscience and Remote Sensing,22(3), 336–338.

  22. Pandya, M. R., Murali, K. R., & Kirankumar, A. S. (2013). Quantification and comparison of spectral characteristics of sensors on board Resourcesat-1 and Resourcesat-2 satellites. Remote Sensing Letters,4(3), 306–314.

  23. Qian, S. E. (2013). Optical satellite signal processing and enhancement. Bellingham, WA: SPIE Press.

  24. Ren, H., Du, C., Liu, R., Qin, Q., Yan, G., Li, Z. L., et al. (2014). Noise evaluation of early images for Landsat 8 Operational Land Imager. Optics Express,22(22), 27270–27280.

  25. Reulke, R., & Weichelt, H. (2012). SNR evaluation of the RapidEye space-borne cameras. Photogrammetrie-Fernerkundung-Geoinformation,2012(1), 29–38.

  26. Syafrudin, H., Hasbi, W., & Rahman, A. (2013). Camera payload systems for LAPAN-A2 experimental microsatellite. In Proceedings of 34th ACRS, Bali, Indonesia.

  27. Tedesco, M. (2014). Remote sensing of the cryosphere. Hoboken: Wiley.

  28. Teubner, U., & Brückner, H. J. (2019). Optical imaging and photography. Berlin: Walter de Gruyter GmbH.

  29. Triana, J. S., Bautista, S., & González, F. A. D. (2015). Identification of design considerations for small satellite remote sensing systems in low earth orbit. Journal of Aerospace Technology and Management,7(1), 121–134.

  30. Zhou, G., Li, C., Yue, T., Jiang, L., Liu, N., Sun, Y., et al. (2015). An overview of in-orbit radiometric calibration of typical satellite sensors. International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences, 40, 235–240.


Acknowledgements

I am thankful to a number of my colleagues in ISRO for critically going through the paper and giving valuable suggestions.

Author information

Correspondence to George Joseph.



Cite this article

Joseph, G. How to Specify an Electro-optical Earth Observation Camera? A Review of the Terminologies Used and its Interpretation. J Indian Soc Remote Sens 48, 171–180 (2020). https://doi.org/10.1007/s12524-020-01105-8


Keywords

  • Resolution
  • MTF
  • Signal-to-noise ratio
  • Revisit period
  • Bayer filter
  • Figure of merit
  • GSD