Linear Color Processing

With deconvolution complete, we now turn our attention to linear color processing. If you've been working with an OSC RGB master, you likely noticed that background modelization greatly improved color balance (Chap. 9). With nasty pink and cyan light-pollution gradients gone, you'd be closer to attaining true color. An OSC master would now be ready for three more steps toward great color balance. First, let's allow monochrome processors to catch up.

ChannelCombination

Previously, you used the ColorSpaces submenu's ChannelCombination to recombine channels (Chap. 9). This is also the way to combine monochromatic R, G, and B masters into a chrominance image. As before, with RGB selected as the Color Space, click each channel under Channels/Source Images, assigning the appropriate master to its corresponding channel. You can alternatively type in the file name or drag and drop the image's View Identifier tab into the corresponding field. Apply Global (blue circle or F6) to create the chrominance image.

BackgroundModelization

Run ABE or DBE on the combined RGB chrominance image (Chap. 9).

At this juncture, imagers using a monochromatic camera would be caught up to OSC imagers, and together we can proceed.

BackgroundNeutralization

The BackgroundNeutralization (BN) process is found in the Process menu’s ColorCalibration submenu and is profiled in the help file. Easy to use, BN is recommended after background modelization, but before color calibration. With gradients removed, you’re better able to provide BN with a sample of neutral background sky (Fig. 12.1).

Fig. 12.1

A preview of featureless background sky is defined as the Reference Image, enabling BackgroundNeutralization to balance the red, green, and blue channels of an RGB image to one another

BN uses this neutral sample to compute an initial mean background level for each color channel, and then applies a per-channel linear transformation to equalize the red, green, and blue components. This results in a color-balanced histogram.
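The per-channel balancing BN performs can be sketched in a few lines of NumPy. This is a toy model, not PI's actual code: the additive offset is one simple form of per-channel linear transformation, and the `ref_box`, `low`, and `high` arguments are hypothetical stand-ins for the Reference Image and the Lower/Upper Limit sliders.

```python
import numpy as np

def neutralize_background(rgb, ref_box, low=0.0, high=0.1):
    """Toy BackgroundNeutralization: shift each channel so the mean
    background level of R, G, and B becomes their common average."""
    y0, y1, x0, x1 = ref_box            # neutral-sky reference region
    sample = rgb[y0:y1, x0:x1, :]
    means = []
    for c in range(3):
        ch = sample[..., c]
        # Only pixels between the Lower/Upper Limit sliders count as background
        bg = ch[(ch >= low) & (ch <= high)]
        means.append(bg.mean())
    target = sum(means) / 3.0
    out = rgb.copy()
    for c in range(3):
        # Additive per-channel offset moves each background mean to the target
        out[..., c] += target - means[c]
    return np.clip(out, 0.0, 1.0)
```

After this step, the mean background of all three channels is identical, which is exactly the color-balanced histogram described above.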

If the Reference Image isn’t specified (<target image>), BN will use the entire image for background reference, and in the absence of any truly neutral sky, this may be the best method. In those difficult cases, you might alternatively choose to forego BN, balancing the histogram manually (Chap. 13).

For most images however, you'll want to provide the process with a good sample of neutral sky, and this is done via a preview. Using the New Preview Mode (Alt+N), define a sample in an area devoid of extended nebulosity. It can be quite small, and it's OK if the field contains some very small, dim stars. In BN, select the preview you just created as the Reference Image. Alternatively, you could enable Region of Interest (ROI) and choose the preview from there.

If you feel that the sample is not representative of all of the relatively neutral areas in the image, you can define additional previews in other regions.

Like all processes that accept a reference image, BN can only use one reference at a time. It's for this reason that we take a quick detour to learn about a useful script.

PreviewAggregator Script

With multiple previews defined, click on PreviewAggregator in the Utilities submenu of the Script menu. The dialog will open with all available previews listed and enabled. Remove the check mark from any preview that you don't want combined, then click OK. The script will aggregate (combine) all of your previews into one. At this point, you may set 'Aggregated' as the BN Reference Image (Fig. 12.2).

Fig. 12.2

Since only one sample can be listed for a given reference image, the PreviewAggregator script (Utilities) allows you to combine the statistics contained in multiple previews into a single sample

BN’s default Working Mode of Rescale As Needed is recommended. The process will consider those pixels whose values fall between the Lower and Upper Limit sliders as background. Pixels under the Lower setting or over the Upper setting are ignored. In most cases, the default positions (0.0/0.1) are fine.

To be sure, you can enable Readout Mode, available via the keyboard shortcut Alt+R, or the context menu (right-click in image), or the appropriate icon on the Mode Tool Bar. With the left mouse button held down, move the cursor within the preview, noting the average RGB values in the pop-up Readout Preview or on the Information Tool Bar (View/Tool Bars) at the bottom of the workspace. If the Readout Preview doesn’t appear, refer to Chap. 14. For optimal results, consider setting the Upper Limit slider to just above the preview’s highest channel background value. For example, if R = 0.0250, set the slider to 0.0300. You should not need to change the Lower Limit.

When ready, apply BN to the image. As with background modelization, color balance may improve subtly yet again. You can use the same preview to expedite the next step. If you used an aggregated preview, be sure to apply BN to it as well before proceeding.

Helpful Hint

Previews can be left in place, but hidden via the context menu. Toggle previews on and off with a right-click and Show Previews.

ColorCalibration

The ColorCalibration (CC) process is found alongside BN and is profiled in the help files. Along with background modelization, color calibration is considered to be one of PixInsight’s most powerful abilities (Fig. 12.3). Attaining accurate color balance for broadband images is one of the most difficult tasks astro-image processors face, but CC makes it easy. For it to work, three conditions must be met:

Fig. 12.3

The ColorCalibration process offers three different methods of correcting the color balance of an RGB image

  1. The image must have even illumination. This was achieved via flat fielding and background modelization.

  2. The mean background must be neutral. That was accomplished with background modelization and BN.

  3. The image must be linear. Nonlinear stretches have yet to be applied.

Spectral Agnosticism

By default, PI's 'spectrum-agnostic' CC tool doesn't favor any specific color or spectral type of star for the white reference, as the G2V star calibration method does. Debate may continue, but many feel that PixInsight's unbiased approach, basing calibration on all represented star colors, wins the day. Despite its effectiveness, CC isn't magic, and it requires valid input from the user.

As with BN, you'll need to pick a good Background Reference to represent neutral background sky. That is why keeping BN's background preview was suggested. Near the bottom of the dialog, you can set the preview as the Background Reference Image, or alternatively choose it via the ROI feature. Should you wish to reuse the aggregated preview from the last step, remember that you must apply BN directly to it beforehand. In practice however, when the majority of the background is neutral, leaving the Background Reference Image field at default (<target image>) should be fine. For a word on the Upper and Lower Limit sliders, refer back to BN.

This time, you also need a White Reference, and CC offers three different modes to provide it.

1. Structure Detection Mode

Enabling Structure Detection causes a multiscale, wavelet-based routine to isolate and sample a set of unsaturated stars to be used as the White Reference. Color calibration factors are then computed for each of the R, G, and B channels. If you leave the White Reference Image field at default (<target image>), the entire image will be used as the white reference, and this is fine, because Structure Detection will only sample the stars.
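In spirit, the white-balance factors come straight from the sampled star pixels. In the sketch below, a precomputed boolean `star_mask` stands in for the wavelet-based structure detection (which is considerably more involved), and normalizing to the green channel is an illustrative choice rather than PI's documented behavior.

```python
import numpy as np

def white_balance_from_stars(rgb, star_mask):
    """Sketch of the white-reference step: average the sampled
    (unsaturated) star pixels per channel, then scale R and B so
    their means match G."""
    stars = rgb[star_mask]           # N x 3 array of sampled star pixels
    means = stars.mean(axis=0)       # mean star color per channel
    factors = means[1] / means       # normalize each channel to green
    return np.clip(rgb * factors, 0.0, 1.0), factors
```

For example, a field whose average star color is (0.8, 0.4, 0.2) yields factors of (0.5, 1.0, 2.0), rendering the average star a neutral gray.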

Reference Masks

To convince yourself, enable Output White Reference Mask to see the actual pixels used for the calculations. You can also request a Background Reference mask. As with other applications, 'Light Selects and Dark Protects' (Fig. 12.4).

Fig. 12.4

Here, ColorCalibration ’s Structure Detection mode is used. The white regions of the generated masks demonstrate which pixels were sampled for the White Reference (stars), and Background Reference

2. Manual White Balance Mode

CC allows you to enter your own color ratios if you wish. Disabling Structure Detection and enabling Manual White Balance lets you use the Red, Green, and Blue sliders to force factors determined by an outside method (eXcalibrator, etc.). The process would apply these factors without performing any additional calculations.

3. Galaxy Mode

This technique is perfect for images with a galaxy, specifically, a relatively nearby, face-on spiral (Sa, Sb, Sc, and Sd-type). As proposed by Vicent Peris, their integrated light is a good white reference, providing a sampling of a large number of stars of all spectral classes. Although you’ll get good results using many spiral galaxies as the White Reference, here are Vicent’s exacting criteria for the best ones:

  • Closer than 50 Mpc (Megaparsecs)

  • Hubble classifications: Sa, Sb, Sc, Scd, SBa, SBb, SBc, or SBcd

  • Inclination less than 60° (face-on = 0°)

  • Integrated intrinsic intergalactic and galactic reddening <0.5 mag. in Johnson B

Define a preview around the galaxy, with the ‘X’ centered on its nucleus. Making sure that both Manual White Balance and Structure Detection are disabled, choose the galaxy preview as the White Reference (Fig. 12.5). As before, you could aggregate more than one qualified galaxy’s preview.

Fig. 12.5

Since it provides a good sampling of all spectral classes, a ‘nearby’ face-on spiral galaxy is used as the White Reference in this example. Two previews of neutral background sky have been aggregated into one sample, to be used as the Background Reference

Whichever mode you choose, apply CC to the image (not the preview) for a dramatic improvement in overall color balance. While CC generally does an excellent job, remember to check the alignment of the histogram’s red, green, and blue channels once the image has been delinearized (Chap. 13).

PhotometricColorCalibration

Along with CC, the newer PhotometricColorCalibration (PCC) process is found in the Process menu's ColorCalibration submenu (Fig. 12.6). While its documentation has not yet been added to the help files, a thorough explanation can be found online at PI's website. Introduced in version 1.8.5, this alternate method of calibrating color takes considerably longer to set up and run than standard CC does. While CC typically takes under 15 seconds to apply, PCC can take 3 to 4 minutes. This is partly because it calls out to the internet to reference a star catalog database (APASS) to plate solve the image. While the older CC performs a 'local' calibration, PCC actually knows what the spectral classes of the stars are and can perform an 'absolute' calibration. This, in theory, produces a more accurate result. In most cases, you may find PCC slightly superior to CC. Exceptions where CC may be preferable to PCC are in fields reddened by interstellar dust, or where significant atmospheric extinction of blue wavelengths has occurred.

  1. Beginning at the top of the dialog, choose Average Spiral Galaxy (ASG) as the White Reference. While other choices exist, you will likely find that this is the most useful. As explained earlier in this chapter, an ASG (Sa, Sb, Sc, and Sd-type galaxies) by nature comprises stars of all spectral classes. This is used as the white reference only; it has nothing to do with what type of object you actually imaged. In other words, there need not be an ASG in the image.

  2. Choose the Database Server closest to your location. If one server doesn't work well, try another.

  3. Be sure Apply Color Calibration is checked.

  4. Image Parameters. If your acquisition software is set up to record an image's RA (right ascension) and Dec (declination) coordinates to an image's FITS header, open a single FITS subexposure and click Acquire from Image. This saves the time of having to look up and manually enter the coordinates of the image's center. Because this information is likely missing from the header of a chrominance image, you can't use the working color image for this purpose. Once the coordinates are captured, you can close the subexposure. You can alternatively click Search Coordinates, enter the name or designation of your featured object, Search, and then Get. Whichever method you use, also enter your focal length in millimeters and the size of your sensor's pixels in microns to help the astrometric solution along. Remember to account for any binning (or Drizzle integration) when entering the pixel size. For example, if you have 9 micron pixels and had binned 2x2, enter 18 microns as the pixel size.

  5. You should be able to leave Plate Solving Parameters at default settings, but for camera lenses and wide-field telescopes with focal lengths under 400 mm, choose Distortion Correction. If the plate solve fails, try deselecting Automatic Limit Magnitude and change the limit from its default of 12 to 16 or 18.

  6. The Advanced section often works well at defaults. If the process is having difficulty finding the required number of star matches, try lowering the Log slider to include more stars. If the image is especially noisy, increase Noise Reduction to 1 or 2.

  7. Note the option to Generate Graphs. Unless deselected, PCC will display an informational graph at the end of the run. Unless you have a reason to change the other Photometry Parameters, leave them at defaults.

  8. By default, PCC offers a Background Neutralization option. This works identically to the way the BN process does. As you saw earlier, this will require a small preview of a color-neutral area of background sky. If you planned on using PCC rather than CC, you do not need to apply BN beforehand.

  9. When ready, apply PCC to the color master. As with CC, remember to check the resulting histogram's RGB channel alignment.
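The binning arithmetic from the Image Parameters step, together with the standard plate-scale relationship the astrometric solution relies on, can be checked with two one-liners (the function names are just for illustration):

```python
def effective_pixel_um(pixel_um, binning):
    """Binning n x n merges n x n photosites into one larger pixel."""
    return pixel_um * binning

def image_scale_arcsec(pixel_um, focal_mm):
    """Standard plate scale: 206.265 * pixel size (um) / focal length (mm),
    giving arcseconds per pixel."""
    return 206.265 * pixel_um / focal_mm
```

So 9 micron pixels binned 2x2 behave as 18 micron pixels, and at a 1000 mm focal length they would sample the sky at roughly 3.7 arcseconds per pixel.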

Fig. 12.6

PhotometricColorCalibration (PCC) is the latest tool for achieving accurate color


SCNR

Though it’s not a traditional noise filter, SCNR (Subtractive Chromatic Noise Reduction) is found in the Process menu’s NoiseReduction submenu. Rather than smoothing granular noise, it’s used to eliminate a color bias. As you know, the heavens are generally not green. The emission line of OIII (Doubly-Ionized Oxygen) is as close as it gets, and its beautiful teal color is both green and blue. Typically, where green occurs, it’s spurious and attributable to noise or man-made light. If an image suffers from a residual color cast, even after background modelization and the previous steps, SCNR will help (Fig. 12.7).

Fig. 12.7

SCNR will eliminate a color cast (bias), most typically green, the result of chrominance noise

With Green chosen as the Color To Remove, click Apply. In cases where the authentic teal color of comets, supernova remnants, and planetary nebulae does exist, change the default Protection Method from Average Neutral to Maximum Neutral. Whichever method you use, you can reduce the effect by lowering the Amount slider from the default of 1.00. If, for example, the process overcorrected green, creating a red bias, try a setting of 0.5.

SCNR can also be used to eliminate a red or blue color bias by changing the Color To Remove field.
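The arithmetic behind SCNR is simple enough to sketch. The cap-and-blend below is one plausible reading of the Average Neutral and Maximum Neutral protection methods and the Amount slider, not PI's exact internals:

```python
import numpy as np

def scnr_green(rgb, amount=1.0, protection="average"):
    """Cap green at a 'neutral' level derived from R and B, then
    blend the corrected green with the original according to amount."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Average Neutral caps G at (R+B)/2; Maximum Neutral at max(R, B)
    neutral = (r + b) / 2.0 if protection == "average" else np.maximum(r, b)
    g_fixed = np.minimum(g, neutral)
    out = rgb.copy()
    out[..., 1] = (1.0 - amount) * g + amount * g_fixed
    return out
```

Note how Maximum Neutral is the gentler cap: for a teal pixel such as (0.2, 0.6, 0.4), Average Neutral pulls green down to 0.3, while Maximum Neutral allows 0.4, preserving more of a legitimate blue-green mix.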

Linear Noise Reduction

Although you may choose to reserve smoothing until after delinearization, many processors apply an initial dose of noise reduction in the linear state. With the grainy, salt and pepper look of background noise in mind, think of it as ‘knocking the fizz off.’

PI offers several noise reduction tools, some of which are well suited for linear images, while others are better reserved for after delinearization. When working with linear data, you may find that the best noise reduction tools are wavelet-based processes such as MultiscaleLinearTransform (MLT). For nonlinear images, TGVDenoise (Total Generalized Variation), implemented by Carlos Milovic, is PI’s latest and most sophisticated noise reduction process. TGVDenoise will therefore be profiled later in the nonlinear space (Chap. 15).

Earlier, you learned that wavelets are a powerful signal-processing tool (Chap. 10). Wavelets can decompose an image into a series of wavelet layers (aka planes) with each layer containing structures within a defined range of dimensional scales (sizes). These scales are comparable to Photoshop’s pixel radii settings. Once structures of various sizes are isolated within separate layers, noise reduction (or sharpening for that matter) can be accomplished with relatively little impact upon structures of sizes you don’t want affected. While you learned that gradients were large-scale structures, noise is the opposite. In keeping with the old adage ‘One picture is worth a thousand words,’ let’s see this illustrated.
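The decomposition just described is the starlet (à trous, "with holes") transform. A minimal one-dimensional NumPy sketch shows how each layer isolates structures of roughly one scale; circular boundaries via np.roll are used for brevity (real implementations handle edges more carefully, and 2-D works the same way with a separable kernel):

```python
import numpy as np

def atrous_layers(img, n_layers=4):
    """Smooth with an increasingly dilated B3-spline kernel; each
    wavelet layer is the difference between successive smoothings,
    so layer k holds structures of roughly 2**k pixel scale."""
    kernel = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    layers, smooth = [], np.asarray(img, dtype=float)
    for k in range(n_layers):
        step = 2 ** k
        # Dilate the kernel by inserting zeros ("holes"): taps step apart
        smoother = np.zeros_like(smooth)
        for tap, w in zip((-2, -1, 0, 1, 2), kernel):
            smoother += w * np.roll(smooth, tap * step)
        layers.append(smooth - smoother)   # structures near this scale
        smooth = smoother
    layers.append(smooth)                  # residual: largest structures
    return layers
```

Because each layer is a difference of successive smoothings, summing all layers plus the residual reproduces the original exactly, which is why noise reduction applied to one layer leaves the other scales untouched.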

ExtractWaveletLayers Script

The ExtractWaveletLayers script, found in the Script menu's Image Analysis submenu, offers a convenient way to break an image up into wavelet planes for examination. With the working image chosen as the Target Image, apply the script at the default setting of 5 by clicking OK. After several seconds, individual, stretched wavelet layer images, grouped by scale, are produced. To better visualize them, try an additional STF Auto Stretch and, for color images, consider applying the Process menu's ColorSpaceConversion/ConvertToGrayscale as well (Fig. 12.8).

Fig. 12.8

The ExtractWaveletLayers script (under Image Analysis) is useful for visualizing the structures contained within different wavelet layers (planes)

As you’ll see, ‘layer00’ contains the smallest structures, with layer01 through layer04 containing structures of ever-increasing size. Zoom deeply into the images to examine their backgrounds (Fig. 12.9). The noise component of digital images is generally quite small in size, most prevalent in the first two layers with a diminishing residual in the third through fifth layers. With this knowledge to guide you, you may delete the wavelet images.

Fig. 12.9

Three of the extracted wavelet layers are shown, containing the smallest to the largest structures. Note that the objectionable, granular noise lives primarily in the first layer pictured at top-left

MultiscaleLinearTransform Noise Reduction

MLT, found in the NoiseReduction (also MultiscaleProcessing and Wavelets) submenu, is the newest wavelet-based process, superseding the ATrousWaveletTransform (ATWT), which has been deprecated to the Compatibility submenu. Pictured in Fig. 12.10, MLT looks and functions nearly identically to its predecessor and continues to offer the ATWT algorithm, now called the Starlet Transform. Understanding the MLT dialog will enable you to experiment with the similarly designed ATWT and MultiscaleMedianTransform (MMT) processes if you desire.

Fig. 12.10

The MultiscaleLinearTransform process is a powerful wavelet-based tool for both noise reduction and sharpening. Here, it’s used for reducing noise in the linear state

While the MMT algorithm is good at isolating small structures for removing high-frequency noise, it's also prone to splotchy artifacts. Although these can be controlled using the Adaptive slider, you may find MLT both easier to use and more efficient.

At this juncture, followers of OSC workflow would have a color master, while users of monochromatic cameras would have a chrominance master and perhaps, a luminance master. All three types of images can benefit from linear smoothing, so let’s prepare the tool for use.

  • Set the Algorithm to Multiscale Linear Transform. The Starlet Transform may also be OK.

  • With a Dyadic scaling sequence selected by default, for this example let’s increase the default number of layers from 4 to 5.

Note that Layers 1 through 5, as well as a residual layer (R), are enabled, as indicated by a green check mark at left. Double-clicking a layer disables it, as indicated by a red 'X.' This is an advanced feature which you'll see later (Chap. 22).

Here’s where we draw upon the lessons learned from ExtractWaveletLayers. Recall that noise was most prevalent in layers containing small-scale structures. The script’s layer00 closely corresponds to MLT’s Layer 1, while the layer04 image matches MLT’s Layer 5. Layers 1 and 2 with an approximate scale of 1 to 2 pixels should therefore contain the most noise, with a tapering off in the next three layers.

  • With Layer 1 selected and highlighted in orange, enable Noise Reduction below. A typical noise-reduction scheme is to attack the small-scale noise of Layer 1 most aggressively, with a Threshold of 3.0 to 5.0 sigma units. Read the mouseover to guide you.

  • Determining the Amount setting will require experimentation. Assuming you acquired sufficient signal, you may find a value less than the default of 1.0 to be appropriate. Too high a threshold or amount setting could damage legitimate structures, so be only as aggressive as necessary to achieve a smooth result. Bear in mind that oversmoothing will result in an unnatural, plastic appearance.

  • For even more control, consider increasing the number of Iterations before you increase the amount. Instead of an amount of 1.0 applied once, try two iterations of 0.5. To combat stronger noise, however, you may need, say, two iterations of 1.0.

  • With Fig. 12.9 to guide you, highlight each successive layer, enabling Noise Reduction and adjusting the Threshold, Amount, and number of Iterations for Layers 2 through 5.
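To see why several gentle iterations differ from one strong pass, consider a simple linear-blend model in which each pass removes a fraction `amount` of the noise that remains. This is a back-of-the-envelope model only; MLT's actual thresholding attacks only the noise below the sigma threshold on each pass, which is why repeated full-strength passes can still help against stronger noise.

```python
def remaining_noise_fraction(amount, iterations):
    """Fraction of targeted noise surviving n passes, assuming each
    pass removes `amount` of whatever noise is left."""
    return (1.0 - amount) ** iterations
```

Under this model, two iterations of 0.5 leave 25 % of the targeted noise, a gentler result than a single pass at 1.0, which removes it entirely.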

To find the ideal settings for a particular image, create several matching previews containing background sky and dim features (Chap. 11). You can either apply different settings of the process directly to the previews to compare your results, or engage a Real-Time Preview of a preview to view relatively instantaneous updates of your adjustments.

Masking

While MLT can isolate the small-scale noise of weak signal regions in individual wavelet layers, it’s advisable to protect the legitimate features of strong signal areas from being smoothed away. For this purpose, you could use a nonlinear luminance mask that was the inverse of the one you created for Deconvolution (Chap. 11). This time you want to select dark regions and protect bright ones. Recall that there are many ways to invert a mask. Refer back if needed (Chap. 10).

Linear Mask

While it was important to learn the concept of masks for Deconvolution, MLT is the first process we’ve seen that includes a convenient, internal masking feature. If you intend on using MLT’s internal masking, enable Linear Mask. If MLT were applied, the default mask settings would now be used (Fig. 12.11).

Fig. 12.11

MLT’s Linear Mask feature is previewed via a Real-Time Preview at upper-left. Cancel the STF Auto Stretch (F12) for accurate visualization. When Linear Mask is chosen, a separate applied mask isn’t used. Just be sure to deselect Preview Mask before applying the process

Preview Mask

To see what the default settings look like, enable Preview Mask and open a Real-Time Preview. Be sure to cancel the STF Auto Stretch (F12) to view an accurate representation of the mask. By default, Inverted Mask is enabled – this produces the type of mask needed.

Mask Parameters

The two other mask controls are the Amplification and Smoothness sliders. Amplification is a multiplicative factor that controls how much of the image’s brightness range is selected for noise reduction. Moving the slider left from the default of 100 selects a wider range. Sliding right narrows the range, protecting stars and the strong signal of brighter elements. A setting of between 50 and 200 should be appropriate for selecting the noisier regions only, with the default 100 often being about right.

Smoothness performs a convolution to soften the edges of the mask. As you learned, this will prevent harsh transitions between adjacent smoothed and unsmoothed regions. Try a sigma setting of between 1 and 3 for a bit of feathering (tapering).
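Combining the two sliders, an inverted linear mask can be sketched as 'amplify, clip, invert, blur.' The parameter meanings below are a plausible reading of the dialog, not MLT's actual code:

```python
import numpy as np

def linear_mask(lum, amplification=100.0, smooth_sigma=2.0):
    """Sketch of an inverted linear mask: amplify the linear luminance,
    clip to [0, 1], invert so faint (noisy) regions come out white,
    then soften edges with a small Gaussian blur."""
    m = 1.0 - np.clip(lum * amplification, 0.0, 1.0)   # dark sky -> ~1
    if smooth_sigma > 0:
        # Separable Gaussian blur via direct convolution (edge-clamped)
        radius = int(3 * smooth_sigma)
        t = np.arange(-radius, radius + 1)
        k = np.exp(-0.5 * (t / smooth_sigma) ** 2)
        k /= k.sum()
        pad = np.pad(m, ((radius, radius), (radius, radius)), mode="edge")
        for axis in (0, 1):
            pad = np.apply_along_axis(np.convolve, axis, pad, k, "same")
        m = pad[radius:-radius, radius:-radius]
    return m
```

The result is near 1.0 (white, selected for noise reduction) over faint background and near 0.0 (black, protected) over stars and bright structures, with the Gaussian providing the feathering described above.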

When satisfied with the mask, be sure to uncheck Preview Mask and to reapply the STF Auto Stretch, which was disabled to visualize the internal mask. So long as Linear Mask remains checked, MLT will be applied at your desired settings, but only to the areas of the image selected in white by the internal mask. The default Target: RGB/K Components is fine for either a luminance or chrominance image.

Luminance and Chrominance

As the luminance image of a monochromatic workflow provides our eyes with image detail, take care to target noisy areas only and not to oversmooth regions of stronger signal. As chrominance largely provides only the color information, you can be more aggressive while smoothing it if required. If the chrominance is especially noisy, you could consider applying noise reduction to it without using a mask.

OSC

Since an OSC RGB master contains both the color and the lightness component, a two-pass approach may be best. First, choose Chrominance as the Target. This will apply MLT to the RGB components without altering the lightness component. If the image is still noisy, select Luminance as the Target and apply MLT to the lightness component with less aggressive settings. Alternatively, try the default RGB/K Components setting.

MureDenoise Script

Another linear noise reduction tool that may be worth investigating is the MureDenoise script by Mike Schuster. While the initial results produced by this newer algorithm are promising, it’s time consuming, requiring quite a bit of user input, and its use is limited to linear, monochromatic masters only. An acronym for Mixed Noise Unbiased Risk Estimator (MURE), MureDenoise is fully documented and can be found in the Script menu’s Noise Reduction submenu.

With noise diminished, it’s time to go nonlinear!