Adding Luminance to SHO Narrowband · [Deep Sky] Processing techniques · Craig Dixon

craigdixon1986 3.01
If adding luminance data to RGB images improves the sharpness and reduces noise, can the same technique be used for narrowband imaging? So LSHO?
jsrothstein 0.90
Adding a broadband signal to narrowband would undercut the whole point of taking narrowband images, wouldn’t it?
craigdixon1986 3.01
Topic starter
This is what I initially thought, but isn't it the same thing when shooting RGB?
jsrothstein 0.90
No, because RGB is broadband.  With SHO you are strictly limiting the wavelengths you capture.
craigdixon1986 3.01
Topic starter
But if the luminance data is just layered on top to add sharpness and reduce noise, aren't the pixels of the luminance layer ignored where there is no signal in the RGB channels below? Maybe I'm misunderstanding something here.
jsrothstein 0.90
There are many ways to blend in the data. Could be replacement, could be overlay, could be average. Think of all the blending modes in Photoshop (a few are sketched below).

I think the Ha channel will capture plenty of detail, considering that many imagers use Ha instead of L for RGB images.  

But don’t take my word for it—give it a try and see how it goes!
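
To make the blend-mode idea concrete, here is a toy numpy sketch of the three blends mentioned above. The function name, the opacity parameter, and the choice of Python are illustrative assumptions, not any particular tool's API:

```python
import numpy as np

def blend(base, layer, mode="average", opacity=1.0):
    """Toy versions of a few Photoshop-style blend modes.
    'base' and 'layer' are float arrays of the same shape in [0, 1],
    e.g. the SHO image and the luminance being blended on top."""
    if mode == "replace":
        out = layer
    elif mode == "average":
        out = (base + layer) / 2.0
    elif mode == "overlay":
        # Overlay: multiply in the shadows, screen in the highlights.
        out = np.where(base < 0.5,
                       2.0 * base * layer,
                       1.0 - 2.0 * (1.0 - base) * (1.0 - layer))
    else:
        raise ValueError(f"unknown mode: {mode}")
    # Opacity mixes the result back toward the base, as in Photoshop.
    return (1.0 - opacity) * base + opacity * out
```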
TimH
Luminance is essentially just the sum of the RGB broad bands, so it can faithfully represent the overall brightness in visible light of objects like galaxies, for example. Narrowband, on the other hand (after subtracting the underlying broadband component), comes only from very particular transitions of specific excited gas ions: usually Hα and SII at very narrow wavelengths in the far red, and OIII in the blue-green. Luminance would not faithfully reflect the sum of these specific signals because it is broadband, and 99.99% of luminance comprises all the wavelengths other than those few narrow lines of interest that you are trying to image. So it would just bury the NB signal.
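
A rough back-of-the-envelope calculation illustrates the point. The filter widths below are typical assumed values, not figures from this thread:

```python
# Illustrative numbers only (assumed typical filter bandpasses):
lum_bandpass = 300.0   # nm, a typical clear/L filter (~400-700 nm)
ha_bandpass = 3.0      # nm, a typical narrowband Ha filter

fraction = ha_bandpass / lum_bandpass
print(f"The Ha line occupies ~{fraction:.1%} of the L bandpass")
# -> ~1.0%. For a continuum-dominated field, roughly 99% of the
# photons the L filter collects carry no narrowband information,
# so blending L over SHO dilutes the line contrast accordingly.
```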

Sometimes hydrogen alpha is used as a sort of 'luminance' in images combining Hα with the other NB signals, because for many NB objects Hα is by far the strongest narrowband signal, shows more interesting structural detail, and is often spatially coincident with the other signals.

Tim
jhayes_tucson 26.84
Craig Dixon:
If adding luminance data to RGB images improves the sharpness and reduces noise, can the same technique be used for narrowband imaging? So LSHO?

You are misunderstanding how LRGB imaging works. It works by multiplying the RGB signal by the L channel, so the overall brightness is modulated by the L signal while the color comes from the RGB contributions.
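
As a toy illustration of that multiplicative model, here is a deliberately crude numpy sketch with an assumed brightness proxy; it is not PixInsight's or Photoshop's actual implementation:

```python
import numpy as np

def lrgb_multiply(rgb, lum, eps=1e-6):
    """Naive multiplicative LRGB model: rescale each pixel so its
    brightness follows the L channel while the hue stays with RGB.
    rgb: HxWx3 stretched image in [0, 1]; lum: HxW in [0, 1]."""
    intrinsic = rgb.mean(axis=-1, keepdims=True)  # crude brightness proxy
    scale = lum[..., None] / (intrinsic + eps)
    return np.clip(rgb * scale, 0.0, 1.0)
```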

If you want to understand it better, just try it. You'll quickly see that adding luminance to a NB image does the opposite of making it sharper. Luminance is the sum of all visible wavelengths (limited by the bandpass of the clear Lum filter), which of course includes the NB data but also sky glow and broadband emissions from the object itself. All of that extra light layered on top of the NB data will greatly reduce the contrast of the NB signals. It will also wash out the spatial information contained in the NB data: the interesting details end up expressed only as color in the final result, with the image structure driven by the Lum channel alone.

If you want to accomplish the same effect as adding Lum to RGB with NB data, you need to create a pseudo Lum channel by adding color balanced Ha+O3+S2 signals (assuming that those are your filters).  That will preserve detail and contrast in the final result.  
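
A minimal sketch of building such a pseudo-luminance, assuming stretched float masters and placeholder weights (in practice you would balance the channels by SNR or to taste):

```python
import numpy as np

def pseudo_luminance(ha, oiii, sii, weights=(0.5, 0.3, 0.2)):
    """Synthetic luminance from the three NB masters. The weights
    are illustrative placeholders standing in for the 'color
    balancing' step; all inputs are HxW floats in [0, 1]."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weighted sum stays in range
    return np.clip(w[0] * ha + w[1] * oiii + w[2] * sii, 0.0, 1.0)
```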

John
HegAstro 14.24
At least in PixInsight, LRGB combination is not a simple multiplication of RGB by L. This is described by Juan here:
"The "intrinsic" L in your RGB image will basically be replaced with the L image. You should stretch L first, then stretch RGB to achieve similar levels in its luminance (watch the L display channel, or select the RGB+L pixel readout mode and compare readouts, or extract L from the stretched RGB and compare statistics). The LRGBCombination tool also has a luminance midtones transfer function to fine tune the adaptation. The goal is to achieve an optimal adaptation between luminance and chrominance. Too much luminance means more chrominance noise to achieve the required color saturation. Too much chrominance means more noise in the luminance."

The pixels on our screens are basically RGB. The RGB image is converted to L*a*b*, where L* in the CIELAB space approximates human perception of lightness. L* is simply a nonlinear mathematical combination of RGB (through an intermediate linear conversion to XYZ space).

Replacing the L* in the converted image with a version of L* calculated from the luminance data, and then converting back to RGB space, simply means that high-SNR data collected through the luminance filter is selectively distributed between the RGB channels in a way that maximizes perceived detail and minimizes perceived noise. Note that this "replacement" means the intrinsic L* from the RGB image and the L from the luminance data must be matched and correlated. This is at least possible in broadband imaging because the luminance filter effectively spans the bandwidth of the RGB filters, but it will usually not be the case with narrowband data. You could do an LRGB combine in PixInsight with any L mathematically, but the results are only meaningful if the RGB and L are correlated.
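
Here is a minimal sketch of that L* substitution using scikit-image. It shows the general CIELAB replacement described above, not the LRGBCombination tool itself, which adds the luminance midtones adaptation Juan mentions:

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def lrgb_combine(rgb, lum):
    """Replace the intrinsic L* of a stretched RGB image with a
    stretched luminance frame, then convert back to RGB.
    rgb: HxWx3 in [0, 1]; lum: HxW in [0, 1], already stretched
    to match the RGB image's brightness as described above."""
    lab = rgb2lab(rgb)            # RGB -> XYZ -> CIELAB
    lab[..., 0] = lum * 100.0     # L* runs 0..100 in skimage
    return np.clip(lab2rgb(lab), 0.0, 1.0)
```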
craigdixon1986 3.01
Topic starter
Thanks for all of the replies. I appreciate the time taken.