LRGB exposure ratios [Deep Sky] Acquisition techniques · Tony Gondola

Gondola 8.11
Topic starter
Tony,

I have the following saved in my Zettelkasten notes:
- For Limited Imaging Time (2-4 hours total): Use traditional LRGB ratios like 2:1:1:1 or 3:1:1:1. The efficiency advantage is real and significant when noise dominates the images.
- For Moderate Imaging Time (4-8 hours total): Consider a balanced 1:1 ratio - equal time on luminance and RGB combined. Many experienced imagers now shoot 50% L and 50% color, such as 30 minutes total L and 10 minutes each of R, G, and B.
- For Extended Imaging Sessions (8+ hours total): Consider pure RGB at 1:1:1 ratios. We can always create a synthetic luminance channel from RGB data during processing, giving us the best of both worlds.
- Alternative "Super-Luminance" Approach: Some imagers create an optimized super-luminance by integrating L, R, G, and B together, then use this combined with the RGB data. This approach maximizes signal-to-noise while preserving color resolution.

I hope this is useful.

Yes, that's a great overview and starting point.
Gondola 8.11
Topic starter
As an aside, I did try a sanity check. I took an OSC landscape image, extracted the channels, applied a Gaussian blur to R, G and B, and left L alone. Did a recombination, and the result was very close to the original, just a slight saturation shift in the light blues. There was no change in the apparent sharpness of the image. This isn't a scientific test, but it does suggest that the idea of concentrating the time on the L frame has some validity. Meaning that you can degrade the quality of the color frames without having a large effect on the final image in color or sharpness.
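For anyone who wants to poke at this themselves, the same idea can be sketched in a few lines of numpy. This is a hypothetical recreation (synthetic image, Rec. 601-style luma weights, crude box blur), not the exact editor workflow described above:

```python
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.random((64, 64, 3))       # stand-in for the extracted OSC frame

# Rec. 601-style luma weights (an assumption; the original test was done
# in an image editor, not with these exact numbers)
W = np.array([0.299, 0.587, 0.114])
luma = rgb @ W                      # "L" channel, left untouched
chroma = rgb - luma[..., None]      # color-difference channels

def box_blur(channel, k=5):
    """Crude separable box blur along both spatial axes."""
    kernel = np.ones(k) / k
    out = channel
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, out)
    return out

# Degrade only the chroma, then recombine with the untouched luma
blurred = np.stack([box_blur(chroma[..., c]) for c in range(3)], axis=-1)
recombined = luma[..., None] + blurred

# The luma of the result is numerically identical to the original;
# only the color information was degraded.
print(float(np.max(np.abs(recombined @ W - luma))))
```

Blurring only the chroma leaves the luma untouched, which is why the recombined image keeps its apparent sharpness even though the color channels were degraded.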
jhayes_tucson 26.84
Tony Gondola:
As an aside, I did try a sanity check. I took an OSC landscape image, extracted the channels, applied a Gaussian blur to R, G and B, and left L alone. Did a recombination, and the result was very close to the original, just a slight saturation shift in the light blues. There was no change in the apparent sharpness of the image. This isn't a scientific test, but it does suggest that the idea of concentrating the time on the L frame has some validity. Meaning that you can degrade the quality of the color frames without having a large effect on the final image in color or sharpness.

Nicely done Tony.  That's a great demo!

John
danwatt 3.31
Tony Gondola:
As an aside, I did try a sanity check. I took an OSC landscape image, extracted the channels, applied a Gaussian blur to R, G and B, and left L alone. Did a recombination, and the result was very close to the original, just a slight saturation shift in the light blues. There was no change in the apparent sharpness of the image. This isn't a scientific test, but it does suggest that the idea of concentrating the time on the L frame has some validity. Meaning that you can degrade the quality of the color frames without having a large effect on the final image in color or sharpness.

Chroma subsampling has been an effective part of bandwidth-limited video signals for a long time and is the basis for the reasoning behind LRGB with ratios different from 1:1:1:1. Works out pretty well.

That said, I adjust my strategy to the target. If I'm going for something bright in the Milky Way, I can save myself a lot of trouble and go straight 1:1:1 RGB. And to borrow a film term, I have a thicker negative with the color signal, letting me bring out more interesting variance and nuance in the color.

Other targets like galaxies or faint fields of IFN greatly benefit from a 3:1:1:1 ratio or higher. Oftentimes these projects will take multiple nights/trips out to dark sites. I'll keep stacking my subs until I feel pretty good about the amount of RGB data. Once there, I might spend a few extra nights just grabbing L for that faint stuff deep in the shadows.
mc0676 1.91
Jure Menart:
Michele Campini:
Tony Gondola:
andrea tasselli:
1:1:1:1 - The Golden Ratio. Anything else is NONONO. Besides, aren't you in B8? If it is then no LRGB for you.

Now you're just making me sad...

Actually though, I've had some success shooting OSC under Bortle 8, so why would LRGB be any different as long as I keep the sub exposures short?

I've had some minor success with RGB L in Bortle 8+ using my Edge 14 (note that the "L" is an Optolong "L-Pro") - I try for three or four times L to RGB, which although not suggested by leading Astrobin members (nor is RGB L in Bortle 8 in any case!), I think the L-Pro has some positive effect when applied to the RGB. My M51 is probably the one that I'm most satisfied with. But in general, here in Bortle 8+, the RGB data is frustrating to process; lots of garbage to deal with, which is why I usually lean heavily towards NB these days.

I shoot from a Bortle 4/5, better than yours but not great.
I use an IDAS P3 and keep it fixed in my optical train after the filter wheel.
At first I used it only for luminance, but then I tried it on RGB, was thrilled with the result, and left it in.
In PixInsight with SPCC I have my calibration curve (IDAS P3 + Antlia LRGB) and everything works as it should.
Since it is fixed, it is also in use with Ha, OIII and SII, but obviously it does not create any problems there.

Hi Michele

Interesting idea to put two filters in series. Did you notice much lower signal (i.e., needing much more time on data collection) after you added the second one?
I am also in Bortle 5 and I am struggling with RGB acquisitions.

Are any others here also using this solution to fight pollution?

Hi,
I honestly have not noticed any signal loss; if you think about it, the IDAS P3 (or L-Pro or other similar filters) only cuts certain areas while everything else is passed.
This image is from last week. It is 20 exposures of 120 seconds for R, G and B, so an hour of integration, without redoing the focus, with an ASI2600MM and a 102 mm triplet reduced to 570 mm f/5.6.
The IDAS P3 filter is screwed directly into the focal reducer and is downstream of the filter wheel.

https://img8.juzaphoto.com/001/shared_files/uploads_hr/5086157_large74700.jpg
mrkhagol 2.71
So does mono imaging (LRGB) pull in all of the light pollution gradients out there? Does it then come down to just getting less usable data, since otherwise it would get difficult to remove the gradients later on?
gnnyman 6.04
I did not read all of the posts, but nevertheless, here is my view:

1:1:1:1 is most of the time fine for me (B3-4), unless I go BIN2 for RGB and BIN1 for L - in that case, I do 4:1:1:1.
The reason: to get the resolution and details from L and the color from RGB.

Shooting LRGB in B8 - that's quite a challenge! Probably if the moon is not an additional factor, it could be done with bright enough targets.

Maybe you should think about trying contrast enhancement filters for light-polluted areas - they are available, for example, from Astronomik!


CS
Georg
ScottBadger 7.63
Georg N. Nyman:
1:1:1:1 is most of the time fine for me (B3-4)

I don't understand the benefit of shooting Lum at 1:1:1:1 (except as a hedge against variable seeing). Aren't you replacing the RGB luminance with one that's no better signal-wise?

Cheers,
Scott
HegAstro 14.24
Scott Badger:
Georg N. Nyman:
1:1:1:1 is most of the time fine for me (B3-4)

I don't understand the benefit of shooting Lum at 1:1:1:1 (except as a hedge against variable seeing). Aren't you replacing the RGB luminance with one that's no better signal-wise?

Cheers,
Scott

No - the LRGB process converts RGB to L*a*b* through an intermediate conversion to CIE XYZ. CIE Y, which is most closely related to L*, is a linear combination of RGB, predominantly weighted to G. You can think of this as only using part of the signal to construct L* and the other parts to construct the chroma coordinates. Hence, even at 1:1:1:1, straight luminance will have better SNR than derived L*.
ScottBadger 7.63
Arun H:
Scott Badger:
Georg N. Nyman:
1:1:1:1 is most of the time fine for me (B3-4)

I don't understand the benefit of shooting Lum at 1:1:1:1 (except as a hedge against variable seeing). Aren't you replacing the RGB luminance with one that's no better signal-wise?

Cheers,
Scott

No - the LRGB process converts RGB to L*a*b* through an intermediate conversion to CIE XYZ. CIE Y, which is most closely related to L*, is a linear combination of RGB, predominantly weighted to G. You can think of this as only using part of the signal to construct L* and the other parts to construct the chroma coordinates. Hence, even at 1:1:1:1, straight luminance will have better SNR than derived L*.

Thanks Arun! Would that still hold true if the time spent on L was spent on (33%) more R, G, and B?

Cheers,
Scott
HegAstro 14.24
Scott Badger:
Thanks Arun! Would that still hold true if the time spent on L was spent on (33%) more R, G, and B?

Cheers,
Scott


Scott, the exact calculation would depend on how the RGB data are scaled as part of color calibration, but here is one estimate, if one uses the formula

Y=0.29*R+0.62*G+0.075*B

Assuming that the RGB filters each collect 33% of the photons of the L filter, you can see that the net signal for Y will be (0.29+0.62+0.075)/3 ≈ 0.33, compared to 1 for the L filter. So even increasing the collection time for RGB by 33% will not make the derived L* match the luminance SNR.

You can see from a formula like the above why OSCs have two green pixels in the Bayer matrix: since the L* perceived lightness value depends so heavily on G, that is where you want to collect most photons. Note that this is for a white source that radiates uniformly across the visual spectrum. Things like sensor QEs and the exact spectral distribution of the source will affect the calculation.
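The estimate above can be checked in a couple of lines (using the Y weights from the formula and the assumption that each RGB filter collects a third of the L filter's photons):

```python
# Net Y signal from 1 hour each of R, G and B, versus 1 hour of L.
# Assumption from the post above: each color filter collects 1/3 of the
# photons the L filter does in the same time.
w_r, w_g, w_b = 0.29, 0.62, 0.075

per_filter = 1 / 3
y_signal = per_filter * (w_r + w_g + w_b)
print(round(y_signal, 3))          # → 0.328 (vs. 1.0 for the L filter)

# Even 33% more RGB time falls well short of matching the L signal:
print(round(1.33 * y_signal, 3))   # → 0.437
```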
AccidentalAstronomers 18.64
I generally shoot 1.5:1:1:1 to 2:1:1:1 (I prefer the latter if circumstances allow). I also create a super luminance. I apply more noise reduction to RGB and less to L. Sometimes I go for more L after I reach a point where I think I have enough RGB if time, scheduling, and weather allow. I tend to find that L produces considerably more detail as well as far less noise with this approach, which lessens my need for things like NXT. I don't know whether any of this is optimal, but it seems to work for me. I've often got four scopes cranking and I've always got a 66-year-old brain (at least, until December). So I try to keep things as simple and uniform as possible.
Gondola 8.11
Topic starter
Arun H:
Scott Badger:
Georg N. Nyman:
1:1:1:1 is most of the time fine for me (B3-4)

I don't understand the benefit of shooting Lum at 1:1:1:1 (except as a hedge against variable seeing). Aren't you replacing the RGB luminance with one that's no better signal-wise?

Cheers,
Scott

No - the LRGB process converts RGB to L*a*b* through an intermediate conversion to CIE XYZ. CIE Y, which is most closely related to L*, is a linear combination of RGB, predominantly weighted to G. You can think of this as only using part of the signal to construct L* and the other parts to construct the chroma coordinates. Hence, even at 1:1:1:1, straight luminance will have better SNR than derived L*.

This is gold Arun and explains a lot.
HegAstro 14.24
Tony Gondola:
This is gold Arun and explains a lot.


This is the exact discussion in the PI forums that touches on how LRGB combination is achieved:

https://pixinsight.com/forum/index.php?threads/lrgb-comb-on-linear-or-stretched-data.18885/

They don't explain the actual algorithm, but you get the idea.

Conceptually, conversion of RGB to L*a*b* is a coordinate transform. In such a transform, it cannot be that L* holds all the signal and noise; some of it goes to a* and b* as well. This part of the signal and noise largely remains after the LRGB combination.

The point in all this is that I think you need a sufficient base level of signal/SNR in a* and b*, and that can only be achieved through RGB filters (since L contains no color information). Once that base level of SNR is achieved, you can very efficiently raise SNR through luminance gathering.  And of course, you get to that base level of SNR much faster with darker skies, faster scopes, aperture, etc.

Hence my comment that a base level of integration time with RGB filters is what is needed for a good image, rather than worrying about ratios.
ScottBadger 7.63
Arun H:
Scott, the exact calculation would depend on how the RGB data are scaled as part of color calibration, but here is one estimate, if one uses the formula

Y=0.29*R+0.62*G+0.075*B

Assuming that the RGB filters each collect 33% of the photons of the L filter, you can see that the net signal for Y will be (0.29+0.62+0.075)/3 ≈ 0.33, compared to 1 for the L filter. So even increasing the collection time for RGB by 33% will not make the derived L* match the luminance SNR.

You can see from a formula like the above why OSCs have two green pixels in the Bayer matrix: since the L* perceived lightness value depends so heavily on G, that is where you want to collect most photons. Note that this is for a white source that radiates uniformly across the visual spectrum. Things like sensor QEs and the exact spectral distribution of the source will affect the calculation.

Arun, sorry if I'm just being dense, but even if each of the RGB filters collects 33% of the photons that the L filter does, don't they collect ~100% of the photons in their bandpass, such that the combination of all three color filters equals the L filter in photon collection (assuming equal integration times for each)? So, in calculating Y for 1 hour of each filter (3 hours total) and comparing it to 1 hour of L, I'm not understanding why you're dividing by 3.

Cheers,
Scott
HegAstro 14.24
Scott Badger:
Arun, sorry if I'm just being dense, but even if each of the RGB filters collects 33% of the photons that the L filter does, don't they collect ~100% of the photons in their bandpass, such that the combination of all three color filters equals the L filter in photon collection (assuming equal integration times for each)? So, in calculating Y for 1 hour of each filter (3 hours total) and comparing it to 1 hour of L, I'm not understanding why you're dividing by 3.


Signal is total photons collected. So in one hour, let us say the L filter collects 1 unit of photons. The RGB filters would each collect 0.33 photon units.

So the signal used to compute Y would be 

Y=0.29*R+0.62*G+0.075*B

Since R, G and B each collect 1/3 unit, the net signal for Y is simply 1/3*(0.29+0.62+0.075) ≈ 0.33.

The net signal for L is 1.

In total, the RGB filters do collect the same number of photons as the L filter does for a 1:1:1:1 ratio, but only a portion of those go into generating L*. The remaining photons are used to generate chroma information - it is a bit more complex than this, since chroma also depends on Y, but this is the way I conceptualize it.
ScottBadger 7.63
Arun H:
Signal is total photons collected. So in one hour, let us say the L filter collects 1 unit of photons. The RGB filters would each collect 0.33 photon units.

So the signal used to compute Y would be 

Y=0.29*R+0.62*G+0.075*B

Since R, G and B each collect 1/3 unit, the net signal for Y is simply 1/3*(0.29+0.62+0.075) ≈ 0.33.

The net signal for L is 1.

So....a 1 hour L integration is equivalent to the Y of a 9.13 hour RGB integration (3.046 hrs per channel)??

Cheers,
Scott
HegAstro 14.24
Scott Badger:
So....a 1 hour L integration is equivalent to the Y of a 9.13 hour RGB integration (3.046 hrs per channel)??

Cheers,
Scott


Probably a better option is to focus on the green channel, for something like a 1:5:1 ratio, if that is your aim. I should clarify that I absolutely would not recommend this. If there is a lesson from the math, it is that you should gather a good baseline of color data and then use luminance in small amounts to improve it. That seems an effective strategy.

Remember that if you do end up spending 9 hours on R, G and B, you have collected in that time 3x the total photons that you would collect through your L filter in 1x the time, so your X and Z have significantly increased in SNR as well. You are only comparing the 33% of the total that goes into Y with L, so it seems lopsided when in reality it is not.
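For anyone checking the numbers, both points above can be verified with the same assumptions used earlier in the thread:

```python
# How much RGB time is needed for the derived Y to match 1 hour of L?
# Assumptions as above: L collects 1 photon unit per hour, each RGB
# filter collects 1/3, and Y = 0.29*R + 0.62*G + 0.075*B.
w_sum = 0.29 + 0.62 + 0.075

hours_per_channel = 1 / (w_sum / 3)   # per-channel time to reach Y = 1
print(round(hours_per_channel, 2))    # → 3.05 (≈ 9.14 h total)

# In those ~9 hours the three color filters gather ~3x the photons that
# L gathers in 1 hour; only ~1/3 of them end up in Y, the rest in chroma.
total_photons = 3 * hours_per_channel * (1 / 3)
print(round(total_photons, 2))        # → 3.05
```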

I think the overall lesson here is that small amounts of L can make a significant difference to an image provided you have a good baseline of color data. That would support ratios like 1:1:1:1 or 2:1:1:1 that imagers like John and Ani use.
 