Howdy y'all -- I wanted to demonstrate how much of a difference luminance can make to a broadband image. Most comparisons that I've seen will use X hours of RGB compared against (X + Y) hours of LRGB (where Y is the number of additional hours in L). I don't think that makes for a very fair comparison. This comparison uses the same total integration time for each image: 5.5 hours.

In the RGB image, this is 5.5 hours split roughly evenly between R, G, and B. In the LRGB image, 2.5 hours is L and 3 hours is split roughly evenly between R, G, and B.

Both images were stacked and processed using the same parameters and techniques. The light frames used in the LRGB integration were selected at random -- I didn't pick the best frames out of the bunch. Stars have been removed to allow for easier comparison.

Data was acquired during a new moon in Bortle 4. Details about the gear are on my AstroBin post: https://app.astrobin.com/u/ThisIsntRealWakeUp?i=3i5tls

LRGB (click to open the full size):

RGB (click to open the full size):

If you ask me, the LRGB image is clearly superior to the RGB image. This demonstrates the benefits of a mono camera vs OSC, even for broadband imaging. (Though do note that an OSC camera also has the downside of Bayer interpolation, which reduces sharpness. That is not represented here, because both images were taken with a mono camera.)
For dusty targets like this one, using luminance is always a good idea. Some other objects, like globs or galaxies, can at times by their nature reduce the value of luminance.
Noah Tingey: ...If you ask me, the LRGB image is clearly superior to the RGB image. This demonstrates the benefits of a mono camera vs OSC, even for broadband imaging.

I agree the LRGB looks more detailed, by quite a bit.
Thanks for the comparison! Really good to see as a confirmation!

Is there also a consensus on how much RGB is optimal before only adding more Lum? E.g.: I have so far 8h in Lum and 3h in each channel of RGB. How should I go forward if I want to capture 12 more hours? Is it more optimal to just shoot 12h Lum, or 6h Lum / 6h RGB?

My guess would be: it depends on the target/goal, as gathering more integration time will mostly benefit revealing more faint details. If you go for colorless details (IFN, ..), only Lum is sufficient. If you want e.g. more background stars, more RGB is better. If you want more faint details that also have color (also IFN as an example), you will need more Lum and more RGB.

Is that assumption - more or less - correct?
Nice comparison. Regardless of the endless debates, this easily becomes apparent to those who have done both. In the same way, L/mono makes just as big a difference in sharpness and contrast.
It seems like demonstrating the obvious.
You can also combine L + RGB as a synthetic L to enhance SNR even more. It doesn't work well on every target, but for dust or galaxies it helps. Here's an example of how to do it with PixInsight: https://www.youtube.com/watch?v=Q2PLUI2hBvQ
Agree with your conclusion. Just wondered if you had tried a CFA drizzle integration of the RGB - as it would not have suffered from the blurring inherent in debayering. So maybe a slightly fairer comparison with the LRGB?
Thanks for the comparison! Really good to see as a confirmation! 
Is there also a consensus on how much RGB is optimal, before only adding more Lum? E.g.: I have so far 8h in Lum and 3h in each channel of RGB. How should I go forward if I want to capture 12 more hours? Is it more optimal to just shoot 12h Lum, or 6h Lum / 6h RGB?
My guess would be: it depends on the target/goal, as gathering more integration time will mostly benefit revealing more faint details. If you go for colorless details (IFN, ..), only Lum is sufficient. If you want e.g. more background stars, more RGB is better. If you want more faint details that also have color (also IFN as an example), you will need more Lum and more RGB.
Is that assumption - more or less - correct?

The go-to ratio that I see is 4:1:1:1 L:R:G:B. But I've never seen someone experiment with this, nor do I know where this rule of thumb comes from.

In the future, I plan to experiment with ~25 each of R, G, and B (or however many are needed to ensure I have no walking noise or dry pixels after a drizzle integration), and then the rest will be only L. This is because you can aggressively denoise your RGB data before applying your luminance, with very, very little noticeable degradation from denoising your RGB data.

To demonstrate this, here is a comparison where I applied L to a blurred RGB image versus applying the same L to the unblurred RGB image:

Blurred RGB, to give you a sense of how much I Gaussian blurred my RGB before applying L:

L + blurred RGB:

L + RGB that was never blurred:

Really the only differences that I can easily spot are that there's no chrominance noise in the L + blurred RGB image and that smaller stars have lost their color. So I think that you can safely denoise your RGB stack pretty aggressively before applying L. Which means you don't need to spend a lot of time gathering RGB to get a high SNR in the first place. But I'll play around with this some more and see how it turns out in my future images.

Though like you said, I bet there are some targets where it's important not to skimp on your RGB integration. Like if you want fine color detail in small galaxies, perhaps.

Editing to add: To be clear, I don't think you should denoise your images by using a Gaussian blur. This is just to demonstrate how sloppily you can denoise an image while still being able to pull back a lot of the detail with luminance.
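The blur-then-restore effect can be sketched numerically. Here's a toy NumPy/SciPy illustration (this is not the actual processing above -- the square "detail" pattern, the noise levels, and the simple ratio-based recombination are all stand-ins for real data and PixInsight's LRGBCombination): even after heavily blurring the RGB, re-imposing a sharp L brings the spatial detail back.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Stand-in data: a sharp "luminance" pattern plus noisy RGB derived from it.
lum = np.zeros((64, 64))
lum[16:48, 16:48] = 1.0                       # a bright square as the fine "detail"
rgb = np.stack([lum * w for w in (0.9, 0.6, 0.3)], axis=-1)
rgb += rng.normal(0, 0.2, rgb.shape)          # add noise to every channel

# Aggressive "denoise" of the RGB via Gaussian blur, as in the demo above.
rgb_blurred = gaussian_filter(rgb, sigma=(3, 3, 0))

# Re-impose the sharp L: scale each blurred pixel so its brightness matches
# the L channel (a simple ratio-based recombination, not LRGBCombination).
eps = 1e-6
brightness = np.maximum(rgb_blurred.mean(axis=-1, keepdims=True), eps)
lrgb = rgb_blurred * lum[..., None] / brightness

# Edge sharpness of the result tracks L, not the blurred RGB.
edge_sharp = np.abs(np.diff(lrgb.mean(-1)[32])).max()
edge_blur = np.abs(np.diff(rgb_blurred.mean(-1)[32])).max()
print(edge_sharp > edge_blur)                 # True: detail restored from L
```

The channel ratios (the color) come from the blurred RGB while the spatial structure comes from L, which is exactly why chrominance noise disappears but fine detail survives.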
Tim Hawkes: Agree with your conclusion. Just wondered if you had tried a CFA drizzle integration of the RGB - as it would not have suffered from the blurring inherent in debayering. So maybe a slightly fairer comparison with the LRGB? My apologies if I've misunderstood what you're saying, but just to be clear: both the images in my comparison were taken on a mono camera -- neither one had a CFA. It's just that the RGB-only image did not use L whereas the LRGB image did.
Noah Tingey: My apologies if I've misunderstood what you're saying, but just to be clear: both the images in my comparison were taken on a mono camera -- neither one had a CFA. It's just that the RGB-only image did not use L whereas the LRGB image did.

No, I'm the one that has it wrong. Thanks -- I didn't read your original post properly.
Noah Tingey: Editing to add: To be clear, I don't think you should denoise your images by using a Gaussian blur. This is just to demonstrate how sloppily you can denoise an image while still being able to pull back a lot of the detail with luminance.

In addition to de-noising, I actually also blur the RGB a bit (not as much as your example), and to counter poor seeing I also collect Luminance (or Ha for emission nebulae) when seeing is best and RGB otherwise.

Cheers, Scott
Noah Tingey: ...
Though like you said, I bet there are some targets where it's important to not skimp on your RGB integration. Like if you want fine color detail in small galaxies, perhaps.
Editing to add: To be clear, I don't think you should denoise your images by using a Gaussian blur. This is just to demonstrate how sloppily you can denoise an image while still being able to pull back a lot of the detail with luminance.

Like you said, for most galaxies I tend not to denoise RGB as much, since you obliterate fine color details in the galaxy. I tend to denoise and lower the saturation of the sky-background areas more in most of my images, and minimally denoise the brighter, higher-S/N areas to preserve fine color details there. Also, the latest NoiseXTerminator allows selective color noise reduction, which is great for the RGB components of an image.
@Noah Tingey, did you try creating a synthetic luminance (using the ImageIntegration tool in PixInsight) from the RGB-only dataset and doing an LRGB combination with that synthetic luminance similar to how you combined "real" luminance with RGB?
Ani Shastry: @Noah Tingey, did you try creating a synthetic luminance (using the ImageIntegration tool in PixInsight) from the RGB-only dataset and doing an LRGB combination with that synthetic luminance similar to how you combined "real" luminance with RGB?

To cut down on chrominance noise without losing much luminance detail in the RGB image, I extracted the luminance of the RGB image after channel-combining and doing an initial stretch. I then processed it the same way I processed the real luminance for the LRGB image. I did it this way for two reasons:

1. I wanted to give the RGB-only image its best chance, editing it as I would if I were an OSC camera user or someone without real luminance data to work with.

2. I knew that I would be doing some processes specific to the luminance of the LRGB image, and I wanted to keep things as apples-to-apples as possible. So I didn't want to do some edits to the real luminance without doing the same edits to the extracted luminance.

So not quite the same as making a synthetic luminance with the ImageIntegration tool. But I did extract the luminance from the RGB image and process it as if it were real luminance data.
I have had some success combining lum data from my mono camera with OSC data I captured last year. As has been outlined in prior posts, the lum data for an LRGB image is VASTLY more important than the colour, so I can take my 11 MP OSC data from my 294 colour camera and use lum from the ToupTek ATR2600M to create L-OSC images that benefit from the detail the 26 MP sensor provides, with colour data that is FAR less detailed.

Here's a crop from the OSC data, then the OSC data with the 2600M lum utilised.

Worth noting: that's the same colour image used, but the luminosity drove the brightness and sharpness up rather dramatically...
Hi,
Sometimes I recommend a super luminance: better SNR just using the ImageIntegration tool. Add the LRGB masters only (never narrowband data) and select SNR from the Weights window.
The result will be a Lum with better SNR, meaning more details and less noise.
And maybe in some cases you will need to control your Lum with LinearFit.
CS, Brian
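Outside PixInsight, the statistical idea behind SNR weighting can be sketched as inverse-variance weighting: weight each master by 1/sigma^2 so the cleanest frame dominates. A toy NumPy sketch (illustrative data only; PixInsight's actual SNR weight computation differs in detail):

```python
import numpy as np

def super_luminance(masters, noise_sigmas):
    """Combine registered masters (e.g. L, R, G, B) into one higher-SNR
    luminance using inverse-variance weights (w ~ 1/sigma^2)."""
    weights = np.array([1.0 / s**2 for s in noise_sigmas])
    weights /= weights.sum()
    return np.tensordot(weights, np.stack(masters), axes=1)

# Toy data: the same underlying signal with a different noise level per master.
rng = np.random.default_rng(1)
signal = rng.random((32, 32))
sigmas = [0.05, 0.15, 0.15, 0.15]        # L is cleaner than R, G, or B
masters = [signal + rng.normal(0, s, signal.shape) for s in sigmas]

combined = super_luminance(masters, sigmas)
print(np.std(combined - signal) < min(sigmas))  # True: beats even the best single master
```

As the post says, you may still want to LinearFit the result to your real L master so the brightness scale matches before LRGB combination.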
Sure, you will always get a higher signal with an LRGB approach than with just RGB, even if the total exposure time is the same, simply because L is equivalent to capturing light in R, G, and B channels at the same time. If you look at the transmittance curve of an L filter, you’ll see that it's practically the same as having R+G+B in a single filter. That’s why, despite having the same exposure, you’re actually capturing photons from the R, G, and B channels simultaneously. For SNR purposes, it will always be higher because you have more signal.
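That SNR argument can be put into numbers. Assuming the shot-noise-limited case and an L passband roughly equal to the union of R, G, and B (so about 3x the photons of any single color filter per unit time -- illustrative figures, not measured transmittance), the per-sub SNR gain works out to about sqrt(3):

```python
import math

N_color = 1000                 # photons through one color filter (illustrative count)
N_lum = 3 * N_color            # same exposure through L, ~3x the passband

# Shot-noise-limited SNR is N / sqrt(N) = sqrt(N).
snr_color = N_color / math.sqrt(N_color)
snr_lum = N_lum / math.sqrt(N_lum)

print(round(snr_lum / snr_color, 3))  # 1.732, i.e. sqrt(3)
```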
Noah Tingey: To cut down on chrominance noise without losing much luminance detail in the RGB image, I extracted the luminance of the RGB image after channel-combining and doing an initial stretch. I then processed it the same way I processed the real Luminance for the LRGB image.

Instead of extracting luminance from RGB, what does the comparison look like if, as suggested above, you create a synthetic luminance by integrating the RGB channels and then create a new LRGB with the synthetic luminance and the original RGB? The synthetic luminance should have higher S/N than the luminance extracted from the RGB channels, and in my opinion would be a better comparison of what can be done with only RGB data as opposed to LRGB data.
One subtle and yet important thing:
Luminance of an RGB color image is not R+G+B. Luminance contains more contribution from G than R and B. An L-filter mono image is closer (but not identical) to R+G+B than the luminance of RGB.
From S/N ratio's point of view, R+G+B could be slightly better than the luminance of RGB. But you will have some loss of color fidelity after you use this R+G+B to replace luminance, because as I said above, R+G+B is not true luminance. In some sense, you are sacrificing the color fidelity by gaining the small amount of S/N.
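The point about G dominating can be made concrete. Using the Rec. 709 luma coefficients as an example (the exact weights depend on the color space in use), green alone outweighs red and blue combined, whereas a simple R+G+B brightness treats all three channels equally:

```python
# Rec. 709 luma coefficients (one common convention; other color
# spaces such as Rec. 601 use different weights).
def luminance_709(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def rgb_mean(r, g, b):
    return (r + g + b) / 3.0   # an "R+G+B"-style brightness, channels weighted equally

# A pure-red pixel: bright through an L filter, dim in perceptual luminance.
print(luminance_709(1, 0, 0))   # 0.2126
print(rgb_mean(1, 0, 0))        # 0.333...

# Green alone contributes more than red and blue combined.
print(luminance_709(0, 1, 0) > luminance_709(1, 0, 1))  # True
```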
andrea tasselli: It seems like demonstrating the obvious.

Certainly in some sense it is, but there are many debates in places like Cloudy Nights and possibly elsewhere; people love to be contrarian.
andrea tasselli: It seems like demonstrating the obvious. You'd be surprised how many people are preaching that luminance is pointless, and the subsequent cult following they're developing.
Brian Puhl:
andrea tasselli: It seems like demonstrating the obvious.
You'd be surprised how many people are preaching that luminance is pointless, and the subsequent cult following they're developing.

The simplest way I can think of this question is: "Do I want to throw away 2/3 of my photons on all of my subs, or only on some of my subs?"
Wei-Hao Wang: One subtle and yet important thing:
Luminance of an RGB color image is not R+G+B. Luminance contains more contribution from G than R and B. An L-filter mono image is closer (but not identical) to R+G+B than the luminance of RGB.
From S/N ratio's point of view, R+G+B could be slightly better than the luminance of RGB. But you will have some loss of color fidelity after you use this R+G+B to replace luminance, because as I said above, R+G+B is not true luminance. In some sense, you are sacrificing the color fidelity by gaining the small amount of S/N.

Wei-Hao, I completely agree. I just want to add that the best way to extract an L channel from RGB data is to convert it to LAB space. Then you can either substitute the true Lum data to recombine the image to get an LRGB result, or combine the syn-Lum data with the true Lum data to substitute back into the LAB data for conversion back to LRGB.

The tricky thing about combining the syn-Lum with true-Lum is that you want to do a weighted average to get the statistics right. I've done that by making a copy of each so that I can use the ImageIntegration tool along with its weighting functions. The image copies are necessary because the ImageIntegration tool requires at least three images. By making copies, you don't disturb the statistics and you get around that minimum-number-of-images requirement.

John
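Outside PixInsight, the weighted-average step itself is simple; the duplicate-image trick is only needed to satisfy ImageIntegration's three-image minimum. A toy NumPy sketch of the inverse-variance blend (the LAB extraction step isn't shown, and the sigma values would come from your own noise estimates):

```python
import numpy as np

def combine_lum(true_lum, syn_lum, sigma_true, sigma_syn):
    """Inverse-variance weighted average of the real L master and a
    synthetic L extracted from the RGB data."""
    w_t, w_s = 1.0 / sigma_true**2, 1.0 / sigma_syn**2
    return (w_t * true_lum + w_s * syn_lum) / (w_t + w_s)

# Toy check: two noisy views of the same underlying signal.
rng = np.random.default_rng(2)
signal = rng.random((64, 64))
true_lum = signal + rng.normal(0, 0.05, signal.shape)
syn_lum = signal + rng.normal(0, 0.10, signal.shape)

blend = combine_lum(true_lum, syn_lum, 0.05, 0.10)
print(np.std(blend - signal) < 0.05)  # True: the blend is cleaner than either input
```

Weighting by 1/sigma^2 is what "gets the statistics right": the cleaner real L contributes more, but the synthetic L still reduces the final noise.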
Kevin Morefield:
Brian Puhl:
andrea tasselli: It seems like demonstrating the obvious.
You'd be surprised how many people are preaching that luminance is pointless, and the subsequent cult following they're developing. The simplest way I can think of this question is "do I want to throw away 2/3 of my photons on all of my subs or some of my subs?"

I'm sorry Kevin, you lost me. Why are we throwing away 2/3 of the photons anywhere?

John