I have the original version and agree with you, Herbert: it's a big hammer. I have also heard that the new Studio version isn't as good, but I haven't verified that.

---

What's missing here is the great painting technique in PS for these narrowband signals hidden behind broadband. Lol

---

Thx, Noah. I guess I'll stick to DeepSNR for now. Russ Croman's next update is for NXT, so maybe he'll improve it. Idk what to think about Bray: the Topaz results don't match his, and I know he is using Topaz, but I give up trying to form any conclusion about him (besides the fact that he uses something related to Topaz). At least I found out about DeepSNR. Also, I found out recently that you can see some images from Marcel showing he uses Topaz stuff in the equipment info section.

---

Well, this is interesting: Wolfgang Promper's latest image uses a software called Imagenomic Noiseware. It seems to be free, and it seems to be a plugin for Photoshop.

---

Yes, I've been using Noiseware for years. It is a Photoshop plug-in but not free; I think it is about 60 euros or so. It gives you a lot of control over frequency and tonal range. Wolfgang

---

@Wolfgang Promper thank you for showing what denoise software you use, Wolfgang! And somehow I did get it for free... EDIT: I checked, it's free (if I remember correctly) only for the first 14 days after downloading, and it can be bought as a one-time purchase for $56 (as of this moment).

---

From looking at the example posted, the noise reduction is believable, meaning it seems to be doing actual noise reduction, not inventing detail. I will have to try it too. It appears to be a yearly fee of $70 for the PS or Lightroom plugin. Thank you, Wolfgang, for sharing your methods and helping the community.

---

I've only perused this thread, so perhaps I have missed any reference.... I don't think there is any magic elixir for overcoming noise in our images ~ a challenge in processing we all face. And before we think that Bray Falls and friends have discovered some double-top-secret method of removing noise, we should take note that many of his / their images are the product of many, many hours of integration. This ~ of course ~ yields far stronger signal-to-noise ratios: more detail (especially in the otherwise dimmer areas of any target) and cleaner, less noisy backgrounds. I would think that any of the typical noise reduction tools we use would have an easier time removing whatever little noise exists in such deep-integration images. In other words, noise reduction cannot produce detail that is not there....

---

@Bray Falls has a comprehensive processing course on his astrofalls.com website. I invested in his course as well as Adam Block's courses. It's an investment in terms of money and time, but I've found both to be very helpful in navigating the processing learning curve. Adam Block is hands down the authoritative source for all things Pixinsight. On the other hand, Bray is not a Pixinsight "purist". His courses cover tools like Astro Pixel Processor, Pixinsight and Photoshop. His approach mirrors my workflow, which uses all of these tools. Bray really leverages a lot of Photoshop tools for his images, and I've barely scratched the surface on learning his techniques. Surprisingly, I'm getting the hang of Pixinsight, but most of Bray's Photoshop processes are still a challenge. His courses do a good job breaking things down, so eventually, I'm hoping this old dog can still learn a few new tricks.

---

Yuexiao Shen: If you check the details in the Ha and OIII channels, you will find too many differences between some imagers' results and others'.

---

Supeng Liu: Yuexiao Shen: Perhaps you should try shooting these nebulae for yourself to let us know what's real ;) But I think you wouldn't even want to try to deal with the noisy mess you know you will get for all your effort.

---

Bray Falls: Supeng Liu: Yuexiao Shen: "Take more data, there's the big secret everyone should know. Only way around the noise is data. Eventually you'll get enough signal." Eventually is the problem, though. That, or get an even more powerful camera...

---

Bray Falls: Supeng Liu: Yuexiao Shen: This is exactly why I set this thing up... but here is the thing: sometimes even 500 hrs of exposure from B1 is not enough! This is why in recent years you see lots of collaborative groups starting; one person is often not enough to get the capture done. Even so, the end result is often way too faint for traditional NR methods, and so one might need to get creative to make an aesthetically pleasing result. I always try to keep things true to my raw data, but as you can see from the Goblet of Fire Nebula, even with hundreds of hours of exposure there is not much to go off of. If someone wanted a ground-truth image of that object with the same SNR as the final image without noise reduction, it would probably take 3000+ hrs of exposure time with the setup I used. There is no way I'm doing that.

---

It is a rather obvious thing to state that the solution is to take more data; I think the vast majority of us know this. It isn't as big a secret as people think. The trouble is when you do the math about how much data is needed: if, after 40 hours of integration, you get a noisy mess, doubling your SNR will take 120 more hours (four times the total). The math rapidly gets unrealistic. This is where noise reduction methods come in, along with the big debate about what is and isn't real.

As for more powerful cameras: with modern CMOS sensors, you are already very close to the limits of achievable QE. And since the primary source of noise is, in almost all cases, shot noise, less noisy cameras are not helpful either.

I recall a few years ago an imager was attacked for using Topaz AI, to the extent that a panel of judges was asked to look at his raw subs and decide whether he could realistically achieve his results. So the debate about these methods is not new. Multiple threads and debates exist about the now well-established Croman methods too.

I think perhaps the solution is for those using methods that are not well established, or in cases where it is completely unknown whether a derived feature is real, to be upfront and say: "I used an experimental noise reduction method here to generate this image. I cannot speak to how real these features are." Collaborative images are another, more realistic solution to the SNR problem.
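The time math above follows from shot-noise statistics: SNR grows with the square root of total integration time, so multiplying SNR by a factor k takes k² times the total time. A quick sketch of that arithmetic (the function name is mine, purely illustrative):

```python
def extra_hours_for_snr_gain(current_hours: float, snr_gain: float) -> float:
    """Additional integration time needed to multiply SNR by `snr_gain`,
    assuming shot-noise-limited imaging (SNR scales as sqrt of time)."""
    required_total = current_hours * snr_gain ** 2
    return required_total - current_hours

# The example from the post: after 40 h, doubling SNR needs 160 h total,
# i.e. 120 more hours.
print(extra_hours_for_snr_gain(40, 2))  # 120.0
```

Tripling the SNR from the same starting point would already need 320 extra hours, which is why the numbers get unrealistic so quickly.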

---

Bray Falls: And I don't blame you for not wanting to do that. Anything over 25 hours is unimaginable for me; even with a remote setup, I can't imagine exceeding 100-200 hours. The fact that people do that and still get noisy data is kind of off-putting.

---

The problem arises if there is an implicit assumption or representation that what is shown is real. If it is clearly stated that the methods used are experimental, and there is no way to know if the features are connected to reality or not - there really is no problem. TBH, I do not know what representation was or was not made in the image in question. I state this simply as a possible means to settle the debate.

---

@Bray Falls won't you drop us a good big hint on your denoise? Or just share the essential parts? Anything? I'm sure others don't want to take 3000 hours of exposure either, and NXT/DeepSNR/whatever else is well known sometimes doesn't give such smooth-looking nebulae as yours. Or are we forced to learn anything at all by paying for an expensive course?

---

> This is why in recent years you see lots of collaborative groups starting, it is because one person is often not enough to get the capture done

As much as I like the images and discoveries that these groups can create - and they are really spectacular - I do feel that they should be in a totally different category as far as awards are concerned. Yes, I know there is a symbol that denotes a collaboration, but putting them in the same competition for awards is really pretty unfair to the vast majority of people who just want to do things themselves and produce "their own image". I think most would agree that many, if not most, of us do this because we can be proud that WE did the whole thing from start to finish, and one just cannot get that from a collaboration. I would argue for a separate category for images done by collaborations or with non-owned professional-level equipment such as rented mega-scopes or purchased data. The way it is, awards are clearly becoming increasingly less fair to the lone imager.

---

Bray Falls: Most people understand that faint targets are hard - the problem is giving people unrealistic expectations about the results they can expect for a given amount of time or effort put in. There are a select few who seem to take this to the extreme. Being closed and opaque about your "trade secret" processing methods only exacerbates the problem. I know many people who have imaged some very faint targets and put in a ton of time, just to realize the results don't look like what they expected, because of exactly this trend. It's certainly not their fault they were led to believe something was other than how it is. We should expect better, especially from those in positions of authority in the community.

---

> Bill McLaughlin: This is why in recent years you see lots of collaborative groups starting, it is because one person is often not enough to get the capture done

This debate has been done to death. In other threads, I and many others have made the case that the playing field when it comes to awards is not level. The "haves" - those who benefit from the current system - will fight to the death to preserve it. They will see any change as diluting their accomplishments and (often rather large) investments. You will get nowhere.

---

Oscar: No hints, other than stay creative and don't be afraid to try something weird. I don't discuss any weird noise reduction in my course, that guide is mostly dealing with classical objects that don't need it. Deep SNR can help a lot sometimes, depending on the object it can be really good. It totally folds on the goblet of fire though. I also don't use NXT at all because it leaves the image feeling wormy/crunchy. Topaz can be good if done carefully and you manage artifacts. For your WR16 dataset, I think it is just a case of needing more data. There won't be any secret NR technique that will make it perfectly clean in a natural way. |

---

Charles Hagen: Bray Falls: I showed the raw data, so I think it is pretty transparent regarding expectations.

---

Charles Hagen: True enough. OTOH, sometimes better processing is just experience. After doing imaging for 30+ years, I have found that one needs to be creative and draw from an "experience toolbox". That allows you to apply different variations on the techniques you have learned and use them in different combinations for different images as required. Maybe that comes from my former occupation (now retired), where one learned the basics and the academic background, then learned new techniques over time - what works best and when - and tailored each procedure individually, drawing from that "toolbox". Every (body) is different, as is every image.

Having said that, the images we are talking about have a lot of commonalities, so I do suspect there is at least some significant "secret sauce" involved as well.

Finally, one thing not mentioned but obvious is location. Dark sites are clearly better for S/N, but sites with better seeing are also better for detail. For that reason, remote scopes are always going to do better overall. All of them have dark skies, although only a few - mostly in California, for US imagers - have great seeing as well.

---

As far as I can tell, a lot of photographers expect a one-stop-shop noise reduction, and there's no such thing. Noise in images of very faint targets should be managed throughout the whole processing. For example, in my latest image I did the following noise reduction steps:

- drizzle x2 and then resample to the original sensor resolution with custom settings in Pix,
- NoiseXterminator before stretching,
- Neat Image, with noise reduction levels dialed in separately per frequency for Ha and OIII (tweaking it took me half an hour),
- Camera Raw noise reduction in PS,
- Topaz Denoise (without the so-called "sharpening"),
- another round of Camera Raw noise reduction in some spots, with masks.

Anybody going after really faint targets should have not one tool but a toolbox, know well when and how to apply each process, and practice a lot! That said, there's no getting around long integration time.
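As a footnote on the per-frequency step: tools like Neat Image let you set a separate strength for each frequency band. The underlying idea can be sketched in plain Python on a 1D signal - split the data into detail bands by successive blurring, then keep only a fraction of each band. Everything here (function names, sigma and strength values, the toy signal) is my own illustration, not taken from any of the tools above:

```python
import math

def gaussian_kernel(sigma):
    """Normalized 1D Gaussian kernel, truncated at 3 sigma."""
    radius = int(3 * sigma)
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    total = sum(k)
    return [v / total for v in k]

def blur(signal, sigma):
    """Zero-padded 'same'-size convolution with a Gaussian kernel."""
    kernel = gaussian_kernel(sigma)
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - r
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        out.append(acc)
    return out

def band_denoise(signal, sigmas=(1.0, 4.0), strengths=(0.5, 0.2)):
    """Suppress a fraction strengths[i] of the detail band between
    successive blur scales; strengths of 0 reconstruct the input."""
    bands, prev = [], list(signal)
    for sigma in sigmas:
        b = blur(prev, sigma)
        bands.append([p - q for p, q in zip(prev, b)])
        prev = b
    out = prev  # low-frequency base
    for band, s in zip(bands, strengths):
        out = [o + (1.0 - s) * v for o, v in zip(out, band)]
    return out

# A smooth signal plus a high-frequency "noise" term:
noisy = [math.sin(i / 8.0) + 0.3 * math.sin(2.9 * i) for i in range(128)]
smoothed = band_denoise(noisy)
```

With both strengths at 0 the bands sum back to the original signal exactly, which is a handy sanity check that the decomposition itself loses nothing; the per-band strengths are the knobs you "dial in" per channel.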