AI in processing · [Deep Sky] Processing techniques · James

JamesCoates (Topic starter):
James:
James,

you didn't mention anything about your processing tools/workflow.

Different tools do give different results, but the differences are not huge once you know the basics of each tool. What makes a huge difference is really knowing your tools in detail and (!!) knowing more than the basics of colour imaging. I studied a great many processing workflows that people published in various internet resources until I reached results that finally pleased me. Tool-wise, I ended up with PixInsight and Adobe Lightroom / Photoshop to get the results I like. What I learned is that processing astrophotography is really different. If you want to benchmark your processing, you can download Hubble raw images and process them yourself, then compare your results with what was published. Below is my result for ARP 273, based on data from Hubble.

[Attached image: ARP_273_Hubble_2024_07_klein.JPG]
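If you want to try this yourself, here is a minimal sketch of one way to pull Hubble data for ARP 273 from the MAST archive using Python's astroquery (the query filters and the slice are just illustrative, not a recommendation of specific datasets):

```python
# Sketch: download Hubble observations of ARP 273 from MAST.
# Requires: pip install astroquery
from astroquery.mast import Observations

# Find HST observations of the target.
obs = Observations.query_criteria(objectname="ARP 273",
                                  obs_collection="HST")

# List the data products for the first few observations and
# keep only the calibrated science FITS files.
products = Observations.get_product_list(obs[:3])
science = Observations.filter_products(products,
                                       productType="SCIENCE",
                                       extension="fits")

# Download them locally for processing/stacking.
manifest = Observations.download_products(science)
print(manifest)
```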

My workflow consists of DSS (mainly) or ZWO Studio for stacking, then Photoshop for processing. I do use StarXTerminator, GradientXTerminator and HLVG. But I've also taught myself to do the jobs the AI does, so as not to become dependent and to keep a bit more control.

I think I get good results, but not exceptional ones.

I think my real concern is beating myself up trying to achieve a result I saw in someone else's image when it can only be done with AI: manipulation of the image that adds details which look amazing but are not there, or are unachievable through photography. Maybe that's where the line is between astrophotography and artistic rendering?

My personal experience is that there are two really critical steps in processing to get the maximum out of the raw data: the move from linear to non-linear, and then, once non-linear, pulling out maximum detail using curves transformations (or similar). If this is done well, you do not need anything like AI. Another thing I have learned personally: I keep my workflow as simple as possible, but when processing nebulae I work only on a starless image and add the stars back at the end of the starless process.
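For concreteness, the linear-to-non-linear move in PixInsight-style histogram stretches comes down to the midtones transfer function; here is a minimal numpy sketch (the midtones balance of 0.01 is just an example value, not a recommendation):

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function used by histogram stretches:
    maps 0 -> 0, m -> 0.5, 1 -> 1, lifting faint signal non-linearly."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# Example: stretch a linear image (values normalized to [0, 1])
# with a midtones balance of 0.01, typical for faint deep-sky data.
linear = np.random.rand(4, 4) * 0.05          # stand-in for linear data
nonlinear = mtf(np.clip(linear, 0, 1), 0.01)  # now non-linear
```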

After a few tries, I've started doing this too. I'm continuing to reprocess the photo, trying to nail down an ideal workflow pattern that I like. Each time I do it, I'm seeing improvement.

I'll be the first to admit, this is a real challenge for me. That's why I'm so attracted to astrophotography!
JanvalFoto:
I think it's also important to remember that processing isn't really a linear, fixed recipe. Settings that work on one image might be terrible on your next, so don't expect a 100% step-by-step process guide that applies every time. There are a lot of choices to make.

Which is why certain educators talk a lot about understanding what happens "under the hood", in order to use the tools more efficiently.

That being said, there are general guidelines for when, and in which order, each process/script works and/or works best.

If we all sat down and processed the same data, I'd guarantee each result would come out very different.

An experienced imager could most likely make more out of less data than an inexperienced imager could with more. It's all part of the learning curve. Some spend years, others months, but keep at it!
Bobinius:
James:
I'm sure it's a loaded question, as to some extent most astrophotographers use moderate AI tools like StarXTerminator etc.

But I've spent 3 nights photographing, and more nights processing, the Cygnus Loop. I came across some images that struck me as odd. For example, one person had a stunning image of the nebula. I was beside myself wondering how they got that image with the same equipment that I have. I'd like to produce images with such detail, so I was hoping to learn how.

I'm in Bortle 5 and their image was taken in a Bortle 9 location. They had 34 frames, I had 121. Both their subs and mine were 300" at the same gain. They didn't mention a filter, but I used my Optolong L-Quad.

So, either I'm terrible at processing (I don't think I am) or they are using AI. However, since I'm relatively new to astrophotography (though not to regular photography), I was wondering...

How commonly is AI used to produce such stunning images in this hobby? Am I wrong to compare my work, hoping to glean knowledge? And to what extent do you think AI use is acceptable to improve an image?

Hi James,


Yes, it is a loaded question, liable to trigger some sensitivities. We had a very long thread here when BlurXTerminator was released; you can find it in the forum. The majority of arguments are driven by the motivation of obtaining much better details than were possible before, while minimizing any questionable aspects of the truth-preserving function of the AI model. Sometimes peppered with an arrogant attitude of "knowing" that it works or that it is scientific, ignoring that it suffers from all the limitations of a statistical model, plus being a black box. Funnily enough, it was discovered that early BXT messed up star colours while being praised for its infallible deconvolution capacities. We never saw details about its training sets, whether it used online Astrobin photos or infringed copyright, or what the validation accuracy of the model was. When the objective is to sell licences, you're not going to disclose these.

The third possibility is that your processing is less than optimal and that the other image is also using AI. And it is AI we are using. People are confused by all this vocabulary of machine learning, deep learning, AI, neural networks etc. AI is the broadest concept, the widest category; machine learning is narrower; and what we are using is based on deep learning via neural networks, a sub-category of machine learning. There are other methods besides neural networks, but neural networks dominate the field.

AI clearly offers an advantage, so the majority of astrophotographers use it. Just check Astrobin IOTD images of the same objects from before the AI era compared to now. Compare the images produced by the same authors before and now, with the same equipment. You can also compare IOTD images from before the AI era produced with high-end equipment to images from smaller telescopes, shorter focal lengths, and even backyards. The difference is not due to some magic improvement in the quality of the smaller scope's primary mirror.

Another advantage of the AI product is that minimal effort is needed to use it. Minimal knowledge too. No knowledge of PSFs, noise production or reduction, layers, algorithms, multiple parameter interactions, complicated mask production etc. You just push the button. For illustration, I applied 3 processes directly to your initial Veil image (hopefully you don't mind, let me know): https://astrob.in/4ejwhs/0/ Gradient Correction (not AI), BlurXT and NoiseXT (AI), with their default settings. I don't need to know how they work or what they do. The result looks better. Stars are rounder, the noise is gone, chrominance noise especially. The problem is that I would never have been able to produce the same thing with the classical tools in PI on a non-linear jpeg (very likely not even on a 32-bit tiff or xisf). Classical noise reduction tools cannot do miracles; they transform the initial image and generate uneven sky backgrounds at different scales. The AI replaces or mixes the initial background with the solution produced by its model. Before AI, I would have had to integrate more, discard images presenting gradients, and work a lot on producing a background model. One click can replace all that. Just like one click can correct my tilt problem or imperfect collimation, replacing lost nights of searching for a solution with the Newtonian.

As Steeve said, it is perfectly fine to compare your work to others'. But it is going to be difficult to draw conclusions about your system alone, since the results you compare against are highly likely to include AI contributions. I would recommend focusing on acquiring high-quality data, calibration files and a robust pre-processing routine. You don't use PixInsight, which is very powerful at all stages; it could help, especially with pre-processing, frame selection and calibration. Get to know what your system can produce before using AI; more integration can always help to improve SNR, and select your raw frames. AI tools are really helpful in managing stars; just be careful not to lose information in the bright areas when recomposing images. I think we are all biased towards overestimating our competence, and AI does not explain everything. Personally, I think its use should be parsimonious and carefully controlled, especially when trying to enhance details.

All best,

Bogdan
jonpauls:
The fact that you're using a one-shot color camera jumps out at me as an important factor. I used an OSC camera when I first started out in astrophotography, but after I switched to a mono camera there was a significant jump in photo quality. In my personal experience it took close to two years of imaging, as much as clear skies allowed, to produce anything I could really be proud of, and I know I have a lot left to learn. As a beginner, you should probably be concerned less about AI (however that term is used in this hobby) and more with learning how to capture the best possible data. Go mono would be my advice here.
JamesCoates (Topic starter):
Bogdan Borz:
Yes, it is a loaded question, liable to trigger some sensitivities. [...]

Thanks for that insight! I know I have a lot to learn. Eventually I'll start learning PI. I wanted to use the tools I already have first, then move on from there. I've seen some good photos processed in PI, at least in part.
JamesCoates (Topic starter):
Jonathan Paul:
The fact that you're using a one-shot color camera jumps out at me as an important factor. [...]

Definitely a mono cam will produce a better result. I had planned to use an OSC cam as a stepping stone, to learn the art and perhaps calibrate my expectations first.
HegAstro:
James:
I'm in Bortle 5 and their image was taken in a Bortle 9 location.


This has been a discussion about AI, but you have an advantage in the most important area, which is light pollution.

I live in B6, but the difference between even B5 and B6 is significant, let alone B5 and B9! 
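To put rough numbers on that gap, a quick sketch (the surface-brightness figures are approximate, illustrative values only):

```python
# Rough illustration of the sky-glow gap between Bortle classes.
# Surface brightness values are approximate, for illustration only.
b5 = 20.5   # mag/arcsec^2, typical-ish Bortle 5
b9 = 18.0   # mag/arcsec^2, bright Bortle 9 (often worse)

# Magnitudes are logarithmic: each magnitude is a factor of 10**0.4 in flux.
ratio = 10 ** (0.4 * (b5 - b9))
print(f"Bortle 9 sky is roughly {ratio:.0f}x brighter than Bortle 5")
# -> roughly 10x more sky background competing with the target
```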

Assuming the other imager's integration times are correctly reported, the difference simply comes down to experience and making sure you are executing things like calibration correctly. The use of AI (here referring to things like StarX and BlurX) is to make things easier, not to magically make a bad image great. As an example, StarX allows easier and better nebula processing in dense star fields.


My recommendation is simply to take advantage of your skies and gain some experience. With a little practice, you'll be able to create great images from a B5 location.
CCDnOES:
Brian Valente:
However, none of these are going to make images better. They may technically improve them, but a poorly rendered image that BlurXT is used on is still a poorly rendered image, just clearer.


Exactly. Another way of saying this is that GIGO (Garbage In, Garbage Out) still applies. Using AI/NN on poorer data results in more artifacts and less accurate results. Systems like BlurX work best on images that have all the other quality boxes ticked first. Good seeing, good optics and good tracking all still matter a great deal. Good sampling is especially important, since the more information is there, the more there is to work with.
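Since sampling keeps coming up, the arithmetic is worth a quick sketch (the rig numbers below are hypothetical, not anyone's actual setup):

```python
# Pixel scale and a rough Nyquist check against seeing.
# Example numbers are hypothetical, not anyone's actual rig.
pixel_um = 3.76       # camera pixel size in microns
focal_mm = 1000.0     # telescope focal length in mm
seeing_fwhm = 2.5     # typical seeing FWHM in arcsec

scale = 206.265 * pixel_um / focal_mm   # arcsec per pixel
print(f"pixel scale: {scale:.2f} arcsec/px")

# Nyquist-style rule of thumb: ~2-3 pixels across the seeing FWHM.
print(f"pixels across FWHM: {seeing_fwhm / scale:.1f}")
# Well below ~2 means undersampled; deconvolution has less to work with.
```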

So using AI systems to improve good data works well; using them to Band-Aid less-than-good data may seem attractive but is a bad idea, as are most shortcuts and money/time savers.
jhayes_tucson:
Miguel T.:
As for BlurXTerminator, the AI part is most likely guessing the optimal PSF function to use. Deconvolution itself is not that complicated and has been done since forever in many different fields.


Miguel,
BXT does not use deconvolution by guessing a PSF function.  It uses a neural network that has been trained using a large sample of mathematically blurred image patches.  The NN is finding the best fit between a given image patch and a solution already computed in the training data.  A version of the computed, non-blurred image data is then substituted for the data in the given image patch.  The amount of data that's substituted is determined by the slider "amount" value that the user provides.  The NN avoids the convergence limitations of most common deconvolution algorithms.  It also reduces the effect of noise on the computed solution.
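RC Astro has not published the internals, so purely as an illustration of the kind of blend the "amount" slider could represent (this is not BXT's actual code):

```python
import numpy as np

def blend_amount(original, nn_output, amount):
    """Illustrative only: mix the network's sharpened patch back
    into the original according to a user 'amount' in [0, 1]."""
    amount = float(np.clip(amount, 0.0, 1.0))
    return (1.0 - amount) * original + amount * nn_output

# amount=0 returns the original patch, amount=1 the full NN solution.
```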

John
ChuckNovice:
John Hayes:
BXT does not use deconvolution by guessing a PSF function. It uses a neural network that has been trained using a large sample of mathematically blurred image patches. [...]

The author himself: "It's recognizing what point spread function the image has been blurred by and then using that information to deconvolve the input blurry image into an output deconvolved image."

https://www.youtube.com/watch?v=6hkVBnYYlss @ 1:15:18 

The best fit you're talking about is figuring out the PSF that blurred your image, as I was saying. Whether a classic deconvolution follows, or the deconvolved result comes straight out of the neural network in one implicit step, changes nothing about the purpose of my post, which wasn't intended to be a deep dive into the details of this specific tool to begin with. We understand that a trained neural network is running all this. The most important part: it is not generative AI; your image had the data to begin with, and there is no cheating.
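For readers who have not met "classic deconvolution": Richardson-Lucy is the textbook iteration that estimating a PSF and then deconvolving refers to, sketched here with an assumed Gaussian PSF standing in for seeing blur (illustrative only, float image assumed):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=30):
    """Classic Richardson-Lucy deconvolution (no neural network)."""
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Assumed PSF: a small normalized Gaussian standing in for seeing blur.
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()
```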
ldhallett:
I see the Xterminator products as tools, not really AI, because you decide how much and when to use them. Processing in astrophotography, after the basic stacking/registering/calibration, is like 50 first dates for me. To get the most out of your data, no two data sets require the exact same steps in the exact same order. Even if you use the same scope, have the same integration time, the same target and the same Bortle sky, the final images will look different. Too many other variables are involved; skill in processing is just one of them. That's why asking someone exactly how they processed their data into a 'stunning' image, so you can use the same steps to process your data into a stunning image, is not really possible. You can probably get close, but how you get there will almost always be different. That is IMHO…
jhayes_tucson:
Miguel T.:
The author himself: "It's recognizing what point spread function the image has been blurred by and then using that information to deconvolve the input blurry image into an output deconvolved image." [...]


First, I completely agree with your conclusion.  

Second, my notion of how Russ does the actual training may not be precisely correct, but my point is that although the algorithm may be learning a PSF, it is not using that data to perform a classical deconvolution, as you initially seemed to imply. I was trying to say what Russ says at the 1:14 mark in that video, although I left out the step of using the learned PSF to compute the blurred image patches in the training data. In the end, it's not worth speculating over the precise details of what Russ actually did without having him involved in the discussion.

John