James:
I'm sure it's a loaded question, as to some extent most astrophotographers use AI tools like StarXTerminator, etc.
But I've spent 3 nights photographing, and more nights processing, the Cygnus Loop. I came across some images that strike me as odd. For example, one person had a stunning image of the nebula. I was beside myself wondering how they got that image with the same equipment that I have. I'd like to produce images with such detail, so I was hoping to learn how.
I'm in a Bortle 5 zone and their image was taken from a Bortle 9 location. They had 34 frames; I had 121. Both their subs and mine were 300" at the same gain. They didn't mention a filter, but I used my Optolong L-Quad.
So, either I'm terrible at processing (I don't think I am) or they are using AI. However, since I'm relatively new at astrophotography (not regular photography though), I was wondering...
How commonly is AI used to produce such stunning images in this hobby? Am I wrong to compare my work, hoping to glean knowledge? And to what extent do you think AI use is acceptable to improve an image?
Hi James,
Yes, it is a loaded question, likely to trigger some sensitivities. We had a very long thread here when BlurXTerminator was released; you can find it in the forum. Most of the arguments are driven by the motivation of obtaining much better detail than was possible before, while minimizing any questionable aspects of the AI model's truth-preserving function. They are sometimes peppered with an arrogant attitude of "knowing" that it works or that it is scientific, ignoring that it suffers from all the limitations of a statistical model, plus being a black box. Funnily enough, it was discovered that early BXT messed up star colours, even while being praised for its infallible deconvolution capabilities. We never saw details about its training sets, whether it used online Astrobin photos or infringed copyright, or what the validation accuracy of the model was. When the objective is to sell licences, you're not going to disclose these.
The third possibility is that your processing is less than optimal and that the other image also uses AI. And it is AI we are using: people are confused by all the vocabulary of machine learning, deep learning, AI, neural networks, etc. AI is the broadest concept, the widest category; machine learning is narrower; and what we are using is based on deep learning via neural networks, a sub-category of machine learning. There are other methods besides neural networks, but these dominate the field.
AI clearly offers an advantage, so the majority of astrophotographers use it. Just check the IOTD images on Astrobin of the same objects before the AI era and compare them to now. Compare the images produced by the same authors, with the same equipment, before and now. You can also compare pre-AI IOTD images produced with high-end equipment to recent ones taken with smaller telescopes, shorter focal lengths, even from a backyard. The difference is not due to some magic improvement in the quality of the smaller scope's primary mirror.
Another advantage of the AI products is that minimal effort is needed to use them. Minimal knowledge too: no knowledge of the PSF, noise production or reduction, layers, algorithms, interactions between multiple parameters, complicated mask production, etc. You just push the button. As an illustration, I applied three processes directly to your initial Veil image (hopefully you don't mind, let me know):
https://astrob.in/4ejwhs/0/ Gradient Correction (not AI), then BlurXTerminator and NoiseXTerminator (AI), all with their default settings. I don't need to know how they work or what they do; the result looks better. Stars are rounder and the noise is gone, the chrominance noise especially. The problem is that I would never have been able to produce the same thing with the classical tools in PI on a non-linear jpeg (very likely not even on a 32-bit tiff or xisf). Classical noise reduction tools cannot do miracles: they transform the initial image and generate uneven sky backgrounds at different scales, whereas the AI replaces or mixes the initial background with the solution produced by its model. Before AI, I would have had to integrate more, discard frames presenting gradients, and work a lot on producing a background model. One click can replace all that, just like one click can correct my tilt problem or imperfect collimation, replacing nights lost searching for a solution with the Newtonian.
As Steeve said, it is perfectly fine to compare your work to others'. But it is going to be difficult to draw conclusions about your system alone, since the results you compare against are highly likely to include AI contributions. I would recommend focusing on acquiring high-quality data, calibration files and a robust pre-processing routine. You don't use PixInsight, which is very powerful at all stages; it could help, especially with pre-processing, frame selection and calibration. Get to know what your system can produce before using AI: select your raw frames, and remember that more integration always helps to improve SNR. AI tools are really helpful in managing stars, just be careful not to lose information in the bright areas when recomposing images. I think we are all biased towards overestimating our competence, and AI does not explain everything. Personally I think its use should be parsimonious and carefully controlled, especially when trying to enhance details.
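On the integration point, a back-of-the-envelope sketch: under the usual shot-noise assumption, stacking N equal subs improves SNR roughly as the square root of total exposure, so you can estimate the raw-data advantage your 121 subs have over their 34 (this ignores sky brightness, filters and calibration, so treat it only as a rough bound):

```python
import math

def snr_ratio(frames_a: int, frames_b: int, sub_s: float = 300.0) -> float:
    """Theoretical SNR advantage of stack A over stack B,
    assuming equal per-sub noise and SNR scaling with
    sqrt(total exposure time)."""
    return math.sqrt((frames_a * sub_s) / (frames_b * sub_s))

# 121 subs vs 34 subs, both 300 s each
print(round(snr_ratio(121, 34), 2))  # ~1.89
```

So on paper your stack should be almost twice as deep, which is exactly why the comparison feels so puzzling and why post-processing (AI or otherwise) is the more likely explanation for the difference in detail.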
All best,
Bogdan