Fake details in Solar Ha images · [Solar System] Processing techniques · Luka Poropat

AstroLux 11.43
·  9 likes
This is quite a topic to start, but recently I have been seeing more and more images (mainly high-resolution Ha images) that contain a lot of fake detail. This is becoming a common problem and it is rarely addressed. I will not call out names, but you can find the images on your own.

My background is mainly in DSO imaging, and I know this trend started and died early there (1-2 years ago) because it was easy to spot AI "creating" non-existent features and details in images. To support my point, there is a whole section in the AstroBin IOTD photographer guidelines that explains why Topaz should be avoided: how can you use a machine-learning algorithm trained on daytime photography on niche astronomical data? The answer is that you cannot, without producing made-up data.

The problem I see is that what is basically a "painting" of the Sun, rather than actual data, receives positive feedback, likes and awards, because users, staff members and judges are not well enough informed to tell the difference between an actual filament and an imaginary creation.

I will include some examples and you can have a look for yourself. 

[Attached images: topazed 3 (1).JPG, topazed2 (1).JPG, topazed (1).JPG]


And now for a 1:1 comparison of RAW processed data and data processed in Topaz (guess which one is which)

[Attached images: image (10) (1).png, ActiveRegion3825-topaz-denoise-upscale-2 (1).png]



To quote a user on another forum: 

"Wow!! that's quite an incredible transformation! I don't think I would spot that, it looks like an image I would die for. I get very frustrated as I can never get detail like that, and others can. I see lovely detail in my videos and can never see those details in the processed version and this annoys me.

The question lies in the transparency of the people behind the images: some of us are looking for visual impact, others are looking for actual details, and others are looking for both.

As long as it is transparent that possibly artificial details were added, I'm fine with that; what I would not encourage is not describing the manipulation behind the data.

I think it should be clear to anyone that "AI" cannot replace or improve real signal when it comes to finding realistic structures. The machine is guessing the missing data/structures based on training with other images. Whether that is a good thing or not depends on your intentions.

In the case of high-resolution solar Topaz processing:
1. there are no knife-sharp edges on the Sun;
2. chances are Topaz is using something like an image of a cheetah's fur for the "hairy" filaments.


How to spot it: it is quite easy to tell whether what we see in images is actual detail or an artifact. We just have to browse through images taken with larger apertures (e.g. professional telescopes).

I would be glad if we could discuss it further.

Luka
messierman3000 7.22
I don't remember, was this processed with Gigapixel also? or just Denoise?

EDIT: yeah, I remember, it's upscaled, so Gigapixel is involved. From what I've learned, when it upscales a low-res image of a filamentary object, with the right settings it sometimes creates those (sort of) knife-edge transitions between dark and light.
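(For illustration of the mechanism only: below is a minimal Python sketch using plain unsharp masking on a synthetic soft edge. It is a stand-in, not Gigapixel's or Topaz's actual algorithm, and the ramp and parameters are made up; it just shows how aggressive sharpening can overshoot a smooth dark/light transition into a halo-like "knife edge".)

```python
# Illustration only: a hand-rolled unsharp mask on a synthetic soft edge.
# This is NOT Gigapixel's algorithm, just a simple way to see how sharpening
# can overshoot a smooth transition and create halo-like "knife edges".
import numpy as np
from skimage.filters import gaussian

# A smooth dark->light transition, like a soft filament boundary.
x = np.linspace(-10.0, 10.0, 128)
soft_edge = np.tile(1.0 / (1.0 + np.exp(-2.0 * x)), (64, 1))

# Classic unsharp mask: original + amount * (original - blurred).
blurred = gaussian(soft_edge, sigma=5)
sharpened = soft_edge + 2.0 * (soft_edge - blurred)

print("original range :", soft_edge.min(), soft_edge.max())   # stays inside (0, 1)
print("sharpened range:", sharpened.min(), sharpened.max())   # over/undershoots: the halo
```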

I definitely do not agree with using Topaz on the sun
jhayes_tucson 26.84
·  19 likes
In my view, the basic premise here is flawed. First, the use of the word "fake" implies intent. Many types of deconvolution algorithm can generate ringing artifacts that might (for example) create dark circles around stars, and I certainly would not call those dark circles "fake" data. They are artifacts of an improperly applied algorithm. Similarly, if a tool such as Topaz generates spurious patterns, I'd call those artifacts--and I'd specifically call them undesirable artifacts. Even a tool as well implemented as BlurXTerminator can generate and enhance sharpening artifacts such as "worms", which again are undesirable artifacts. A properly trained neural net can be remarkably good at bringing out image detail that might not be immediately obvious to the human eye, but it's up to the user to decide how best to apply that analysis. I am much more inclined to believe that an imager who posts an image containing processing artifacts is doing so in good faith, rather than jumping to the conclusion that someone is trying to pass off "fake data". In my view, and going beyond the premise of this thread, even the title of this thread is incendiary--and very much akin to a "cheating" accusation.

My suggestion is to dial it way down and simply ask: what level of processing artifacts is acceptable in an image? And, if that's your concern, what analysis have you done to show what the artifacts actually are? Have you subtracted images or analyzed them in other ways to support your claim of "fake data"? In my view, fantastic claims require fantastic proof.
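(As a rough illustration of the kind of analysis meant here, a minimal Python sketch: the two file names are hypothetical placeholders for a raw-processed and an AI-processed frame of the same, already registered region.)

```python
# Rough sketch of an artifact check: compare a raw-processed frame with an
# AI-processed one via a per-pixel structural-similarity map. File names are
# placeholders; frames are assumed to show the same, already registered region.
import numpy as np
from skimage import io, transform
from skimage.metrics import structural_similarity

raw = io.imread("raw_processed.png", as_gray=True).astype(np.float64)
ai = io.imread("topaz_processed.png", as_gray=True).astype(np.float64)

# If the AI version was upscaled, bring both to a common size first.
if raw.shape != ai.shape:
    ai = transform.resize(ai, raw.shape, anti_aliasing=True)

# Normalize both to 0..1 so the comparison is not dominated by stretch differences.
raw = (raw - raw.min()) / (np.ptp(raw) + 1e-12)
ai = (ai - ai.min()) / (np.ptp(ai) + 1e-12)

score, ssim_map = structural_similarity(raw, ai, data_range=1.0, full=True)
print(f"global SSIM: {score:.3f}")
# Dark areas in this map are where the processed image departs most from the raw one.
io.imsave("ssim_map.png", (np.clip(ssim_map, 0, 1) * 255).astype(np.uint8))
```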

John
CCDnOES 8.34
·  1 like
John Hayes:
They are artifacts of an improperly applied algorithm


Well put. Whether the user posts such an image out of an intent to deceive or out of ignorance of what something really looks like is a question one cannot easily answer, and I tend to default to the assumption that the general concept (if not the specific details) of Hanlon's Razor applies in that situation more often than not. Of course, even that alone could be somewhat insulting to the imager, but at least it does not imply deliberate intent.
andreatax 9.89
·  1 like
I'd tend to agree with John here. Faking implies an intent to pass off something unreal as real, and clearly that isn't the case here. Whether the details and the degree of sharpness presented in some solar images are real might well be questionable to anyone with more than a modicum of experience in high-res planetary imaging. Just check what the pros are achieving, and that should put a lot of questions to rest, Sun-wise.
Alan_Brunelle
John Hayes:
In my view, the basic premise here is flawed.  First, the use of the word "fake" implies intent.


I too agree with John. Two additional points related to intent: 1. There is always the possibility that the producer of an image containing processing artifacts may not be sophisticated enough to realize that they are doing something that is not to everyone's liking. (But then, who here can claim 100% compliance with that anyway!) 2. One would then have to feel that harm is being done in some way. If so, what way is that? Given that most people here agree that this is an art site, not a hard-science site, artistic license is a given. I know from previous forum content that some here do believe that what they are doing is science, so I suppose those people may see such an image as a fraud. But that is their problem. As far as I can tell, the owner of this site only requires that the images result from data acquired photographically. And no one here is making money from AstroBin users with their images without those users' choice.

Alan
coles44 4.14
·  6 likes
Luka,

I've reviewed your recent post and the accompanying comments. While the topic of detail in solar image processing in astrophotography warrants discussion, I believe your approach has been somewhat misleading.

Your assertion that many high-resolution solar images contain "fake detail" is a serious allegation. For example, one of your suspect images is from an author I know personally, and it is an animation, not a still frame. It seems you misunderstand the intent of the original work. The dynamic motion of the solar features, captured in the animation, is a key aspect that static images cannot fully convey.

Moreover, your implication that Topaz AI is being used to generate false details in these images is unfounded. Topaz was NOT used in the production of this solar animation. Many astrophotographers, including myself, employ careful and ethical processing techniques to enhance real features, not create artificial ones. It's important to approach such discussions with respect and accuracy.

Before making accusations, it's essential to fully understand the techniques and intentions behind the images in question. I encourage you to engage in constructive dialogue rather than spreading misinformation.

Eric
StewartWilliam 5.21
·  1 like
And now for a 1:1 comparison of RAW processed data and data processed in Topaz (guess which one is which)

[Attached images: image (10) (1).png, ActiveRegion3825-topaz-denoise-upscale-2 (1).png]




Luka

These two images are clearly of the same region, and the bottom one is much sharper with more defined detail, so what is the issue? There is nothing in the bottom image that is not in the top one, albeit much sharper. Where is the "fake" data, as you put it? I just don't see it at all. 🤔🤔
AstroLux 11.43
Topic starter
·  2 likes
After reviewing the feedback on my original post, I’d like to clarify my intent and respond to some points raised. My goal is not to accuse anyone of deliberate deception but rather to spark a discussion about the transparency of processing techniques and the implications they have on representing the Sun accurately.

I will use "artifact" going forward to avoid unnecessary assumptions about intent, though I maintain that these artifacts misrepresent the true nature of the Sun.

Transparency in processing is vital, especially for high-resolution solar imaging, where features like filaments, spicules, and granules are of scientific interest. If an imager employs tools that might introduce artifacts, disclosing this information is crucial to allow viewers to interpret the image appropriately.

Some have noted that AstroBin is an art-focused platform. While I respect that, many astrophotographers (myself included) are interested in accurate representations.

A balance can be struck between aesthetics and authenticity. Transparency benefits both camps: it allows viewers to appreciate the artistic choices while also respecting the underlying data (e.g. providing raw stacks, as is done for new nebula discoveries, or listing all the tools used in processing).

Artifacts happen (AI or not): 
e.g., Lucy-Richardson deconvolution: when used improperly, it can cause ringing artifacts or overly sharp edges that don't reflect true detail (a small synthetic demonstration follows below).

e.g., Topaz: the AI introduces artifacts by guessing or "hallucinating" features based on its training data (as already mentioned, solar filaments might end up resembling organic textures, like fur).
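As a concrete, self-contained illustration of the Lucy-Richardson point above (a rough Python sketch on a synthetic "limb" rather than real solar data; the PSF, noise level and iteration counts are arbitrary):

```python
# Synthetic demo of Richardson-Lucy ringing: deconvolving a blurred, noisy
# hard edge with too many iterations overshoots the true brightness level.
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(0)

# A synthetic "limb": bright half-plane against a dark background (true max = 1).
scene = np.zeros((128, 128))
scene[:, 64:] = 1.0

# Gaussian-like PSF standing in for seeing blur.
x = np.arange(-7, 8)
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

blurred = convolve2d(scene, psf, mode="same", boundary="symm")
blurred = np.clip(blurred + rng.normal(0, 0.01, blurred.shape), 0, None)

# 'num_iter' in recent scikit-image releases (older versions call it 'iterations').
mild = richardson_lucy(blurred, psf, num_iter=10, clip=False)
harsh = richardson_lucy(blurred, psf, num_iter=500, clip=False)
print("overshoot after 10 iterations :", mild.max() - 1.0)
print("overshoot after 500 iterations:", harsh.max() - 1.0)   # ringing past the true level
```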

I strongly believe in evaluating claims with evidence. In my original post, I included examples where:
- RAW data: shows softer, less defined features, consistent with telescope resolution limits.
- Processed data (AI-enhanced): contains sharp, high-contrast features not present in the RAW data.

I cannot do the same for the other examples, due to the absence of RAW information.


Several comments suggested that I approach this discussion from an educational perspective rather than critique. I appreciate this feedback and agree. If others are unaware of how to spot artifacts or understand what they’re seeing in processed solar images, I’d like to help bridge that gap.

An example from above yet again: 
[Attached comparison images]


How to: 
- compare against established references (such as professional observatories)
- look for unnatural textures
-"knife-edge" transitions often indicate excessive AI enhancement.


Regarding the author of some of the images I used as examples in my original post: I believe I am well within my rights to look at images and think critically, "Hey, that doesn't look quite right."
This is the perspective from which I approached the examples mentioned in my original post.
Yesterday's IOTD (Image of the Day) was one of the images that prompted me to write this post. It led me to examine the author's gallery more closely, where I noticed similarities and inconsistencies between the equipment used and the results achieved. What raised questions for me was this: earlier images from the author clearly indicate the use of tools like Topaz, while more recent images do not, yet they still exhibit similar artifacts and features.
This inconsistency made me wonder whether the same or similar techniques were being used without disclosure. Initially, like many others, I was amazed by these images. However, after someone pointed out potential inconsistencies in the data and explained how it might have been manipulated, intentionally or otherwise, I began to view these images differently. They no longer seemed to represent the actual prominences, filaments, or other solar features accurately, whether in animations or still images.

My main question remains: where is the line? How do we balance artistic enhancement with scientific or visual accuracy? At what point does processing cross over from enhancing details to creating features that aren’t truly there?

Luka
gnnyman 6.04
Your comparison is interesting and demonstrates well the power of modern state-of-the-art image processing, but - a big "but" - it does not show anything that does not exist. I use modern AI-supported image processing for all my images, what else? Photoshop is of course AI based and supported! Almost all deep-sky stacking, processing and imaging programs are AI supported/enhanced; otherwise you would end up with results like I got in the late 70s.
Hubble, James Webb - what you get is of course based on intelligently trained algorithms, in other words AI. The modern science of imaging is based on artificial intelligence and a lot of computing power.
Do you think the raw data that come out of your camera are "raw"-raw? Not at all - as soon as you use a sensor for imaging, you get an output that is electronically, intelligently enhanced and modified.
No digital camera, absolutely none, provides "real" raw data - none of the formats, be it FITS, NEF, CRx…, are raw data - all of them are pre-cooked inside the camera to get a usable result at the beginning of the processing chain.

On the other side, I do understand your concern - where does "real" imaging end and where does more or less electronic fantasy start? My two pennies: if you end up with details that are not even slightly recognizable in the pre-processed image, then it becomes a bit suspicious, and one needs to investigate further to find out whether those details are just too weak and/or subtle to be recognized without further enhancement, or whether the questionable details are pure manipulated fantasy of the software.

Let me give you a drastic example from my personal professional history - before retiring at 65, I worked for several imaging companies, one of them Hamamatsu Photonics. Hamamatsu developed a special digital camera system that was able to enhance minute density differences to make them visible. With such a camera, we - my team and others - were able to see (yes, see) microtubules in cells in vivo. Not possible with pure optical microscopy, but with that system, yes. This technology led to major discoveries and is called AVEC (Allen Video Enhanced Contrast). I can give you more of those examples if you like. The question - was that real or not - came up, and yes, it was real, because with an SEM you could see those microtubules very well.

We cannot fly to the Sun to see whether those "waves" are real or not, but we can use more powerful telescopes and find out - and yes, they are real. So what is the problem? There is no problem, IMO. That you cannot get more detail is limited only by the air, turbulence, seeing and your budget, I guess…

CS
Georg
coles44 4.14
·  2 likes
I have a suggestion for all image processing, including solar. 

Let's eliminate all image enhancing techniques, including:

Stacking, both DSO and lucky imaging (yes solar images are stacked), and specifically, no selecting the best video frames in lucky imaging.

Noise reduction including Topaz, NoiseXterminator and Pixinsight.

Detail enhancement including Topaz, BlurXterminator and PixInsight.

Perhaps we should go back to film.

Eric 🙂
Joo_Astro 3.80
·  6 likes
I have to say I'm quite surprised by the answers to this topic so far. I don't want to dive too deep into the discussion; I just think that right now these tools are not accurate enough to bring out "real" data, especially when software intended for "normal" (daytime) photography is used on astro images.

Just last week we had this huge thread about noise reduction and sharpening, where almost everyone attacked @Bray Falls and others for not being transparent and for altering data. The general consensus was that this kind of altering and editing is not accepted. And now here we are, with everyone being fine with Topaz creating clearly fake artifacts in the image presented by the OP. I mean, the structures in the edited image are objectively not the same as you would get with perfect deconvolution/miraculous upscaling.

Again, the bottom line of these topics is accepting some kind of compromise between scientific accuracy and artistic interpretation.

But I am just kind of surprised by the vastly different reactions.
Die_Launische_Diva 11.54
·  4 likes
Eric Coles (coles44):
Noise reduction including Topaz, NoiseXterminator and Pixinsight.

Detail enhancement including Topaz, BlurXterminator and PixInsight.

The problem, as I see it, is that many people do not know that Topaz isn't trained on astronomical images (as far as I know; maybe astro images are only a small percentage of its training dataset). BlurX/NoiseX were trained on deep-sky images. The problem is not the tool itself; the problem lies in the use of a tool outside its specifications. Just because something is "AI" doesn't mean it is appropriate for everything. For the same reason we can't apply BlurX to terrestrial images and expect a faithful outcome, we can't have high expectations of faithfulness when applying Topaz to astronomical images.
StewartWilliam 5.21
Johannes Maximilian Möslein:
I have to say I'm quite surprised by the answers to this topic so far. [...] And now here we are, with everyone being fine with Topaz creating clearly fake artifacts in the image presented by the OP.

😂😂😂 If the AI software were adding fake details and data to an image, the program would have to be massive to hold all that data - it would be terabytes in size, which we all know it isn't. So how can it look at an image and think, hmmm, what can I add to make this better? It can't do that; it can only sharpen and enhance what is already there.
Don't get me wrong, I am not a fan of these AI tools, as they will change the hobby forever and not in a good way. The time will come when all you will need is a couple of hundred dollars' worth of kit and, with AI, the images will match those of Hubble, but we are a ways from that at the moment.
StewartWilliam 5.21
Die Launische Diva:
The problem is not the tool itself; the problem lies in the use of a tool outside its specifications. [...] we can't have high expectations of faithfulness when applying Topaz to astronomical images.

It makes no odds what the Topaz software is trained on, as all it does is enhance details that are already present in the image, i.e. it sharpens them - hence why Topaz will work fine on many starless astro images. Now, if you put images with stars through Topaz, it will again work to a certain extent on any nebula and so forth, BUT what it won't do, unlike BlurX, is make stars round when they are not; that is because BlurX was specifically trained on all sorts of star aberrations, whereas Topaz was not. But for solar, Topaz will work wonders, as we have seen.
Joo_Astro 3.80
AstroShed:
😂😂😂 If the AI software were adding fake details and data to an image, the program would have to be massive to hold all that data [...] it can only sharpen and enhance what is already there.

What? Obviously the software doesn't "add" different data to images from storage. It tries to enhance the image by guessing what the underlying, real structures are. But AI models like Topaz are not trained on astro images (I assume). Have you even tried them yourself? I do a lot of normal photography, and I can tell you that even on those images Topaz will often produce wrong structures, like faces looking completely different from reality.
StewartWilliam 5.21
Johannes Maximilian Möslein:
What? Obviously the software doesn't "add" different data to images from storage. It tries to enhance the image by guessing what the underlying, real structures are. [...] even on those images Topaz will often produce wrong structures, like faces looking completely different from reality.

Many people on here are stating that these AI tools add structure that was not there, and I don't agree with that. Sorry, I misunderstood your post, or I may have replied to the wrong one as I have read so many - you are in the same camp as me, then.
Joo_Astro 3.80
·  2 likes
AstroShed:
Many people on here are stating that these AI tools add structure that was not there, and I don't agree with that. Sorry, I misunderstood your post, or I may have replied to the wrong one as I have read so many - you are in the same camp as me, then.

I think that's just a miscommunication: when people say it's "adding structures", that doesn't literally mean adding new data, but altering the data in a way that produces new structures that were not there before.
OgetayKayali 12.96
Luka Poropat:
though I maintain that these artifacts misrepresent the true nature of the Sun.

This premise creates a huge logical contradiction. If the intent is to show 'the true nature/physics exactly' (which can't be done whatsoever, but anyway), then why do we invert Sun images to begin with? The edge is not brighter than the center; that is totally against the physics. What happened to limb darkening? This even causes misinformation - I have personally met a couple of people, even students, who were confused by it.
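(For reference, the standard linear limb-darkening law makes the point quantitative; this is textbook physics rather than anything from this thread, and the coefficient value is only a rough visible-light figure.)

```latex
% Linear limb-darkening law (textbook reference, not from this thread).
% I(0): disc-centre intensity; theta: heliocentric angle from disc centre;
% u: wavelength-dependent coefficient, roughly 0.5-0.6 in visible light.
\[
  I(\theta) = I(0)\,\bigl[ 1 - u\,(1 - \cos\theta) \bigr]
\]
% At the limb, \cos\theta \to 0 and I \to (1 - u)\, I(0): the real edge is
% dimmer than the centre, which an inverted presentation reverses.
```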

But somehow we are more troubled by the generation of a 2 px wide filament that nobody would even notice unless they pixel-peep. How exactly is this causing misinformation? How does this 'misrepresentation' do harm? It's not as if weird structures like rectangles --which can't be the case-- appear there. It still looks exactly like the Sun. That is not something that could cause any physical confusion. That structure isn't even static; it is something we wouldn't even remember or try to reproduce anyway.

Even though we keep framing it as the concern, I do not think our actual worry here is about physical reality. Considering the major things we ignore, I see this as overthinking. At least, this is way below my red line, as I can't see any non-contradictory principle to back this idea.
HegAstro 14.24
·  2 likes
Luka Poropat:
My main question remains: where is the line? How do we balance artistic enhancement with scientific or visual accuracy? At what point does processing cross over from enhancing details to creating features that aren’t truly there?


I tend to agree that transparency about the methods used is key. I and a few others made this case in a different thread more related to DSO imaging. Tools like BlurX are quite accepted in the community, but the processor is free to use others and not disclose them, and I don't know that there is an easy way to force this. It then becomes difficult to determine what is real and what is an artifact. I will not comment on the issue of awards, but perhaps when evaluating any image for a serious purpose, the casual AstroBin user needs to treat the detail in every image with a degree of skepticism unless they can verify it across multiple images or through other sources such as Hubble or other professional images.
 