What is your real-world experience with oversampling? [Deep Sky] Acquisition techniques · Jeff Nibler

Hindsight 2.11
Context here is deep sky imaging.

I believe the image-scale guideline used to be to sample at seeing/2. I also believe the current guidance is closer to seeing/3, partly due to advanced deconvolution tools like BlurXTerminator.

I am very curious to hear real world experience from folks who have imaged the same target at different image scales and directly compared the results. With the amount of variables involved (guiding, tracking, optics quality, processing tools and technique, night-to-night changes in seeing), I do realize it can be difficult to get a scientific comparison.

For example, if your seeing averages 2.0" 85% of the time and can get down to 1.5" 15% of the time, then on 2.0" seeing nights the finest image scale you could take advantage of would be about 0.67"/px, and on 1.5" seeing nights it would be about 0.5"/px.

All other factors aside (i.e. assuming great tracking on an encoder mount, good optics, etc.), and assuming you are using a CMOS camera with 3.76 µm pixels, this would mean that on a 1.5" seeing night the longest focal length you would want to use to hit ideal sampling would be about 1500mm, and on 2.0" nights about 1150mm.

This would suggest that at this site, which averages 2.0" and drops to 1.5" a small percentage of the time, you really wouldn't want to go longer than a 1500mm telescope with a 3.76 µm camera. It is that statement I am really trying to validate. I would love to know if any of you have real-world experience where you used longer focal lengths (or rather, finer image scales, i.e. oversampling) than the standard formula recommends and achieved better detail as a result. I assume the answer is no, but this is what I want to verify before making some gear-change decisions (in my case, trading my longest focal length OTA for a shorter, faster one).
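For reference, here is the arithmetic above as a quick sketch (206.265 is the usual arcsec/mm/µm plate-scale constant; the helper names are mine):

```python
# Image scale ("/px) = 206.265 * pixel_size_um / focal_length_mm.

def image_scale(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcsec per pixel."""
    return 206.265 * pixel_um / focal_mm

def max_focal_length(pixel_um: float, seeing_arcsec: float,
                     factor: float = 3.0) -> float:
    """Longest focal length (mm) that still samples at seeing/factor."""
    target_scale = seeing_arcsec / factor
    return 206.265 * pixel_um / target_scale

print(round(max_focal_length(3.76, 1.5)))  # ~1551 mm on a 1.5" night
print(round(max_focal_length(3.76, 2.0)))  # ~1163 mm on a 2.0" night
```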

Thanks in advance for the input!
andreatax 9.89
In a test I did many years ago, the short answer to your question would have been no, you wouldn't want to oversample way too much in the fleeting hope that you'd be able to catch up with the seeing. It wasn't worth the loss of signal at the average end of the spectrum. It is only worth it if you're splitting doubles, but I bet this isn't your major pursuit.
Gondola 8.11
It really depends on what your goals are. Cameras and software keep evolving so the advice that made sense even 5 or 6 years ago might not be 100% the case now. If you want to go as deep as possible with wide fields and long subs, I don't think oversampling will buy you anything. If you're more interested in going for better resolution then I think there's some wiggle room.
AccidentalAstronomers 18.64
Jeff Nibler:
I would love to know if any of you have had real-world experience where you did use longer focal lengths (or rather, finer image-scales, or oversampling) than the standard formula recommends, and achieved better detail as a result. I assume the answer is no but this is what I want to verify before making some gear change decisions (in my case, going down a bit on my longest focal length OTA in exchange for shorter faster one).


I've had long discussions about this with Ron Brecher and Warren Keller. Ron bins. I don't. So on my CDK12, I wind up with an image scale of 0.28"/px. Seeing at DSW probably averages about 1.25", so I'm way oversampled. I don't pretend to understand all of this (perhaps I don't understand any of it). But the reason I don't bin is because on CMOS cameras, it's done with software during download. So it's not like CCDs where binned pixels actually form a new, larger pixel that records and transmits a certain value. Instead, CMOS cameras like the Moravian simply sum or average the pixels on the way out--or in the case of other cameras, NINA does it when the image arrives. To me, that seems like needlessly tossing data.

I also remember a talk Russ Croman did on BXT where he talked about sampling. His contention was that if the average PSF for an image is > 8.0, then you can downsample it to 50% without any loss of detail (that's why BXT allows only PSFs <= 8.0). Conversely, if the PSF is < 3.0, you should drizzle 2X. So the first thing I do with images from the CDK is look at the PSF for the lum master. If it's > 8.0, I'll downsample all the masters to 50%--effectively duplicating what the camera or NINA would be doing if I were binning. If it's <= 8.0, I'll leave it as is. I do find that the closer the PSF is to 8.0, the lower the value I need to use for "Sharpen nonstellar" in BXT.
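The rule of thumb described above can be expressed as a tiny decision helper (the thresholds are from Russ Croman's talk as recalled above; the function name and exact strings are mine, not part of BXT):

```python
# Rough resampling advice based on measured PSF FWHM in pixels.

def resample_advice(psf_fwhm_px: float) -> str:
    """Suggest a resampling step from the PSF FWHM of the master."""
    if psf_fwhm_px > 8.0:
        return "downsample 50%"   # heavily oversampled; no detail lost
    if psf_fwhm_px < 3.0:
        return "drizzle 2x"       # undersampled; drizzle recovers detail
    return "leave as-is"

print(resample_advice(9.2))  # downsample 50%
print(resample_advice(2.4))  # drizzle 2x
```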
jml79 4.17
I don't have enough experience yet with my larger scopes to have a deep understanding, but with my 4" refractor the useful sampling was limited to about 0.8". That could have been seeing-limited or, more likely, scope-limited. My point is to remember that it is a whole system, with guiding issues, scope issues, and the camera. You specifically mentioned 3.76 µm pixels, and this will avoid some of the challenges I faced using small-pixel cameras to get well below 1" of sampling. OK, I've added my limited experience. Now I get to follow the thread and see what results others have with much bigger scopes.
CraigT82 1.20
It depends on what you want to do really.

If you’re going for dusty and faint then oversampling for your average seeing won’t help.

But if your project is getting in close on a bright target (M31, M42, etc.) and trying to get as much fine detail as possible, then there will be some worth to oversampling. Be prepared to image again and again over multiple nights, keeping only the very sharpest subs, and you can start to "beat the seeing", as it were. Well, you can do better than average that way, certainly.
Hindsight 2.11
Topic starter
Thanks for the replies.

@CraigT82  that is a great point about brighter targets I hadn't thought of: since the SNR is going to be so much higher on them, you can cull your subs more aggressively and keep the ones with the best FWHM. However, do you believe that even when doing this, if your BEST subs are captured under 1.5" seeing, one could benefit from oversampling beyond 0.67" per pixel? The former makes sense, and you might not be suggesting the latter, but I wanted to check.
Hillbrad 0.00
I live in an area where there is a bad jet stream all winter…we only get good seeing here in the summers usually (I did planetary imaging for years so I know the local conditions well). However, as long as there’s not much wind on the ground to wreck guiding, I get consistent high resolution data at 3100mm and 2850mm (my two permanent setups) all year long. 


For a while I had the 2850mm on one mount and was imaging at 1800mm on the other mount. No matter how bad the seeing was, I always got better resolution at 2850mm. Sometimes it wasn't a drastic difference, but I did some tests on the same targets (M51 and M1) and it was better at the longer focal length.

It took me a lot of work to get my setups dialed in at those focal lengths, but once I did, I get surprisingly good results even when the Clear Sky Clock says the seeing is only 2/5 with a strong jet stream. Also, I bin 2x2, primarily just to save computer processing time.


I do spend time culling data before I stack, so that I'm only using the sharpest frames for my luminance layer, and windy nights go toward RGB.

Just my two cents…

Brad
Wjdrijfhout 6.78
It's a good question Jeff, and indeed I have made that direct comparison, for similar reasons as yourself. Feel free to check out the whole story on my website. In short, my finding is that the oversampled images came out better. Granted, that was on two quite different systems. But even when comparing on the exact same system at Bin1 (0.3"/px), Bin2 (0.6"/px), and Bin3 (0.9"/px), the 0.3"/px came out best. At my site seeing is rarely under 2", so 0.6"/px is already oversampled. Obviously there are diminishing returns.

Just a few random thoughts as I have collected them when doing these comparisons.

- The difference shows up when using ML-based deconvolution tools such as BXT. These are deconvolution tools designed to 'calculate away' seeing effects, so it's not strange that any seeing-based limitation (e.g. this sampling rule of thumb) behaves a bit differently when using them.

- The seeing/2 or seeing/3 rule of thumb originates from the Nyquist sampling theorem for digitising analog signals. While in theory there is nothing wrong with this, in practice higher sampling frequencies apparently can have benefits. According to Nyquist, CD quality (44.1 kHz) provides an accurate representation of an analog music signal; still, oversampled audio at 96 kHz and even 192 kHz is sold as Hi-Res Audio and marketed as an improvement in sound quality.

- The theoretical considerations are often focused on stars (how many pixels do I need to properly represent a star). But it works differently for larger structures (nebulosity, galaxy arms, etc). Especially these structures show up better after BXT in oversampled images.

- The noise is also oversampled. While this may not affect the SNR of the object in any quantitative way, the appearance of this 'finer' noise is often more pleasant. Just a nicer 'look'. Also, it is possibly easier to reduce, but I've never tried that comparison.


When selecting a telescope/camera combination, I would start with the objects you'd like to image and the FoV required for them. From there, get the biggest aperture that budget, mount, and handling allow. Whatever pixel scale comes out of that is just what it is.
Gondola 8.11
Jeff Nibler:
Thanks for the replies.

@CraigT82  that is a great point about brighter targets I hadn't thought of: since the SNR is going to be so much higher on them, you can cull your subs more aggressively and keep the ones with the best FWHM. However, do you believe that even when doing this, if your BEST subs are captured under 1.5" seeing, one could benefit from oversampling beyond 0.67" per pixel? The former makes sense, and you might not be suggesting the latter, but I wanted to check.

yes...
AwesomeAstro 2.39
It depends on what you want to do really.

If you’re going for dusty and faint then oversampling for your average seeing won’t help.

But if your project is getting in close on a bright target (M31, M42, etc.) and trying to get as much fine detail as possible, then there will be some worth to oversampling. Be prepared to image again and again over multiple nights, keeping only the very sharpest subs, and you can start to "beat the seeing", as it were. Well, you can do better than average that way, certainly.

I'm honestly well-acquainted with the math and theory behind the conclusions about "not" oversampling, but my direct real-world experience has been exactly as Craig says above. Every time. If I remove the focal reducer, or if I step up my focal length (I've done this 4 times with SCTs: from the 8" focal-reduced, to the 8" at prime focus, to the 11" reduced, to the 11" at prime focus), my results (mind you, on smaller targets of course, not large nebulae) are always better. And not even a little. A lot, actually. More detail, crisper, sharper, and overall more visually appealing. Even the same scope, with a focal reducer operating up to spec, never meets the level of sharpness I obtain at prime focus upon re-sampling the images to compare apples to apples. And I've run this little experiment more than once on different targets. I also agree with Craig's reasoning; you can throw out as many seeing-softened subs as you want.

But there's no free lunch. Be prepared for many hours of integration time to make up for the mathematical fact that you're imaging with a much slower system at that point. Want to image fast (few clear nights available, etc.) with larger targets? Keep the lower focal length. Want more detail on smaller targets and get better sharpness and crispness? Higher focal length is key, at the cost of many hours of integration (especially in light pollution).
Gondola 8.11
The lunar and planetary imagers figured out a long time ago that there were benefits to be had by what we would consider extreme oversampling. The rule in that world is that your focal ratio should be 5 times your pixel size in microns. The proper focal ratio for my 585 with 2.9 micron pixels would be f/14.5. A 2600 would come in at f/18.8. We can't push things that far, but it is suggestive.
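That lunar/planetary rule of thumb, as a one-liner (the helper name is mine, and the factor of 5 is the rule as stated above, not a universal constant):

```python
# f-ratio ~ 5 x pixel size in microns (lunar/planetary rule of thumb).

def planetary_f_ratio(pixel_um: float, factor: float = 5.0) -> float:
    """Suggested focal ratio for a given pixel size in microns."""
    return factor * pixel_um

print(planetary_f_ratio(2.9))   # 14.5, e.g. a 2.9 µm IMX585
print(planetary_f_ratio(3.76))  # 18.8, e.g. a 3.76 µm 2600-class sensor
```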
Gondola 8.11
As an example, here's an experimental L frame shot at 0.33" per pixel.

https://app.astrobin.com/i/joqhd2

It's a 6 inch aperture, so about 2.2x oversampled relative to the optical resolution of the system.
Hindsight 2.11
Topic starter
Great info and replies. @Willem Jan Drijfhout that's a great and informative article on your site - thank you for linking it. 

Seems like there is a consensus forming.
Dan_I 2.62
In my place seeing is usually around 2".  

Going from 1"/pixel to 0.67"/pixel (same scope, different camera) was a huge improvement in resolution. It seemed literally like I had traded my 8" Newtonian for a larger scope.

Next I moved from 0.67"/pixel to 0.45"/pixel by trading the scope for a larger one. Again a significant improvement, but not as spectacular.

This makes me think that the ideal sampling is between seeing/3 and seeing/4.
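Dan's suggested range, as a small sketch (the helper name is mine):

```python
# Ideal sampling between seeing/4 (finest) and seeing/3 (coarsest).

def sampling_range(seeing_arcsec: float) -> tuple[float, float]:
    """(finest, coarsest) suggested image scale in arcsec/px."""
    return (seeing_arcsec / 4, seeing_arcsec / 3)

lo, hi = sampling_range(2.0)
print(f"{lo:.2f}-{hi:.2f} arcsec/px")  # 0.50-0.67 arcsec/px for 2" seeing
```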
MaksPower 1.20
Ok… you mention BlurX, but what about drizzling?

The real point of being oversampled is not needing drizzle, whereas those with little scopes plus drizzling can achieve details you'd think should not be possible.

FWIW, I use two scopes, one of them with and without a reducer. The camera is an ASI2600MC DUO with 3.76 micron pixels.

First, a Mak-Newtonian at 900mm focal length, 0.7 arcsec/pixel.
Second, a 10" f/12 at 0.26 arcsec/pixel.
Third, the 10" with reducer, 0.35 arcsec/pixel.

Both have an EAF fitted and focusing is automatic, via an ASIAir.
Same mount, tripod, guiding, software, etc.
The mount consistently guides at around 0.3-0.5 arcsec most nights, and this is seeing-related; there have been occasions when the seeing settled and guiding improved to under 0.2 arcsec.

Call me crazy, but the best results (the sharpest images) are clearly from the 10" at f/12 (F = 3000mm) without the reducer.

Seeing… at the sites I'm using probably starts out around 1-1.5" and then improves steadily through the night; this is visible in AstroPixelProcessor, which plots a chart of the star quality from each sub.

So I would have to say yes, there is indeed a point to using a rig that resolves considerably better than the seeing. But you need a mount to match, and that is not so simple past 1500mm focal length.
 