Astro Imaging Channel Tonight (7/7/24): The Quest for Aperture: Why are big telescopes better? · John Hayes

jhayes_tucson 26.84
I'll be making another presentation on the Astro Imaging Channel this evening.  We'll go over the optics of telescopes to better understand the advantages of large apertures.  This is a topic that I've been thinking about as I've moved to larger and larger scopes over the years.  The show starts at 9:30 EDT and you can tune in here:  https://youtube.com/live/HiJoqQp1qFI?feature=share

I've attached a spreadsheet you can use for comparing signals between different imaging systems.

Signal Comparison 7-3-24.xlsx



John
bsteeve 11.22
Looking forward to checking this out, John.
jhayes_tucson 26.84
Topic starter
That's great, Steve!  I'll try my best not to put you to sleep. :-)
Marcelof 6.20
Oh, I'll certainly be watching. Your presentations are always worthwhile.
Deep_Space_Dave 3.31
Thanks, John, and excellent presentation!  I actually enjoy astrophotography more for the technical aspects.  Fascinating stuff :-)
ashastry 2.81
Thanks for presenting this @John Hayes! Really looking forward to watching this later tonight (hopefully it is available as a YouTube recording).
HegAstro 14.24
Hi John,

Very nice presentation. I watched half of it and will watch the rest tomorrow.  It was nice to see the mathematical representation of how seeing degrades resolution of various aperture telescopes.

On radiometry: I had a couple questions with respect to radiometric fall off (cos^4(theta)):
  • I would imagine this is much more significant for wide angle photography? For something like a 10" f/4 and an APS-C sensor, would the fall off be significant, since the FOV is quite narrow?
  • Can this be properly corrected by anything other than sky flats? I'm guessing that, to mimic the fall off, the flat light source would need to be effectively at infinity (or at least several  focal lengths away from the lens). Which means what many of us do, which is to use panels close to the objective of a refractor, or entrance of a reflector, may not properly correct for fall off. Is this incorrect?

Thanks again,

Arun
jrista 11.18
Hey John,

I was looking at the spreadsheet, and I think it has one mistake. The R factor for the LSST Rubin was 95, while the others were all 0.85. This pushed the unnormalized signal for that system way beyond the rest. I assume you meant for it to be 0.95? When I change R for the two LSST columns, it seems to bring that scope "back to earth" so to speak.  For example, the Sn/S0 ratio was 566794% when R is 95, which seems insane, but it is 5668% when R is 0.95, which seems a bit more realistic.
jhayes_tucson 26.84
Topic starter
Jon Rista:
Hey John,

I was looking at the spreadsheet, and I think it has one mistake. The R factor for the LSST Rubin was 95, while the others were all 0.85. This pushed the unnormalized signal for that system way beyond the rest. I assume you meant for it to be 0.95? When I change R for the two LSST columns, it seems to bring that scope "back to earth" so to speak.  For example, the Sn/S0 ratio was 566794% when R is 95, which seems insane, but it is 5668% when R is 0.95, which seems a bit more realistic.

Good catch, Jon.  Yes, responsivity is always 1.0 or less.  I think that error sprang from a formatting change and I didn't catch it.

John
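A quick check of the numbers above, assuming (as the quoted ratios suggest) that the spreadsheet's signal term is simply linear in R:

```python
# Sanity check: if the signal is linear in R, entering 95 instead of 0.95
# inflates the result by exactly 100x, matching the two Sn/S0 values quoted above.
r_wrong, r_right = 95, 0.95
print(r_wrong / r_right)    # 100.0
print(566794 / 5668)        # ~100.0, the ratio of the two Sn/S0 percentages
```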
jhayes_tucson 26.84
Topic starter
Arun H:
Hi John,

Very nice presentation. I watched half of it and will watch the rest tomorrow.  It was nice to see the mathematical representation of how seeing degrades resolution of various aperture telescopes.

On radiometry: I had a couple questions with respect to radiometric fall off (cos^4(theta)):
  • I would imagine this is much more significant for wide angle photography? For something like a 10" f/4 and an APS-C sensor, would the fall off be significant, since the FOV is quite narrow?
  • Can this be properly corrected by anything other than sky flats? I'm guessing that, to mimic the fall off, the flat light source would need to be effectively at infinity (or at least several  focal lengths away from the lens). Which means what many of us do, which is to use panels close to the objective of a refractor, or entrance of a reflector, may not properly correct for fall off. Is this incorrect?

Thanks again,

Arun

Hi Arun,
Yes, radiometric fall off is angle dependent so the bigger the angle the more the fall off.  The one factor that's not accounted for in the calculation is the change in the apparent solid angle of the entrance pupil as seen at the source.  So, that means if the angle gets too big, we have to add yet another cosine multiplier!  That's normally not done simply because for most systems the angle isn't big enough to matter.  I came across this nice little paper if you want to see the full derivation:  http://dougkerr.net/Pumpkin/articles/Cosine_Fourth_Falloff.pdf.  The author also points out some real world limitations of the calculation that are interesting to consider.

For systems that don't have such a wide field, the fall off is of course much smaller.  Keep in mind that in astro-imaging we stretch the results to amplify low-level signals, and that non-linear stretch can leave the result quite sensitive to cos^4 fall off even in a narrow-field system.

Photoshop has some tools to remove the effect mathematically so yes, that's possible.  It's not really the best way to do things but if you are careful, radiometric fall off can be corrected mathematically.  As for light panels, they do indeed correct for radiometric fall off quite nicely.


John
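To put a rough number on the narrow-field case Arun asked about, here is a minimal estimate of the ideal cos^4 falloff for a 10" f/4 with an APS-C sensor. The sensor dimensions (about 23.5 x 15.6 mm) are my assumption, and this is the geometric term only; real systems are usually dominated by vignetting rather than by cos^4 itself.

```python
import math

# Ideal cos^4 falloff at the corner of an APS-C sensor on a 10" f/4 system.
# Assumed numbers: D = 254 mm, f = 4 * D, APS-C ~ 23.5 x 15.6 mm.
D_mm = 254.0
f_mm = 4.0 * D_mm
half_diag_mm = math.hypot(23.5, 15.6) / 2.0

theta = math.atan(half_diag_mm / f_mm)      # field angle at the sensor corner
falloff = math.cos(theta) ** 4
print(f"corner field angle: {math.degrees(theta):.2f} deg")               # ~0.8 deg
print(f"cos^4 factor: {falloff:.5f} (~{(1 - falloff) * 100:.2f}% loss)")  # ~0.04%
```

So in purely geometric terms the falloff in this case is a small fraction of a percent, although, as noted above, a hard stretch can still reveal gradients of that size.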
jego 2.41
Hi John, thanks for the talk. Question about the spreadsheet. Does this account for how much of the image circle is actually hitting the sensor?

For example, if you have an f/7.2 scope with a 70mm image circle on a 43mm sensor, and you put a 0.66x reducer on that same scope to make it f/4.75 with a 46mm image circle (almost entirely captured by the same sensor), the spreadsheet shows a significant signal boost. But would the effective increase actually be even more, given that a big portion of the photons in the 70mm case are essentially wasted? Assuming a fixed sensor size, intuitively it seems like a system that is "wasting" signal takes another hit over one that isn't, but I'm not sure if my mental model is correct.


FYI, you mentioned in your talk the bigger pixel Sony sensors that aren't available in any astrocams - I believe the reason is actually because Sony doesn't sell those ones in monochrome. They make the IMX366 (4.4um) and the IMX410 (5.94um), and some astro cam makers actually do sell the IMX410, but I believe QHY and ZWO both discontinued theirs. ToupTek/RisingCam still have a 410C offering. I considered getting one and scraping off the bayer matrix to convert it to mono, but I have been convinced that this is a bad idea, and losing the microlenses would result in significant efficiency loss anyway (although, it might help with pesky microlens diffraction artifacts…).
jhayes_tucson 26.84
Topic starter
jego:
Hi John, thanks for the talk. Question about the spreadsheet. Does this account for how much of the image circle is actually hitting the sensor?

For example, if you have an f/7.2 scope with a 70mm image circle on a 43mm sensor, and you put a 0.66x reducer on that same scope to make it f/4.75 with a 46mm image circle (almost entirely captured by the same sensor), the spreadsheet shows a significant signal boost. But would the effective increase actually be even more, given that a big portion of the photons in the 70mm case are essentially wasted? Assuming a fixed sensor size, intuitively it seems like a system that is "wasting" signal takes another hit over one that isn't, but I'm not sure if my mental model is correct.


FYI, you mentioned in your talk the bigger pixel Sony sensors that aren't available in any astrocams - I believe the reason is actually because Sony doesn't sell those ones in monochrome. They make the IMX366 (4.4um) and the IMX410 (5.94um), and some astro cam makers actually do sell the IMX410, but I believe QHY and ZWO both discontinued theirs. ToupTek/RisingCam still have a 410C offering. I considered getting one and scraping off the bayer matrix to convert it to mono, but I have been convinced that this is a bad idea, and losing the microlenses would result in significant efficiency loss anyway (although, it might help with pesky microlens diffraction artifacts...).

When you consider the field size determined by the sensor, you are talking about the total optical power gathered over the whole field, which relates to Etendue.  The signal that you record comes from each individual pixel--no matter how many of them you have.   If you point your scope at a uniformly illuminated region of the sky and take an exposure with a big sensor and another exposure with a small sensor, the sum of the signals from ALL of the pixels in the sensor will of course be a bigger number for the large sensor than for the small sensor; but, that's irrelevant.   What counts is the magnitude of the signal from each pixel.  That is what will change with different telescope parameters (D, F/#, R, T, Ob, etc.) and that's what the spreadsheet is computing.  More signal is always better!  Having a bigger sensor is really just about achieving a larger field of view, independent of the signal strength.

Thanks for the update on those Sony sensors.  I thought that I saw the IMX366 as a monochrome sensor but I must have misread it!

John
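As a minimal sketch of the per-pixel view described above (this is not the spreadsheet itself; the pixel size is an assumed example, and throughput terms such as R, T and obstruction are ignored because they cancel in the comparison), the extended-object signal per pixel goes as aperture area times pixel solid angle, which for a fixed pixel collapses to (p/F#)^2:

```python
# Per-pixel signal from an extended source: S ∝ (pi*D^2/4) * (p/f)^2 = (pi/4) * (p/F#)^2,
# so for a fixed pixel size only the working focal ratio matters.
def relative_pixel_signal(fnum: float, pixel_um: float = 3.76) -> float:
    """Relative per-pixel signal in arbitrary units (pixel size is an assumed example)."""
    return (pixel_um / fnum) ** 2

native = relative_pixel_signal(7.2)       # the scope at its native f/7.2
reduced = relative_pixel_signal(4.75)     # same scope with the 0.66x reducer
print(f"per-pixel gain with the reducer: {reduced / native:.2f}x")   # ~2.30x

# Light in the 70 mm image circle that misses the 43 mm sensor only costs field of
# view; it does not change the signal landing on any individual pixel.
```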
jrista 11.18
John, the presentation was excellent. Thank you for taking the time. I still remember many of your formulas from when you shared them on CN. They have become very fundamental for me. 

I am really intrigued by the example resolution images demonstrating just how much a larger aperture benefits resolving power! The comparisons were excellent, and I'm amazed at how much sharper the 1500mm aperture was in comparison to your own 600mm aperture. There was really a significant difference. I honestly didn't think that would be possible from earth, and am amazed that it is. I doubt I'll ever have the funds for a scope with an aperture that large, but I probably could afford a 14" aperture. 

I do have a question regarding stellar signals with very fast systems. You DO mention that you need a smaller pixel with very large aperture fast systems, but we are already generally operating near the limits of pixel size here, with the smallest around 2.4 microns (and in some cases even smaller). I've noticed that people with very fast imaging systems that have large apertures, such as Hyperstar or RASA, often have notable star clipping AS WELL AS less-than-optimal background signal quality (i.e., it's noisy/shallow). Every time I've seen that (and there have been threads on CN over the years about people having trouble imaging with such fast systems), I always remembered your equations and the fact that star signal grows so much faster with an increase in aperture and reduction in f-ratio.

So, is that a potential PROBLEM for imagers who are most interested in imaging extended objects…be that galaxies or nebula? If you are going to increase star saturation rate so significantly with an ultra fast system like that, would that be counter-productive for extended object signals? Is there a sweet spot where you get great resolution, without causing problems with star clipping AND extended object signal SNR?

Finally, a small note on the Dragonfly Array. You mention it may not have much of a cost advantage, and that one of the main advantages would be redundancy. There is another CRITICAL advantage, and the key reason why that team chose to use an array of Canon lenses like they did, rather than a very large reflecting telescope: Transmission. The Canon great white telephoto lenses use a nanocoating technology on internal lens surfaces, which is vastly superior in reducing reflections compared to a normal multicoating. If it were not for the exceptional transmission capabilities of that array, they would be incapable of detecting the ultra-faint objects that they are (which are as faint as 32mag/sq" surface brightness…at least, that was the limit the last time I read one of their papers which is admittedly a few years now.) They have numerous papers where they talk about the benefits of having practically zero reflection and minimal scattering…if the scopes scattered more light (and a reflecting telescope will scatter significantly more), then the extremely rare photons that arrive from an ultra faint source 30mag/sq" or fainter, will usually just be scattered incoherently into the background signal. So an object couldn't be detected. The ability to minimize scattering is the main reason they chose to build an array of Canon 400mm lenses each with their own camera. Even though the effective aperture of their system is technically the same as a 1 meter telescope, as I understand it, their system is vastly more capable of detecting faint objects than any other system, here on earth or in space. 

I'd be curious to hear your thoughts on that. Their project is to survey not just extended objects, but to discover the faintest objects in the universe, which I think they have had some success at. They are often integrating tens of thousands of subs  (since they are acquiring many concurrently with all those cameras), which is in part why they are able to integrate enough signal from objects so faint. As I understand it, though, they could acquire ten times as much data and still not even detect these objects if the system scattered more than it does thanks to the Canon nanocoating.
HegAstro 14.24
Jon Rista:
I honestly didn't think that would be possible from earth, and am amazed that it is. I doubt I'll ever have the funds for a scope with an aperture that large, but I probably could afford a 14" aperture.


Jon - from the presentation, it seems like seeing would be the limiting factor with anything greater than 10". So, unless you are at a place with great seeing, your cost is likely to be much more than the cost of just the scope (ongoing remote rental, travel, etc.)?
jhayes_tucson 26.84
Topic starter
Jon Rista:
John, the presentation was excellent. Thank you for taking the time. I still remember many of your formulas from when you shared them on CN. They have become very fundamental for me. 

I am really intrigued by the example resolution images demonstrating just how much a larger aperture benefits resolving power! The comparisons were excellent, and I'm amazed at how much sharper the 1500mm aperture was in comparison to your own 600mm aperture. There was really a significant difference. I honestly didn't think that would be possible from earth, and am amazed that it is. I doubt I'll ever have the funds for a scope with an aperture that large, but I probably could afford a 14" aperture. 

I do have a question regarding stellar signals with very fast systems. You DO mention that you need a smaller pixel with very large aperture fast systems, but we are already generally operating near the limits of pixel size here, with the smallest around 2.4 microns (and in some cases even smaller). I've noticed that people with very fast imaging systems that have large apertures, such as Hyperstar or RASA, often have notable star clipping AS WELL AS less-than-optimal background signal quality (i.e., it's noisy/shallow). Every time I've seen that (and there have been threads on CN over the years about people having trouble imaging with such fast systems), I always remembered your equations and the fact that star signal grows so much faster with an increase in aperture and reduction in f-ratio.

So, is that a potential PROBLEM for imagers who are most interested in imaging extended objects...be that galaxies or nebula? If you are going to increase star saturation rate so significantly with an ultra fast system like that, would that be counter-productive for extended object signals? Is there a sweet spot where you get great resolution, without causing problems with star clipping AND extended object signal SNR?

Finally, a small note on the Dragonfly Array. You mention it may not have much of a cost advantage, and that one of the main advantages would be redundancy. There is another CRITICAL advantage, and the key reason why that team chose to use an array of Canon lenses like they did, rather than a very large reflecting telescope: Transmission. The Canon great white telephoto lenses use a nanocoating technology on internal lens surfaces, which is vastly superior in reducing reflections compared to a normal multicoating. If it were not for the exceptional transmission capabilities of that array, they would be incapable of detecting the ultra-faint objects that they are (which are as faint as 32mag/sq" surface brightness...at least, that was the limit the last time I read one of their papers which is admittedly a few years now.) They have numerous papers where they talk about the benefits of having practically zero reflection and minimal scattering...if the scopes scattered more light (and a reflecting telescope will scatter significantly more), then the extremely rare photons that arrive from an ultra faint source 30mag/sq" or fainter, will usually just be scattered incoherently into the background signal. So an object couldn't be detected. The ability to minimize scattering is the main reason they chose to build an array of Canon 400mm lenses each with their own camera. Even though the effective aperture of their system is technically the same as a 1 meter telescope, as I understand it, their system is vastly more capable of detecting faint objects than any other system, here on earth or in space. 

I'd be curious to hear your thoughts on that. Their project is to survey not just extended objects, but to discover the faintest objects in the universe, which I think they have had some success at. They are often integrating tens of thousands of subs  (since they are acquiring many concurrently with all those cameras), which is in part why they are able to integrate enough signal from objects so faint. As I understand it, though, they could acquire ten times as much data and still not even detect these objects if the system scattered more than it does thanks to the Canon nanocoating.

Jon,
Thanks for your kind words.  I never know if I'm going to put everyone to sleep with all that technical stuff so I'm glad to hear that you found it valuable.

You are totally correct that fast systems typically require small pixels--particularly under good seeing conditions.  Most fast systems are not diffraction limited over the field and most are undersampled.  You are also correct that as the system becomes faster (and larger) the star signal gets so strong that it's hard to avoid clipping the brighter stars--even with fairly short exposures.  You either have to accept it, go to really short exposures, or use HDR methods to fix the star profiles.  My former F/1.9, 14" Hyperstar system produced very dense star fields around many of the nebulae that I imaged and I often had to use various methods to de-emphasize all those stars during processing.

With objects like galaxies imaged at more "normal" focal ratios like F/7, the ability of a larger aperture to bring out the faint stars is a huge advantage.  That "feature" makes it much easier to image the individual stars in the distant galaxy--when that's possible.  Again, go look at Wolfgang's 1.5 m image.  That image was taken with just 1 hour of RGB data and look at how many stars he recorded.  Of course if there's a bright star in the field, it might be saturated; but, so what?  I've posted a lot of images with saturated stars and it's not that big of a deal--as long as there are only a few.

I do realize that the Dragonfly folks make a big deal about the Canon nano-coatings but there's something that they don't talk about all that much.  Those lenses have so many internal surfaces, and they use so many high index glasses, that ultra-high efficiency coatings are a fundamental product requirement.  The Dragonfly project simply takes advantage of the coatings that Canon developed so that they get the highest possible throughput while eliminating strays.  Dragonfly still has a front surface on each one of those lenses and unless they clean them very regularly, they will get quite dirty--just like the mirrors in a 1 m scope.  And if you could build an F/0.4, 1 m scope, you could certainly coat the mirror with an equally high performance coating to minimize scatter and maximize throughput.  The real problem is that making an F/0.4, 1 m scope is not trivial.  For the field that they want to work at, the lens array is a better choice.

John
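A rough way to see the clipping issue described above for fast systems. This is a scaling sketch only, under my own simplifying assumptions: seeing-limited star images, identical pixels and seeing for both systems, and obstruction, transmission and PSF details ignored.

```python
# A star's total photon rate scales with D^2, but it is spread over roughly
# (seeing / pixel_scale)^2 pixels, so the peak pixel rate goes as (p / (F# * seeing))^2
# and the time to reach full well goes as the inverse of that.
def relative_saturation_time(fnum: float, pixel_um: float, seeing_arcsec: float) -> float:
    return (fnum * seeing_arcsec / pixel_um) ** 2   # arbitrary units

slow = relative_saturation_time(7.0, 3.76, 2.0)     # a "normal" f/7 system
fast = relative_saturation_time(1.9, 3.76, 2.0)     # an f/1.9 Hyperstar-style system
print(f"the f/1.9 system clips a given star ~{slow / fast:.1f}x sooner")   # ~13.6x
```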
jrista 11.18
John Hayes:
Jon,
Thanks for your kind words.  I never know if I'm going to put everyone to sleep with all that technical stuff so I'm glad to hear that you found it valuable.


I am always down for learning more of the technical side of things. This IS a technical hobby; as much as it's become more accessible these days, I don't think that will ever really change.

I'm always interested in learning from you, so as long as you are willing to teach, I'll be there to learn. ;)
John Hayes:
You are totally correct that fast systems typically require small pixels--particularly under good seeing conditions.  Most fast systems are not diffraction limited over the field and most are undersampled.  You are also correct that as the system becomes faster (and larger) the star signal gets so strong that it's hard to avoid clipping the brighter stars--even with fairly short exposures.  You either have to accept it, go to really short exposures, or use HDR methods to fix the star profiles.  My former F/1.9, 14" Hyperstar system produced very dense star fields around many of the nebulae that I imaged and I often had to use various methods to de-emphasize all those stars during processing.


Does HDR processing help with heavily blown out stars? Those would be fairly significant halos in your deepest exposures... I guess if you could properly correct that particular issue, then that might be a solution.
John Hayes:
With objects like galaxies imaged at more "normal" focal ratios like F/7, the ability of a larger aperture to bring out the faint stars is a huge advantage.  That "feature" makes it much easier to image the individual stars in the distant galaxy--when that's possible.  Again, go look at Wolfgang's 1.5 m image.  That image was taken with just 1 hour of RGB data and look at how many stars he recorded.  Of course if there's a bright star in the field, it might be saturated; but, so what?  I've posted a lot of images with saturated stars and it's not that big of a deal--as long as there are only a few.


I actually did find Wolfgang Promper here on ABin. His images look like Hubble images! The details are just incredible. I guess that requires very good seeing, too, but still...amazing what can be achieved even here on Earth. He even has an image of Trifid with that 1500mm scope, with only 1 hour of integration, and the SNR even on the background sky signal is exceptional, as are the details:

https://www.astrobin.com/full/6400yy/0/

This image also demonstrates the benefits of larger aperture that you were discussing, but on more than just stars. 

I totally agree with you regarding some saturated stars. IMO almost every image is going to have some saturated stars, and it's really not a problem. My previous concern was only with those ultra fast scopes with big apertures like Hyperstar or some of the other very large aperture low f-ratio systems. The stars saturate so fast that it then becomes challenging to get enough background signal. Even with the small pixel sensors of today, an 11" RASA or a large Hyperstar, etc. can become fairly problematic in a lot of fields that have enough bright stars that too many of them saturate before you have a sufficient background sky signal.
John Hayes:
I do realize that the Dragonfly folks make a big deal about the Canon nano-coatings but there's something that they don't talk about all that much.  Those lenses have so many internal surfaces, and they use so many high index glasses, that ultra-high efficiency coatings are a fundamental product requirement.  The Dragonfly project simply takes advantage of the coatings that Canon developed so that they get the highest possible throughput while eliminating strays.  Dragonfly still has a front surface on each one of those lenses and unless they clean them very regularly, they will get quite dirty--just like the mirrors in a 1 m scope.  And if you could build an F/0.4, 1 m scope, you could certainly coat the mirror with an equally high performance coating to minimize scatter and maximize throughput.  The real problem is that making an F/0.4, 1 m scope is not trivial.  For the field that they want to work at, the lens array is a better choice.

John


These lenses do have a lot of internal elements; however, despite that, with the nanocoatings the transmission is extremely high. I don't know if you remember, but I have the Canon 600mm f/4 L II lens, which is just a longer version of the 400mm lenses they use in the Dragonfly Array. That lens has some issues with regard to using it for astrophotography (although the Astromechanics adapters might be a good solution; I've had minimal time using it), but there is no question that in my own experience, I get deeper signals in less time with that scope than with any other telescope I've ever owned or borrowed. I've NEVER had any scattering issues with it... stars that will wreak havoc with most other scopes, even my FSQ106 EDX IV and many other high end scopes that only use multicoatings, have minimal to no halos with the Canon 600mm lens. I'm able to pick up very faint signals very easily with that scope, even with very tiny pixels (2.4 microns) with about double or so the read noise of modern cameras like the IMX455. So even though the lens has a lot of elements, the transmission rate is extremely high and the scattering is almost non-existent.

I am also curious to hear you say a mirror could be coated. I did some research back when I first discovered the Dragonfly Array (which, IIRC, was around 2012 or so), after reading some of their papers. They stated that while a very high grade mirror could be ground to much better specifications than your run of the mill consumer grade or even a quality hand made mirror, it still just doesn't achieve anywhere near the same level of low scattering as the Canon lenses. Even the best mirrors tend to scatter signals fainter than about 26th magnitude, maybe 27th, and nothing except the Dragonfly Array itself had ever imaged any objects as faint as 30mag/sq" or fainter (their image of a super large elliptical galaxy reached as deep as 33mag/sq" at the outside, and solidly 32mag/sq", which is 10,000 times fainter than an airglow limited 22mag/sq" sky).
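
For reference, the surface-brightness figures in the paragraph above translate into flux ratios through the usual magnitude relation (a quick check of the arithmetic, not a number taken from the Dragonfly papers):

```python
# A difference of dm mag/arcsec^2 corresponds to a flux ratio of 10**(0.4*dm).
def flux_ratio(dm: float) -> float:
    return 10 ** (0.4 * dm)

print(flux_ratio(32 - 22))   # 10000.0 -> 32 mag/sq" is 10,000x fainter than a 22 mag/sq" sky
print(flux_ratio(33 - 22))   # ~25119  -> the 33 mag/sq" outskirts are ~25,000x fainter
```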

I'm curious what kind of coating could be used on a mirror to help minimize scattering... The thing about the SWC (Canon's Sub-Wavelength Coating, their name for their nanocoating) is that it "softens" the refractive barrier between an air pocket and the optical material, so that instead of a hard, sudden change in refractive index, the change is gradual, which nearly eliminates the reflection/scattering effects. Is there anything that can do that with a mirror? Or is that something you could only achieve with refractive optics? I've been so interested in this because I have one of those lenses, and in my own experience, it definitely DOES seem to have something going for it as far as the clarity of the image and the depth of the signal you can get in any given unit time, and I've never heard of a reflecting system that could match the performance of those refractive nanocoatings.
jrista 11.18
Arun H:
Jon Rista:
I honestly didn't think that would be possible from earth, and am amazed that it is. I doubt I'll ever have the funds for a scope with an aperture that large, but I probably could afford a 14" aperture.


Jon - from the presentation, it seems like seeing would be the limiting factor with anything greater than 10". So, unless you are at a place with great seeing, your cost is likely to be much more than the cost of just the scope (ongoing remote rental, travel, etc.)?

Yes, I understand that. I've actually followed John's journeys with his scopes for some time, and the travel costs alone (for both scope and human) are quite probably the greater cost overall (John could confirm). If I were to purchase such a scope, I'd certainly put it in a place with exceptional seeing, such as Chile. The thing that amazed me was that systems accessible to amateurs such as us are capable of the kind of resolution that John, or Wolfgang, are delivering. Wolfgang's images with the 1500mm scope legitimately look like they came from Hubble.
jhayes_tucson 26.84
Topic starter
Jon Rista:
I am also curious to hear you say a mirror could be coated. I did some research back when I first discovered the Dragonfly Array (which, IIRC, was around 2012 or so), after reading some of their papers. They stated that while a very high grade mirror could be ground to much better specifications than your run of the mill consumer grade or even a quality hand made mirror, it still just doesn't achieve anywhere near the same level of low scattering as the Canon lenses. Even the best mirrors tend to scatter signals fainter than about 26th magnitude, maybe 27th, and nothing except the Dragonfly Array itself had ever imaged any objects as faint as 30mag/sq" or fainter (their image of a super large elliptical galaxy reached as deep as 33mag/sq" at the outside, and solidly 32mag/sq", which is 10,000 times fainter than an airglow limited 22mag/sq" sky).

I'm curious what kind of coating could be used on a mirror to help minimize scattering... The thing about the SWC (Canon's Sub-Wavelength Coating, their name for their nanocoating) is that it "softens" the refractive barrier between an air pocket and the optical material, so that instead of a hard, sudden change in refractive index, the change is gradual, which nearly eliminates the reflection/scattering effects. Is there anything that can do that with a mirror? Or is that something you could only achieve with refractive optics? I've been so interested in this because I have one of those lenses, and in my own experience, it definitely DOES seem to have something going for it as far as the clarity of the image and the depth of the signal you can get in any given unit time, and I've never heard of a reflecting system that could match the performance of those refractive nanocoatings.

Jon,
It is certainly possible to make ultra low scattering reflective surfaces and it's done all the time for high power laser optics.   It requires a super smooth substrate along with a very highly reflecting thin film coating.  There is no doubt that it's a bit harder with a reflecting surface (n=-2) than with a refracting surface with an index of around 1.5.  I'm sure that Dragonfly achieves very low scattering as described in the papers but when I read self justifying design choices, I'm always a bit skeptical.   Let me tell you a story about that.

Many years ago I sat on the PIT review team at NASA during the "down-select process" for JWST.  NASA had "Hubble-itis" and they were terrified of making a similar mistake on JWST so they appointed this "blue-ribbon" panel to review and essentially second guess every decision about the optical design and testing program.  The "down-select" was NASA speak for picking the vendor consortium that would actually build and deliver the telescope.  There were two competing teams; one led by Northrop and one led by Lockheed, and one of the big issues was how to do the vacuum testing.  This was not an insignificant problem since the telescope is so large and there are very few vacuum facilities in the country that could handle the job.  Lockheed found a vertical chamber and they came in with a presentation that showed how not only was vertical testing the best way to test the system; it was the ONLY way to test the system.  There must have been two dozen slides with complex analysis showing how a horizontal test simply would not work.  After lunch, the Northrop team came in with their proposal, which involved a horizontal chamber.  Yep...you guessed it.  They had dozens of slides showing all the technical reasons that vertical testing was impossible.  Both these groups did extensive mathematical analysis and I was left wondering how in the world we were going to sort any of that out.  The two teams had collectively shown that the vacuum test plan was impossible!  The point here is that although the Dragonfly folks are smart and well meaning, there's an inherent bias in whatever argument they make about why the approach they chose is the best and only one that will work.  I personally believe that they made a very good choice given the science goals but it's unlikely to be the one and only way to do the job.

I'm sure that your Canon lens has spectacular performance.  I too have a refractor that I believe would give it a run for its money and it would be interesting to compare data from these two scopes.  The AP GTX130 is very well color corrected, it has superior baffling against stray light, and I've never seen a hint of stray reflections.  And even if it's not quite as good as your Canon, it's a LOT less expensive!!  :-)

John
jhayes_tucson 26.84
Topic starter
Jon Rista:
Yes, I understand that. I've actually followed John's journeys with his scopes for some time, and the travel costs alone (for both scope and human) are quite probably the greater cost overall (John could confirm). If I were to purchase such a scope, I'd certainly put it in a place with exceptional seeing, such as Chile. The thing that amazed me was that systems accessible to amateurs such as us are capable of the kind of resolution that John, or Wolfgang, are delivering. Wolfgang's images with the 1500mm scope legitimately look like they came from Hubble.


Jon,
The travel expenses are not the greatest cost--at least up front.  Going remote in Chile is probably the most expensive way to go remote but you don't need to go to Chile to find a good domestic remote facility.  You can save on shipping if you can crate up your own stuff and drive it there.  Yes, the monthly rent can add up but in my view, it's cheaper than trying to buy land and set up your own remote observatory.  And, if you do it right, you really shouldn't have to visit your scope all that often.  I will admit that I visited DSW about monthly when I first set up the C14 out there but that was a part of my learning experience--and I just like going out there!  I think that you could set up a remote system with just 2 visits during the first year and maybe only one visit per year after that.  One thing that helps is if you pick a facility with good support.  Eric Coles has a 20" at SRO that he's never even seen!  He has made ZERO trips out there.

The biggest expense is your equipment.  After that it's really just the ongoing rent.  Shipping (and VAT in Chile) can be painfully expensive but that's a one time expense.

John
jrista 11.18
John Hayes:
Jon Rista:
I am also curious to hear you say a mirror could be coated. I did some research back when I first discovered the Dragonfly Array (which, IIRC, was around 2012 or so), after reading some of their papers. They stated that while a very high grade mirror could be ground to much better specifications than your run of the mill consumer grade or even a quality hand made mirror, it still just doesn't achieve anywhere near the same level of low scattering as the Canon lenses. Even the best mirrors tend to scatter signals fainter than about 26th magnitude, maybe 27th, and nothing except the Dragonfly Array itself had ever imaged any objects as faint as 30mag/sq" or fainter (their image of a super large elliptical galaxy reached as deep as 33mag/sq" at the outside, and solidly 32mag/sq", which is 10,000 times fainter than an airglow limited 22mag/sq" sky).

I'm curious what kind of coating could be used on a mirror to help minimize scattering... The thing about the SWC (Canon's Sub-Wavelength Coating, their name for their nanocoating) is that it "softens" the refractive barrier between an air pocket and the optical material, so that instead of a hard, sudden change in refractive index, the change is gradual, which nearly eliminates the reflection/scattering effects. Is there anything that can do that with a mirror? Or is that something you could only achieve with refractive optics? I've been so interested in this because I have one of those lenses, and in my own experience, it definitely DOES seem to have something going for it as far as the clarity of the image and the depth of the signal you can get in any given unit time, and I've never heard of a reflecting system that could match the performance of those refractive nanocoatings.

Jon,
It is certainly possible to make ultra low scattering reflective surfaces and it's done all the time for high power laser optics.   It requires a super smooth substrate along with a very highly reflecting thin film coating.  There is no doubt that it's a bit harder with a reflecting surface (n=-2) than with a refracting surface with an index of around 1.5.  I'm sure that Dragonfly achieves very low scattering as described in the papers but when I read self justifying design choices, I'm always a bit skeptical.   Let me tell you a story about that.

Many years ago I sat on the PIT review team at NASA during the "down-select process" for JWST.  NASA had "Hubble-itis" and they were terrified of making a similar mistake on JWST so they appointed this "blue-ribbon" panel to review and essentially second guess every decision about the optical design and testing program.  The "down-select" was NASA speak for picking the vendor consortium that would actually build and deliver the telescope.  There were two competing teams; one led by Northrop and one led by Lockheed, and one of the big issues was how to do the vacuum testing.  This was not an insignificant problem since the telescope is so large and there are very few vacuum facilities in the country that could handle the job.  Lockheed found a vertical chamber and they came in with a presentation that showed how not only was vertical testing the best way to test the system; it was the ONLY way to test the system.  There must have been two dozen slides with complex analysis showing how a horizontal test simply would not work.  After lunch, the Northrop team came in with their proposal, which involved a horizontal chamber.  Yep...you guessed it.  They had dozens of slides showing all the technical reasons that vertical testing was impossible.  Both these groups did extensive mathematical analysis and I was left wondering how in the world we were going to sort any of that out.  The two teams had collectively shown that the vacuum test plan was impossible!  The point here is that although the Dragonfly folks are smart and well meaning, there's an inherent bias in whatever argument they make about why the approach they chose is the best and only one that will work.  I personally believe that they made a very good choice given the science goals but it's unlikely to be the one and only way to do the job.

I'm sure that your Canon lens has spectacular performance.  I too have a refractor that I believe would give it a run for its money and it would be interesting to compare data from these two scopes.  The AP GTX130 is very well color corrected, it has superior baffling against stray light, and I've never seen a hint of stray reflections.  And even if it's not quite as good as your Canon, it's a LOT less expensive!!  :-)

John

I did not know you were involved in JWST in any way. That is pretty awesome. I've been so excited since the JWST platform finally went online; the imagery is amazing.

I am curious...obviously, someone found a way to vacuum test the scope. Which route did they ultimately go with? Horizontal? Vertical? Something else?

I get the bias factor, for sure. I guess I shouldn't say I don't think there is any other way to achieve the goals of the Dragonfly team...however, at the time they started (2012), they had a small array...IIRC it was either 4 or 5 of those lenses each with a KAF-8300 based camera. They didn't start out with a 50-element array, they kind of progressively built out to that point. I suspect, at the time, the small array they started with was a vastly more cost-effective option than say, custom designing and building a reflecting telescope (or some other kind of system) that would achieve their goals of ultra low scattering. 

It would indeed be interesting to pit the GTX130 (a scope I've only heard WONDERFUL things about) against the Canon 600mm lens. I suspect the GTX130 will deliver superior overall optical performance; the Canon lens has an aperture issue that, at least at f/4, causes a split halo around stars that increases the more you move into the periphery of the field (one which can be mitigated if I leave the lens hood off, but, given the cost of the lens, I have never been able to bring myself to do that for long). I also think that one of the elements is very slightly out of alignment. I don't know when or how it happened, but there was a point at which it seemed to stop delivering quite the same level of razor sharp, super crisp detail, and that's also true on terrestrial targets (mostly birds, which is what I use it for; their feathers have just never had the same level of detail since I put it back into service, and getting it fixed with CPS is undoubtedly going to be costly). I suspect the Canon would pick up fainter signals faster, but I am truly curious how much faster...
HegAstro 14.24
Once again - a great presentation. 

I appreciated the math, because, once you take the time to understand it, it can really yield deep insights.

To me, the biggest takeaway, for extended objects, is this:

At constant sampling in object space, a larger aperture scope will always give greater signal. 


And of course, larger diameter scopes will show fainter stars for a given integration time.

The focal length of the scope dictates what size pixels are needed to achieve a given sampling, and generally, given the difficulty in making fast, large aperture optics that are well corrected, bigger scopes need bigger pixels to achieve good signal.

But, my question is: what is the real disadvantage of using small pixel cameras with large scopes, so long as you are covering a desired FOV? You can always bin/resample in software and essentially get the same signal as you would if you'd used a larger pixel in the first place. Isn't the real disadvantage just processing and storage?
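
A minimal sketch of the constant-sampling takeaway stated above (my own illustration; throughput terms such as R, T and obstruction are ignored and the numbers are arbitrary):

```python
import math

# At fixed object-space sampling s (arcsec/pixel), the per-pixel signal from an
# extended object is S ∝ (pi*D^2/4) * s^2, i.e. it scales with aperture area alone.
def pixel_signal(D_mm: float, sampling_arcsec: float = 1.0) -> float:
    s_rad = sampling_arcsec / 206265.0                 # arcsec -> radians
    return (math.pi * D_mm ** 2 / 4.0) * s_rad ** 2    # arbitrary units

print(pixel_signal(500.0) / pixel_signal(250.0))       # 4.0: doubling D quadruples the signal
```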
jhayes_tucson 26.84
Topic starter
Jon Rista:
Does HDR processing help with heavily blown out stars? Those would be fairly significant halos in your deepest exposures... I guess if you could properly correct that particular issue, then that might be a solution.


HDR works really well for most over-exposed stars.  It is less successful on really bright stars that have so much "spill over" from stray/scattered  light that the profile is no longer a very good Moffat function.  Look at my M42 image (https://www.astrobin.com/srzsit/D/).   I used HDR to better control the dynamic range and it helped with the star profiles as well.
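
For what it's worth, here is a minimal sketch of the kind of HDR combine being described above (an assumption-laden toy, not John's actual workflow or any specific program's tool): where the long exposure is clipped, substitute short-exposure data scaled by the exposure ratio.

```python
import numpy as np

def hdr_combine(long_img: np.ndarray, short_img: np.ndarray,
                t_long: float, t_short: float, full_well: float = 65535.0) -> np.ndarray:
    """Replace clipped pixels in the long exposure with scaled short-exposure pixels."""
    out = long_img.astype(np.float64)
    clipped = long_img >= 0.98 * full_well           # treat near-full-well pixels as clipped
    out[clipped] = short_img[clipped] * (t_long / t_short)
    return out

# Toy usage with made-up numbers: 300 s subs for depth, 10 s subs to keep star cores linear.
rng = np.random.default_rng(0)
short_img = rng.uniform(0, 3000, (4, 4))             # linear, unclipped short exposure
long_img = np.clip(short_img * 30.0, 0, 65535.0)     # same scene, 30x longer, clipped at full well
print(hdr_combine(long_img, short_img, 300.0, 10.0))
```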

Jon Rista:
I did not know you were involved in JWST in any way. That is pretty awesome. I've been so excited since the JWST platform finally went online; the imagery is amazing.

I am curious...obviously, someone found a way to vacuum test the scope. Which route did they ultimately go with? Horizontal? Vertical? Something else?


Yeah, I sat on both the PIT and IPT2 review teams for JWST.  My company, 4D Technology Corporation, supplied virtually all of the optical test equipment for the program.  We supplied a number of custom, purpose-built interferometers.  One was a 100+ MW instantaneous speckle interferometer for testing the carbon fiber backplane structure.  Another was a multi-wavelength PhaseCam for testing the phasing precision of the mirrors in the assembled telescope.  On top of that we supplied numerous PhaseCams for testing all of the optical elements.  I personally installed a PhaseCam at Tinsley where I saw the Beryllium primary mirror segments on the polishing machines.  (Beryllium is highly toxic so we had to "suit up" with protective gear to get into that area to see the mirrors).  On the IPT2 committee, we were advising NASA on how to test the optics and at the same time I was running the company selling most of the test gear.   So I ultimately concluded that I couldn't do both without the appearance of a conflict of interest and I resigned from the committee.  After JWST launched, I was very proud to hear from Ritva Keski-Kuha, the telescope deputy manager, that without the interferometers from 4D, JWST would not have been possible!  I still have a NASA certificate of appreciation for some of our work hanging on my office wall.  After the launch, I was honored when Lee Feinberg (the telescope director) invited me to a high-level review of the initial commissioning of the telescope where he shared a lot of interesting behind the scenes data along with a couple of stories about technical close calls.

John
jhayes_tucson 26.84
Topic starter
Arun H:
But, my question is: what is the real disadvantage of using small pixel cameras with large scopes, so long as you are covering a desired FOV? You can always bin/resample in software and essentially get the same signal as you would if you'd used a larger pixel in the first place. Isn't the real disadvantage just processing and storage?


Yes, the real disadvantage is in processing and storage.  I've noticed that BXT seems to do a bit better with unbinned data so in spite of starting with a lower SNR (wrt binned data) I tend to get a better looking image with sharper features from unbinned data.  This all falls apart somewhat if you are talking about a telescope that would require say 4x binning.  Big scopes such as the CDK1000 from Planewave with an EFL over 6m are reaching the point where they benefit from bigger pixels to better match the seeing conditions.  I didn't say it in the presentation but in reality, as the size of the telescope gets larger, everything gets harder!

John
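On the binning point, a small sketch with shot-noise-only toy data (my own illustration; the penalty alluded to above, and ignored here, is that software binning adds read noise four times instead of once):

```python
import numpy as np

def bin2x2(img: np.ndarray) -> np.ndarray:
    """Sum 2x2 blocks in software, mimicking a pixel with four times the area."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

rng = np.random.default_rng(1)
small = rng.poisson(100.0, (1000, 1000)).astype(float)  # assumed 100 e-/pixel mean signal
binned = bin2x2(small)
print(binned.mean())                  # ~400 e-: the signal a 2x-larger pixel would collect
print(binned.mean() / binned.std())   # SNR ~ 20 = sqrt(400), same as the larger pixel
```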
talbotj 2.41
Hi John,

Great job on the presentation and thanks for the spreadsheet where I can compare my systems.  Great stuff…

Jon