The Astrobin All Sky Survey: A proposal for a community resource · Brian Boyle

This topic contains a poll.
Would you be interested in contributing towards an AB all sky survey?
No. I wouldn't find such a survey useful.
No. Satisfactory data already exists for me elsewhere.
No. I would find such a survey useful, but I don't have the time, location or equipment to contribute.
Yes, I would be interested in taking part. One or two fields maximum.
Yes, I would be interested in taking part. Prepared to do multiple fields.
andreatax 9.89
@andrea tasselli The distortion correction only corrects distortion relative to the reference frame, not on an absolute scale against the coordinate grid. AlignByCoordinates removes the distortion from the image by aligning it so that it matches the coordinate grid, and StarAlignment then aligns every single image to that reference using DistortionCorrection, so that there are no alignment errors in the corners. 

CS Gerrit

Thanks for your answer, as the PI documentation is non-existent on this very subject. Then it should be irrelevant whether the matching is done before or after the integration. In fact, it should be more efficient to do it afterwards, as you wouldn't have to apply one extra interpolation to each frame you are registering.
Astrogerdt 0.90
You are correct, except for drizzle integration with CFA drizzle activated. That is why I put it at the beginning. 

StarAlignment produces *.xdrz files during the registration process, which are used by DrizzleIntegration to align the images during the process. DrizzleIntegration avoids interpolation completely. 

So by putting it at the end, we can do the complete process without any interpolation at all. 

CS Gerrit
profbriannz 17.56
Topic starter
Michael Ring:
9x6 which seems to produce acceptable matches for most combinations so going into more detail:

APS-C 135mm: 1 Tile (Rotation 0°)
APS-C 200mm: 3 Tiles (Rotation 90°) 4 Tiles (Rotation 0°)
APS-C 250mm: 4 Tiles (Rotation 0°, works up to 50°) 6 Tiles (Rotation 90°)

FF 135mm: Can cover 2 Fields in 1 shot
FF 200mm: 1 Tile (Rotation 0°)
FF 250mm: 4 Tiles (Rotation 0°)

I would not go lower than this, as the number of required fields will already have exploded by now. @James Tickner, would you mind doing a calculation based on a field-to-field grid of 7.5x5 (based on the 9x6 minimum field size)?
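Michael's tile counts can be roughly reproduced from sensor geometry alone. A minimal sketch, assuming 23.5 x 15.6 mm APS-C and 36 x 24 mm full-frame sensors and ignoring tile overlap (real counts may be one tile higher in marginal cases):

```python
import math

def fov_deg(sensor_mm: float, focal_mm: float) -> float:
    """Angular field of view spanned by one sensor dimension, in degrees."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def tiles_needed(w_mm: float, h_mm: float, focal_mm: float,
                 field=(9.0, 6.0)) -> int:
    """Tiles needed to cover a field, first sensor axis along the long
    field axis. Overlap between adjacent tiles is ignored."""
    w, h = fov_deg(w_mm, focal_mm), fov_deg(h_mm, focal_mm)
    return math.ceil(field[0] / w) * math.ceil(field[1] / h)

APSC, FF = (23.5, 15.6), (36.0, 24.0)
print(tiles_needed(*APSC, 135))       # 1  (rotation 0)
print(tiles_needed(*APSC, 200))       # 4  (rotation 0)
print(tiles_needed(15.6, 23.5, 200))  # 3  (rotation 90: axes swapped)
print(tiles_needed(*FF, 200))         # 1
print(tiles_needed(*FF, 250))         # 4
```

Swapping the sensor axes models a 90° camera rotation, which is how the 3-tile option at 200mm on APS-C arises.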

@Michael Ring   great work on the optimal field size.  

Earlier in this thread, I proposed the principle of inclusivity and, following @andrea tasselli's helpful comments, I do think we should do everything we can to help those with APS-C cameras to engage in the survey.

I strongly support the 7.5 x 5 field-to-field centre spacing grid with a 9x6 deg field size.  I also support a field grid designed on a rotation of 0 - i.e. dec strips every 5 degs (-90, -85, -80, ...), with 48 fields in the equatorial dec strip, i.e. RA spaced every 7.5 deg (0.5 hr).  To clarify, a rotation of 0 is defined by the angle the short axis of the sensor makes with respect to the NCP.  Correct?

The reduction in field size will increase the number of fields to over 1000.  But, of course, this number is somewhat misleading, since larger survey fields would simply need more individual panes from each photographer.  In the end the amount of observing is the same.  All we are doing is pushing the stitching effort more into the hands of the survey team [whoever that may be!] than the hands of individuals.  For survey homogeneity, that is not a bad thing.

Your table also highlighted wastage for those with FF and shorter focal lengths, but of course that "wastage" could also be seen as field-multiplexing.  As you demonstrate above, FF at 135mm can take two fields in one shot.

Whether it is a result of this discussion on smaller field sizes or not, I do note that the number interested in taking part in imaging has gone up to over 70.

If, on average, each of those 70 took two fields per lunation [and for those with 135mm + FF that is just one pointing], the observations for the survey would be complete within a year.

Brian
messierman3000 7.22
@Brian Boyle Would you accept data that has diffraction spikes on the stars? Because when I try to imagine it, the mosaic must be made either entirely with refractors and SCTs or entirely with Newts and telescopes with spiders; otherwise parts of the mosaic won't match other parts.
Please correct me if I'm saying nonsense 
GoldfieldAstro 0.90
messierman3000:
@Brian Boyle Would you accept data that has diffraction spikes on the stars? Because when I try to imagine it, the mosaic must be made either entirely with refractors and SCTs or entirely with Newts and telescopes with spiders; otherwise parts of the mosaic won't match other parts.
Please correct me if I'm saying nonsense 

The mm figures quoted are focal length, not aperture, so there isn't likely to be any Newtonian with a FL of 85-250mm in use.

For myself, I have an ASI094 (Nikon D810 sensor) and a Zeiss 135mm F/2 APO Sonnar.
Also a QHY600M if there is need for Ha.
messierman3000 7.22
@GoldfieldAstro Thank you for explaining. I didn't realize the max FL for the project was 250mm.
MichaelRing 4.64
There is no such thing as a max FL; there is only the point that with too long a FL it takes longer to fully capture a field.

I, for example, love to do panoramic shots with my 400mm lens and my APS-C sensor but to cover a field of 6x9 I would have to do 9 tiles.

With a Telescope in Chile no big deal, but where I live the nights with clear skies and moon < 25% are not that frequent so I personally would not consider this a good combination for my circumstances.

On the other side, people owning a 400mm Lens and a Full Frame sensor would only need to do 4-6 tiles, something that is far more realistic to do.

Also, nobody said that you need to collect fresh data. If you have already captured large fields of the sky, you could think about donating that data; likely the only requirement will be that you still have the raw data and are willing to provide re-stacked data in case that is needed.

Michael
james.tickner 1.20
I've regenerated the tile map with 9 x 6 fields on a 7.5 x 5 degree spacing. The DEC centres are at -88, -85, -80 ... 80, 85, 88 degrees. As before, I've avoided putting fields exactly at the poles - this introduces two extra fields (one at each pole) compared to the minimum possible. In total there are 1120 fields to be imaged.
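For reference, here is a sketch of one way such a grid can be generated. The per-strip rounding and the handling of the ±88 deg caps are my guesses, so the total comes out close to, but not necessarily exactly, the 1120 in the sheet:

```python
import math

STEP_RA, STEP_DEC = 7.5, 5.0   # centre-to-centre spacing, degrees

def field_centres():
    """(RA, DEC) centres: DEC strips every 5 deg from -85 to +85, plus a
    single near-polar field at each of DEC = -88 and +88. The RA spacing
    within a strip widens towards the poles by 1/cos(DEC)."""
    centres = [(0.0, -88.0), (0.0, 88.0)]
    dec = -85.0
    while dec <= 85.0:
        n = math.ceil(360.0 * math.cos(math.radians(dec)) / STEP_RA)
        centres += [(i * 360.0 / n, dec) for i in range(n)]
        dec += STEP_DEC
    return centres

centres = field_centres()
print(len([c for c in centres if c[1] == 0.0]))  # 48 fields on the equator
print(len(centres))                              # roughly 1100 in total
```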

Link to Google Sheet here: https://docs.google.com/spreadsheets/d/1eyHT4dZhSlMvfJHPtBrt93e5wDW6eJbs23bMJ7gjz1o/edit?usp=sharing

Image of new map below.

MichaelRing 4.64
Many thanks for the calculation and the sheet!

When I find time over the weekend I will create a number of tiles in PI to try to find out how far I can get with MosaicByCoordinates before my computer implodes…. Also a test run with the generated tiles in Nina will be on my list.

Michael
james.tickner 1.20
@Michael Ring  No worries!

I'm not imagining that we could do the tiling of the master image using PI, due to memory constraints. I'm thinking of something along the lines of:

- For each field, collect the set of images of neighbouring fields.
- Assume that each field image has been registered and scaled using PI so that it lies on a 10 x 10 arcsec grid centred on its nominal field centre, and has a known projection type (eg gnomonic).
- Transform the surrounding field images to have the same projection as the central field (2D interpolation from the old to the new projection).
- Estimate residual gradients using the method I described above (or similar).
- Correct the central image for residual gradients.
- Transform the central image to the master projection (probably plate carrée for simplicity, ie a simple, uniformly spaced RA/DEC grid).
- Save the central field image, recording its (x,y) offsets in the master projection grid.

I think this has the advantage that we never need to deal with particularly huge image arrays - at most 3 x 3 fields.
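The reprojection steps above are, at heart, the forward and inverse gnomonic (tangent-plane) mapping. A minimal numpy sketch of that coordinate math using the textbook formulas - in practice astropy's WCS machinery could replace this entirely, and the pixel resampling loop is omitted here:

```python
import numpy as np

def gnomonic_xy(ra, dec, ra0, dec0):
    """Forward gnomonic: (ra, dec) in degrees to tangent-plane (x, y),
    expressed in degrees, for a field centred at (ra0, dec0)."""
    ra, dec, ra0, dec0 = map(np.radians, (ra, dec, ra0, dec0))
    cosc = (np.sin(dec0) * np.sin(dec)
            + np.cos(dec0) * np.cos(dec) * np.cos(ra - ra0))
    x = np.cos(dec) * np.sin(ra - ra0) / cosc
    y = (np.cos(dec0) * np.sin(dec)
         - np.sin(dec0) * np.cos(dec) * np.cos(ra - ra0)) / cosc
    return np.degrees(x), np.degrees(y)

def gnomonic_radec(x, y, ra0, dec0):
    """Inverse gnomonic: tangent-plane (x, y) in degrees back to (ra, dec)."""
    x, y = np.radians(x), np.radians(y)
    ra0, dec0 = np.radians(ra0), np.radians(dec0)
    rho = np.hypot(x, y)
    c = np.arctan(rho)
    with np.errstate(invalid="ignore"):
        dec = np.where(rho == 0, dec0,
                       np.arcsin(np.cos(c) * np.sin(dec0)
                                 + y * np.sin(c) * np.cos(dec0) / rho))
    ra = ra0 + np.arctan2(x * np.sin(c),
                          rho * np.cos(dec0) * np.cos(c)
                          - y * np.sin(dec0) * np.sin(c))
    return np.degrees(ra), np.degrees(dec)

# Round trip a point a few degrees from a field centre at RA 120, DEC 40:
x, y = gnomonic_xy(123.0, 41.5, 120.0, 40.0)
ra, dec = gnomonic_radec(x, y, 120.0, 40.0)   # recovers (123.0, 41.5)
```

To resample a neighbouring field onto the central field's projection, one would evaluate the inverse mapping of the target grid and the forward mapping of the source, then interpolate pixel values between the two grids.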
profbriannz 17.56
Topic starter
@Michael Ring @James Tickner @Astrogerdt

 Thank you so much for your work over the past day or so.

I suggest that we make an announcement tomorrow that the first "test" lunation of the ABC survey is open for imaging.  This will give those interested a few days to plan, before it gets dark enough to image. 

This would be a test round, to see how much interest there is and whether our proposed workflow is sustainable.

The biggest issue is that we would begin the survey with no clear idea of how to stitch it together - neither how to do it nor who would do it.  Nevertheless, as (if) the survey gains momentum, we should find solutions/volunteers.

The one outstanding issue which we do need to decide on at this time is whether ABE (1st order max) and/or SPCC are applied to flatten/calibrate the linear data.  On the one hand, the argument is that this may take out gradients, and further data processing should be left to the data curators/stitchers.  On the other, it might be difficult for those with longer focal lengths/smaller sensors to create a 9 x 6 field from smaller panes.

Any thoughts?  Whatever we decide, we should make the final update to @Astrogerdt's pipeline and include the link to that (and @James Tickner's field centres) with the announcement.

Since I have not had any other volunteers to be the ABC account manager, I think this falls to me.  @Salvatore Iovene should I just set up and purchase another Ultimate Subscription?  To manage the file transfer, I would propose to purchase a 3TB professional Dropbox account.  This has a 30-day free trial, so if there is a "fast fail" on the survey, I am not out of pocket.


Thank you once again

Brian
andreatax 9.89
Brian Boyle:
The one outstanding issue which we do need to decide on at this time is whether ABE (1st order max) and/or SPCC are applied to flatten/calibrate the linear data.  On the one hand, the argument is that this may take out gradients, and further data processing should be left to the data curators/stitchers.  On the other, it might be difficult for those with longer focal lengths/smaller sensors to create a 9 x 6 field from smaller panes.


I can certainly volunteer for the stitching effort, although I doubt that a full 360deg x 24h mosaic is even feasible (maybe on a down-scaled set). I'd recommend applying just a 1st degree ABE and CC to the linear datasets, leaving everything else to the assemblers/stitchers. How to move from linear to stretched data will be another big can of worms, which is best left for another day.
GoldfieldAstro 0.90
I would suggest not doing ABE too hastily. It's easy to do later on with an ImageContainer and a single process (set and forget), but once it's done it cannot be undone. The last thing you want in 12 months' time is to realise that it's creating unwanted gradients and have to get regions redone because the original files have been "lost" for whatever reason.

One thing I have learned from my efforts in large mosaics is that until you have all of your processing workflow down pat, don’t do any destructive processes.

CC isn’t destructive and neither is background neutralisation.
profbriannz 17.56
Topic starter
Astrogerdt:
I just updated the processing guide to the third draft to include the distortion correction using the AlignByCoordinates script. Unfortunately, this adds a little complexity, because doing it after integration would result in a loss of SNR, so it has to be done prior to integration. The procedure is described here: https://docs.google.com/document/d/1QZxLRpfVuxSxTDWNpjHYDpZ7p9Ua6vdz/edit?usp=sharing&ouid=102793495713995642568&rtpof=true&sd=true

Changes can be found in the "Preparation for WBPP" and "Preprocessing in WBPP" -> "Registration Reference Image" sections.

Is that added complexity OK for everyone?

CS Gerrit



@Astrogerdt  A couple of final suggestions/question on the script.

1) Should we remove the ABE/DBE stage and reduce SPCC to just CC (with background subtraction?)? I think the argument that we wouldn't want to do anything destructive to the data before later stitching is a good one.

2) Regarding ImageSolver and AlignByCoordinates before WBPP: I am worried that this might not only add complexity for imagers, but might lock us into a projection model right up front [no projection model is specified in the pipeline at the moment].

3) Linked to issue 2 above, we don't ask people to trim their final field to 9 x 6 deg centred on the field centre, which would ensure images of 3240 x 2160 pixels (10arcsec/pixel). Should we, or do we start running into problems at the poles?

4) Despite the small loss of SNR, it might be easier to ask people not to undistort and just take the astrometric solution out of WBPP (though I am not sure what it uses), and then do distortion correction when it comes to the whole-sky stitching. Thoughts?  Apologies if this has been raised before and I have simply missed it.

Brian
IrishAstro4484 5.96
Brian Boyle:
Dear AB friends,

Increasingly I am struck by the depth and beauty of some of the wide-field images posted here on Astrobin.  

I find myself increasingly using some of these images - including those I have generated myself - to act as a substitute sky atlas to find objects for follow-up study.   

This led me to thinking whether AB users could work together to produce an atlas of the sky that would be of general use to everyone.

Now AB is an incredibly valuable resource with many hundreds of thousands of images already, but I am not sure if it has the uniformity, homogeneity and completeness that would characterise a true survey.  I may be wrong in this assumption, and I would be happy to be corrected below. [The wonderful heat map of image centres on AB does suggest that the entire sky is well covered with images here, but the real question is how uniform the image data is.]

There are many other surveys out there, but none designed specifically for those of us looking for nightly targets.  I use Telescopius and the DSS, but the reproduction and uniformity still make them sub-optimal (at least for me).  And none, to my knowledge, has such a crowd-sourced origin, which I think is kind of cool.  Not to mention potentially good promotion for AB.  [I am also writing this in the shadow of the IP theft discussion, and I really wanted to spotlight the potential for good that the internet, particularly this community, has.]

Having been a professional astronomer associated with a number of surveys, I noted down what I would want from a survey design to maximise its usefulness, with the bracketed values as initial suggestions:

1) Regular field centres, with 10-20% overlap.  [10degree field centres, 15x15 degree areas, mosaicing likely to be required ]
2) Uniform pixel scale [10arcsec/pix] 
3) Uniform passband/colour [RGB or OSC]
4) Uniform depth and image quality. [SNR=60 at 22.5mag/arcsec^2, standard PI WBPP processing pipeline with SPCC and BXT/NXT]

Since the sky covers just over 40,000 square degrees, field centres spaced 10 degrees apart would result in a survey of just over 400 separate areas, with individual fields being 12 x 12 deg in size.

To some extent, this is tailored to the type of survey equipment that appears to be quite common on AB - the fast 135mm telephoto camera lens.  Such an area could be covered in two overlapping panes using a full-frame sensor, or four with an APS-C sized sensor, at a scale of 10arcsec/pixel.

Longer focal lengths up to 200mm would also be suitable (more mosaicing required, and some binning up), as would shorter ones down to 100mm (little or no mosaicing - but possibly some drizzling).

The passband would initially be broadband - achieved either through LRGB or OSC.

Uniformity will be crucial.  Although I would not propose to make a mega-mosaic out of all the fields, people would need to achieve a degree of uniformity over the panes contributing to their individually mosaiced frame.  Clearly there are many out there who can do this brilliantly well, but until recently I struggled.  However, I find that with the new PI WBPP processing script, including local normalisation and autocrop, plus SPCC and the RC Astro Blur/Noise XTerminator tools, I can get good uniformity between mosaiced panels.

Then there is the issue of depth, the one I am most unsure about.  In a Bortle 2 sky with an f/2 system, I can get to an SNR of around 90 at 10arcsec/px for 22.5 mag/arcsec^2 in 2.5 hours. This is deep - and possibly overkill.  I would be interested to hear from those who might want to take part what a realistic limit should be.  It will depend on typical camera speeds and night-sky brightness, and probably should not be much more than 3 hours per panel.


Given this wonderful confluence of hardware and software, I do think the time is right to attempt something like this as a new generation of community-sourced astro-photography atlas.  But it needs people to do it.

At over 400 fields [field centres would be distributed randomly to volunteers based on latitude, with the poles perhaps needing some special attention], this survey would need a few volunteers.  Even if people were to do more than one patch, I don't think it feasible with fewer than 100 volunteers.  And it would take a couple of years.

Are there many people out there with the right kit, the right skies and the inclination to spend a night or two imaging a random bit of the sky for the "greater good"?  I don't know.  And finally there is the question of workload.  It is a huge job to coordinate, but I am happy to stick my hand up.  Having said that, it will rely a lot on individual contributors to take the processing a significant way [to the end of the linear processing regime?] following a largely prescribed pipeline [PI being the most obvious, simply because of the number of users].

The poll included with this note might help assess whether this idea is unnecessary, stupid, crazy or possible.  Note that responses are just to assess feasibility; you are not signing up to anything yet!

Comments on the survey design parameters would also be welcome.   

Clear skies!

Brian

*** Fantastic initiative Brian. Happy to help if I can. Cheers bud. ***
Astrogerdt 0.90
Wow, a lot of posts while I was offline. I'll try to address all the points where I was tagged: 
1. I think it is reasonable to skip ABE because of the destructive nature of the process. If no one is clearly against that, I will add that to the processing guide. 
2. It is essential to do some color calibration on the integrations. ColorCalibration is heavily biased by the selection of correct ROIs for white and black point correction. This makes it somewhat ill-suited for the purpose of objectivity. And since the image will still have a gradient, background neutralization can't be done accurately. 
But it is essential to do an accurate color calibration on every integration separately because of the different equipment. 
Whatever we finally decide on, we should consider uploading a processed image according to the processing guide and also the raw integration in case we later on find out some problems caused by different integrations. And we could use those files if some people want to take deeper images of certain regions. 
3. It is in theory possible to correct for distortion in the stitching phase, but that brings us to my next point
4. In addition to SNR losses due to integration, StarAlignment can also introduce artifacts on small stars, of which we will have a lot. This can complicate the processing of very dense star fields and compromises quality. If this were my own private project, I would not want to sacrifice my hard-earned SNR to a processing step that is so destructive if I could avoid it. But this is up for discussion.

Regarding the projection method: isn't this determined by the way we choose to publish the project? For example, publication as a single all-sky JPG image requires a different projection than a Stellarium sky survey. Maybe this gives us a practical answer to the question.

Edit: I made a few more changes to the processing guide due to suggestions by @Michael Ring. The changes were made to the dithering and calibration frames part in the beginning. He also specified the new field coverage requirements and added a version requirement for WBPP. I think this offers even more clarity. 

CS Gerrit
MichaelRing 4.64
From my point of view we should try to make the process for contributors as easy as possible, so here are my 10 cents on the processing guide:

We should include best practices for single-frame exposure times. In NINA we get stats about the number of blown-out pixels per frame; likely something similar exists in other tools. If not, some rules of thumb for exposure would be helpful. I cannot offer much help in this respect; I usually do widefield in SHO. In the last year most of my clear skies lined up around a full moon, so SHO is the natural choice...

Then I'd start the processing guide by telling people to preselect the "Maximum Quality" preset; if changes to this preset are absolutely required then I'd document those changes. Hopefully no changes are needed...

Then we should emphasize even more the importance of properly aligning the camera in the field. In the past months of really bad weather I have from time to time downloaded raw data that was offered for free to practice processing, and it is unbelievable how bad even well-established youtubers can be at aligning their frames over several nights. As @Astrogerdt said, it would be painful to lose the hard-earned SNR, in this case by stacking poorly aligned data and having parts of the stack covered by only a smaller number of frames. For this reason we should also ask people to only work with the autocropped images; in that case WBPP will make sure that we do not lose too much SNR in the corners.

But now I will contradict myself on the SNR topic: for the sake of ease of use, I'd drop the preprocessing requirement to align during stacking. I think it complicates things too much and can be a source of nasty issues.
Instead we should ask people to keep their raw data on their hard drive for at least a year, so that we can ask them to re-stack if problems in their frame show up. It is advisable anyway that people keep their raw subs; most will do so, but it will not hurt to ask for it explicitly.

We should also agree to ask for downsized data only when absolutely necessary; the more data we have at the original size of the raw stack, the more options there will be to re-use it. There might be some insanely big files from latest-gen full-frame cameras, where we could consider downsizing, but only if really necessary.

We should also not ask for pre-aligned data - this was already mentioned earlier in this thread, I think. The less we do to the data, the more options we will have later and the easier things will get for contributors.

We should also try to be inclusive, as Brian already mentioned several times before, and should think about how people can contribute their data without owning Pixinsight.

We must however add a section to the post-processing where we ask people who own PI to do a mosaic and check that they actually cover the 9x6 area.
I have already documented most of what is necessary in this thread; I can add it to Gerrit's document later.

And yes, let us skip DBE/ABE and color calibration; as we will ask for all the tiles that make up the 9x6 region, we can do those steps later...


Michael
profbriannz 17.56
Topic starter
@Michael Ring @Astrogerdt

I tend to agree with @Michael Ring on the KISS (Keep it Simple, Stupid) principle.

I suggest we drop any mandatory post-processing requirements and any intermediary requirements for frame distortion correction.

No ABE/DBE or SPCC/PCC/CC, since we can't agree among ourselves which is best.

No intermediate AlignByCoordinates or ImageSolver, as I think this is just too complicated to encourage people to join in.  I know that I am beginning to get a bit lost here myself.

@Michael Ring's suggestion of focussing on a) getting the best quality data + calibration and b) asking users to keep it for 12 months is, I think, a good solution.

At the moment we are focussed on trying to find a solution for stitching an all-sky mosaic, and we are in danger of letting that drive an overly complex processing procedure that may or may not be needed.

I suggest we reduce the "barrier to entry" as far as possible by making the processing as simple as possible.  If we need to, we can ask for extra steps later - on the basis that users have kept their original subframes + calibration.  [Which is only good observing practice anyway].   If the survey proves to be successful, then individuals might be very willing to go back and reduce data.

We have also been silent on using Blink/SubframeSelector to remove obviously bad sub-frames from the stack.  It is very difficult to provide an exact prescription for this.  I presume most people remove such frames before running WBPP.  Perhaps we should say something here.

Finally, I also agree that we should let people do post-processing if they wish.  It provides imagers with a) a check of the field coverage of any mosaic and b) a check on the final quality of any image.

If I were to recommend post-processing steps, they would be:

a) 1st order ABE correction
b) SPCC (or PCC)
c) Auto STF [if people want to submit a final image for posting on the ABC survey AB account]


I will make these changes to @Astrogerdt's document.  They can be changed back if people disagree, but I think most are consistent with @Michael Ring's suggestions.
profbriannz 17.56
Topic starter
In editing @Astrogerdt's document, I thought of a couple more things. 

1) We want to mention DrizzleIntegration for those with systems > 10arcsec/px.
2) Changed Altitude > 40 to Altitude > 30 [to be consistent with zd < 60].
3) Specified the final "product" and permitted users to send files comprising the 9 x 6 field, rather than insisting they do their own - potentially destructive - mosaicing to begin with.  To keep the number of files within reason, I suggest that we permit a maximum of three files per field.

I haven't included text about keeping the data for 12 months.  I would propose to include that in the forum announcement, along with a draft copyright statement.  I am not a lawyer [though I had plenty of experience of legal IP matters during my career], but I am sure that some members of the AB community will be, so I suspect that if we put it out there in the announcement, we will get informed feedback.

I think it is also good to put a statement of IP intent out there at the outset.

Some draft words for our announcement.  

The ABC Survey: Announcement

The ABC survey team are delighted to announce the first lunation of the Astrobin Community (ABC) survey.  This is a community-based attempt to produce a high-quality, relatively deep all-sky colour survey of the night sky.

Following an extensive discussion of the survey parameters on the AB forum, the observing and processing pipeline may be found here [link] and the field centres may be found here [link]

The survey design has been founded on the principles of inclusivity, community and quality.   

The ABC Survey team would like as many people as possible to take part.  While the quality principle drives us to a prescribed observing and processing pipeline under relatively dark skies (Bortle 4 or better), the field size (9 x 6 deg) and resolution (10arcsec/pix) have been designed to be inclusive of imaging systems with focal lengths of 85-250mm and both full-frame and APS-C sized sensors.

For those who cannot take part in the imaging part of the survey, there will be plenty of opportunity for the community as a whole to engage in the subsequent QC and mosaicing of the data.

Survey Procedure 

1) Please book the fields you propose to observe on a lunation-by-lunation basis, using this google docs form [link].  
2) Follow the observing and linear processing pipeline in this document here [link]
3) Once you have the final image[s] for the field[s] you have booked, please send a message to my Astrobin account and I will send you a Dropbox link to upload your image[s] to.
4) I will then download these images into an Astrobin account set up for the purpose of collating the data, and from which the survey will be built.
5) Any booked field[s] whose image[s] have not been received by full moon will be unbooked again, so others may make an attempt.  If you have not been able to complete a field but already have some/much of the data taken, please message me and the booking can be kept open.  The ABC Survey team would also request that you keep all sub-frames and master calibration frames on disk for at least 12 months.  As the survey team progresses the task of bringing the all-sky survey together, we may need to ask for the original sub-frames to be re-processed.

We do not hide from the fact that we do not know (yet) the best way to bring all these images into a seamless all-sky survey.  By working on it together as a community, we give ourselves the best chance of finding a solution.

Nevertheless, we embark on this survey not knowing whether it will work or not.  This first dark lunation is very much a test of community take-up, our ability to manage the data, and the homogeneity in bringing it all together.

We understand that you are giving up some of your rare clear, dark nights to contribute to a greater endeavour whose success (or failure) is, as yet, unknown.  For that, we thank you.   


Draft Copyright

Our proposal is that all individual images remain the copyright of the person who acquired the data.  In providing the data to the ABC survey team, that individual permits the ABC survey team to use the data for the purpose of constructing an all-sky survey, from which various products [catalogues, images of sub-regions of the sky] can be derived.  Those products will be copyright of the ABC community, and freely available to all subscribed members of Astrobin.
MichaelRing 4.64
Very impressive announcement, Brian, looks like a lot of work and is very clear and pleasant to read.

I still have a few requests for change on the processing guide. I addressed the minor details directly to @Astrogerdt, but for me there are still two points worth discussing:
  • If pixel scale > 10 arcsec/pixel, then perform DrizzleIntegration of WBPP registered frames using drizzled files with WBPP LocalNormalisation.  
  • Up[down]sample all images to 10arcsec/pixel

I think we can drop the first bullet point about DrizzleIntegration because we ask for it anyway, and it makes a lot of sense to do 1x drizzle with OSC data to mitigate the effects of the Bayer matrix.

Then, from my point of view, we should not ask people to up/downsample data themselves, as this is a very destructive process, and downsampling before registering to a grid will give worse stars than registering and then downsampling. The only exception should be excessively large data from latest-gen full-frame sensors; otherwise I think we should keep the data as untouched as possible.

Michael
james.tickner 1.20
@Brian Boyle I echo Michael's sentiments - excellent job on the proposed announcement!

I think the idea of keeping things simple is a good one. Given - as you say - that we're still figuring out the data reduction process ourselves, it's probably premature to ask users to do too much. If we figure out a good process that is both prescriptive enough to be easy to follow and also compute-intensive, then there could be benefits down the track in asking users to help out (parallel processing the results, if you will).

A couple of other thoughts:

- Noting some of the comments above about how we present the final image (projection, stretching, noise treatment) etc I'd vote that we initially think of the survey as a data set (linear data, regularised grid, no processing) rather than an image. With linear RGB values stored versus RA/DEC we can then reduce the data in all sorts of interesting ways to suit different needs. Down the track, we could even look at providing a web interface or API to allow users to automatically extract excerpts (eg from NINA or planetarium software). But maybe that's getting ahead of ourselves!
- I did wonder whether instead of collating data on a pure storage system (eg Dropbox) we look at a cloud server (ie data storage + compute). Registration, stitching, colour correction etc could then be run directly on the remote server, avoiding the need to pull data back and forth over the internet. Talking to my IT guys at work however, we'd probably be looking at $US100-300 per month depending on the CPU, memory and storage we need. That's probably a bit steep. It might be something worth looking at in the closing stages however when we pull everything together.
- If we can collectively pull this thing off, I think it would be worth writing up as a paper for publication. I think the aspects of community engagement, the technical challenges of managing heterogeneous scopes and cameras and the data reduction would make for an interesting read. Again, premature at this stage perhaps, but I'd encourage everyone to keep records and results of experiments in processing with an eye to the future.
profbriannz 17.56
Topic starter
Hi @Michael Ring, I agree with both of your points on the pipeline.  No need to specify drizzle twice, and no need to ask people to downsample data to 10arcsec/pixel - but submitted images need to be at least that scale (and we might wish to specify that).

CS Brian
profbriannz 17.56
Topic starter
Sorry @James Tickner - our messages crossed.  It looks like we are converging now.

Some great ideas there too.  Let's hope the survey gains some momentum so we can try them out.

@Astrogerdt I will leave it to you to comment on/approve the revisions to the pipeline, and perhaps we go live in the next few days…

CS Brian
afjk 3.58
What kind of effort is required per area, i.e. how many images or how much acquisition time is needed and thus must be committed?
james.tickner 1.20
Arny:
What kind of effort is required per area, ie. how many images or acquisition time is needed and thus must be committed?

The aim of the survey is to have as many people participate as possible. The field size is 6 x 9 degrees, so if you have a 135 mm lens + APS-C camera (for example) then one shot is enough to cover one field. The exposure time would be about 2 hours in Bortle 2 skies, 3 hours in Bortle 3 or 4 hours in Bortle 4. If you're able to help with even one field that would be great - we have about 1100 fields to cover in total, so every bit helps!
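The 2/3/4-hour figures follow from background-limited scaling: for a fixed target SNR, the required exposure time grows roughly linearly with sky brightness. A minimal sketch - the relative sky-brightness values per Bortle class here are illustrative assumptions chosen to match the guideline above, not measured numbers:

```python
# Background-limited imaging: SNR ~ S*t / sqrt(B*t), so holding the target
# SNR fixed, the required exposure time t scales linearly with sky level B.
T_BORTLE2 = 2.0  # hours needed under Bortle 2, per the guideline above

# Assumed relative sky brightness vs Bortle 2 (illustrative values only,
# chosen to reproduce the 2/3/4-hour guideline; real ratios vary by site)
SKY_VS_BORTLE2 = {2: 1.0, 3: 1.5, 4: 2.0}

def hours_needed(bortle: int) -> float:
    """Exposure time needed to reach the reference depth at a given Bortle class."""
    return T_BORTLE2 * SKY_VS_BORTLE2[bortle]

print(hours_needed(4))  # 4.0 hours under Bortle 4
```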
 