Has anyone been an early adopter and upgraded to the newest Nvidia cards? If you have, are the Xterminators running much faster, or is it simply an incremental change? I have a 3080 that I pulled from a gaming rig and the Xterminators run pretty fast, about a minute for StarXTerminator, but if I run a bunch of images in a container the time does add up. Wondering if anyone has any experience…
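As a rough back-of-the-envelope sketch of how that adds up over a batch (the per-image times and batch size below are assumptions for illustration, not benchmarks):

```python
# Back-of-the-envelope: how per-image Xterminator time scales across a batch.
# All numbers are assumptions for illustration, not measured benchmarks.

per_image_seconds = {
    "StarXTerminator": 60,   # ~1 min per image, as described above
    "BlurXTerminator": 25,   # assumed
    "NoiseXTerminator": 10,  # assumed
}

num_images = 30  # assumed size of the container/batch

total_s = sum(per_image_seconds.values()) * num_images
print(f"Total XT time for {num_images} images: {total_s / 60:.1f} minutes")

# What a hypothetical 30% faster card would save on that batch:
speedup = 0.30
print(f"Saved by a ~30% faster card: {total_s * speedup / 60:.1f} minutes")
```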
---
Based on the specs I don't expect a huge jump. I don't know exactly how the Xterminators use GPU resources, though (whether memory, memory bandwidth, CUDA cores, or something else entirely matters most), so my intuition could certainly be wrong.
---
I know it's not a direct answer, but maybe the anecdote helps: I upgraded from a 3060 to a 4070 Ti Super and noticed about a 20 percent uptick in speed.
---
20% is 20%!
---
I feel like the newer cards add a lot of bells and whistles that AP apps don't care about, but the sheer horsepower isn't that much greater. IMHO.
---
I have a 4080. StarXTerminator runs in about 15 s on dense starfields in 26 MP images, NXT is so fast that I can run a live preview while adjusting settings, and BXT takes maybe 8-10 s on average. Yes, it all adds up, but it gets to a point where wanting it faster is just unnecessary. I'd be more focused on the rest of your rig, which is where 99% of PI's performance comes from. Upgrading to a 50-series GPU is HELLA expensive if it's just for PI. I would never have gone to the 4080 if my work didn't require CUDA acceleration too.
---
I just went from a medium-high-end, four-year-old gaming laptop with a mobile 3070 to a brand-new, high-spec desktop with a 5080. I also moved to a 61 MP full-frame sensor, and things just felt so slow on the laptop. Generally the XT tools run about 2-3x faster on a full-frame image than they did on the laptop, and WBPP is also over 3x faster. I imagine going from a 3080 desktop card to a 5080 wouldn't be as big of a jump.
---
Time can be cut by 20% to 30%.
---
Does anyone know of any external GPU enclosures that can house and power the 5090? I would consider picking one up and testing it, since I would also make use of the 5090 for work, but I am restricted to connecting over USB-C; for the work I do, the data-bandwidth constraint has negligible impact compared to the computational benefit.
---
Billy Buchanan: You could build one, but none of the eGPU enclosures I've seen are REMOTELY large enough for a 5090, and they certainly don't provide enough power for one. Being restricted to USB-C (if it's actually USB-C 3.1) means that, regardless of the bandwidth you think you don't need, all the data to be computed and all the results after computation have to be sent back and forth over a USB 3.1 bus... there is still a significant hit. Honestly, rather than trying to find a way to eGPU a 5090, I'd look for a laptop with a discrete 5080 if I were you...
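For a rough sense of the transfer overhead in question (the image size and effective bus throughputs below are assumptions for illustration; an actual eGPU link would typically be Thunderbolt/USB4 rather than plain USB 3.1):

```python
# Rough estimate of per-image host<->GPU transfer time over different links.
# Image size and effective throughputs are assumptions for illustration only.

image_mp = 61                      # e.g. a 61 MP full-frame mono image
bytes_per_pixel = 4                # 32-bit float samples
image_gb = image_mp * 1e6 * bytes_per_pixel / 1e9   # ~0.24 GB

effective_gb_per_s = {
    "USB 3.1 Gen 2 (10 Gb/s nominal)": 0.8,
    "Thunderbolt 4 / USB4 (40 Gb/s nominal)": 3.0,
    "PCIe 4.0 x16 (internal card)": 25.0,
}

for link, bw in effective_gb_per_s.items():
    round_trip_s = 2 * image_gb / bw   # upload plus download, ignoring tiling overhead
    print(f"{link}: ~{round_trip_s:.2f} s per image")
```

Whether that overhead matters then depends on how many seconds of GPU compute each image needs on top of it.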
---
Alex Nicholas: Billy Buchanan: Given the batch sizes used for work, the bandwidth isn't really a limitation at all; GPU memory tends to be the bigger constraint on the compute. For image processing, things could easily be far more challenging, though.
---
I run an AMD 7950X with 128 GB of memory and a super-fast M.2 disk. I don't think I can get any faster within reason. Most tasks are very fast, with the exception of linear defect correction within WBPP. I typically set the computer to run WBPP and do other things, but when I sit down to process I would like to maximize throughput with the Xterminators…
---
My take is that it's faster in percentage terms, but the actual time savings in minutes are not significant. Because of that, unless you also game on that machine, it might not be worth it. And depending on your power supply, you might need to upgrade that as well…. I went from a 3060 to a 4080 and noticed only a small improvement in actual seconds over already fairly fast times for BlurX and such, this on an AMD 9950X with 96 GB of RAM. For an IMX571-based image, BXT took maybe 25 seconds on the 3060, and that went to about 15 seconds on the 4080. If it were not for the gaming, it would not have made much sense to upgrade, especially since I did need to move to a larger PSU.
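Putting numbers on that (the 25 s and 15 s BXT times are the ones quoted above; the number of runs per processing session is an assumption for illustration):

```python
# Percent speedup vs. absolute time saved, using the BXT times quoted above.
# The number of runs per session is an assumption for illustration.

old_s, new_s = 25, 15          # BXT on an IMX571 image: 3060 vs 4080
runs_per_session = 10          # assumed BXT/SXT/NXT runs in one processing session

pct_faster = (old_s - new_s) / old_s * 100
saved_min = (old_s - new_s) * runs_per_session / 60
print(f"{pct_faster:.0f}% faster, but only ~{saved_min:.1f} minutes saved per session")
```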
---
Not worth the money unless you like wasting money on a "small" time saver. I went from a GTX 1080 to an RTX 4080 and was kinda meh about it. I mean, it's faster, but when the GTX 1080 only took a minute anyway, percentage increases don't matter a whole lot.
---
You'd definitely have to run it a lot for it to add up to enough to matter. For a narrowband image, for example, I run BlurX three times (before channel combination, so the appropriate FWHM value is applied to each channel); NoiseX twice on starless data, once linear and once stretched; and StarX four times, once on each narrowband master and once to extract the stars from my RGB-combined star image. Out of the many, many hours I spend processing an image, that amount of time is a tiny, tiny percentage.
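To put that "tiny percentage" in rough numbers (the run counts are the ones listed above; the per-run times and total processing time are assumptions for illustration):

```python
# Total Xterminator time per narrowband image, using the run counts above.
# Per-run times and total processing time are assumptions, not benchmarks.

runs = {"BlurX": 3, "NoiseX": 2, "StarX": 4}
seconds_per_run = {"BlurX": 25, "NoiseX": 10, "StarX": 30}   # assumed
hours_processing = 6                                          # assumed total

total_s = sum(runs[tool] * seconds_per_run[tool] for tool in runs)
share = total_s / (hours_processing * 3600) * 100
print(f"Total XT time: {total_s / 60:.1f} min, "
      f"about {share:.1f}% of {hours_processing} h of processing")
```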
---
Hi everyone, I have an AMD Ryzen 9 5950X (16 cores), 128 GB RAM, and an Nvidia 3080 Ti. On a 373 MB full-frame image, BlurXTerminator takes around 30 s and StarXTerminator around 25 s. CS, Brian
---
I think once PI implements its upcoming CUDA optimizations throughout the whole suite, there will be a bigger bang for that large buck. I only upgrade every five years or so, so I choose the higher-end stuff so it lasts longer. More than doubling the speed of the RC tools with a better Nvidia card was nice, but well over 3x the speed for WBPP was my main goal, which I know has nothing to do with the card yet. But if I can get another nice boost in WBPP "for free" with CUDA sometime soon, that'll be icing on the cake. Plus my games all run great now. Rick
---
Rick Krejci: Oh... I am waiting for that day!
---
Brian Diaz: Similar performance to mine. With BXT and SXT both being CUDA-accelerated, my 4080 turns in about the same BXT/SXT times. I've got a Ryzen 9 9950X and 128 GB RAM, and things like WBPP changed immensely when I upgraded to the Ryzen 9 from my old dual-Xeon rig, but BXT/NXT/SXT barely changed jumping from my 1080 to a 4080, and didn't change at all when I upgraded the rest of my rig around that 4080.
---
Hi Harry, I've upgraded my RTX 4090 to a 5090. Performance of the three XT tools is lower, whether on Linux or Win11. CS, Fernando
---
Alex Nicholas: Brian Diaz: With my new rig with the 5080 and a full-frame IMX455 image, BlurX takes about 18 s and StarX about 15 s. Intel i9-14900KF, 96 GB (2x48 GB) DDR5-6000, NVIDIA GeForce RTX 5080, 2x4 TB Samsung 990 Pro (one with the image data, the other with the PI swap files).
---
Rick Krejci: I'd love to get my hands on some IMX455 data to test mine on. On IMX294 data I get 1.5 s NXT, 4 s BXT, and 3 s SXT, and about 4x that on IMX571 data...
---
For me the bottleneck seems to be memory. I use full-frame datasets from a 6200MM; initialization takes forever, and after that the progress doesn't look bad.
---
YingtianZZZ: I compared a desktop 4080 and a laptop 3080.