Impact of RAM Capacity on PixInsight Performance: 64GB vs 128GB · Moravian Instruments C3-61000 PRO · Rafael Sampaio

TareqPhoto 2.94
Tony Gondola:
I'm running an 8-core Core i7 at 2.3 GHz with 32 gigs of RAM on Windows 11. PI took an hour to stack 1306 3840x2160 subs using WBPP; Siril took about 15 min. and was a bit tighter. I know these are small 585-sensor frames and an APS-C or full-frame sensor will take longer, but these are some real-world numbers to work with.

Ok, cool, good to know your real-world numbers and your computer specs; it will help me with future plans for building or picking computers. Most likely I will never go with less than 32GB of RAM, and I am building a machine now with an 8-core CPU as a starter; later I will see how good it is and go from there.

Thank you very much
prjcole 1.91
Hi.
I run an E5 Xeon: 14 cores / 28 threads at 2.6 GHz with 64GB of memory, on Windows 10.
Under load I use 55GB of my memory.
I upgraded from 32GB and found a slight improvement using WBPP, but it wasn't huge.
drmdvl 0.00
Interesting thread! Not sure many people know that you can run a benchmark script in PI and see the rankings at https://pixinsight.com/benchmark/
PI recommends Linux, stating that the Windows and macOS versions are ports of the Linux trunk.
They also recommend a minimum of 64-128GB on 64-bit systems.

I personally run a 13th-gen Core i9 with 128GB of DDR5. I did notice a difference when I upgraded from 64GB to 128GB. I often see 75%+ memory usage during WBPP, especially during integration / 1x drizzle.
That said, I think the biggest bang-for-the-buck upgrade for WBPP is multiple fast NVMe SSDs for swap. For post-processing, GPU acceleration is the biggest bang for the buck.
I just ran 400 x 50MB subs (huge ASI2600 files) through WBPP in 19 minutes.
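If you want to sanity-check whether a given drive is fast enough to be a swap location, timing a large sequential write into the candidate directory is a rough but useful proxy. A minimal Python sketch, nothing PI-specific; the directories and the 2GB test size are just placeholders:

```python
import os
import time

def sequential_write_speed(directory, size_mb=2048, chunk_mb=64):
    """Time a large sequential write into `directory` and return MB/s."""
    path = os.path.join(directory, "swap_speed_test.tmp")
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    written = 0
    start = time.perf_counter()
    with open(path, "wb") as f:
        while written < size_mb * 1024 * 1024:
            f.write(chunk)
            written += len(chunk)
        f.flush()
        os.fsync(f.fileno())  # force the data to disk so the timing is honest
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed

if __name__ == "__main__":
    # Placeholder paths: point these at the directories you would give PI as swap folders.
    for d in ("/mnt/nvme0/pi_swap", "/mnt/nvme1/pi_swap"):
        print(d, f"{sequential_write_speed(d):.0f} MB/s")
```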
Semper_Iuvenis 3.10
I run 128GB. I've not had a memory issue in PI since going from 64GB to 128GB.
TareqPhoto 2.94
I still don't know where people get that much RAM to install. I ordered a new cheap motherboard that can accommodate up to 256GB of DDR5 and tried to find the largest RAM sticks to use, but all I can find is 48GB modules, so four of them give me a maximum of 192GB. So how do I get up to 256GB? And how is it that I see some benchmark results with setups of something like 512GB of RAM, when the motherboards they are using are not server boards and their CPUs don't have huge core counts? So if I can jump from 32/64GB to 128GB or 192GB, isn't that really enough?

I have many cameras, up to APS-C, no full frame yet, but I can't tell how many frames I might use. So assume I will use 3000 APS-C frames: how long will it take me then with 64GB or 128GB or 192GB and an 8-core CPU (I might upgrade to 12 or 16 cores in the future)?
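Just to put a rough number on the data involved, here is a back-of-the-envelope sketch. The sensor size is an assumption (a ~26MP APS-C chip), and calibrated frames are taken as 32-bit float, which is WBPP's usual working format:

```python
# Back-of-the-envelope data volume for a 3000-frame APS-C WBPP run.
# The ~26MP sensor is an assumption (IMX571-class); calibrated frames are
# assumed to be stored as 32-bit float.
MEGAPIXELS = 26
BYTES_PER_PIXEL = 4
N_FRAMES = 3000

frame_mb = MEGAPIXELS * 1e6 * BYTES_PER_PIXEL / 1024**2
total_gb = frame_mb * N_FRAMES / 1024

print(f"~{frame_mb:.0f} MB per calibrated frame")          # ~99 MB
print(f"~{total_gb:.0f} GB of calibrated frames in total")  # ~291 GB
```

With a stack that size, the run time is going to be governed mostly by how fast the CPU and disks can chew through roughly 300GB of intermediate files rather than by whether there is 64GB or 192GB of RAM, as long as it never actually runs out, which is what others above have been suggesting.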
Gondola 8.11
PI is written natively for Linux, so it looks like if you want top performance you'll have to install Linux as a dual boot or on a standalone machine just to run it. Pile on as much RAM as the machine will hold. PI was never really designed for Windows.
just.kice 1.51
Tony Gondola:
PI is written natively for Linux, so it looks like if you want top performance you'll have to install Linux as a dual boot or on a standalone machine just to run it. Pile on as much RAM as the machine will hold. PI was never really designed for Windows.

I've also found that for top performance, Linux is the best way to run PI. I now dual boot with Windows 11. I haven't found out how to get GPU acceleration working for the XT programs on Linux, so I just do pre-processing on Fedora 41 and the rest on Windows. I haven't done any measurements, but there is absolutely a difference running WBPP on Linux that just makes for a better experience (although LocalNorm is still slow) with my Xeon E5-2696 v3 workstation build.
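One narrow sanity check that at least rules out driver problems: see whether a CUDA-capable GPU is visible to TensorFlow at all on the Linux side. This uses a system Python TensorFlow install, not whatever library PI and the XT plugins actually bundle, so a pass here only tells you the NVIDIA driver and CUDA libraries are healthy, nothing more:

```python
# Sanity check of the CUDA/driver stack on Linux. This checks a system Python
# TensorFlow install, not the library PixInsight ships, so success here does
# not guarantee the XT tools will use the GPU; it only shows the GPU and CUDA
# libraries are visible at all.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print("CUDA-visible GPUs:", [g.name for g in gpus])
else:
    print("No GPU visible to TensorFlow; check the NVIDIA driver and CUDA/cuDNN install.")
```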
TareqPhoto 2.94
Ok, I keep thinking about installing Linux as a second or dual-boot OS, but which version of Linux should I use? And which processing steps need to be done on Linux?

I will see how I can get good enough performance from my new builds, but I can't push it too far and spend much more; I have already spent a lot of money on other equipment and other hobbies too. The most important thing is that I still haven't used my new equipment in years. I made a plan to gather a lot of data sooner or later, so I should be prepared, but I don't know how far I should or can go, and I am late getting back to imaging; otherwise my computer upgrades and extra gear might never see light later, if my life isn't extended longer.
Alexn 12.25
I have run PI on lots of things, from laptops with 16GB of RAM through to my current dual-Xeon rig with 256GB of RAM. Once you get to 32GB, you don't REALLY notice the difference from adding RAM.

The only time on my 256GB rig that I see memory usage go over 32GB is when I'm running fast integration with more than 200 subs from the IMX571… and honestly, I think this is exacerbated by the fact that I've got 64 threads, so it will bring 64 worker threads up to process one light frame each during calibration and registration operations. This results in a very large amount of data in memory at once. Other than that, PI sits at around 9~14GB of memory usage 99% of the time.
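As a rough sanity check on that thread-count effect, the arithmetic looks something like the sketch below; the 64 threads and ~26MP frame size just mirror my rig, and the factor for intermediate copies is only a guess:

```python
# Back-of-envelope peak memory when every worker thread holds a light frame.
# 64 threads and a ~26MP sensor mirror the rig described above; the x3 factor
# for calibration masters and intermediate buffers is only a guess.
THREADS = 64
MEGAPIXELS = 26
BYTES_PER_PIXEL = 4          # 32-bit float working format
COPIES_PER_THREAD = 3        # raw frame + calibrated result + scratch (assumed)

peak_gib = THREADS * MEGAPIXELS * 1e6 * BYTES_PER_PIXEL * COPIES_PER_THREAD / 1024**3
print(f"~{peak_gib:.0f} GiB in flight across all threads")  # ~19 GiB with these numbers
```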

Depending on your computer's current specifications, I honestly think the BIGGEST increase in performance you can attain comes from the best possible processor (high core/thread count is important, but so is the IPC performance of the CPU).

My Xeons are relatively ancient in computer terms… they are Xeon E5-2697A v4, Broadwell-E CPUs from 2016, so hardly new, but each of them is 16 cores / 32 threads at 3.3GHz. A modern i7 at 5.5GHz is much faster per individual core, but it is 8c/16t while my CPU pair totals 32c/64t; even if the modern chip has double the performance per clock cycle of my old Xeons, I still have 4x the core and thread count, so my overall performance beats most modern CPUs.

Obviously, CPU upgrades are platform dependent, and if you're on a laptop - you're kind of locked in… 

RAM will help if you're running 16GB, but past 32, the difference is minimal.

A CUDA-accelerated GPU will help ONLY in AI-based processing tasks like BlurXTerminator, NoiseXTerminator, StarXTerminator, StarNet and GraXpert.

CPU performance scales really, really well, and the improvements are felt across the board.
jhayes_tucson 26.84
Here’s my advice: memory is cheap, so go with the most you can afford. Nothing bad ever comes from having too much memory. Every 2-3 years when I upgrade my Mac, I always spring for the most memory I can buy. I even dump a lot of memory into the PCs that run my scopes, and I've never regretted it. They are lightning fast, they boot and shut down in about 30 seconds, and they are getting a little long in the tooth.

I currently “only” have 64GB in the MacBook Pro M2 that I use for processing, and I've actually run it out of memory; however, I've never had that happen when using PI all on its own. I normally have a lot of stuff running concurrently. I might have two browsers going, each with 50 tabs open, a long PowerPoint presentation with a lot of graphics, PI stacking 200 x 220MB images, and PS running with a huge stack of processed images. Yes, I really have to jam it up to run out of memory, but it has happened, and it's a pain when it does.

John
rafaelss123 1.20
Topic starter
John Hayes:
Memory is cheap, so go with the most you can afford… Every 2-3 years when I upgrade my Mac, I always spring for the most memory I can buy.

Hi John! Thank you for the advice, a very good one, as usual. I chose the MacBook Pro M4 with 128GB.

Best, 

Rafael
jhayes_tucson 26.84
Rafael Sampaio:
I chose the MacBook Pro M4 with 128GB.

Hi Rafael,
Gosh, now I feel like I'm running an antique.  I guess that I'm going to have to spring for a new MacBook Pro one of these days!  )))

John
darkmattersastro 11.95
John Hayes:
Gosh, now I feel like I'm running an antique. I guess that I'm going to have to spring for a new MacBook Pro one of these days! )))

Hey now, I am still limping along with my M1 Max MacBook Pro. Really hoping we see an Apple Silicon optimized PI build one of these days.
Tapfret 4.95
A memory story.

My main processing PC is an 11th-gen i9 desktop running Pop!_OS (an Ubuntu-derived Linux). The original build had 64GB of DDR4-4400 that I could run stably at the full 4400MHz. Unfortunately I would have issues with WBPP on larger sub sets (500+), at the very end of course, after waiting through all the steps up to integration. Running the system monitor, I could see memory usage cap out as line after line was built in the final image. The first time it happened I left it running overnight, and afterwards the system would not boot. After booting from external rescue media I noticed the OS SSD's remaining capacity was measured in hundreds of KB rather than GB. A search of the files found that PI had filled a swap file with hundreds of GB, right up to the remaining capacity.

After cleaning that up I played around with the swap files and still had the problem. The system monitor would show only 10-20% swap usage during the integration phase of WBPP while RAM usage would peg out. Once it hit max, the system would become unresponsive, even though swap was still being written to in the background. It seemed that PI was somehow prioritizing itself over RAM critical to GUI function rather than switching to swap. The weird thing was, if I unchecked integration and ran ImageIntegration outside of the script, I could modify the buffer size, and RAM usage would cap at 95%, which apparently left enough for essential OS use (primarily the GUI and input, since the OS was evidently still writing to swap in the background) to prevent crashing. Sadly, the integration step within the WBPP script does not allow buffer-size modification (I'm sure some code could be written within the script to accomplish that, but I am nowhere near that savvy). But I wanted to be able to fire up WBPP, walk away, and come back to completed master images.

The fix: I stepped up to 128GB of DDR4-4000. WBPP integration still wants to use a lot of RAM, but it always leaves enough for the GUI to remain functional. Plus, swap usage starts much earlier and uses a larger share, for some reason. Unfortunately the memory would only clock to 3600MHz. But even though I was losing 18-20% of my original rated speed (what you actually get from the bus clock is not going to hit those speeds anyway), I was benchmarking higher and getting the full WBPP script to complete. It's still a mystery to me why the swap files were not being used properly; they are literally there to prevent this kind of thing.

TL;DR: Yes, I noticed a difference upgrading from 64GB to 128GB, but probably for a different reason than you are asking about. Results may vary depending on hardware combinations and, of course, OS choice; it is entirely possible those factors had more to do with my issue than the memory itself. I would be curious to know whether the same issue manifests on later, DDR5-based generations or on AMD systems, as well as on the Ubuntu base OS. This is really the only issue that has come up otherwise.
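If anyone wants to catch this in the act before the desktop locks up, a tiny logger run in a terminal alongside WBPP shows exactly when RAM pegs out while swap sits mostly idle. A minimal sketch using the psutil package; the 5-second interval is arbitrary:

```python
# Minimal RAM/swap logger to run alongside a WBPP session (requires psutil:
# pip install psutil). Prints a line every few seconds so you can see whether
# memory pegs out while swap stays mostly unused, as described above.
import time
import psutil

INTERVAL_S = 5  # arbitrary sampling interval

while True:
    vm = psutil.virtual_memory()
    sw = psutil.swap_memory()
    print(f"{time.strftime('%H:%M:%S')}  "
          f"RAM {vm.percent:5.1f}% used ({vm.used / 1024**3:6.1f} GiB)  "
          f"swap {sw.percent:5.1f}% used ({sw.used / 1024**3:6.1f} GiB)",
          flush=True)
    time.sleep(INTERVAL_S)
```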