
GW calculation memory failure

Posted: Thu Mar 06, 2025 11:59 am
by alfonso_gallobueno

Dear all,
I am sorry to post about a rather well-documented topic, but I do not seem to be able to run what should be a not-too-complicated calculation. Attached are my INCAR, POSCAR, KPOINTS, and POTCAR. I am trying to run an EVGW0R calculation (I also tried G0W0R and other, non-low-scaling options) on an 18 C-atom system (108 electrons) as a 2D slab with a 4x4x1 k-mesh, which is not huge. This is what it requires:

EVGW0R
CPU(s): 64
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
MemAvailable: 518 GB
The calculation is launched with OMP_NUM_THREADS = 4 on 64 cores (16 MPI ranks with 4 threads per rank)
I manually set MAXMEM = 31800 (MemAvailable*1024/16 - 200)
Fails with error: 'Available memory per mpi rank: 31800 MB, required memory: 158575 MB'

G0W0R
CPU(s): 96
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
MemAvailable: 381 GB
The calculation is launched with OMP_NUM_THREADS = 4 on 96 cores (24 MPI ranks with 4 threads per rank)
I manually set MAXMEM = 16440 (MemAvailable*1024/24 - 200; see the sketch below)
Fails with error: 'Available memory per mpi rank: 16440 MB, required memory: 100936 MB'
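
For reference, here is a rough sketch (plain Python, nothing VASP-specific; the assumption that the requirement is spread evenly over the ranks is mine) comparing the MAXMEM I set per rank with the per-rank requirement from the two error messages, and the total memory the job appears to ask for:

# Rough sketch: MAXMEM set per MPI rank vs. the per-rank requirement
# reported in the two error messages, plus the implied total memory
# (assuming the requirement is spread evenly over all ranks).
runs = {
    # label: (MAXMEM set per rank / MB, required per rank / MB, MPI ranks)
    "EVGW0R": (31800, 158575, 16),
    "G0W0R":  (16440, 100936, 24),
}

for label, (maxmem_mb, required_mb, ranks) in runs.items():
    shortfall = required_mb / maxmem_mb        # how far off each rank is
    total_tb = required_mb * ranks / 1024**2   # MB -> TB (binary units)
    print(f"{label}: short by a factor of {shortfall:.1f} per rank, "
          f"total requirement roughly {total_tb:.1f} TB over {ranks} ranks")

On either machine the per-rank budget comes out short by a factor of about 5 to 6.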

NTAUPAR is automatically set to 1. I could of course further reduce the number of k-points, but I do not think that should be necessary; I must be doing something wrong... Or is it really that memory-demanding?

Any help is highly appreciated.

Thanks
Alfonso Gallo


Re: GW calculation memory failure

Posted: Fri Mar 07, 2025 2:20 pm
by ferenc_karsai

I tried the calculation myself and discussed it with colleagues. The low-scaling GW is extremely memory-demanding, and your example needs around 2.5 TB of memory with the current settings.

What you can try:
1) Go to more nodes until you have enough memory (a rough node-count estimate is sketched after this list).
2) Use the old GW algorithms, which need less memory but are significantly slower (https://www.vasp.at/wiki/index.php/Prac ... lculations). These are the algorithms without an "R" at the end.
3) Tune down the parameters of your calculation (fewer k-points would be the first thing to try).
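
As a rough illustration of option 1, dividing the ~2.5 TB estimate by the memory of the nodes you listed gives a minimum node count. This is only a back-of-the-envelope sketch, not an exact VASP requirement; the real footprint also depends on NTAUPAR and on how the data are distributed over the ranks:

import math

# Back-of-the-envelope estimate: how many nodes of each size are needed
# to cover the ~2.5 TB total memory requirement of this low-scaling GW run.
TOTAL_REQUIRED_GB = 2.5 * 1024           # ~2.5 TB in GB

for node_mem_gb in (518, 381):           # MemAvailable of the two node types you listed
    nodes = math.ceil(TOTAL_REQUIRED_GB / node_mem_gb)
    print(f"{node_mem_gb} GB nodes: at least {nodes} nodes")

That comes out to roughly 5 of the larger nodes or 7 of the smaller ones.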