Hello!
OK, I tried to estimate the memory consumption, and I believe the VASP run should fit on 220 cores if each core has 6 GB of memory available. This is the result of a test run on a single core:
Code: Select all
VASP std binary
Single core run
KPOINTS: 1x1x1
ENCUT = 625 ===> 95.8 GB total memory consumption
Now, the memory consumption should scale linearly with the number of k-points. In your OUTCAR file you can find the number of irreducible k-points (NKPTS), which is 13 for the planned 1x5x5 KPOINTS setting. Hence, that run should require an approximate total of 13 x 95.8 GB ≈ 1246 GB of memory. If that can be evenly split across 220 cores, you will need ~5.7 GB per core. Of course there will be some overhead when you run this in parallel, but that should be manageable.
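In short, putting the numbers from above together (assuming linear scaling with the 13 irreducible k-points):

Code: Select all
  95.8 GB x 13 irreducible k-points ≈ 1246 GB in total
 1246 GB / 220 cores                ≈  5.7 GB per core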
-----------
Some additional comments:
1.) Your SLURM output mentions something which sounds incorrect, as you have allocated 26 nodes (see SLURM_JOB_NODELIST) with 220 cores in total. Can you please find out what kind of nodes you are running this job on (number and type of CPUs, how much memory is installed)?
2.) You should make use of parallelization via the NCORE tag once you find a setup that works (see the sketch after this list). However, do not use parallelization via KPAR, as it will multiply the memory demand!
3.) Is there a specific reason why you turned ScaLAPACK off (LSCALAPACK = .FALSE.)?
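As a rough illustration only (not a recommendation for your specific machine), the relevant INCAR fragment could look like the sketch below; the NCORE value of 4 is just a placeholder and should be adjusted once the node type is known:

Code: Select all
NCORE      = 4          ! placeholder; a divisor of the number of cores per node is a common choice
LSCALAPACK = .TRUE.     ! re-enable scaLAPACK unless there is a reason not to
! KPAR is deliberately left at its default of 1: KPAR > 1 multiplies the memory demand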
All the best,
Andreas Singraber