Optimum values for DCACHE_SIZE and DMPI_BLOCK value

Posted: Tue Oct 12, 2010 12:20 pm
by shaldar
Dear VASP master,

I have some queries regarding the parallel compilation of VASP 5.2 on our Intel Xeon cluster.
I am using the following compiler and libraries:
OS: fc12.x86_64
Intel ifort 11.0.83
OpenMPI 1.4.3
MKL library 10.1.2.024 and the FFTW library from Intel.
The system is an Intel Xeon X5570 @ 2.93 GHz with a cache size of 8192 KB.
Nodes are connected via InfiniBand.

My question is: what are the optimal DCACHE_SIZE and DMPI_BLOCK values for this compilation? Also, for other system architectures, how does one decide the optimum values of these two variables?

Thanks in advance ....... :)

Optimum values for DCACHE_SIZE and DMPI_BLOCK value

Posted: Fri Oct 29, 2010 10:21 am
by admin
Please run benchmark tests to determine the optimum values
for your system and applications. The values set in the makefiles
are suggestions which usually work well.
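For orientation, both variables are plain preprocessor defines on the CPP line of the VASP makefile, so trying a new value means a full recompile. The sketch below is illustrative only: the flag list and the numbers 4000 and 8000 are typical defaults from the shipped Intel-compiler makefiles, not tuned recommendations for any particular machine.

```makefile
# Illustrative excerpt from a VASP 5.2 makefile (values are examples).
# CACHE_SIZE is used by the FFT routines; MPI_BLOCK sets the block size
# for MPI data exchange. Changing either requires "make clean; make".
CPP    = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" \
         -DCACHE_SIZE=4000 \
         -DMPI_BLOCK=8000
```

A practical benchmark is then to build a few binaries with different candidate values, run the same representative job with each, and compare the LOOP timings; the differences are usually modest, so the defaults are a reasonable starting point.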

Optimum values for DCACHE_SIZE and DMPI_BLOCK value

Posted: Tue Feb 08, 2011 5:15 am
by Gu Chenjie
[quote="shaldar"]Dear VASP master,

I have some queries regarding the parallel compilation of VASP 5.2 on our Intel Xeon cluster.
I am using the following compiler and libraries:
OS: fc12.x86_64
Intel ifort 11.0.83
OpenMPI 1.4.3
MKL library 10.1.2.024 and the FFTW library from Intel.
The system is an Intel Xeon X5570 @ 2.93 GHz with a cache size of 8192 KB.
Nodes are connected via InfiniBand.

My question is: what are the optimal DCACHE_SIZE and DMPI_BLOCK values for this compilation? Also, for other system architectures, how does one decide the optimum values of these two variables?

Thanks in advance ....... :) [/quote]

Hi, I am a new user of VASP, and it seems we have a similar hardware configuration. However, VASP does not run very well on my cluster: as soon as I run it on more than two nodes, it crashes. Could you share your makefile with me? Thanks a lot.

Optimum values for DCACHE_SIZE and DMPI_BLOCK value

Posted: Tue Feb 08, 2011 1:45 pm
by admin
This rather indicates an error in the installation/setup of your local MPI. The combination Intel compiler / OpenMPI is a very stable one if you use the standard makefiles for the Intel compiler.
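One quick way to check that the build is wired to the right MPI is to look at the compiler lines of the makefile; a rough sketch for an ifort + OpenMPI build (wrapper names are illustrative and assume OpenMPI was built against ifort) is:

```makefile
# Compiler wrappers for an ifort + OpenMPI build (illustrative).
# mpif90 must be the OpenMPI wrapper that resolves to ifort;
# verify with "mpif90 --version" before building.
FC  = mpif90
FCL = $(FC)
```

Independently of VASP, if a simple MPI test program fails when launched across more than one node, the problem lies in the MPI or InfiniBand setup rather than in the VASP build itself.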