NEB calculation always stops at the first step

NEB calculation always stops at the first step

#1 Post by Tiger-paw » Fri Oct 30, 2009 4:46 am

I am trying to run NEB calculations on our own cluster, but the jobs always stop at the first ionic step. Some information is attached below. The problem seems to be related to the cluster system rather than to the INCAR or the image files. Can anyone give some tips on how to find the source of the problem? Thanks for any help.

$ tail -4 01/OUTCAR
----------------------------------------- Iteration 1( 1) ---------------------------------------
POTLOK: VPU time 7.04: CPU time 7.07
SETDIJ: VPU time 0.61: CPU time 0.61

$ more run_q.e42520
cat: POS*: No such file or directory

$ more run_q.o42520
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
tail008 tail007
running on 4 nodes
each image running on 1 nodes
distr: one band on 1 nodes, 1 groups
vasp.4.6.26 15Dec04 complex
01/POSCAR found : 3 types and 73 ions
LDA part: xc-table for Ceperly-Alder, Vosko type interpolation para-ferro
00/POSCAR found : 3 types and 73 ions
05/POSCAR found : 3 types and 73 ions
POSCAR, INCAR and KPOINTS ok, starting setup
FFT: planning ... 16
reading WAVECAR
entering main loop
N E dE d eps ncg rms rms(c)
--------------------------------------
Running PBS epilogue script

Killing processes of user jsmith on the batch nodes
Doing node tail008
Stopping gm... done.
Starting gm... units is
SHELL=/bin/bash 0
active mapper... active mapper... done.
Doing node tail007
Stopping gm... done.
Starting gm... units is
SHELL=/bin/bash 0
active mapper... active mapper... done.
Restarting GM
Stopping gm... done.
Starting gm... units is
PATH=/bin:/usr/bin 0
active mapper... active mapper... done.
Done
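
In case it helps with diagnosis, this is roughly how I check how far each image got (a rough sketch; it assumes the four running images sit in directories 01 through 04, with 00 and 05 as the fixed endpoints):

$ for d in 01 02 03 04 ; do echo "== $d =="; grep Iteration $d/OUTCAR | tail -1 ; done

Each OUTCAR stops right after the first iteration, like the one for image 01 shown above.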

NEB calculation always stops at the first step

#2 Post by Tiger-paw » Mon Nov 02, 2009 1:46 am

The problem turned out to be the limited RAM on each node of the cluster. Thanks.
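
For anyone who runs into the same thing: a quick way to compare VASP's own memory estimate against the physical RAM of a compute node is something like this (a rough sketch; the exact wording of the memory line in OUTCAR may differ between VASP versions):

$ grep -i memory 01/OUTCAR
$ free -m

If the per-image estimate, times the number of images placed on one node, exceeds the RAM that free reports, the job can die right at the start, which is what happened here.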
