NEB calculation errors
Posted: Sun Jun 18, 2006 9:08 pm
Sorry, I tried to ask this question on Henkelman's forum several days ago, but after I registered I never received the activation email, so I have not been able to post there yet.
Usually I use 2 computers (8 nodes) to run NEB. What I found is that only the NEB calculations with IMAGES = 1, 2, 4, or 8 worked; for IMAGES = 3, 5, or 7, VASP crashed. The errors are attached at the bottom.
Is this because I can only run NEB when nodes/IMAGES is an integer?
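To make that guess concrete, here is the quick standalone check I used to convince myself (just my own sketch in Python, nothing that ships with VASP): with 8 cores, only IMAGES values that divide 8 leave a whole number of cores per image, and the cases that crashed for me (3, 5, 7) all fall in the non-dividing group.

# check_images.py -- my own illustration of the suspected constraint, not part of VASP
total_cores = 8  # 2 computers with 4 cores each in my setup

for images in range(1, 9):
    if total_cores % images == 0:
        print("IMAGES=%d: OK, %d core(s) per image" % (images, total_cores // images))
    else:
        print("IMAGES=%d: cores do not split evenly across images" % images)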
Is there some parameter I can set in the INCAR so that I can run any number of IMAGES on 8 nodes? The INCAR I am using now is also attached. BTW, I have also tested setting NPAR to its default value (= number of nodes) and LSCALU = .TRUE.
Thanks for your attention.
NWRITE = 1
LWAVE = .FALSE. ! write WAVECAR?
LCHARG = .FALSE. ! write CHGCAR?
LVTOT = .FALSE. ! write LOCPOT?
Electronic relaxation
# IALGO = 48 ! 8: CG, 48: DIIS algorithm for electrons
ALGO = Fast
ISMEAR = 0 ! 0: Gaussian, electron smearing
SIGMA = 0.100
PREC = normal
LREAL = auto
ROPT = 2e-2 2e-2 2e-2
ISTART = 0
NELM = 100
NELMDL = -8
EDIFF = 1e-2
ISPIN = 1 ! 1: non-spin-polarized, 2: spin-polarized
# NUPDOWN= 1 ! excess electrons of majority spin
# MAGMOM = 0 0 0 0 5 5 5 5
Elastic Band
LCLIMB = .TRUE. ! climbing-image NEB
IMAGES = 2
SPRING = -5
Ionic relaxation
NSW = 2 ! # of steps in optimization (default 0!)
ISIF = 2 ! 0: relax ions, 1,2:relax ions,calc stresses, 3:relax ion+cell
IBRION = 1 ! 1: quasi-NR, 2:CG algorithm for ions
NFREE = 10 ! number of DIIS vectors to save
POTIM = 0.5 ! reduce trial step in optimization
EDIFFG = -0.5
DOS
# RWIGS = 1.585 1.072 ! Wigner-Seitz radii
# LORBIT = 11 ! turn on dos/band decomposition
Parallel
NPAR = 1
LPLANE = .T.
NSIM = 10
LSCALU = .FALSE.
MPI_Recv: process in local group is dead (rank 1, comm
MPI_Recv: process in local group is dead (rank 1, comm
MPI_Recv: process in local group is dead (rank 1, comm
MPI_Recv: process in local group is dead (rank 1, comm
Rank (7, MPI_COMM_WORLD): Call stack within LAM:
Rank (7, MPI_COMM_WORLD): - MPI_Recv()
Rank (7, MPI_COMM_WORLD): - MPI_Allreduce()
Rank (7, MPI_COMM_WORLD): - main()
Rank (5, MPI_COMM_WORLD): Call stack within LAM:
Rank (5, MPI_COMM_WORLD): - MPI_Recv()
Rank (5, MPI_COMM_WORLD): - MPI_Allreduce()
Rank (5, MPI_COMM_WORLD): - main()
Rank (4, MPI_COMM_WORLD): Call stack within LAM:
Rank (4, MPI_COMM_WORLD): - MPI_Recv()
Rank (4, MPI_COMM_WORLD): - MPI_Allreduce()
Rank (4, MPI_COMM_WORLD): - main()
Rank (6, MPI_COMM_WORLD): Call stack within LAM:
Rank (6, MPI_COMM_WORLD): - MPI_Recv()
Rank (6, MPI_COMM_WORLD): - MPI_Allreduce()
Rank (6, MPI_COMM_WORLD): - main()