Sub-Space-Matrix is not hermitian in surface slab calculation

Queries about input and output files, running specific calculations, etc.


julien_steffen
Newbie
Posts: 25
Joined: Wed Feb 23, 2022 10:18 am

Sub-Space-Matrix is not hermitian in surface slab calculation

#1 Post by julien_steffen » Wed Oct 18, 2023 11:35 am

We are trying to optimize a simple fcc(111) surface of Pd. We already optimized the corresponding bulk cell without problems, but when we try to optimize the corresponding surface slab (five layers deep), the calculation fails before completing the first SCF step. It first prints the message "WARNING: Sub-Space-Matrix is not hermitian in DAV" and finally stops with "ERROR FEXCP: supplied Exchange-correletion table".

Maybe we overlooked some simple error or unreasonable setting in the input, but so far we have not been able to find a solution by changing any parameter. The geometry of the cell should be reasonable, with interatomic distances corresponding to those of the bulk cell, where the error did not occur.

Attached are the input and output files of the calculation.
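
For readers without access to the attachments, below is a minimal sketch of how a comparable five-layer Pd(111) slab could be set up with ASE; the lattice constant, vacuum thickness, and lateral cell size are illustrative assumptions, not the values from the attached POSCAR.

```python
# Minimal sketch (assumption: ASE is available and the Pd lattice
# constant is close to the experimental value of ~3.89 A).
from ase.build import fcc111
from ase.io import write

# Five-layer Pd(111) slab with 15 A of total vacuum; size and vacuum
# are illustrative choices, not the values from the attached POSCAR.
slab = fcc111('Pd', size=(1, 1, 5), a=3.89, vacuum=7.5)

# Write a VASP POSCAR for the slab in direct (fractional) coordinates.
write('POSCAR', slab, format='vasp', direct=True)
```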

martin.schlipf
Global Moderator
Posts: 542
Joined: Fri Nov 08, 2019 7:18 am

Re: Sub-Space-Matrix is not hermitian in surface slab calculation

#2 Post by martin.schlipf » Fri Oct 20, 2023 7:59 am

I ran the same example locally and I saw no issue.

From your output you can see that something goes completely wrong: the energies are enormous, the number of electrons changes, and the imaginary parts of your Hamiltonian are gigantic. The final error is just a consequence of this. Did you try variations of the setup (different numbers of processes, different versions of VASP, ...) and observe this consistently?
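
As one concrete way to test this, you could rerun the job with different MPI rank counts and scan the parallelization-related INCAR tags. A hedged sketch is shown below: it only generates INCAR variants with different NCORE values (the base tags are placeholders, not taken from your attached INCAR), and each variant would still be launched separately with your usual mpirun/srun command.

```python
# Sketch: generate INCAR variants with different NCORE settings to test
# whether the failure depends on the parallelization layout.
# NCORE and KPAR are standard VASP tags; the base settings below are
# placeholders, not the tags from the attached INCAR.
import os

base_incar = """\
ENCUT = 400
ISMEAR = 1
SIGMA = 0.1
EDIFF = 1E-6
"""

for ncore in (1, 2, 4, 8):
    dirname = f"test_ncore_{ncore}"
    os.makedirs(dirname, exist_ok=True)
    with open(os.path.join(dirname, "INCAR"), "w") as f:
        f.write(base_incar)
        f.write(f"NCORE = {ncore}\n")
```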

Martin Schlipf
VASP developer


julien_steffen
Newbie
Posts: 25
Joined: Wed Feb 23, 2022 10:18 am

Re: Sub-Space-Matrix is not hermitian in surface slab calculation

#3 Post by julien_steffen » Mon Oct 23, 2023 3:39 pm

Thank you for the suggestion! It did indeed seem to be an issue with MPI or the inter-node communication. We now ran the calculation on a single node without a problem. Interestingly, this behavior has so far never occurred for larger calculations on several nodes; maybe the system was too small in this case for such a high degree of parallelization?
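
As a rough back-of-the-envelope illustration of why a very small slab might not tolerate a large rank count, assuming the default NCORE = 1 (one band group per MPI rank) and a placeholder NBANDS for a small Pd slab:

```python
# Rough illustration (assumptions: default NCORE = 1, so bands are
# distributed over all MPI ranks; the NBANDS value is a placeholder for
# a small five-atom Pd slab, not taken from the actual OUTCAR).
nbands = 32          # hypothetical number of bands for the small slab
for ranks in (16, 64, 128, 256):
    # With NCORE = 1 each rank forms its own band group, so on average
    # each rank works on only nbands / ranks bands.
    print(f"{ranks:4d} ranks -> {nbands / ranks:5.2f} bands per rank")
```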
