Compilation problem with PGI 7.1

Questions regarding the compilation of VASP on various platforms: hardware, compilers and libraries, etc.


Moderators: Global Moderator, Moderator

Author
yxie

Compilation problem with PGI 7.1

#1 Post by yxie » Tue Feb 26, 2008 2:14 pm

Dear VASP users,

I tried to compile the serial version of VASP with the PGI Fortran compiler 7.1 and GotoBLAS. The compilation itself finishes without problems and produces the vasp executable, but when I run vasp it crashes with the following error message:
*** glibc detected *** /cluster/home/matl/yxie/vasp4/vasp.4.6/vasp: corrupted double-linked list: 0x0000000012284fb0 ***
======= Backtrace: =========
/lib64/libc.so.6[0x365866f41c]
/lib64/libc.so.6(cfree+0x8c)[0x3658672b1c]
/cluster/home/matl/yxie/vasp4/vasp.4.6/vasp[0x4fe137]
======= Memory map: ========
00400000-00ae3000 r-xp 00000000 00:17 45778483 /cluster/home/matl/yxie/vasp4/vasp.4.6/vasp
00ce2000-00d10000 rwxp 006e2000 00:17 45778483 /cluster/home/matl/yxie/vasp4/vasp.4.6/vasp
00d10000-01089000 rwxp 00d10000 00:00 0
11d0a000-124ce000 rwxp 11d0a000 00:00 0
3658200000-365821a000 r-xp 00000000 08:02 1998850 /lib64/ld-2.5.so
3658419000-365841a000 r-xp 00019000 08:02 1998850 /lib64/ld-2.5.so
365841a000-365841b000 rwxp 0001a000 08:02 1998850 /lib64/ld-2.5.so
3658600000-3658746000 r-xp 00000000 08:02 1998853 /lib64/libc-2.5.so
3658746000-3658946000 ---p 00146000 08:02 1998853 /lib64/libc-2.5.so
3658946000-365894a000 r-xp 00146000 08:02 1998853 /lib64/libc-2.5.so
365894a000-365894b000 rwxp 0014a000 08:02 1998853 /lib64/libc-2.5.so
365894b000-3658950000 rwxp 365894b000 00:00 0
3658e00000-3658e82000 r-xp 00000000 08:02 1998900 /lib64/libm-2.5.so
3658e82000-3659081000 ---p 00082000 08:02 1998900 /lib64/libm-2.5.so
3659081000-3659082000 r-xp 00081000 08:02 1998900 /lib64/libm-2.5.so
3659082000-3659083000 rwxp 00082000 08:02 1998900 /lib64/libm-2.5.so
3659200000-3659215000 r-xp 00000000 08:02 1998876 /lib64/libpthread-2.5.so
3659215000-3659414000 ---p 00015000 08:02 1998876 /lib64/libpthread-2.5.so
3659414000-3659415000 r-xp 00014000 08:02 1998876 /lib64/libpthread-2.5.so
3659415000-3659416000 rwxp 00015000 08:02 1998876 /lib64/libpthread-2.5.so
3659416000-365941a000 rwxp 3659416000 00:00 0
365a200000-365a207000 r-xp 00000000 08:02 1998884 /lib64/librt-2.5.so
365a207000-365a407000 ---p 00007000 08:02 1998884 /lib64/librt-2.5.so
365a407000-365a408000 r-xp 00007000 08:02 1998884 /lib64/librt-2.5.so
365a408000-365a409000 rwxp 00008000 08:02 1998884 /lib64/librt-2.5.so
365c200000-365c20d000 r-xp 00000000 08:02 1999021 /lib64/libgcc_s-4.1.2-20070626.so.1
365c20d000-365c40d000 ---p 0000d000 08:02 1999021 /lib64/libgcc_s-4.1.2-20070626.so.1
365c40d000-365c40e000 rwxp 0000d000 08:02 1999021 /lib64/libgcc_s-4.1.2-20070626.so.1
2aaaaaaab000-2aaaaaab1000 rwxp 2aaaaaaab000 00:00 0
2aaaaaabc000-2aaaab0e5000 rwxp 2aaaaaabc000 00:00 0
2aaaac000000-2aaaac021000 rwxp 2aaaac000000 00:00 0
2aaaac021000-2aaab0000000 ---p 2aaaac021000 00:00 0
7fff42a18000-7fff42a2e000 rwxp 7fff42a18000 00:00 0 [stack]
ffffffffff600000-ffffffffffe00000 ---p 00000000 00:00 0 [vdso]
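A glibc abort with "corrupted double-linked list" inside cfree() usually means something wrote past the end of a heap allocation well before the crash point. Before digging into the makefile, it may help to make glibc fail earlier; a minimal sketch, assuming the binary sits in the current directory (adjust the path as needed):

```shell
# Minimal debugging sketch (binary location is an assumption; adjust it).
# MALLOC_CHECK_=3 makes glibc verify heap metadata on every free() and
# abort at the first inconsistency, usually much closer to the faulty write.
VASP=./vasp
if [ -x "$VASP" ]; then
  MALLOC_CHECK_=3 "$VASP"
fi
```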

Here is my makefile:
.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# Makefile for Portland Group F90/HPF compiler release 3.0-1, 3.1
# and release 1.7
# (http://www.pgroup.com/ & ftp://ftp.pgroup.com/x86/, you need
# to order the HPF/F90 suite)
# we have found no noticeable performance differences between
# any of the releases, even Athlon or PIII optimisation does
# not seem to improve performance
#
# The makefile was tested only under Linux on Intel platforms
# (Suse X,X)
#
# it might be required to change some of the library paths, since
# Linux installations vary a lot
# Hence check ***ALL**** options in this makefile very carefully
#-----------------------------------------------------------------------
#
# Mind that some Linux distributions (Suse 6.1) have a bug in
# libm causing small errors in the error-function (total energy
# is therefore wrong by about 1meV/atom). The recommended
# solution is to update libc.
#
# BLAS must be installed on the machine
# there are several options:
# 1) very slow but works:
# retrieve the lapackage from ftp.netlib.org
# and compile the blas routines (BLAS/SRC directory)
# please use g77 or f77 for the compilation. When I tried to
# use pgf77 or pgf90 for BLAS, VASP hung when calling
# ZHEEV (however this was with lapack 1.1; now I use lapack 2.0)
# 2) most desirable: get an optimized BLAS
# for a list of optimized BLAS try
# http://www.kachinatech.com/~hjjou/scilib/opt_blas.html
#
# the two most reliable packages around are presently:
# 3a) Intels own optimised BLAS (PIII, P4, Itanium)
# http://developer.intel.com/software/products/mkl/
# this is really excellent when you use Intel CPU's
#
# 3b) or obtain the atlas based BLAS routines
# http://math-atlas.sourceforge.net/
# you certainly need atlas on the Athlon, since the mkl
# routines are not optimal on the Athlon.
#
#-----------------------------------------------------------------------

# all CPP processed fortran files have the extension .f
SUFFIX=.f

#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
FC=pgf90
# fortran linker
FCL=$(FC)


#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
# CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
# CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE 6.X, maybe some Red Hat distributions:

CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf charge density reduced in X direction
# wNGXhalf gamma point only reduced in X direction
# avoidalloc avoid ALLOCATE if possible
# IFC work around some IFC bugs
# CACHE_SIZE 1000 for PII,PIII, 5000 for Athlon, 8000 P4
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (usually faster)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (faster on P4)
# **** definitely use -DRACCMU_DGEMV if you use the mkl library
#-----------------------------------------------------------------------

CPP = $(CPP_) -DHOST=\"LinuxIFC\" \
-Dkind8 -DNGXhalf -DCACHE_SIZE=8000 -DPGF90 -Davoidalloc \
# -DRPROMU_DGEMV

#-----------------------------------------------------------------------
# general fortran flags (there must be a trailing blank on this line)
# the -Mx,119,0x200000 is required if you use older pgf90 versions
# on a more recent LINUX installation
# the option will not do any harm on other 3.X pgf90 distributions
#-----------------------------------------------------------------------

FFLAGS = -Mfree -tp k8-64 -i4 -Bstatic

#-----------------------------------------------------------------------
# optimization,
# we have tested whether higher optimisation improves
# the performance, and found no improvements with -O3-5 or -fast
# (even on Athlon systems, Athlon-specific optimisation worsens performance)
#-----------------------------------------------------------------------

OFLAG = -O3

OFLAG_HIGH = $(OFLAG)
OBJ_HIGH =
OBJ_NOOPT =
DEBUG = -O0
INLINE = $(OFLAG)


#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# what you choose is very system dependent
# P4: VASP works fastest with Intels mkl performance library
# Athlon: Atlas based BLAS are presently the fastest
# P3: no clue
#-----------------------------------------------------------------------

# Atlas based libraries
#ATLASHOME= $(HOME)/Linux_HAMMER64SSE2_2/lib
#BLAS= -L$(ATLASHOME) -LLIBDIR -lf77blas -latlas

# use specific libraries (default library path points to other libraries)
#BLAS= $(ATLASHOME)/libf77blas.a $(ATLASHOME)/libatlas.a

# use the mkl Intel libraries for p4 (www.intel.com)
#BLAS=-L/opt/intel/mkl/lib/32 -lmkl_p4 -lpthread

# even faster: Kazushige Goto's BLAS
BLAS= /cluster/home/matl/yxie/GotoBLAS/libgoto_opteron-r1.23.a

# LAPACK, simplest use vasp.4.lib/lapack_double
LAPACK= ../vasp.4.lib/lapack_double.o

# use atlas optimized part of lapack
#LAPACK= ../vasp.4.lib/lapack_atlas.o -llapack -lcblas

# use the mkl Intel lapack
#LAPACK= -lmkl_lapack


#-----------------------------------------------------------------------

LIB = -L../vasp.4.lib -ldmy \
../vasp.4.lib/linpack_double.o $(LAPACK) \
$(BLAS) -Bstatic

# options for linking (none required)
LINK = -tp k8-64

#-----------------------------------------------------------------------
# fft libraries:
# VASP.4.5 can use FFTW (http://www.fftw.org)
# since the FFTW is very slow for radices 2^n the fft3dlib is used
# in these cases
# if you use fftw3d you need to insert -lfftw in the LIB line as well
# please do not send us any queries related to FFTW (no support)
# if it fails, use fft3dlib
#-----------------------------------------------------------------------

FFT3D = fft3dfurth.o fft3dlib.o
#FFT3D = fftw3d.o fft3dlib.o /cluster/home/matl/yxie/fftw3/lib/libfftw3.a


#=======================================================================
# MPI section, uncomment the following lines
#
# one comment for users of mpich or lam:
# You must *not* compile mpi with g77/f77, because f77/g77
# appends *two* underscores to symbols that contain already an
# underscore (i.e. MPI_SEND becomes mpi_send__). The pgf90
# compiler however appends only one underscore.
# Precompiled MPI versions will also not work!
#
# We found that mpich.1.2.1 and lam-6.5.X are stable
# mpich.1.2.1 was configured with
# ./configure -prefix=/usr/local/mpich_nodvdbg -fc="pgf77 -Mx,119,0x200000" \
# -f90="pgf90 -Mx,119,0x200000" \
# --without-romio --without-mpe -opt=-O \
#
# lam was configured with the line
# ./configure -prefix /usr/local/lam-6.5.X --with-cflags=-O -with-fc=pgf90 \
# --with-f77flags=-O --without-romio
#
# lam was generally faster and we found an average communication
# bandwidth of roughly 160 MBit/s (full duplex)
#
# please note that you might be able to use a lam or mpich version
# compiled with f77/g77, but then you need to add the following
# options: -Msecond_underscore (compilation) and -g77libs (linking)
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi: if you use LAM and compiled it with the options
# suggested above, you can use the following lines
#-----------------------------------------------------------------------


#FC=mpif77
#FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf charge density reduced in Z direction
# wNGZhalf gamma point only reduced in Z direction
# scaLAPACK use scaLAPACK (usually slower on 100 Mbit Net)
#-----------------------------------------------------------------------

#CPP = $(CPP_) -DMPI -DHOST=\"LinuxPgi\" \
# -Dkind8 -DNGZhalf -DCACHE_SIZE=2000 -DPGF90 -Davoidalloc -DRPROMU_DGEMV

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply uncomment the line SCA
#-----------------------------------------------------------------------

BLACS=/usr/local/BLACS_lam
SCA_= /usr/local/SCALAPACK_lam

SCA= $(SCA_)/scalapack_LINUX.a $(SCA_)/pblas_LINUX.a $(SCA_)/tools_LINUX.a \
$(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a $(BLACS)/LIB/blacs_MPI-LINUX-0.a $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a

SCA=

#-----------------------------------------------------------------------
# libraries for mpi
#-----------------------------------------------------------------------

#LIB = -L../vasp.4.lib -ldmy \
# ../vasp.4.lib/linpack_double.o $(LAPACK) \
# $(SCA) $(BLAS)

# FFT: only option fftmpi.o with fft3dlib of Juergen Furthmueller

#FFT3D = fftmpi.o fftmpi_map.o fft3dlib.o

#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------
BASIC= symmetry.o symlib.o lattlib.o random.o

SOURCE= base.o mpi.o smart_allocate.o xml.o \
constant.o jacobi.o main_mpi.o scala.o \
asa.o lattice.o poscar.o ini.o setex.o radial.o \
pseudo.o mgrid.o mkpoints.o wave.o wave_mpi.o $(BASIC) \
nonl.o nonlr.o dfast.o choleski2.o \
mix.o charge.o xcgrad.o xcspin.o potex1.o potex2.o \
metagga.o constrmag.o pot.o cl_shift.o force.o dos.o elf.o \
tet.o hamil.o steep.o \
chain.o dyna.o relativistic.o LDApU.o sphpro.o paw.o us.o \
ebs.o wavpre.o wavpre_noio.o broyden.o \
dynbr.o rmm-diis.o reader.o writer.o tutor.o xml_writer.o \
brent.o stufak.o fileio.o opergrid.o stepver.o \
dipol.o xclib.o chgloc.o subrot.o optreal.o davidson.o \
edtest.o electron.o shm.o pardens.o paircorrection.o \
optics.o constr_cell_relax.o stm.o finite_diff.o \
elpol.o setlocalpp.o

INC=

vasp: $(SOURCE) $(FFT3D) $(INC) main.o
rm -f vasp
$(FCL) -o vasp $(LINK) main.o $(SOURCE) $(FFT3D) $(LIB)
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
$(FCL) -o makeparam $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
$(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
$(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB)
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
$(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
$(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)

clean:
-rm -f *.g *.f *.o *.L *.mod ; touch *.F

main.o: main$(SUFFIX)
$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcspin$(SUFFIX)

makeparam.o: makeparam$(SUFFIX)
$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c makeparam$(SUFFIX)

makeparam$(SUFFIX): makeparam.F main.F
#
# MIND: I do not have a full dependency list for the include
# and MODULES: here are only the minimal basic dependencies
# if one structure is changed then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
poscar.o: poscar.inc poscar.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.inc wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F

$(OBJ_HIGH):
$(CPP)
$(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
$(CPP)
$(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)

fft3dlib_f77.o: fft3dlib_f77.F
$(CPP)
$(F77) $(FFLAGS_F77) -c $*$(SUFFIX)

.F.o:
$(CPP)
$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
$(CPP)
$(SUFFIX).o:
$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
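Since the crash happens at run time rather than during compilation, one way to test whether a Fortran array overrun (rather than a library mismatch) is corrupting the heap is a bounds-checked debug rebuild. A sketch, assuming the makefile above; -g and PGI's -Mbounds (runtime array bounds checking) are appended to the FFLAGS value from the makefile, and make's command-line overrides replace the file's settings:

```shell
# Hypothetical debug rebuild: overrides the makefile's FFLAGS and OFLAG from
# the command line. -g keeps symbols, -Mbounds adds runtime bounds checks
# (slow, but aborts at the offending array access instead of inside free()).
if command -v pgf90 >/dev/null 2>&1; then
  make clean
  make vasp FFLAGS="-Mfree -tp k8-64 -i4 -Bstatic -g -Mbounds" OFLAG=-O0
fi
```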


Your advice would be a great help for me.

Thanks very much.

Yours sincerely,

Yu
Last edited by yxie on Tue Feb 26, 2008 2:14 pm, edited 1 time in total.

admin
Administrator
Posts: 2921
Joined: Tue Aug 03, 2004 8:18 am
License Nr.: 458

Compilation problem with PGI 7.1

#2 Post by admin » Wed Mar 26, 2008 10:14 am

This looks as if the intrinsic library files are not compatible with your executable. Have they been generated with the same compiler (and the same release)? Is the given path correct?
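These compatibility questions can be checked directly; a sketch, with the binary and library paths taken from the original post (adjust them if your layout differs):

```shell
# Consistency checks (paths copied from the original post; adjust as needed).
if command -v pgf90 >/dev/null 2>&1; then
  pgf90 -V        # compiler release currently used to build and link VASP
fi
if [ -x ./vasp ]; then
  ldd ./vasp      # shared libraries the binary actually resolves at run time
fi
BLASLIB=/cluster/home/matl/yxie/GotoBLAS/libgoto_opteron-r1.23.a
if [ -f "$BLASLIB" ]; then
  ar t "$BLASLIB" | head   # confirm the static GotoBLAS archive is readable
fi
```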
Last edited by admin on Wed Mar 26, 2008 10:14 am, edited 1 time in total.
