Machine specific configure scripts
== Install on your local machine using internal libraries ==
The simplest way to install yambo is to use the internal libraries and compile with gfortran. This is useful to test the code on your local machine, or to easily follow the [[Tutorials|yambo tutorials]].
 
You can run the following configure script (named for example <code>yambo_install.sh</code>) which should work for '''yambo 5.0''' on '''Linux machines''':
 YAMBO_EXT_LIBS="/user/defined/path/for/internal/libs"
 ./configure FC=gfortran \
  --with-extlibs-path=$YAMBO_EXT_LIBS \
  --enable-keep-extlibs \
  --enable-time-profile \
  --enable-msgs-comps \
  --enable-keep-src \
  --enable-memory-profile \
  --enable-int-linalg \
  --enable-par-linalg \
  --enable-netcdf-output \
  --enable-slepc-linalg
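
Once configure has finished, the code is built with make. The <code>yambo</code> and <code>interfaces</code> targets are the same ones used elsewhere on this page; <code>ypp</code> (the post-processing tool) is mentioned here as an assumption based on the standard yambo build and is not strictly needed (a minimal sketch, assuming configure succeeded):
 # build the main executable and the interfaces (e.g. p2y)
 make yambo
 make interfaces
 # optionally, build the post-processing tool
 make ypp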
 
== Yambo @ HPC machines ==
 
The GPL version of Yambo is already installed on several HPC systems around the world; here we report some of them:
* Leonardo at CINECA
* Eurora at CINECA
* Niflheim at Technical University of Denmark
* SP6 at CINECA
* Arina at SGI at Universidad del Pais Vasco
* Core.Sam at University of Pittsburgh


Below are some configure options that have been used in the past. Of course, since compilers and architectures vary a lot, there are no guarantees that they will work on your system. Be particularly careful when specifying FCFLAGS, as you may override settings which are necessary for compilation, e.g. -nofor_main with ifort.
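
For example (an illustrative sketch, not tied to any particular machine), an ifort build that customises the optimisation level should still carry the flags the build system relies on:
 # keep -nofor_main (and -assume bscc) when overriding FCFLAGS with ifort,
 # otherwise linking the Fortran main program fails
 ./configure FC=ifort FCFLAGS='-O2 -assume bscc -nofor_main'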


<!-- CH: Suggest to list/group these in order of architecture, linux, IBM, Cray, OS/X, etc -->
 


== CINECA HPC centre ==

=== Leonardo ===


To do ..


=== EURORA ===


EURORA is a hybrid supercomputer with Intel Xeon SandyBridge processors and NVIDIA Tesla K20 GPU accelerators. Load the production modules and then configure against the system NetCDF/HDF5 stack:
 module load autoload/0.1 intel/cs-xe-2013--binary intelmpi/4.1.0--binary mkl/11.0.1--binary gnu/4.6.3 cuda/5.0.35 qe/5.0.3 netcdf/4.1.3--intel--cs-xe-2013--binary hdf5/1.8.9_ser--intel--cs-xe-2013--binary szip/2.1--gnu--4.6.3 zlib/1.2.7--gnu--4.6.3
 ./configure --with-p2y=5.0 \
 --with-iotk=/cineca/prod/build/applications/qe/5.0.3/cuda--5.0.35/BA_WORK/espresso-5.0.3/iotk/ \
 --with-netcdf-lib=/cineca/prod/libraries/netcdf/4.1.3/intel--cs-xe-2013--binary/lib/ \
 --with-netcdf-include=/cineca/prod/libraries/netcdf/4.1.3/intel--cs-xe-2013--binary/include \
 --with-netcdf-link="-L/cineca/prod/libraries/hdf5/1.8.9_ser/intel--cs-xe-2013--binary/lib -L/cineca/prod/libraries/szip/2.1/gnu--4.6.3/lib -lhdf5_fortran -lhdf5_hl -lhdf5 -lnetcdff -lnetcdf -lcurl -lsz -lz"
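
To verify that the NetCDF/HDF5 stack was actually linked in, inspect the resulting executable (a generic check, not EURORA-specific; it assumes a dynamically linked binary in <code>bin/</code>):
 # list the shared-library dependencies resolved for the yambo binary
 ldd bin/yambo | grep -E 'netcdf|hdf5'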


=== IBM AIX and xlf (SP6) ===
 
Linking with netCDF, PWscf, and FFTW. Production runs:
 export CPP=cpp
 export CC=xlc_r
 export F77=xlf_r
 export FC=xlf90_r
 export FCFLAGS='-O2 -q64 -qstrict -qarch=pwr6 -qtune=pwr6 -qmaxmem=-1 -qsuffix=f=f'
 ./configure --build=powerpc-ibm \
 --with-fftw=/cineca/prod/libraries/fftw/3.2.2/xl--10.1/lib \
 --with-netcdf-lib=/cineca/prod/libraries/netcdf/4.0.1/xl--10.1/lib \
 --with-netcdf-include=/cineca/prod/libraries/netcdf/4.0.1/xl--10.1/include \
 --with-iotk=/cineca/prod/build/applications/QuantumESPRESSO/4.1/xl--10.1/BA_WORK/QuantumESPRESSO-4.1/iotk \
 --with-p2y=4.0


To compile, use GNU make:
 gmake yambo interfaces

== Other HPC centers ==

=== GNU/Linux: ifort ===
 
ifort 11.0 with netCDF and iotk/p2y 4.0 support, serial:
 ./configure FC=ifort \
 --with-netcdf-lib=/usr/local/libraries/netcdf/4.0.1/ifort--11.1/lib \
 --with-netcdf-include=/usr/local/libraries/netcdf/4.0.1/ifort--11.1/include \
 --with-iotk=/opt/espresso/4.1.2/iotk/ \
 --with-p2y=4.0 --with-blacs=no
 
A typical configure command line that can be used to link the MKL BLAS, LAPACK and FFTW libraries is (thanks to Giovanni Pizzi for this!):
 ./configure FC=ifort \
 --with-blas="-L/opt/intel/Compiler/11.1/073/mkl/lib/em64t/ -lmkl_core -lmkl_intel_lp64 -lmkl_sequential" \
 --with-lapack="-L/opt/intel/Compiler/11.1/073/mkl/lib/em64t/ -lmkl_core -lmkl_intel_lp64 -lmkl_sequential" \
 --with-blacs="-L/opt/intel/Compiler/11.1/073/mkl/lib/em64t/ -lmkl_core -lmkl_intel_lp64 -lmkl_sequential" \
 --with-scalapack="-L/opt/intel/Compiler/11.1/073/mkl/lib/em64t/ -lmkl_core -lmkl_intel_lp64 -lmkl_sequential" \
 --with-fftw="/opt/intel/Compiler/11.1/073/mkl/lib/em64t/" \
 --with-fftw-lib="-lmkl_core -lmkl_intel_lp64 -lmkl_sequential"
 
Production runs:
 ./configure FC=mpif90 FCFLAGS='-O3 -xW -tpp7 -assume bscc -nofor_main' \
 LDFLAGS='-xW' --with-blacs=no
 
Debugging runs:
 ./configure --enable-debug FC=mpif90 CFLAGS='-O0 -g' FCFLAGS='-O0 -tpp7 -C -check noarg_temp_created -w90 -w95 -assume bscc -nofor_main -g' --with-blacs=no
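
A binary built with <code>--enable-debug</code> can then be run under a debugger to get a backtrace on a crash (a sketch; <code>yambo.in</code> is a placeholder input name):
 # -F selects the yambo input file
 gdb --args ./bin/yambo -F yambo.in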
 
=== GNU/Linux: g95 ===
Debugging runs:
 ./configure FC=g95 FCFLAGS='-O0 -fbackslash -g' CFLAGS='-O0 -g -Dextcus -Dextfus' CC=gcc --with-blacs=no

Production runs:
 ./configure FC=g95 FCFLAGS="-O3 -fbackslash -fno-second-underscore"
=== GNU/Linux: gfortran ===
Debugging runs:
 ./configure --enable-debug --without-mpi FC=gfortran FCFLAGS='-O0 -g -fbounds-check' CFLAGS='-O0 -g -Dextcus -Dextfus' CC=gcc --with-blacs=no
 
=== GNU/Linux: PGI Fortran ===
 ./configure FC=pgf95
 
=== GNU/Linux: SUN Fortran ===
 ./configure FC=f95
 
=== GNU/Linux: Pathscale Fortran ===
 ./configure FC=pathf90 PFC=mpif90 FCFLAGS="-fno-second-underscore" UFFLAGS="-fno-second-underscore"
 
=== GNU/Linux: Open64 Fortran ===
 ./configure FC=openf95 FCFLAGS="-fno-second-underscore" UFFLAGS="-fno-second-underscore"

=== GNU/Linux: xlf (Mare Nostrum, ppc64) ===
Production runs:
 ./configure CC=xlc F77=xlf PFC=mpif90 --with-blacs=no

With netCDF support:
 ./configure CC=xlc F77=xlf PFC=mpif90 \
 --with-netcdf-include=/gpfs/apps/NETCDF/netcdf-3.6.0_64/include \
 --with-netcdf-lib=/gpfs/apps/NETCDF/netcdf-3.6.0_64/lib --with-blacs=no
 
=== Magerit (CeSViMa) ===
Production runs with netCDF support:
 ./configure PF90=xlf90_r FC=xlf90_r F77=xlf_r CC=xlc_r CPP="cpp -P" \
 --with-blas=/gpfs/apps/BLAS/1.0.0/64/lib/libblas.a \
 --with-lapack=/gpfs/apps/LAPACK/3.0/64/liblapack.a \
 --with-netcdf-include=/gpfs/apps/NETCDF/3.6.0/64/include \
 --with-netcdf-lib=/gpfs/apps/NETCDF/3.6.0/64/lib \
 --with-fftw=/gpfs/apps/FFTW/3.2/64/lib
 
Default:
 ./configure
 
ACML libraries, production runs:
 ./configure --with-blas=$ACMLPATH/libacml.a --with-lapack=$ACMLPATH/libacml.a
 
FFT in ACML is not supported yet.

MKL libraries, production runs:
 export MKLROOT=/applis/intel/mkl
 export MKLPATH=${MKLROOT}/lib/intel64
 export MKLINCLUDE=${MKLROOT}/include
 export MKLLIBS="-L${MKLPATH} -I${MKLINCLUDE} -I${MKLINCLUDE}/intel64/lp64 -llapack -lmkl_blas95_lp64 -lmkl_lapack95_lp64 -Wl,--start-group ${MKLPATH}/libmkl_intel_lp64.a ${MKLPATH}/libmkl_sequential.a ${MKLPATH}/libmkl_core.a -Wl,--end-group -lpthread"

 ./configure FC=ifort F77=ifort --with-blas="$MKLLIBS" --with-lapack="$MKLLIBS"
 
Use the Intel MKL Link Line Advisor (link to be fixed) to determine the exact library versions for your system.

FFTs in MKL are supported through the FFTW interface present in the latest MKL packages:
 --with-fftw="${MKLROOT}/interfaces/fftw3xf/"
 --with-fftw-lib="${MKLROOT}/interfaces/fftw3xf/libfftw3xf_intel.a $MKLLIBS"
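
Putting the pieces together, a complete MKL-based configure line might look like the following (a sketch only; it reuses the <code>MKLROOT</code> and <code>MKLLIBS</code> variables exported above, which must match your installation):
 ./configure FC=ifort F77=ifort \
 --with-blas="$MKLLIBS" --with-lapack="$MKLLIBS" \
 --with-fftw="${MKLROOT}/interfaces/fftw3xf/" \
 --with-fftw-lib="${MKLROOT}/interfaces/fftw3xf/libfftw3xf_intel.a $MKLLIBS"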
