Machine specific configure scripts

== Install on your local machine using internal libraries ==
The simplest way to install Yambo is to use the internal libraries and compile with gfortran. This is useful for testing the code on your local machine, or for easily following the [[Tutorials|yambo tutorials]].
You can run the following configure script (named, for example, <code>yambo_install.sh</code>), which should work for '''yambo 5.0''' on '''Linux machines''':
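# User-defined directory where the internal libraries will be downloaded and kept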
YAMBO_EXT_LIBS="/user/defined/path/for/internal/libs"
./configure FC=gfortran \
  --with-extlibs-path=$YAMBO_EXT_LIBS \
  --enable-keep-extlibs \
  --enable-time-profile \
  --enable-msgs-comps \
  --enable-keep-src \
  --enable-memory-profile \
  --enable-int-linalg \
  --enable-par-linalg \
  --enable-netcdf-output \
  --enable-slepc-linalg
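After <code>configure</code> completes successfully, the code is built with make and the executables are collected in the <code>bin/</code> directory of the source tree. A minimal sketch (target names may differ slightly between Yambo versions):
 make all
 ls bin/   # yambo, ypp and the interfaces (e.g. p2y) should appear here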
== Preinstalled ==
The GPL version of Yambo is already installed on several HPC systems around the world; here we list some of them:
* Leonardo at CINECA
* Eurora at CINECA
* Niflheim at Technical University of Denmark
* SP6 at CINECA
* Arina at SGI at Universidad del Pais Vasco
* Core.Sam at University of Pittsburgh


== HPCs ==

Below are some configure options that have been used in the past. Since compilers and architectures vary a lot, there is no guarantee that they will work on your system. Be particularly careful when specifying FCFLAGS, as you may override settings that are necessary for compilation, e.g. <code>-nofor_main</code> with ifort.
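As an illustration only (not a recipe for any specific machine), if you do override FCFLAGS with ifort you must re-add such flags by hand; the exact set depends on your compiler and Yambo version:

 # Hypothetical sketch: keep -nofor_main when supplying your own FCFLAGS with ifort
 ./configure FC=ifort FCFLAGS="-O2 -nofor_main"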


=== Leonardo @ CINECA ===

To be added.

=== EURORA @ CINECA ===

EURORA is a hybrid supercomputer with Intel Xeon SandyBridge processors and NVIDIA Tesla K20 GPU accelerators.

module load autoload/0.1 intel/cs-xe-2013--binary intelmpi/4.1.0--binary mkl/11.0.1--binary gnu/4.6.3 cuda/5.0.35 qe/5.0.3 netcdf/4.1.3--intel--cs-xe-2013--binary hdf5/1.8.9_ser--intel--cs-xe-2013--binary szip/2.1--gnu--4.6.3 zlib/1.2.7--gnu--4.6.3
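# Configure using the iotk shipped with QE 5.0.3 and the netCDF/HDF5/szip libraries loaded above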
./configure --with-p2y=5.0 \
--with-iotk=/cineca/prod/build/applications/qe/5.0.3/cuda--5.0.35/BA_WORK/espresso-5.0.3/iotk/ \
--with-netcdf-lib=/cineca/prod/libraries/netcdf/4.1.3/intel--cs-xe-2013--binary/lib/ \
--with-netcdf-include=/cineca/prod/libraries/netcdf/4.1.3/intel--cs-xe-2013--binary/include \
--with-netcdf-link="-L/cineca/prod/libraries/hdf5/1.8.9_ser/intel--cs-xe-2013--binary/lib -L/cineca/prod/libraries/szip/2.1/gnu--4.6.3/lib -lhdf5_fortran -lhdf5_hl -lhdf5 -lnetcdff -lnetcdf -lcurl -lsz -lz"

=== IBM AIX and xlf (SP6 @ CINECA) ===

Linking with netCDF, PWscf (iotk), and FFTW; settings used for production runs:

export CPP=cpp 
export CC=xlc_r
export F77=xlf_r
export FC=xlf90_r
export FCFLAGS='-O2 -q64 -qstrict -qarch=pwr6 -qtune=pwr6 -qmaxmem=-1 -qsuffix=f=f'
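# Configure against the FFTW, netCDF and iotk installations available on the machine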
./configure --build=powerpc-ibm \
  --with-fftw=/cineca/prod/libraries/fftw/3.2.2/xl--10.1/lib \
  --with-netcdf-lib=/cineca/prod/libraries/netcdf/4.0.1/xl--10.1/lib \
  --with-netcdf-include=/cineca/prod/libraries/netcdf/4.0.1/xl--10.1/include \
  --with-iotk=/cineca/prod/build/applications/QuantumESPRESSO/4.1/xl--10.1/BA_WORK/QuantumESPRESSO-4.1/iotk \
  --with-p2y=4.0