Machine specific configure scripts

From The Yambo Project
Revision as of 14:47, 9 September 2024

HPCs

The GPL version of Yambo is already installed on several HPC systems around the world; here we report some of them:

  • Leonardo at CINECA
  • Eurora at CINECA
  • Niflheim at Technical University of Denmark
  • SP6 at CINECA
  • Arina (SGI) at Universidad del Pais Vasco.
  • Core.Sam at University of Pittsburgh.

Below are some configure options that have been used in the past. Of course, since compilers and architectures vary a lot, there are no guarantees that they will work on your system. Be particularly careful when specifying FCFLAGS, as you may override settings which are necessary for compilation, e.g. -nofor_main with ifort.
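As an illustration of the FCFLAGS caveat, a safe pattern (the compiler and flag values below are placeholders, not Yambo recommendations) is to pass the compiler variables on the configure command line and to append to, rather than replace, the flags your compiler requires:

```shell
# Hypothetical sketch: compiler and flags are placeholders for your system.
# Passing FC/FCFLAGS as configure arguments (rather than exporting them)
# records them in config.log, which helps when debugging a failed configure.
FC=ifort
FCFLAGS="-O2 -g"   # append your own flags; do not drop ones configure adds itself
CONFIGURE_CMD="./configure FC=$FC FCFLAGS=\"$FCFLAGS\""
echo "$CONFIGURE_CMD"
```

With this pattern any flag configure itself appends (such as -nofor_main for ifort) is still added on top of your values.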


Leonardo @ CINECA

To do ...

EURORA @ CINECA

EURORA is a hybrid supercomputer with Intel Xeon SandyBridge processors and NVIDIA Tesla K20 GPU accelerators.

module load autoload/0.1 intel/cs-xe-2013--binary intelmpi/4.1.0--binary \
    mkl/11.0.1--binary gnu/4.6.3 cuda/5.0.35 qe/5.0.3 \
    netcdf/4.1.3--intel--cs-xe-2013--binary hdf5/1.8.9_ser--intel--cs-xe-2013--binary \
    szip/2.1--gnu--4.6.3 zlib/1.2.7--gnu--4.6.3
./configure --with-p2y=5.0 \
--with-iotk=/cineca/prod/build/applications/qe/5.0.3/cuda--5.0.35/BA_WORK/espresso-5.0.3/iotk/ \
--with-netcdf-lib=/cineca/prod/libraries/netcdf/4.1.3/intel--cs-xe-2013--binary/lib/ \
--with-netcdf-include=/cineca/prod/libraries/netcdf/4.1.3/intel--cs-xe-2013--binary/include \
--with-netcdf-link="-L/cineca/prod/libraries/hdf5/1.8.9_ser/intel--cs-xe-2013--binary/lib -L/cineca/prod/libraries/szip/2.1/gnu--4.6.3/lib -lhdf5_fortran -lhdf5_hl -lhdf5 -lnetcdff -lnetcdf -lcurl -lsz -lz"
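The long --with-netcdf-link line above is easier to maintain if it is assembled from variables. A purely illustrative sketch, reusing the same CINECA install prefixes (adjust the paths to your own installation):

```shell
# Sketch: build the netCDF/HDF5 link line from variables instead of one
# long literal. The paths are the CINECA ones from the configure call above.
HDF5_LIB=/cineca/prod/libraries/hdf5/1.8.9_ser/intel--cs-xe-2013--binary/lib
SZIP_LIB=/cineca/prod/libraries/szip/2.1/gnu--4.6.3/lib
NETCDF_LINK="-L$HDF5_LIB -L$SZIP_LIB -lhdf5_fortran -lhdf5_hl -lhdf5 -lnetcdff -lnetcdf -lcurl -lsz -lz"
echo "$NETCDF_LINK"
# then pass it as: ./configure ... --with-netcdf-link="$NETCDF_LINK"
```

Keeping the library order as shown (Fortran wrappers before the C libraries they depend on) matters for linkers that resolve symbols in a single pass.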

IBM AIX and xlf (SP6 @ CINECA)

Configuration for linking with netCDF, PWscf (iotk), and FFTW, suitable for production runs:

export CPP=cpp 
export CC=xlc_r
export F77=xlf_r
export FC=xlf90_r
export FCFLAGS='-O2 -q64 -qstrict -qarch=pwr6 -qtune=pwr6 -qmaxmem=-1 -qsuffix=f=f'
./configure --build=powerpc-ibm \
--with-fftw=/cineca/prod/libraries/fftw/3.2.2/xl--10.1/lib \
--with-netcdf-lib=/cineca/prod/libraries/netcdf/4.0.1/xl--10.1/lib \
--with-netcdf-include=/cineca/prod/libraries/netcdf/4.0.1/xl--10.1/include \
--with-iotk=/cineca/prod/build/applications/QuantumESPRESSO/4.1/xl--10.1/BA_WORK/QuantumESPRESSO-4.1/iotk \
--with-p2y=4.0