Configure-suggested
If you want to install Yambo on your local machine, or just for testing/tutorial purposes, you can start with the configure options suggested here:
Performance: MPI, OpenMP and GPU support
--enable-mpi        # enabled by default
--enable-open-mp
to enable MPI and OpenMP parallelization;
--enable-cuda=<opts>
to enable CUDA Fortran support, where <opts> specifies the CUDA version and the compute capability of the NVIDIA device (e.g. cuda11.0,cc70). Consider this last option only if you are sure that your machine has a CUDA-capable GPU.
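For instance, a minimal sketch of a GPU-enabled build, assuming a CUDA 11.0 toolkit and a compute-capability 7.0 device as in the example above:
./configure --enable-mpi --enable-open-mp --enable-cuda=cuda11.0,cc70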
Profiling
--enable-time-profile       # enabled by default
--enable-memory-profile
to enable time and memory profiling (very useful for benchmarking and debugging);
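For example, to add memory profiling on top of the default time profiling:
./configure --enable-memory-profile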
Linear algebra
--enable-par-linalg
--enable-slepc-linalg
to enable, respectively, support for parallel linear algebra (via ScaLAPACK) and for the diagonalization of the BSE using the SLEPc library;
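For example, to build with both options enabled (ScaLAPACK and SLEPc, together with its PETSc dependency, must be available; see the sections below on compiling or linking external libraries):
./configure --enable-par-linalg --enable-slepc-linalg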
I/O via the HDF5 and NetCDF libraries
By default Yambo produces binary files in NetCDF-4 (aka HDF5) format, with parallel I/O and large-file support enabled. Relevant configure flag:
--enable-hdf5-par-io Enable the HDF5 parallel I/O. Default is yes
This flag enabled HDF5 parallel I/O in versions of the code older than 5.1 and is not needed anymore. If you experience issues with parallel I/O, you can disable it with
--disable-hdf5-par-io
For p2y:
--enable-hdf5-p2y-support Activate HDF5 support in p2y. Default is no unless parallel HDF5 libs are linked.
This activates HDF5 support in p2y, e.g. for reading Quantum ESPRESSO data written in HDF5 format. [TO BE COMPLETED]
Other options which the user might consider:
--enable-netcdf-v3 Switch to OLD NetCDF v3 format. Default is no.
This switches to the old v3 format with large-file support. It might be useful if you are not able to compile NetCDF linked against the HDF5 library; however, some functionalities of the code might not work properly. We plan to drop plain NetCDF I/O in the future.
--enable-netcdf-classic Switch to OLD NetCDF classic. Default is no.
This is similar to the v3 format, but it also disables large-file support.
--enable-netcdf-output Activate the netcdf copy for some output files. Default is no.
This will create a NetCDF version of some output files. The main advantage, compared to the standard output files, is that real numbers are written with higher precision.
--enable-hdf5-compression Activate the HDF5 data compression. Default is no.
This will create binary files using HDF5 data compression. It is not compatible with databases written via parallel I/O, and I/O becomes significantly slower; it is a developer option for testing purposes.
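For example, a sketch of a fallback build for a machine where NetCDF cannot be linked against the HDF5 library (keeping in mind the caveats above):
./configure --enable-netcdf-v3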
Compiling external libraries
Yambo can automatically download, compile and install the needed external libraries. Use
--with-extlibs-path=<path>
to specify a directory where Yambo will install the needed external libraries (replace <path> with a valid directory path where you have write permissions);
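For example (the directory name here is just a hypothetical choice):
./configure --with-extlibs-path=$HOME/yambo_ext_libs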
Linking external libraries
On HPC systems the libraries needed by Yambo are usually already installed, and you can use them through the specific configure options. Some examples are given below.
To link a specific installation of a library, use the corresponding configure option and specify the path:
--with-hdf5-path=</path/to/hdf5>
--with-netcdf-path=</path/to/netcdf-c>
--with-netcdff-path=</path/to/netcdf-fortran>
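For example, a sketch assuming the libraries are installed under /opt (hypothetical paths):
./configure \
  --with-hdf5-path=/opt/hdf5 \
  --with-netcdf-path=/opt/netcdf-c \
  --with-netcdff-path=/opt/netcdf-fortran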
Here is an example of how to use an installation of the Intel MKL libraries for linear algebra and Fourier transforms:
--with-blas-libs="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl" \
--with-lapack-libs="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl" \
--with-scalapack-libs="-L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64" \
--with-blacs-libs="-L${MKLROOT}/lib/intel64 -lmkl_blacs_intelmpi_lp64" \
--with-fft-includedir="${MKLROOT}/include" \
--with-fft-libs="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl"
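Note that the exact list of MKL libraries depends on your compiler and MPI flavour; Intel's MKL Link Line Advisor can help assemble the correct link line.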
About environment variables
Specific environment variables can also be used to tell the configure script the name of the compiler, of the MPI wrapper, or other compilation options. Here is an example for the NVHPC compiler suite:
./configure CC=nvc FC=nvfortran FPP="nvfortran -Mpreprocess -E" MPIFC=mpif90 ...
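Similarly, a minimal sketch for a GNU toolchain (assuming gcc, gfortran and a standard mpif90 wrapper are in your PATH; configure can usually detect the preprocessor settings on its own):
./configure CC=gcc FC=gfortran MPIFC=mpif90 ...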