Install Yambo on Ubuntu/LinuxMint with NVfortran compiler

The NVIDIA compilers are freely available on Linux machines. You can download the Fortran, C, and C++ compilers and the debugger from the NVIDIA HPC Software Development Kit (SDK): https://developer.nvidia.com/hpc-sdk-downloads
On Ubuntu, the SDK can be easily installed via the following procedure:

sudo apt-get update -y && sudo apt-get upgrade -y
sudo apt-get install -y build-essential automake autoconf libtool zlib1g-dev curl gpg wget git tar cmake
curl https://developer.download.nvidia.com/hpc-sdk/ubuntu/DEB-GPG-KEY-NVIDIA-HPC-SDK | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-hpcsdk-archive-keyring.gpg
echo 'deb [signed-by=/usr/share/keyrings/nvidia-hpcsdk-archive-keyring.gpg] https://developer.download.nvidia.com/hpc-sdk/ubuntu/amd64 /' | sudo tee /etc/apt/sources.list.d/nvhpc.list
sudo apt-get update -y && sudo apt-get install -y nvhpc-25-1
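
To check that the SDK ended up where expected, you can list the versioned installation directory (this assumes the default installation prefix used by the nvhpc packages):

ls /opt/nvidia/hpc_sdk/Linux_x86_64/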

Set up the NVIDIA compilers

Once you have downloaded and installed the NVIDIA SDK, it is best to avoid setting the environment variables for the compilers and MPI wrappers by hand. The safest approach is to use the module files provided by NVIDIA, located at /opt/nvidia/hpc_sdk/modulefiles.

Before using module files, you need to install the module management tool:

sudo apt-get install environment-modules
echo "source /etc/profile.d/modules.sh" >> ~/.bashrc
echo "module use /opt/nvidia/hpc_sdk/modulefiles" >> ~/.bashrc
source ~/.bashrc
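
You can then check that the NVIDIA module files are visible to the module command:

module avail nvhpc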

The commands above only need to be run the first time. Now you can load the NVIDIA SDK module, which correctly sets up all the environment variables for both the compilers and MPI:

module load nvhpc/25.1
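
As a quick sanity check, you can verify that the compilers and MPI wrappers now picked up from your PATH are the ones provided by the SDK:

which nvfortran mpif90
nvfortran --version
mpif90 --version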

Configure Yambo with NVfortran and OpenMPI

Then you can configure Yambo with the command:

./configure MPIFC=mpif90 MPICC=mpicc FC=nvfortran F77=nvfortran CPP="cpp -E"  FPP="nvfortran -Mpreprocess -E" F90SUFFIX=".f90" \
--enable-memory-profile  --enable-open-mp --enable-par-linalg  --enable-hdf5-par-io  --enable-slepc-linalg 
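
Once configure completes, the build itself proceeds with make. A minimal sketch, assuming the configuration above succeeded and you want the main executables on a machine with a few cores, could be:

make -j4 yambo
make -j4 interfaces
make -j4 ypp

The compiled executables are collected in the bin/ directory of the source tree.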

If you have a CUDA-capable GPU installed on your machine, you can compile Yambo to use it by adding the flag

--enable-cuda-fortran 

and then specifying your GPU architecture and the CUDA runtime version; see the configure help (./configure --help) for more info.
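
To pick the right value for the GPU architecture, you need the compute capability of your card. Two ways to query it are sketched below: nvaccelinfo ships with the HPC SDK, while the compute_cap query field of nvidia-smi requires a reasonably recent driver:

nvaccelinfo
nvidia-smi --query-gpu=name,compute_cap --format=csv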

Check that your NVIDIA graphics card is properly installed

To be sure that the code will run correctly on your GPU, you need the proper driver installed on your machine. If the NVIDIA drivers are installed, just run:

$ nvidia-smi
Mon Sep  9 16:20:02 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03              Driver Version: 560.35.03      CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1650        Off |   00000000:01:00.0  On |                  N/A |
| 20%   39C    P8              8W /   75W |     280MiB /   4096MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

The CUDA runtime version you set at configure time should be lower than or equal to the CUDA Version reported by nvidia-smi. In this case we are running with driver 560, which supports CUDA 12.6; newer runtime versions can also work via the CUDA Forward Compatibility Package (see https://docs.nvidia.com/deploy/cuda-compatibility/).
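
The CUDA runtimes bundled with the SDK can be listed directly from its installation tree, which helps when choosing the cuda-runtime value at configure time (the path below assumes the default installation prefix and the 25.1 release installed above):

ls /opt/nvidia/hpc_sdk/Linux_x86_64/25.1/cuda/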

More info on Yambo on NVIDIA graphics cards can be found in the GTC talk "Materials Design Toward the Exascale: Porting Electronic Structure Community Codes to GPUs": https://www.nvidia.com/en-us/on-demand/session/gtcspring21-e32448/