Rome 2023

From The Yambo Project

A general description of the goal(s) of the school can be found on the Yambo main website

Use CINECA computational resources

Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 here. To access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.

Connect to the cluster using ssh

You can access M100 via ssh protocol in different ways.


- Connect using username and password

Use the following command, replacing username with your own:

$ ssh username@login.m100.cineca.it

However, with this method you have to type your password each time you connect.


- Connect using ssh key

You can set up an ssh key pair to avoid typing the password each time you connect to M100. To do so, go to your .ssh directory (usually located in your home directory):

$ cd $HOME/.ssh

If you don't have this directory, you can create it with mkdir $HOME/.ssh.
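In practice, you can create the directory and give it the owner-only permissions that ssh expects in one step (the chmod is a general ssh convention, not specific to M100):

```shell
# Create ~/.ssh if it does not exist; -p makes this a no-op when it does.
mkdir -p "$HOME/.ssh"
# ssh refuses to use keys stored in a directory readable by others,
# so restrict access to the owner.
chmod 700 "$HOME/.ssh"
```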

Once you are in the .ssh directory, run the ssh-keygen command to generate a private/public key pair:

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key: m100_id_rsa
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in <your_.ssh_dir>/m100_id_rsa
Your public key has been saved in <your_.ssh_dir>/m100_id_rsa.pub
The key fingerprint is:
<...>
The key's randomart image is:
<...>

Now you need to copy the public key to M100. You can do that with the following command (for this step you need to type your password):

$ ssh-copy-id -i <your_.ssh_dir>/m100_id_rsa.pub <username>@login.m100.cineca.it

Once the public key has been copied, you can connect to M100 without having to type the password using the -i option:

$ ssh -i <your_.ssh_dir>/m100_id_rsa username@login.m100.cineca.it

To simplify things further, you can put the following lines in a file named config inside the .ssh directory, adjusting the username and path:

Host m100 
 HostName login.m100.cineca.it
 User username
 IdentityFile <your_.ssh_dir>/m100_id_rsa

With the config file in place you can connect simply with

$ ssh m100

General instructions to run tutorials

Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:

  • $HOME: the home directory associated with your username;
  • $WORK: the work directory associated with the account where the computational resources dedicated to this school are allocated;
  • $CINECA_SCRATCH: the scratch directory associated with your username.

You can find more details about storage and FileSystems here.
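As a quick check, you can print the locations these variables point to (on a machine other than M100, $WORK and $CINECA_SCRATCH may simply be empty):

```shell
# Print the three workspace locations; on M100 each variable expands
# to a path specific to your username or to the school account.
echo "HOME=$HOME"
echo "WORK=$WORK"
echo "CINECA_SCRATCH=$CINECA_SCRATCH"
```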

Please don't forget to run all tutorials in your scratch directory:

$ echo $CINECA_SCRATCH
/m100_scratch/userexternal/username
$ cd $CINECA_SCRATCH

Computational resources on M100 are managed by the Slurm job scheduling system. Most of the Yambo tutorials in this school can be run in serial, except for some that need to be executed on multiple processors. Generally, Slurm batch jobs are submitted using a script, but the tutorials here are better understood if run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.


- Run a job using a batch script

This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script, e.g. job.sh. Please note that the instructions in the batch script must be compatible with the specific M100 architecture and accounting system. The complete list of Slurm options can be found here. However, you will find ready-to-use batch scripts in the locations specified during the tutorials.
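As an illustration only, a minimal script might look like the sketch below. The account, partition, qos, reservation, module, and PATH lines are taken from the interactive-session instructions later on this page, while the task counts and the final run line (input file and job names) are purely hypothetical; use the ready-to-use scripts provided during the tutorials for actual runs.

```shell
#!/bin/bash
#SBATCH --account=tra23_Yambo        # school account (as in the salloc example)
#SBATCH --partition=m100_sys_test    # partition used for the school
#SBATCH --qos=qos_test
#SBATCH --reservation=s_tra_yambo
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4          # hypothetical MPI task count
#SBATCH --cpus-per-task=4            # OpenMP threads per MPI task
#SBATCH --time=01:00:00

# Load the Yambo environment (same lines as in the interactive session)
module purge
module load hpc-sdk/2022--binary spectrum_mpi/10.4.0--binary
export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-cpu/bin:$PATH
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Hypothetical run line: the input file (-F) and job name (-J) will differ
mpirun yambo -F yambo.in -J parallel_run
```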

To submit the job, use the sbatch command:

$ sbatch job.sh
Submitted batch job <JOBID>

To check the job status, use the squeue command:

$ squeue -u <username>
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
             <...>  m100_...      JOB username  R       0:01    <N> <...>

If you need to cancel your job, do:

$ scancel <JOBID> 


- Open an interactive session

This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open an interactive session of 1 hour (complete documentation here):

$ salloc -A tra23_Yambo -p m100_sys_test -q qos_test --reservation=s_tra_yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 -t 01:00:00
salloc: Granted job allocation 10164647
salloc: Waiting for resource configuration
salloc: Nodes r256n01 are ready for job

We ask for 4 cpus-per-task so that we can exploit OpenMP parallelization with the available resources.

With squeue you can see that there is now a job running:

$ squeue -u username
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
          10164647 m100_usr_ interact username  R       0:02      1 r256n01

To run the tutorial, ssh into the node specified by the job allocation and cd to your scratch directory:

$ ssh r256n01
$ cd $CINECA_SCRATCH

Then, you need to manually load Yambo, as you would in a batch script. Please note that the serial version of the code is in a different directory and does not need spectrum_mpi:

$ module purge
$ module load hpc-sdk/2022--binary spectrum_mpi/10.4.0--binary
$ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-cpu/bin:$PATH

Finally, set the OMP_NUM_THREADS environment variable to 4 (as in the --cpus-per-task option):

$ export OMP_NUM_THREADS=4

To close the interactive session when you have finished, log out of the compute node with the exit command, and then cancel the job:

$ exit
$ scancel <JOBID>


- Plot results with gnuplot

During the tutorials you will often need to plot the results of the calculations. To do so on M100, open a new terminal window and connect again, enabling X11 forwarding with the -X option:

$ ssh -X m100

Please note that gnuplot can be used in this way only from the login nodes:

username@login01$ cd <directory_with_data>
username@login01$ gnuplot
<...>
Terminal type is now '<...>'
gnuplot> plot <...>

Tutorials

DAY 1 - Monday, 22 May

16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo

To get the tutorial files needed for the following tutorials, follow these steps:

$ ssh m100
$ cd $CINECA_SCRATCH
$ mkdir YAMBO_TUTORIALS
$ cd YAMBO_TUTORIALS
$ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz
$ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz
$ ls
hBN-2D.tar.gz  hBN.tar.gz
$ tar -xvf hBN-2D.tar.gz
$ tar -xvf hBN.tar.gz
$ ls
hBN  hBN-2D  hBN-2D.tar.gz  hBN.tar.gz

Now that you have all the files, you may open the interactive job session with salloc as explained above and proceed with the tutorials.

DAY 2 - Tuesday, 23 May

14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)

DAY 3 - Wednesday, 24 May

17:00 - 18:30 Real-time Bethe-Salpeter equation Fulvio Paleari, Davide Sangalli (CNR-ISM, Italy)

DAY 4 - Thursday, 25 May

14:00 - 16:30 Real-time approach with the time-dependent Berry phase Myrta Gruning, Davide Sangalli (CNR-ISM, Italy)

DAY 5 - Friday, 26 May

Lectures

DAY 1 - Monday, 22 May