Rome 2023
Revision as of 13:58, 17 May 2023

A general description of the goal(s) of the school can be found on the Yambo main website

Use CINECA computational resources

Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 here. To access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.

Connect to the cluster using ssh

You can access M100 via ssh protocol in different ways.

- Connect using username and password

Use the following command replacing your username:

$ ssh username@login.m100.cineca.it

However, in this way you have to type your password each time you connect.

- Connect using ssh key

You can set up an SSH key pair to avoid typing the password each time you connect to M100. To do so, go to your .ssh directory (usually located in your home directory):

$ cd $HOME/.ssh

If you don't have this directory, you can create it with mkdir $HOME/.ssh.

Once you are in the .ssh directory, run the ssh-keygen command to generate a private/public key pair:

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key: m100_id_rsa
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in <your_.ssh_dir>/m100_id_rsa
Your public key has been saved in <your_.ssh_dir>/m100_id_rsa.pub
The key fingerprint is:
<...>
The key's randomart image is:
<...>

Now you need to copy the public key to M100. You can do that with the following command (for this step you need to type your password):

$ ssh-copy-id -i <your_.ssh_dir>/m100_id_rsa.pub <username>@login.m100.cineca.it

Once the public key has been copied, you can connect to M100 without typing the password, using the -i option to specify the private key:

$ ssh -i <your_.ssh_dir>/m100_id_rsa username@login.m100.cineca.it

To simplify things further, you can paste the following lines into a file named config inside the .ssh directory, adjusting the username and path:

Host m100 
 HostName login.m100.cineca.it
 User username
 IdentityFile <your_.ssh_dir>/m100_id_rsa

With the config file setup you can connect simply with

$ ssh m100
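The Host alias defined in the config file is also honored by the other OpenSSH tools, so file transfer to and from the cluster uses the same shorthand. A minimal sketch (the file names and remote paths below are illustrative, not part of the tutorials):

```shell
# Copy a local archive to your scratch area on the cluster (remote path is illustrative)
scp inputs.tar.gz m100:/m100_scratch/userexternal/username/

# Fetch a result file back to the current local directory
scp m100:/m100_scratch/userexternal/username/job.out .
```

rsync accepts the same alias and is preferable for repeated transfers, since it only copies what changed.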

General instructions to run tutorials

Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:

  • $HOME: the home directory associated with your username;
  • $WORK: the work directory associated with the account where the computational resources dedicated to this school are allocated;
  • $CINECA_SCRATCH: the scratch directory associated with your username.

You can find more details about storage and file systems here.

Please don't forget to run all tutorials in your scratch directory:

$ echo $CINECA_SCRATCH
/m100_scratch/userexternal/username
$ cd $CINECA_SCRATCH

Computational resources on M100 are managed by the Slurm job scheduler. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are better understood if run interactively. The two procedures we will use to submit non-interactive and interactive jobs are explained below.

- Run a job using a batch script

This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you submit the job using a batch script, whose generic structure is the following:

#!/bin/bash
#SBATCH --account=tra23_Yambo           # Charge resources used by this job to the specified account
#SBATCH --time=0:10:00                  # Limit on the total run time of the job allocation (hh:mm:ss)
#SBATCH --job-name=JOB                  # Name for the job allocation
#SBATCH --nodes=1                       # Number of nodes allocated for the job
#SBATCH --ntasks-per-node=4             # Number of MPI tasks invoked per node
#SBATCH --ntasks-per-socket=2           # Number of MPI tasks invoked on each socket
#SBATCH --cpus-per-task=32              # Number of CPUs (OpenMP threads) per MPI task
#SBATCH --gpus-per-node=4               # Number of GPUs per node
#SBATCH --partition=m100_usr_prod       # Partition (queue) to submit the job to
#SBATCH --qos=m100_qos_dbg              # Quality of service (here, the debug QOS)
#SBATCH --error=job.err                 # File for the standard error stream
#SBATCH --output=job.out                # File for the standard output stream

Please note that these instructions are chosen considering the specific architecture of M100.
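After the #SBATCH header, the script body contains the actual commands to run (module loads and the parallel launch line, which depend on the specific tutorial). The script is then submitted and monitored with the standard Slurm client commands; a sketch, assuming the script is saved as job.sh (the file name is an assumption, not fixed by the tutorials):

```shell
# Submit the batch script to the scheduler; Slurm prints the assigned job ID
sbatch job.sh

# Check the state of your queued and running jobs
squeue -u $USER

# Cancel a job if needed, using the job ID printed by sbatch
scancel <jobid>
```

The output and error files (job.out, job.err) appear in the submission directory while the job runs.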

- Open an interactive session
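This procedure is convenient for the serial tutorials, which are easier to follow step by step. A minimal sketch of how an interactive session is typically opened with Slurm: srun with the --pty flag attaches a terminal on a compute node. All resource values below mirror the batch script above and are assumptions, not the official school settings:

```shell
# Request an interactive shell on one node for 10 minutes (values are illustrative)
srun --account=tra23_Yambo --time=0:10:00 --nodes=1 --ntasks-per-node=4 \
     --gpus-per-node=4 --partition=m100_usr_prod --qos=m100_qos_dbg \
     --pty /bin/bash

# When the prompt returns on a compute node, run the tutorial commands directly;
# type 'exit' to end the session and release the allocation
```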

Tutorials

DAY 1 - Monday, 22 May

16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo

DAY 2 - Tuesday, 23 May

14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)

DAY 3 - Wednesday, 24 May

17:00 - 18:30 Real-time Bethe-Salpeter equation Fulvio Paleari, Davide Sangalli (CNR-ISM, Italy)

DAY 4 - Thursday, 25 May

14:00 - 16:30 Real-time approach with the time-dependent Berry phase Myrta Gruning, Davide Sangalli (CNR-ISM, Italy)

DAY 5 - Friday, 26 May

Lectures

DAY 1 - Monday, 22 May