Colombia 2024
General info
Workshop on High Performance Computing for Materials Characterization, Design and Discovery 12-26 October 2024, Barranquilla - Colombia
General instructions to run tutorials
Computational resources on the Cineca HPC system are managed by the Slurm job scheduler. Most of the Yambo tutorials during this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials can also be run interactively. Both procedures are explained below.
Tutorial databases
Before proceeding, it is useful to know the different workspaces available on the machine, which can be accessed using environment variables. The main ones are:
$HOME: the home directory associated to your username;
$WORK: the work directory associated to the account where the computational resources dedicated to this school are allocated.
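If you want to check where each of these workspaces points for your own account, you can simply print the variables (a quick optional check; $SCRATCH, your personal scratch area, is used in the next step):
$ echo $HOME
$ echo $WORK
$ echo $SCRATCH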
Please don't forget to run all tutorials in your scratch directory:
$ echo $SCRATCH
/leonardo_scratch/large/usertrain/a08tra01
$ cd $SCRATCH
$ cp -R $WORK/YAMBO_TUTORIALS/ .
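To verify that the copy completed, you can list the contents of the newly copied directory (an optional check, not part of the original instructions):
$ ls $SCRATCH/YAMBO_TUTORIALS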
Run a job using a batch script
One option is to submit the job using a batch script, job.sh, whose generic structure is the following:
$ more job.sh
#!/bin/bash
#SBATCH --account=tra24_ictpcolo     # Charge resources used by this job to the specified account
#SBATCH --time=00:10:00              # Set a limit on the total run time of the job allocation in hh:mm:ss
#SBATCH --job-name=<JOB>             # Specify a name for the job allocation
#SBATCH --partition=dcgp_usr_prod    # Request a specific partition for the resource allocation
#SBATCH --nodes=1                    # Number of nodes to be allocated for the job
#SBATCH --ntasks-per-node=4          # Number of MPI tasks invoked per node
#SBATCH --cpus-per-task=4            # Number of OMP threads per task
#SBATCH --gres=tmpfs:10g             # Request 10g of local tmpfs storage on the node

export OMP_NUM_THREADS=4

module purge
module load profile/chem-phys
module load yambo/5.2.2--intel-oneapi-mpi--2021.10.0--oneapi--2023.2.0

mpirun --rank-by core -np ${SLURM_NTASKS} yambo -F <input> -J <output>
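As a purely illustrative example, the placeholders could be filled in as follows; the job name, input file, and output label below are hypothetical and should be replaced with those of the tutorial you are actually running:
#SBATCH --job-name=yambo_run                                            # hypothetical job name
...
mpirun --rank-by core -np ${SLURM_NTASKS} yambo -F yambo.in -J run_01   # hypothetical input file (-F) and job label (-J)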
To submit the job, use the sbatch command:
$ sbatch job.sh
Submitted batch job <JOBID>
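If you want to store the job ID for later use (for example to check or cancel the job), sbatch provides the --parsable option, which prints only the job ID; this is an optional convenience, not part of the original instructions:
$ JOBID=$(sbatch --parsable job.sh)
$ echo $JOBID
<JOBID>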
To check the job status, use the squeue command:
$ squeue -u <username>
  JOBID PARTITION   NAME     USER ST  TIME NODES NODELIST(REASON)
  <...>       ...  <JOB> username  R  0:01   <N> <...>
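If you need more detail on a running or pending job (for example which node it landed on, or why it is still waiting), the standard Slurm command scontrol can be used; this is an optional check, not part of the original instructions:
$ scontrol show job <JOBID>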
If you need to cancel your job, do:
$ scancel <JOBID>
Open an interactive session
This procedure is suggested for most of the tutorials, since most of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open an interactive session of 1 hour and 30 minutes (complete documentation here):
$ srun --nodes=1 --ntasks-per-node=4 --cpus-per-task=4 --account=tra24_ictpcolo --partition=dcgp_usr_prod --time=1:30:00 --gres=tmpfs:10g --pty /bin/bash
srun: job 8338413 queued and waiting for resources
srun: job 8338413 has been allocated resources
[username@lrdn4735 ~]$
We ask for 4 cpus-per-task because we can exploit OpenMP parallelization with the available resources.
Once the session starts on the compute node, move to your scratch directory:
[username@lrdn4735 ~]$ cd $SCRATCH
Then, you need to manually load yambo as in the batch script above. Please note that the serial version of the code is in a different directory and does not need spectrum_mpi:
$ module purge
$ module load profile/chem-phys
$ module load yambo/5.2.2--intel-oneapi-mpi--2021.10.0--oneapi--2023.2.0
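To verify that the module environment is set up correctly, you can list the loaded modules and check that the yambo executable is found in your path (an optional check, not part of the original instructions):
$ module list
$ which yambo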
Finally, set the OMP_NUM_THREADS environment variable to 4 (as in the --cpus-per-task option):
$ export OMP_NUM_THREADS=4
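Alternatively, to avoid hard-coding the number of threads, it can be derived from the Slurm allocation itself. This is an optional sketch relying on the SLURM_CPUS_PER_TASK variable, which srun exports when --cpus-per-task is specified:
$ export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}   # falls back to 1 if the variable is not set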
To close the interactive session when you have finished, log out of the compute node with the exit command, and then cancel the job:
$ exit
$ scancel <JOBID>
DAY 1 - Monday, October 18th
9:00 - 9:45 Density-Functional Theory: Basic concepts and approximations
14:00 - 14:45 Many-Body Perturbation theory: Basic concepts and approximations
14:45 - 15:00 Yambo Technical Introduction and Philosophy
15:00 - 17:30 First steps: a walk through from DFT to optical properties
DAY 2 - Tuesday, October 19th
16:00 - 17:30 GW in practice: how to obtain the quasi-particle band structure of a bulk material