Colombia 2024

=== General info ===
[[File:2024-10-16 08-37.png|thumb|smr 3966]]
[https://indico.ictp.it/event/10504/overview Workshop on High Performance Computing for Materials Characterization, Design and Discovery]
12-26 October 2024, Barranquilla - Colombia
=== General instructions to run tutorials ===


Computational resources are managed on the CINECA HPC system by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials during this school can be run in serial, but some need to be executed on multiple processors. Generally, Slurm batch jobs are submitted using a script, but the tutorials can also be run interactively. Both procedures are explained below.
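If you are not familiar with Slurm, the snippet below (standard Slurm commands, not specific to this school) shows how to inspect the available partitions and your own jobs in the queue:
 $ sinfo -s          # summary view of the partitions and their state
 $ squeue -u $USER   # list your pending and running jobs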
 
=== Tutorials databases ===
 
Before proceeding, it is useful to know the different workspaces available on the machine, which can be accessed through environment variables. The main ones are:
* <code>$HOME</code>: the <code>home</code> directory associated with your username;
* <code>$WORK</code>: the <code>work</code> directory associated with the account where the computational resources dedicated to this school are allocated;
* <code>$SCRATCH</code>: the <code>scratch</code> directory, where all the tutorials should be run.
 
Please don't forget to '''run all tutorials in your scratch directory''':
 $ echo $SCRATCH
 /leonardo_scratch/large/usertrain/a08tra01
 $ cd $SCRATCH
 $ cp -R $WORK/YAMBO_TUTORIALS/ .
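As a quick sanity check (the exact folder names depend on the tutorial package prepared for the school), you can verify that the copy is in place:
 $ ls $SCRATCH/YAMBO_TUTORIALS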


=== Run a job using a batch script ===
One option is to submit the job using a batch script <code>job.sh</code>, whose generic structure is the following:
 $ more job.sh
 #!/bin/bash
 #SBATCH --account=tra24_ictpcolo      # Charge resources used by this job to specified account
 #SBATCH --time=00:10:00               # Set a limit on the total run time of the job allocation in hh:mm:ss
 #SBATCH --job-name=<JOB>              # Specify a name for the job allocation
 #SBATCH --partition=dcgp_usr_prod     # Request a specific partition for the resource allocation
 #SBATCH --nodes=1                     # Number of nodes to be allocated for the job
 #SBATCH --ntasks-per-node=4           # Number of MPI tasks invoked per node
 #SBATCH --cpus-per-task=4             # Number of OpenMP threads per task
 #SBATCH --gres=tmpfs:10g              # Request 10 GB of local temporary storage
 
 module purge
 module load profile/chem-phys
 module load yambo/5.2.2--intel-oneapi-mpi--2021.10.0--oneapi--2023.2.0
 
 export OMP_NUM_THREADS=4
 mpirun --rank-by core -np ${SLURM_NTASKS} yambo -F <input> -J <output>
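As a small variant (our suggestion, not part of the script distributed at the school), the number of OpenMP threads can be read from the Slurm allocation instead of being hard-coded, so that it always matches the <code>--cpus-per-task</code> request:
 # SLURM_CPUS_PER_TASK is set by Slurm from the --cpus-per-task option
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}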


To submit the job, use the <code>sbatch</code> command:
 $ sbatch job.sh
 Submitted batch job <JOBID>

To check the job status, use the <code>squeue</code> command:
 $ squeue -u <username>
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
              <...>  ...      JOB username  R       0:01    <N> <...>

If you need to cancel your job, do:
 $ scancel <JOBID>
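While the job is running, its standard output goes by default to a file named slurm-<JOBID>.out in the submission directory (assuming no --output option is set in the script), which you can follow with:
 $ tail -f slurm-<JOBID>.out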
=== Open an interactive session ===


This procedure is suggested for most of the tutorials, since most of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open an interactive session of 1 hour and 30 minutes (complete documentation [https://slurm.schedmd.com/srun.html here]):
 $ srun --nodes=1 --ntasks-per-node=4 --cpus-per-task=4 --account=tra24_ictpcolo --partition=dcgp_usr_prod --time=1:30:00 --gres=tmpfs:10g --pty /bin/bash
 srun: job 8338413 queued and waiting for resources
 srun: job 8338413 has been allocated resources
 [username@lrdn4735 ~]$

We ask for 4 <code>cpus-per-task</code> because we can exploit OpenMP parallelization with the available resources. Once the session starts, move to your scratch directory:
 [username@lrdn4735 ~]$ cd $SCRATCH

Then, you need to manually load yambo as in the batch script above:
 $ module purge
 $ module load profile/chem-phys
 $ module load yambo/5.2.2--intel-oneapi-mpi--2021.10.0--oneapi--2023.2.0

Finally, set the <code>OMP_NUM_THREADS</code> environment variable to 4 (as in the <code>--cpus-per-task</code> option):
 $ export OMP_NUM_THREADS=4
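At this point yambo can be launched directly from the interactive shell, in the same way as in the batch script above (the input and job names are placeholders):
 $ mpirun --rank-by core -np ${SLURM_NTASKS} yambo -F <input> -J <output>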

To close the interactive session when you have finished, log out of the compute node with the <code>exit</code> command, and then cancel the job:
 $ exit

== DAY 1 - Monday, October 18<sup>th</sup> ==


'''9:00 - 9:45 Density-Functional Theory: Basic concepts and approximations'''


'''14:00 - 14:45 Many-Body Perturbation theory: Basic concepts and approximations'''


'''14:45 - 15:00 Yambo [http://media.yambo-code.eu/educational/lectures/Yambo_Technical_Introduction.pdf Technical Introduction] and [http://media.yambo-code.eu/educational/lectures/Yambo_Philosophy.pdf Philosophy]'''


'''15:00 - 17:30 [[First steps: a walk through from DFT to optical properties]]'''
 
To get the tutorial files needed for the following tutorials, follow these steps:
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz
 $ ls
 hBN-2D.tar.gz  hBN.tar.gz
 $ tar -xvf hBN-2D.tar.gz
 $ tar -xvf hBN.tar.gz
 $ ls
 hBN-2D  hBN  hBN-2D.tar.gz  hBN.tar.gz
 
Now that you have all the files, you can proceed with the tutorial [[First steps: a walk through from DFT to optical properties]].


== DAY 2 - Tuesday, October 19<sup>th</sup> ==


'''16:00 - 17:30 [[GW on h-BN (standalone)| GW in practice: how to obtain the quasi-particle band structure of a bulk material]]'''
 
