Colombia 2024
=== General info ===
[[File:2024-10-16 08-37.png|thumb|smr 3966]]
[https://indico.ictp.it/event/10504/overview Workshop on High Performance Computing for Materials Characterization, Design and Discovery]
12-26 October 2024, Barranquilla - Colombia
=== General instructions to run tutorials ===

Computational resources are managed on the Cineca HPC cluster by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials during this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials can also be run interactively. The two procedures are explained below.
 
=== Tutorials databases ===
 
Before proceeding, it is useful to know the different workspaces available on the machine, which can be accessed using environment variables. The main ones are:
* <code>$HOME</code>: the <code>home</code> directory associated with your username;
* <code>$WORK</code>: the <code>work</code> directory associated with the account where the computational resources dedicated to this school are allocated;
* <code>$SCRATCH</code>: the <code>scratch</code> directory associated with your username, where all the tutorials should be run.
 
Please don't forget to '''run all tutorials in your scratch directory''':
 $ echo $SCRATCH
 /leonardo_scratch/large/usertrain/a08tra01
 $ cd $SCRATCH
 $ cp -R $WORK/YAMBO_TUTORIALS/ .
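As a quick sanity check you can list the copied folder before starting (only a sketch: the exact contents of <code>YAMBO_TUTORIALS</code> depend on the files distributed during the school):
 $ ls $SCRATCH/YAMBO_TUTORIALS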
 
=== Run a job using a batch script ===
 
One option is to submit the job using a batch script <code>job.sh</code>, whose generic structure is the following:
 $ more job.sh
 #!/bin/bash
 #SBATCH --account=tra24_ictpcolo      # Charge resources used by this job to specified account
 #SBATCH --time=00:10:00               # Set a limit on the total run time of the job allocation in hh:mm:ss
 #SBATCH --job-name=<JOB>              # Specify a name for the job allocation
 #SBATCH --partition=dcgp_usr_prod     # Request a specific partition for the resource allocation
 #SBATCH --nodes=1                     # Number of nodes to be allocated for the job
 #SBATCH --ntasks-per-node=4           # Number of MPI tasks invoked per node
 #SBATCH --cpus-per-task=4             # Number of OMP threads per task
 #SBATCH --gres=tmpfs:10g              # Request 10 GB of local temporary storage
 
 export OMP_NUM_THREADS=4              # Match the --cpus-per-task setting
 
 module purge
 module load profile/chem-phys
 module load yambo/5.2.2--intel-oneapi-mpi--2021.10.0--oneapi--2023.2.0
 
 mpirun --rank-by core -np ${SLURM_NTASKS} yambo -F <input> -J <output>
 
This procedure is suggested for the tutorials and examples that need to be run in parallel. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. In any case, you will find '''ready-to-use''' batch scripts in the locations specified during the tutorials.
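With the settings above, one job uses 4 MPI tasks x 4 OpenMP threads, i.e. 16 cores on a single node. If you want to double-check what Slurm actually granted, a quick optional sketch is to inspect the job after submission, using the job ID printed by <code>sbatch</code>:
 $ scontrol show job <JOBID> | grep -E "NumNodes|NumCPUs|NumTasks"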


To submit the job, use the <code>sbatch</code> command:
 $ sbatch job.sh
 Submitted batch job <JOBID>

To check the job status, use the <code>squeue</code> command:
 $ squeue -u <username>
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
              <...>  ...      JOB username  R       0:01    <N> <...>


If you need to cancel your job, do:
 $ scancel <JOBID>
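To check that the job has actually left the queue, you can simply run <code>squeue</code> again; when no jobs are left, only the header line is printed (sketch below, your username being a placeholder):
 $ squeue -u <username>
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)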


=== Open an interactive session ===


This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial (as far as MPI parallelization is concerned) from the command line. Use the command below to open an interactive session of 1 hour and 30 minutes (complete documentation [https://slurm.schedmd.com/srun.html here]):
 $ srun --nodes=1 --ntasks-per-node=4 --cpus-per-task=4 --account=tra24_ictpcolo --partition=dcgp_usr_prod --time=1:30:00 --gres=tmpfs:10g --pty /bin/bash
 srun: job 8338413 queued and waiting for resources
 srun: job 8338413 has been allocated resources
 [username@lrdn4735 ~]$

We ask for 4 cpus-per-task because we can exploit OpenMP parallelization with the available resources. Once the session starts, move to your scratch directory:
 [username@lrdn4735 ~]$ cd $SCRATCH

Then you need to manually load yambo, as in the batch script above:
 $ module purge
 $ module load profile/chem-phys
 $ module load yambo/5.2.2--intel-oneapi-mpi--2021.10.0--oneapi--2023.2.0

Finally, set the <code>OMP_NUM_THREADS</code> environment variable to 4 (as in the <code>--cpus-per-task</code> option):
 $ export OMP_NUM_THREADS=4
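Once the module is loaded and the threads are set, a tutorial can be run directly from the interactive prompt. The lines below are only a sketch: the tutorial folder and the input/output names are placeholders, and plain <code>yambo</code> is used because most tutorials here are serial with respect to MPI parallelization:
 $ cd YAMBO_TUTORIALS/<tutorial_folder>
 $ yambo -F <input> -J <output>
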
To close the interactive session when you have finished, log out of the compute node with the <code>exit</code> command (if the job still appears in the queue, cancel it with <code>scancel</code> as above):
 $ exit


== DAY 1 - Monday, October 18<sup>th</sup> ==

'''9:00 - 9:45 [https://wiki.yambo-code.eu/wiki/images/6/60/Density-Functional_Theory_Basic_concepts_and_approximations.pdf Density-Functional Theory: Basic concepts and approximations]'''

'''14:00 - 14:45 [https://wiki.yambo-code.eu/wiki/images/c/cb/Many-Body_Perturbation_theory_Basic_concepts_and_approximations.pdf Many-Body Perturbation theory: Basic concepts and approximations]'''

'''14:45 - 15:00 Yambo [http://media.yambo-code.eu/educational/lectures/Yambo_Technical_Introduction.pdf Technical Introduction] and [http://media.yambo-code.eu/educational/lectures/Yambo_Philosophy.pdf Philosophy]'''

'''15:00 - 17:30 [[First steps: a walk through from DFT to optical properties]]'''

== DAY 2 - Tuesday, October 19<sup>th</sup> ==

'''16:00 - 17:30 [[GW on h-BN (standalone)|GW in practice: how to obtain the quasi-particle band structure of a bulk material]]'''