Colombia 2024
General instructions to run tutorials
Computational resources are managed by the job scheduling system Slurm. Most of the Yambo tutorials during this school can be run in serial, except for some that need to be executed on multiple processors. Generally, Slurm batch jobs are submitted using a script, but the tutorials here are better understood if run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.
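Although not part of the instructions above, a few standard Slurm commands can help you get oriented on the cluster (the partition name is simply the one used during this school):
$ sinfo -p dcgp_usr_prod     # state of the nodes in the partition used for the school
$ squeue --me                # list only your own jobs
$ sacct -j <JOBID>           # accounting and exit information for a job that has already finished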
Run a job using a batch script
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script job.sh, whose generic structure is the following:
$ more job.sh
#!/bin/bash
#SBATCH --account=tra24_ictpcolo      # Charge resources used by this job to specified account
#SBATCH --time=00:10:00               # Set a limit on the total run time of the job allocation in hh:mm:ss
#SBATCH --job-name=JOB                # Specify a name for the job allocation
#SBATCH --partition=dcgp_usr_prod     # Request a specific partition for the resource allocation
#
#SBATCH --nodes=<N>                   # Number of nodes to be allocated for the job
#SBATCH --ntasks-per-node=<nt>        # Number of MPI tasks invoked per node
#SBATCH --ntasks-per-socket=<nt/2>    # Tasks invoked on each socket
#SBATCH --cpus-per-task=<nc>          # Number of OMP threads per task

module purge
module load profile/chem-phys
module load yambo/5.2.2--intel-oneapi-mpi--2021.10.0--oneapi--2023.2.0

export OMP_NUM_THREADS=<nc>

mpirun --rank-by core -np ${SLURM_NTASKS} yambo -F <input> -J <output>
The complete list of Slurm options can be found in the Slurm documentation. However, you will find ready-to-use batch scripts in the locations specified during the tutorials.
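For illustration only (the resource values below are an arbitrary example, not the settings required by any particular tutorial), the placeholders might be filled in as follows for 1 node running 8 MPI tasks with 4 OpenMP threads each:
#!/bin/bash
#SBATCH --account=tra24_ictpcolo
#SBATCH --time=00:10:00
#SBATCH --job-name=JOB
#SBATCH --partition=dcgp_usr_prod
#
#SBATCH --nodes=1                   # example: a single node
#SBATCH --ntasks-per-node=8         # example: 8 MPI tasks on that node
#SBATCH --ntasks-per-socket=4       # half of the tasks on each of the two sockets
#SBATCH --cpus-per-task=4           # example: 4 OpenMP threads per MPI task

module purge
module load profile/chem-phys
module load yambo/5.2.2--intel-oneapi-mpi--2021.10.0--oneapi--2023.2.0

export OMP_NUM_THREADS=4            # must match --cpus-per-task

# yambo.in and example_run are placeholder names, not files provided by the school
mpirun --rank-by core -np ${SLURM_NTASKS} yambo -F yambo.in -J example_run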
To submit the job, use the sbatch command:
$ sbatch job.sh
Submitted batch job <JOBID>
To check the job status, use the squeue command:
$ squeue -u <username>
  JOBID  PARTITION  NAME  USER      ST  TIME  NODES  NODELIST(REASON)
  <...>  ...        JOB   username  R   0:01  <N>    <...>
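If you need more detail on a single job than the default squeue output provides (an extra tip, not part of the original instructions), you can query it explicitly:
$ squeue -j <JOBID>            # status of one specific job
$ scontrol show job <JOBID>    # full description of the allocation (nodes, time limit, working directory, ...)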
If you need to cancel your job, do:
$ scancel <JOBID>
Open an interactive session
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open an interactive session of 1 hour and 30 minutes (complete documentation: https://slurm.schedmd.com/srun.html):
$ srun --nodes=1 --ntasks-per-node=4 --cpus-per-task=8 --account=tra24_ictpcolo --partition=dcgp_usr_prod --time=1:30:00 --gres=tmpfs:10g --pty /bin/bash
srun: job 8338413 queued and waiting for resources
srun: job 8338413 has been allocated resources
[username@lrdn4735 ~]$
We ask for 8 cpus-per-task because we can exploit OpenMP parallelization with the available resources.
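Optionally (this quick check is not part of the original instructions), you can verify from inside the session what Slurm has actually granted:
$ hostname                       # name of the compute node you are logged into
$ echo $SLURM_JOB_ID             # ID of the interactive job
$ echo $SLURM_CPUS_PER_TASK      # CPUs per task granted (should match the --cpus-per-task value above)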
Then move to your scratch area:
username@lrdn4735$ cd $SCRATCH
Then, you need to manually load yambo as in the batch script above. Please note that the serial version of the code is in a different directory and does not need the MPI module:
$ module purge
$ module load profile/chem-phys
$ module load yambo/5.2.2--intel-oneapi-mpi--2021.10.0--oneapi--2023.2.0
Finally, set the OMP_NUM_THREADS environment variable to 8 (as in the --cpus-per-task option):
$ export OMP_NUM_THREADS=8
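At this point you can run yambo directly from the command line. As a minimal sketch (yambo.in, run_serial and run_mpi are placeholder names, not files provided by the tutorials, and whether mpirun or srun is preferred inside the session depends on the cluster setup):
$ yambo -F yambo.in -J run_serial              # serial run, using only OpenMP threads
$ mpirun -np 4 yambo -F yambo.in -J run_mpi    # small MPI run over the 4 tasks requested above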
To close the interactive session when you have finished, log out of the compute node with the exit command and then, if the job still appears in squeue, cancel it with scancel as shown above:
$ exit
Tutorials
You can download the files needed for the tutorial. Afterwards, you can open the interactive session and log into the node:
salloc -A tra23_Yambo -p m100_sys_test -q qos_test --reservation=s_tra_yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 -t 04:00:00
ssh PUT HERE THE ASSIGNED NODE NAME AFTER salloc COMMAND
module purge
module load hpc-sdk/2022--binary spectrum_mpi/10.4.0--binary
export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-cpu/bin:$PATH
cd $CINECA_SCRATCH
cd YAMBO_TUTORIALS
DAY 1 - Monday, October 18th
16:15 - 18:30 From the DFT ground state to the complete setup of a Many-Body calculation using Yambo
To get the tutorial files needed for the following tutorials, follow these steps:
$ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz
$ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz
$ ls
hBN-2D.tar.gz  hBN.tar.gz
$ tar -xvf hBN-2D.tar.gz
$ tar -xvf hBN.tar.gz
$ ls
hBN-2D  hBN  hBN-2D.tar.gz  hBN.tar.gz
Now that you have all the files, you may open the interactive job session with salloc as explained above and proceed with the tutorials.
DAY 2 - Tuesday, October 19th
14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)
To get all the tutorial files needed for the following tutorials, follow these steps:
wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz
wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial.tar.gz
tar -xvf hBN.tar.gz
tar -xvf MoS2_2Dquasiparticle_tutorial.tar.gz
cd hBN
Now you can start the first tutorial:
If you have gone through the first tutorial, move on now to the second one:
cd $CINECA_SCRATCH
cd YAMBO_TUTORIALS
cd MoS2_HPC_tutorial
Lectures
DAY 1 - Monday, 22 May
- D. Varsano, Description and goal of the school.
- G. Stefanucci, The Many-Body Problem: Key concepts of the Many-Body Perturbation Theory
- M. Marsili, Beyond the independent particle scheme: The linear response theory
DAY 2 - Tuesday, 23 May
- E. Perfetto, An overview on non-equilibrium Green Functions
- R. Frisenda, ARPES spectroscopy, an experimental overview
- A. Marini, The Quasi Particle concept and the GW method
- A. Guandalini, The GW method: approximations and algorithms
- D.A. Leon, C. Cardoso, Frequency dependence in GW: origin, modelling and practical implementations
DAY 3 - Wednesday, 24 May
- A. Molina-Sánchez, Modelling excitons: from 2D materials to Pump and Probe experiments
- M. Palummo, The Bethe-Salpeter equation: derivations and main physical concepts
- F. Paleari, Real time approach to the Bethe-Salpeter equation
- D. Sangalli, TD-HSEX and real-time dynamics
DAY 4 - Thursday, 25 May
- S. Mor, Time resolved spectroscopy: an experimental overview
- M. Grüning, Nonlinear optics within Many-Body Perturbation Theory
- N. Tancogne-Dejean, Theory and simulation of High Harmonics Generation
- Y. Pavlyukh, Coherent electron-phonon dynamics within a time-linear GKBA scheme