<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.yambo-code.eu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Matteo.dalessio</id>
	<title>The Yambo Project - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.yambo-code.eu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Matteo.dalessio"/>
	<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Special:Contributions/Matteo.dalessio"/>
	<updated>2026-05-17T16:08:51Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.8</generator>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8830</id>
		<title>Modena 2025</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8830"/>
		<updated>2025-05-26T11:19:34Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* Lectures */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2025/01/17/yambo-school-modena-2025/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the Leonardo-DCGP partition. You can find info about Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access Leonardo via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in several ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 ssh username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you connect to Leonardo. To do so, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 ssh-keygen -t rsa -b 4096 -f ~/.ssh/leonardo_rsa&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Created directory &#039;/home/username/.ssh&#039;.&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in /home/username/.ssh/leonardo_rsa&lt;br /&gt;
 Your public key has been saved in /home/username/.ssh/leonardo_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 [...]&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to Leonardo. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 ssh-copy-id -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Be aware that when running the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command, after typing &amp;quot;yes&amp;quot; at the prompt, you might see an error message like the one shown below. Don’t worry—just follow the instructions provided in this CINECA [https://wiki.u-gov.it/confluence/display/SCAIUS/FAQ#FAQ-Ikeepreceivingtheerrormessage%22WARNING:REMOTEHOSTIDENTIFICATIONHASCHANGED!%22evenifImodifyknown_hostfile guide to resolve the issue]. Once done, run the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command again.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
 /usr/bin/ssh-copy-id: &lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @&lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to Leonardo without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things even further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username:&lt;br /&gt;
 Host leonardo &lt;br /&gt;
  HostName login.leonardo.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile ~/.ssh/leonardo_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
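Note that &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; refuses to read a &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file that is writable by other users, so it is good practice to restrict its permissions (this is a general OpenSSH requirement, not specific to Leonardo):&lt;br /&gt;

```shell
# Create the ssh config file (if missing) and restrict its permissions;
# ssh rejects a config file that is writable by group or others
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/config
chmod 600 ~/.ssh/config
```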
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on Leonardo, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account to which the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/4%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
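As a quick check after logging in, you can print these workspace variables (a minimal sketch; the actual values depend on your username and account, and unset variables are flagged):&lt;br /&gt;

```shell
# Print the main workspace variables; "${!v}" is bash indirect expansion,
# so each name in the list is looked up as an environment variable
for v in HOME WORK SCRATCH; do
    printf '%s=%s\n' "$v" "${!v:-(unset)}"
done
```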
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 echo $SCRATCH&lt;br /&gt;
 /leonardo_scratch/large/userexternal/username&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on Leonardo are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Generally, Slurm batch jobs are submitted using a script, but the tutorials here are better understood if run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra25_yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=dcgp_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --gres=tmpfs:10g                # List of generic consumable resources&lt;br /&gt;
 #SBATCH --qos=normal                    # Quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;n&amp;gt;           # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;n/2&amp;gt;       # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;c&amp;gt;             # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
 &lt;br /&gt;
 mpirun -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that the instructions in the batch script must be compatible with the specific Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section#DCGPSection-SLURMpartitions resources]. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in the locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 sbatch job.sh&lt;br /&gt;
 Submitted batch job 15696508&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 squeue --me&lt;br /&gt;
            JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
         15696508 dcgp_usr_   job.sh username  R       0:01      1 lrdn4135&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open a 4-hour interactive session:&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 srun: job 15694182 queued and waiting for resources&lt;br /&gt;
 srun: job 15694182 has been allocated resources&lt;br /&gt;
&lt;br /&gt;
We ask for 4 CPUs per task (the &amp;lt;code&amp;gt;-c&amp;lt;/code&amp;gt; option) because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above:&lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 using the appropriate Slurm environment variable:&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
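As a sanity check, you can print the value just exported; the fallback value of 4 below is only an assumption for trying the snippet outside a Slurm allocation, where &amp;lt;code&amp;gt;SLURM_CPUS_PER_TASK&amp;lt;/code&amp;gt; is not set:&lt;br /&gt;

```shell
# Print the OpenMP thread count; fall back to 4 when SLURM_CPUS_PER_TASK
# is not defined (i.e. outside a Slurm allocation)
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-4}
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```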
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on Leonardo, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to Leonardo enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -X leonardo&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Set up yambopy &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to run yambopy on Leonardo, you must first activate the python environment:&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 source /leonardo_work/tra25_yambo/env_yambopy/bin/activate&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
Quick recap: before every tutorial, if you are running on Leonardo, go through the following steps&lt;br /&gt;
&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBO_TUTORIALS &#039;&#039;&#039;# (only needed the first time)&#039;&#039;&#039;&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
Since the compute nodes are not connected to the external network, the tarballs must be downloaded before starting the interactive session.&lt;br /&gt;
Alternatively, once the interactive session has started, it is possible to access the tarballs by copying them from the following directories:&lt;br /&gt;
&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBO_TUTORIALS&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
After that, you can start the interactive session:&lt;br /&gt;
&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
set the environment variable for OpenMP&lt;br /&gt;
&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
and load yambo or yambopy as explained above in the general instructions.&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:30 - 16:30 Linear response&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Introduction to Yambopy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
At this point, you may learn about the Python pre- and post-processing capabilities offered by yambopy, our Python interface to yambo and QE. First of all, let&#039;s create a dedicated directory, then download and extract the related files.&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ mkdir -p YAMBOPY_TUTORIALS&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS&lt;br /&gt;
 $ rsync -avzP /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS/yambopy_tutorial_Modena_2025.tar.gz .&lt;br /&gt;
 $ tar --strip-components=1 -xvzf yambopy_tutorial_Modena_2025.tar.gz&lt;br /&gt;
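The &amp;lt;code&amp;gt;--strip-components=1&amp;lt;/code&amp;gt; option drops the leading directory stored in the archive, so the files are extracted directly into the current directory. A self-contained illustration with a throwaway archive (the &amp;lt;code&amp;gt;demo_top&amp;lt;/code&amp;gt; name is made up for the demo, not part of the tutorial tarball):&lt;br /&gt;

```shell
# Demonstrate --strip-components=1 on a throwaway archive:
# the top-level directory "demo_top" is dropped on extraction
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p demo_top/inputs
echo "example" > demo_top/inputs/run.in
tar -czf demo.tar.gz demo_top
mkdir extracted
tar --strip-components=1 -xzf demo.tar.gz -C extracted
ls extracted    # prints "inputs" instead of "demo_top"
```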
&lt;br /&gt;
Then, follow part 1 of the tutorial, which is related to DFT band structures, YAMBO initialization and linear response.&lt;br /&gt;
* [[Modena 2025 : Yambopy part 1]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;11:30 - 12:30 | 14:30 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get all the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial_Modena2025.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ tar -xvf MoS2_2Dquasiparticle_tutorial_Modena2025.tar.gz&lt;br /&gt;
 $ cd hBN&lt;br /&gt;
&lt;br /&gt;
Now you can start the first tutorial:&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations on practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
Once you have gone through the first tutorial, move on to the second one:&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ cd MoS2_2Dquasiparticle_tutorial_Modena2025&lt;br /&gt;
&lt;br /&gt;
* [[Quasi-particles of a 2D system | Quasi-particles of a 2D system ]]&lt;br /&gt;
&lt;br /&gt;
As for yambopy, the tutorial related to GW calculations is contained in the first section of Part 2.&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#GW calculations| Modena 2025 : Yambopy part 2 (GW calculations)]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Bethe-Salpeter equation (BSE)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz # NOTE: YOU SHOULD ALREADY HAVE THIS FROM DAY 1&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-convergence-kpoints.tar.gz &lt;br /&gt;
 $ tar -xvf hBN-convergence-kpoints.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the following tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[Calculating optical spectra including excitonic effects: a step-by-step guide|Perform a BSE calculation from beginning to end ]]&lt;br /&gt;
* [[How to analyse excitons - ICTP 2022 school|Analyse your results (exciton wavefunctions in real and reciprocal space, etc.) ]]&lt;br /&gt;
* [[BSE solvers overview|Solve the BSE eigenvalue problem with different numerical methods]]&lt;br /&gt;
* [[How to choose the input parameters|Choose the input parameters for a meaningful converged calculation]]&lt;br /&gt;
&lt;br /&gt;
Now, go into the yambopy tutorial directory to learn about python analysis tools for the BSE:&lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS/databases_yambopy&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#Excitons| Modena 2025 : Yambopy part 2 (BSE calculations)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 22 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:30 - 16:00 Bethe-Salpeter (part 2)&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:30 - 17:30 Nonlinear response with the time-dependent Berry phase&#039;&#039;&#039; Myrta Grüning (Queen&#039;s University Belfast), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
For the tutorials, we will first use the &amp;lt;code&amp;gt;hBN-2D-RT&amp;lt;/code&amp;gt; folder (k-sampling 10x10x1) and then the &amp;lt;code&amp;gt;hBN-2D&amp;lt;/code&amp;gt; folder (k-sampling 6x6x1).&lt;br /&gt;
You may already have them in the &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt; folder:&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN-2D-RT&#039;&#039;&#039; hBN-2D.tar.gz  hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
If you need to download the tutorial files again, follow these steps (or see the instructions above):&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo (5.3)|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Dielectric function from Bloch-states dynamics (5.3)|Dielectric function from Bloch-states dynamics]]&lt;br /&gt;
* [[Second-harmonic generation of 2D-hBN (5.3)|Second-harmonic generation of 2D-hBN]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May ===&lt;br /&gt;
&lt;br /&gt;
* D. Varsano, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d1_1_scuola_intro.pdf Description and goal of the school].&lt;br /&gt;
* C. Franchini, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d1_2_TALK-FRANCHINI.pdf First principles and data-driven correlated materials]&lt;br /&gt;
* F. Mohamed, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d1_3_DFT.pdf A tour on Density Functional Theory]&lt;br /&gt;
* E. Cannuccia, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d1_4_linear_response_Elena.pdf Electronic screening and linear response theory]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
* A. Marini, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d2_1_MBPT.pdf Introduction to Many-Body Perturbation Theory]&lt;br /&gt;
* C. Cardoso, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d2_2_CCardoso_YamboSchool2025_Modena.pdf Quasiparticles and the GW Approximation]&lt;br /&gt;
* A. Guandalini, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d2_3_a_Alberto_Guandalini_2025.pdf GW in practice: algorithms and approximations]&lt;br /&gt;
* G. Sesti, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d2_3_b_Giacomo_Sesti_YamboSchool2025.pdf  GW advanced algorithms]&lt;br /&gt;
* M. Govoni, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d2_4_20250520_YAMBOSCHOOL_Govoni.pdf GW without empty states and investigation of neutral excitations by embedding full configuration interaction in DFT+GW]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
* M. Palummo, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d3_1_Palummo_Yamboschool2025.pdf Optical absorption and excitons via the Bethe-Salpeter Equation]&lt;br /&gt;
* D. Sangalli, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d3_2_DAY3_RealTime_Propagation_DavideSangalli.pdf Real Time Spectroscopy]&lt;br /&gt;
* F. Paleari, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d3_3_Fulvio_Paleari_YamboSchool2025.pdf Introduction to YamboPy (automation and post-processing)]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
* E. Luppi, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d4_1_ModenaYambo2025.pdf An introduction to Non-linear spectroscopy]&lt;br /&gt;
* M. Grüning, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d4_2_Nonlinear_Yschool.pdf Non-linear spectroscopy in Yambo]&lt;br /&gt;
* F. Affinito, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d4_3_Affinito_Challenges_in_HPC.pdf Frontiers in High-Performance Computing]&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
*N. Spallanzani [https://media.yambo-code.eu/educational/Schools/MODENA2025/d5_1_yambo_parallel.pdf Yambo in HPC environment]&lt;br /&gt;
*A. Marini, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d5_2_e-p.pdf Electron-Phonon interaction]&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8825</id>
		<title>Modena 2025</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8825"/>
		<updated>2025-05-26T10:56:54Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* Lectures */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2025/01/17/yambo-school-modena-2025/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the Leonardo-DCGP partition. You can find info about Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section here].&lt;br /&gt;
In order to access computational resources provided by CINECA you need your personal username and password that were sent you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access Leonardo via &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 ssh username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can setup a ssh key pair to avoid typing the password each time you want to connect to Leonardo. To do so, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 ssh-keygen -t rsa -b 4096 -f ~/.ssh/leonardo_rsa&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Created directory &#039;/home/username/.ssh&#039;.&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in /home/username/.ssh/leonardo_rsa&lt;br /&gt;
 Your public key has been saved in /home/username/.ssh/leonardo_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 [...]&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to Leonardo. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 ssh-copy-id -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Be aware that when running the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command, after typing &amp;quot;yes&amp;quot; at the prompt, you might see an error message like the one shown below. Don’t worry—just follow the instructions provided in this CINECA [https://wiki.u-gov.it/confluence/display/SCAIUS/FAQ#FAQ-Ikeepreceivingtheerrormessage%22WARNING:REMOTEHOSTIDENTIFICATIONHASCHANGED!%22evenifImodifyknown_hostfile guide to resolve the issue]. Once done, run the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command again.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
 /usr/bin/ssh-copy-id: &lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @&lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to Leonardo without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify even more, you can paste the following lines in a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; located inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory adjusting the username:&lt;br /&gt;
 Host leonardo &lt;br /&gt;
  HostName login.leonardo.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile ~/.ssh/leonardo_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file setup you can connect simply with&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on Leonardo, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: it&#039;s the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated to your username; &lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: it&#039;s the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated to the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt;: it&#039;s the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated to your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/4%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 echo $SCRATCH&lt;br /&gt;
 /leonardo_scratch/large/userexternal/username&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on Leonardo are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most part of Yambo tutorials during this school can be run in serial, except some that need to be executed on multiple processors. Generally, Slurm batch jobs are submitted using a script, but the tutorials here are better understood if run interactively. The two procedures that we will use to submit interactive and non interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra25_yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition= dcgp_usr_prod      # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --gres=tmpfs:10g                # List of generic consumable resources&lt;br /&gt;
 #SBATCH --qos=normal                    # Quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;n&amp;gt;           # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;n/2&amp;gt;       # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;c&amp;gt;             # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
 &lt;br /&gt;
 mpirun -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that the instructions in the batch script must be compatible with the available Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section#DCGPSection-SLURMpartitions resources]. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 sbatch job.sh&lt;br /&gt;
 Submitted batch job 15696508&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 squeue --me&lt;br /&gt;
            JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
         15696508 dcgp_usr_   job.sh username  R       0:01      1 lrdn4135&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial (as far as MPI parallelization is concerned) from the command line. Use the command below to open a 4-hour interactive session:&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 srun: job 15694182 queued and waiting for resources&lt;br /&gt;
 srun: job 15694182 has been allocated resources&lt;br /&gt;
&lt;br /&gt;
We request 4 CPUs per task (&amp;lt;code&amp;gt;-c 4&amp;lt;/code&amp;gt;) so that we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above:&lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 using the appropriate Slurm environment variable:&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on Leonardo, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to Leonardo enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -X leonardo&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Set up yambopy &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to run yambopy on Leonardo, you must first activate the Python environment:&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 source /leonardo_work/tra25_yambo/env_yambopy/bin/activate&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
Quick recap: before every tutorial, if you are running on Leonardo, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBO_TUTORIALS &#039;&#039;&#039;#(Only if you didn&#039;t before)&#039;&#039;&#039;&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
Since the compute nodes are not connected to the external network, the tarballs must be downloaded before starting the interactive session.&lt;br /&gt;
Alternatively, once the interactive session has started, it is possible to access the tarballs by copying them from the following directories:&lt;br /&gt;
&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBO_TUTORIALS&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
After that, you can start the interactive session:&lt;br /&gt;
&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
set the environment variable for OpenMP&lt;br /&gt;
&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
and load yambo or yambopy as explained above in the general instructions.&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:30 - 16:30 Linear response&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Introduction to Yambopy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
At this point, you may learn about the Python pre- and post-processing capabilities offered by yambopy, our Python interface to yambo and QE. First of all, let&#039;s create a dedicated directory, then copy and extract the related files.&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ mkdir -p YAMBOPY_TUTORIALS&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS&lt;br /&gt;
 $ rsync -avzP /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS/yambopy_tutorial_Modena_2025.tar.gz .&lt;br /&gt;
 $ tar --strip-components=1 -xvzf yambopy_tutorial_Modena_2025.tar.gz&lt;br /&gt;
&lt;br /&gt;
Then, follow part 1 of the tutorial, which is related to DFT band structures, YAMBO initialization and linear response.&lt;br /&gt;
* [[Modena 2025 : Yambopy part 1]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;11:30 - 12:30 | 14:30 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get all the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial_Modena2025.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ tar -xvf MoS2_2Dquasiparticle_tutorial_Modena2025.tar.gz&lt;br /&gt;
 $ cd hBN&lt;br /&gt;
&lt;br /&gt;
Now you can start the first tutorial:&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
Once you have gone through the first tutorial, move on to the second one:&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ cd MoS2_2Dquasiparticle_tutorial_Modena2025&lt;br /&gt;
&lt;br /&gt;
* [[Quasi-particles of a 2D system | Quasi-particles of a 2D system ]]&lt;br /&gt;
&lt;br /&gt;
As for yambopy, the tutorial on GW calculations is contained in the first section of Part 2:&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#GW calculations| Modena 2025 : Yambopy part 2 (GW calculations)]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Bethe-Salpeter equation (BSE)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz # NOTE: YOU SHOULD ALREADY HAVE THIS FROM DAY 1&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-convergence-kpoints.tar.gz &lt;br /&gt;
 $ tar -xvf hBN-convergence-kpoints.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the following tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[Calculating optical spectra including excitonic effects: a step-by-step guide|Perform a BSE calculation from beginning to end ]]&lt;br /&gt;
* [[How to analyse excitons - ICTP 2022 school|Analyse your results (exciton wavefunctions in real and reciprocal space, etc.) ]]&lt;br /&gt;
* [[BSE solvers overview|Solve the BSE eigenvalue problem with different numerical methods]]&lt;br /&gt;
* [[How to choose the input parameters|Choose the input parameters for a meaningful converged calculation]]&lt;br /&gt;
&lt;br /&gt;
Now, go into the yambopy tutorial directory to learn about python analysis tools for the BSE:&lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS/databases_yambopy&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#Excitons| Modena 2025 : Yambopy part 2 (BSE calculations)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 22 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:30 - 16:00 Bethe-Salpeter (part 2)&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:30 - 17:30 Nonlinear response with the time dependent berry phase&#039;&#039;&#039; Myrta Gruning (Queen&#039;s University Belfast), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
For these tutorials we will first use the &amp;lt;code&amp;gt;hBN-2D-RT&amp;lt;/code&amp;gt; folder (k-sampling 10x10x1) and then the &amp;lt;code&amp;gt;hBN-2D&amp;lt;/code&amp;gt; folder (k-sampling 6x6x1).&lt;br /&gt;
You may already have them in the &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt; folder:&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN-2D-RT&#039;&#039;&#039; hBN-2D.tar.gz  hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
If you need to download the tutorial files again, follow these steps (or see the instructions above):&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo (5.3)|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Dielectric function from Bloch-states dynamics (5.3)|Dielectric function from Bloch-states dynamics]]&lt;br /&gt;
* [[Second-harmonic generation of 2D-hBN (5.3)|Second-harmonic generation of 2D-hBN]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May ===&lt;br /&gt;
&lt;br /&gt;
* D. Varsano, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d1_1_scuola_intro.pdf Description and goal of the school].&lt;br /&gt;
* C. Franchini, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d1_2_TALK-FRANCHINI.pdf First principles and data-driven correlated materials]&lt;br /&gt;
* F. Mohamed, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d1_3_DFT.pdf A tour on Density Functional Theory]&lt;br /&gt;
* E. Cannuccia, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d1_4_linear_response_Elena.pdf Electronic screening and linear response theory]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
* A. Marini, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d2_1_MBPT.pdf Introduction to Many-Body Perturbation Theory]&lt;br /&gt;
* C. Cardoso, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d2_2_CCardoso_YamboSchool2025_Modena.pdf Quasiparticles and the GW Approximation]&lt;br /&gt;
* A. Guandalini, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d2_3_a_Alberto_Guandalini_2025.pdf GW in practice: algorithms and approximations]&lt;br /&gt;
* G. Sesti, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d2_3_b_Giacomo_Sesti_YamboSchool2025.pdf  GW advanced algorithms]&lt;br /&gt;
* M. Govoni, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d2_4_20250520_YAMBOSCHOOL_Govoni.pdf GW without empty states and investigation of neutral excitations by embedding full configuration interaction in DFT+GW]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
* M. Palummo, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d3_1_Palummo_Yamboschool2025.pdf Optical absorption and excitons via the Bethe-Salpeter Equation]&lt;br /&gt;
* D. Sangalli, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d3_2_DAY3_RealTime_Propagation_DavideSangalli.pdf Real Time Spectroscopy]&lt;br /&gt;
* F. Paleari, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d3_3_Fulvio_Paleari_YamboSchool2025.pdf Introduction to YamboPy (automation and post-processing)]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
* E. Luppi, An introduction to Non-linear spectroscopy&lt;br /&gt;
* M. Grüning, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d4_2_Nonlinear_Yschool.pdf Non-linear spectroscopy in Yambo]&lt;br /&gt;
* F. Affinito, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d4_3_Affinito_Challenges_in_HPC.pdf Frontiers in High-Performance Computing]&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
* N. Spallanzani, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d5_1_yambo_parallel.pdf Yambo in HPC environment]&lt;br /&gt;
* A. Marini, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d5_2_e-p.pdf Electron-Phonon interaction]&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8824</id>
		<title>Modena 2025</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8824"/>
		<updated>2025-05-26T10:54:52Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* Lectures */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2025/01/17/yambo-school-modena-2025/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the Leonardo-DCGP partition. You can find info about Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access Leonardo via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in several ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 ssh username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an SSH key pair to avoid typing the password each time you connect to Leonardo. To do so, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 ssh-keygen -t rsa -b 4096 -f ~/.ssh/leonardo_rsa&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Created directory &#039;/home/username/.ssh&#039;.&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in /home/username/.ssh/leonardo_rsa&lt;br /&gt;
 Your public key has been saved in /home/username/.ssh/leonardo_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 [...]&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to Leonardo. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 ssh-copy-id -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Be aware that when running the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command, after typing &amp;quot;yes&amp;quot; at the prompt, you might see an error message like the one shown below. Don&#039;t worry: just follow the instructions provided in this CINECA [https://wiki.u-gov.it/confluence/display/SCAIUS/FAQ#FAQ-Ikeepreceivingtheerrormessage%22WARNING:REMOTEHOSTIDENTIFICATIONHASCHANGED!%22evenifImodifyknown_hostfile guide to resolve the issue]. Once done, run the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command again.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
 /usr/bin/ssh-copy-id: &lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @&lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to Leonardo without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things even further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username:&lt;br /&gt;
 Host leonardo &lt;br /&gt;
  HostName login.leonardo.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile ~/.ssh/leonardo_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with:&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on Leonardo, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/4%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 echo $SCRATCH&lt;br /&gt;
 /leonardo_scratch/large/userexternal/username&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on Leonardo are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial; only some need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra25_yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=dcgp_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --gres=tmpfs:10g                # List of generic consumable resources&lt;br /&gt;
 #SBATCH --qos=normal                    # Quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;n&amp;gt;           # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;n/2&amp;gt;       # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;c&amp;gt;             # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
 &lt;br /&gt;
 mpirun -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that the instructions in the batch script must be compatible with the available Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section#DCGPSection-SLURMpartitions resources]. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 sbatch job.sh&lt;br /&gt;
 Submitted batch job 15696508&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 squeue --me&lt;br /&gt;
            JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
         15696508 dcgp_usr_   job.sh username  R       0:01      1 lrdn4135&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial (as far as MPI parallelization is concerned) from the command line. Use the command below to open a 4-hour interactive session:&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 srun: job 15694182 queued and waiting for resources&lt;br /&gt;
 srun: job 15694182 has been allocated resources&lt;br /&gt;
&lt;br /&gt;
We request 4 CPUs per task (&amp;lt;code&amp;gt;-c 4&amp;lt;/code&amp;gt;) so that we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above:&lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 using the appropriate Slurm environment variable:&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on Leonardo, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to Leonardo enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -X leonardo&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Set up yambopy &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to run yambopy on Leonardo, you must first activate the Python environment:&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 source /leonardo_work/tra25_yambo/env_yambopy/bin/activate&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
Quick recap: before every tutorial, if you are running on Leonardo, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBO_TUTORIALS &#039;&#039;&#039;#(Only if you didn&#039;t before)&#039;&#039;&#039;&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
Since the compute nodes are not connected to the external network, the tarballs must be downloaded before starting the interactive session.&lt;br /&gt;
Alternatively, once the interactive session has started, it is possible to access the tarballs by copying them from the following directories:&lt;br /&gt;
&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBO_TUTORIALS&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
After that, you can start the interactive session:&lt;br /&gt;
&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
set the environment variable for OpenMP&lt;br /&gt;
&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
and load yambo or yambopy as explained above in the general instructions.&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:30 - 16:30 Linear response&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Introduction to Yambopy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
At this point, you may learn about the Python pre- and post-processing capabilities offered by yambopy, our Python interface to yambo and QE. First of all, let&#039;s create a dedicated directory, then copy and extract the related files.&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ mkdir -p YAMBOPY_TUTORIALS&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS&lt;br /&gt;
 $ rsync -avzP /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS/yambopy_tutorial_Modena_2025.tar.gz .&lt;br /&gt;
 $ tar --strip-components=1 -xvzf yambopy_tutorial_Modena_2025.tar.gz&lt;br /&gt;
&lt;br /&gt;
Then, follow part 1 of the tutorial, which is related to DFT band structures, YAMBO initialization and linear response.&lt;br /&gt;
* [[Modena 2025 : Yambopy part 1]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;11:30 - 12:30 | 14:30 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get all the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial_Modena2025.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ tar -xvf MoS2_2Dquasiparticle_tutorial_Modena2025.tar.gz&lt;br /&gt;
 $ cd hBN&lt;br /&gt;
&lt;br /&gt;
Now you can start the first tutorial:&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
Once you have gone through the first tutorial, move on to the second one:&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ cd MoS2_2Dquasiparticle_tutorial_Modena2025&lt;br /&gt;
&lt;br /&gt;
* [[Quasi-particles of a 2D system | Quasi-particles of a 2D system ]]&lt;br /&gt;
&lt;br /&gt;
As for yambopy, the tutorial related to GW calculations is contained in the first section of Part 2.&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#GW calculations| Modena 2025 : Yambopy part 2 (GW calculations)]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Bethe-Salpeter equation (BSE)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz # NOTE: YOU SHOULD ALREADY HAVE THIS FROM DAY 1&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-convergence-kpoints.tar.gz &lt;br /&gt;
 $ tar -xvf hBN-convergence-kpoints.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; and proceed with the following tutorials.&lt;br /&gt;
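If you prefer &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;, an invocation mirroring the &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; flags from the general instructions would look like this (a sketch; the &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; command shown earlier works equally well):&lt;br /&gt;
 $ salloc -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g&lt;br /&gt;
Within the allocation, launch parallel steps with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt;, and type &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; to release the resources when you are done.&lt;br /&gt;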
&lt;br /&gt;
* [[Calculating optical spectra including excitonic effects: a step-by-step guide|Perform a BSE calculation from beginning to end ]]&lt;br /&gt;
* [[How to analyse excitons - ICTP 2022 school|Analyse your results (exciton wavefunctions in real and reciprocal space, etc.) ]]&lt;br /&gt;
* [[BSE solvers overview|Solve the BSE eigenvalue problem with different numerical methods]]&lt;br /&gt;
* [[How to choose the input parameters|Choose the input parameters for a meaningful converged calculation]]&lt;br /&gt;
&lt;br /&gt;
Now, go into the yambopy tutorial directory to learn about python analysis tools for the BSE:&lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS/databases_yambopy&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#Excitons| Modena 2025 : Yambopy part 2 (BSE calculations)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:30 - 16:00 Bethe-Salpeter (part 2)&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:30 - 17:30 Nonlinear response with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (Queen&#039;s University Belfast), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
For the tutorials we will use first the &amp;lt;code&amp;gt;hBN-2D-RT&amp;lt;/code&amp;gt; folder (k-sampling 10x10x1) and then the &amp;lt;code&amp;gt;hBN-2D&amp;lt;/code&amp;gt; folder (k-sampling 6x6x1).&lt;br /&gt;
You may already have them in the &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt; folder:&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN-2D-RT&#039;&#039;&#039; hBN-2D.tar.gz  hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
If you need to download the tutorial files again, follow these steps (or see the instructions above):&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo (5.3)|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Dielectric function from Bloch-states dynamics (5.3)|Dielectric function from Bloch-states dynamics]]&lt;br /&gt;
* [[Second-harmonic generation of 2D-hBN (5.3)|Second-harmonic generation of 2D-hBN]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May ===&lt;br /&gt;
&lt;br /&gt;
* D. Varsano, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d1_1_scuola_intro.pdf Description and goal of the school].&lt;br /&gt;
* C. Franchini, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d1_2_TALK-FRANCHINI.pdf First principles and data-driven correlated materials]&lt;br /&gt;
* F. Mohamed, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d1_3_DFT.pdf A tour on Density Functional Theory]&lt;br /&gt;
* E. Cannuccia, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d1_4_linear_response_Elena.pdf Electronic screening and linear response theory]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
* A. Marini, [https://media.yambo-code.eu/educational/Schools/MODENA2025/ Introduction to Many-Body Perturbation Theory]&lt;br /&gt;
* C. Cardoso, [https://media.yambo-code.eu/educational/Schools/MODENA2025/ Quasiparticles and the GW Approximation]&lt;br /&gt;
* A. Guandalini, [https://media.yambo-code.eu/educational/Schools/MODENA2025/ GW in practice: algorithms and approximations]&lt;br /&gt;
* G. Sesti, [https://media.yambo-code.eu/educational/Schools/MODENA2025/  GW advanced algorithms]&lt;br /&gt;
* M. Govoni, [https://media.yambo-code.eu/educational/Schools/MODENA2025/ GW without empty states and investigation of neutral excitations by embedding full configuration interaction in DFT+GW]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
* M. Palummo, [https://media.yambo-code.eu/educational/Schools/MODENA2025/ Optical absorption and excitons via the Bethe-Salpeter Equation]&lt;br /&gt;
* D. Sangalli, [https://media.yambo-code.eu/educational/Schools/MODENA2025/ Real Time Spectroscopy]&lt;br /&gt;
* F. Paleari, [https://media.yambo-code.eu/educational/Schools/MODENA2025/ Introduction to YamboPy (automation and post-processing)]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
* E. Luppi, An introduction to Non-linear spectroscopy&lt;br /&gt;
* M. Grüning, [https://media.yambo-code.eu/educational/Schools/MODENA2025/ Non-linear spectroscopy in Yambo]&lt;br /&gt;
* F. Affinito, [https://media.yambo-code.eu/educational/Schools/MODENA2025/ Frontiers in High-Performance Computing]&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
*N. Spallanzani, [https://media.yambo-code.eu/educational/Schools/MODENA2025/ Yambo in HPC environment]&lt;br /&gt;
*A. Marini, [https://media.yambo-code.eu/educational/Schools/MODENA2025/ Electron-Phonon interaction]&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8820</id>
		<title>Modena 2025</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8820"/>
		<updated>2025-05-26T10:51:47Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* Lectures */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2025/01/17/yambo-school-modena-2025/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the Leonardo-DCGP partition. You can find info about Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access Leonardo via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in several ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command, replacing &amp;lt;code&amp;gt;username&amp;lt;/code&amp;gt; with your own:&lt;br /&gt;
 ssh username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you connect to Leonardo. To do so, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 ssh-keygen -t rsa -b 4096 -f ~/.ssh/leonardo_rsa&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Created directory &#039;/home/username/.ssh&#039;.&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in /home/username/.ssh/leonardo_rsa&lt;br /&gt;
 Your public key has been saved in /home/username/.ssh/leonardo_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 [...]&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to Leonardo. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 ssh-copy-id -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Be aware that when running the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command, after typing &amp;quot;yes&amp;quot; at the prompt, you might see an error message like the one shown below. Don&#039;t worry: just follow the instructions provided in this CINECA [https://wiki.u-gov.it/confluence/display/SCAIUS/FAQ#FAQ-Ikeepreceivingtheerrormessage%22WARNING:REMOTEHOSTIDENTIFICATIONHASCHANGED!%22evenifImodifyknown_hostfile guide to resolve the issue]. Once done, run the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command again.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
 /usr/bin/ssh-copy-id: &lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @&lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!&lt;br /&gt;
 [...]&lt;br /&gt;
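&lt;br /&gt;
In practice (see the linked guide for the authoritative steps), the usual fix is to remove the stale host key from your local &amp;lt;code&amp;gt;known_hosts&amp;lt;/code&amp;gt; file, for example:&lt;br /&gt;
 ssh-keygen -R login.leonardo.cineca.it&lt;br /&gt;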
&lt;br /&gt;
Once the public key has been copied, you can connect to Leonardo without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username:&lt;br /&gt;
 Host leonardo &lt;br /&gt;
  HostName login.leonardo.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile ~/.ssh/leonardo_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with:&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on Leonardo, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username; &lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/4%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 echo $SCRATCH&lt;br /&gt;
 /leonardo_scratch/large/userexternal/username&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on Leonardo are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial; only a few need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are better understood if run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra25_yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=dcgp_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --gres=tmpfs:10g                # List of generic consumable resources&lt;br /&gt;
 #SBATCH --qos=normal                    # Quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;n&amp;gt;           # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;n/2&amp;gt;       # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;c&amp;gt;             # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
 &lt;br /&gt;
 mpirun -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that the instructions in the batch script must be compatible with the specific Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section#DCGPSection-SLURMpartitions resources]. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in the locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 sbatch job.sh&lt;br /&gt;
 Submitted batch job 15696508&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 squeue --me&lt;br /&gt;
            JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
         15696508 dcgp_usr_   job.sh username  R       0:01      1 lrdn4135&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial (as far as MPI parallelization is concerned) from the command line. Use the command below to open an interactive session of 4 hours:&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 srun: job 15694182 queued and waiting for resources&lt;br /&gt;
 srun: job 15694182 has been allocated resources&lt;br /&gt;
&lt;br /&gt;
We ask for 4 CPUs per task (-c 4) because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above:&lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 using the appropriate Slurm environment variable:&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on Leonardo, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to Leonardo enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -X leonardo&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Set up yambopy &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to run yambopy on Leonardo, you must first activate the Python environment:&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 source /leonardo_work/tra25_yambo/env_yambopy/bin/activate&lt;br /&gt;
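To check that the environment is active, you can try importing the package (assuming it is importable as &amp;lt;code&amp;gt;yambopy&amp;lt;/code&amp;gt;):&lt;br /&gt;
 python -c &#039;import yambopy&#039;&lt;br /&gt;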
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
Quick recap: before every tutorial, if you are running on Leonardo, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBO_TUTORIALS &#039;&#039;&#039;#(Only if you didn&#039;t before)&#039;&#039;&#039;&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
Since the compute nodes are not connected to the external network, the tarballs must be downloaded before starting the interactive session.&lt;br /&gt;
Alternatively, once the interactive session has started, it is possible to access the tarballs by copying them from the following directories:&lt;br /&gt;
&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBO_TUTORIALS&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS&lt;br /&gt;
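&lt;br /&gt;
For example, to copy a tarball into the current directory (assuming the archives are stored there under the same names as the &amp;lt;code&amp;gt;wget&amp;lt;/code&amp;gt; downloads used in the day-by-day instructions below):&lt;br /&gt;
 cp /leonardo_work/tra25_yambo/YAMBO_TUTORIALS/hBN.tar.gz .&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;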
&lt;br /&gt;
After that you can start the interactive session:&lt;br /&gt;
&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Set the environment variable for OpenMP:&lt;br /&gt;
&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
and load yambo or yambopy as explained above in the general instructions.&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:30 - 16:30 Linear response&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Introduction to Yambopy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
At this point, you may learn about the Python pre- and post-processing capabilities offered by yambopy, our Python interface to yambo and QE. First, let&#039;s create a dedicated directory, then download and extract the related files.&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ mkdir -p YAMBOPY_TUTORIALS&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS&lt;br /&gt;
 $ rsync -avzP /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS/yambopy_tutorial_Modena_2025.tar.gz .&lt;br /&gt;
 $ tar --strip-components=1 -xvzf yambopy_tutorial_Modena_2025.tar.gz&lt;br /&gt;
&lt;br /&gt;
Then, follow part 1 of the tutorial, which is related to DFT band structures, YAMBO initialization and linear response.&lt;br /&gt;
* [[Modena 2025 : Yambopy part 1]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;11:30 - 12:30 | 14:30 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get all the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial_Modena2025.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ tar -xvf MoS2_2Dquasiparticle_tutorial_Modena2025.tar.gz&lt;br /&gt;
 $ cd hBN&lt;br /&gt;
&lt;br /&gt;
Now you can start the first tutorial:&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
Once you have gone through the first tutorial, move on to the second one:&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ cd MoS2_2Dquasiparticle_tutorial_Modena2025&lt;br /&gt;
&lt;br /&gt;
* [[Quasi-particles of a 2D system | Quasi-particles of a 2D system ]]&lt;br /&gt;
&lt;br /&gt;
As for yambopy, the tutorial related to GW calculations is contained in the first section of Part 2.&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#GW calculations| Modena 2025 : Yambopy part 2 (GW calculations)]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Bethe-Salpeter equation (BSE)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz # NOTE: YOU SHOULD ALREADY HAVE THIS FROM DAY 1&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-convergence-kpoints.tar.gz &lt;br /&gt;
 $ tar -xvf hBN-convergence-kpoints.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; and proceed with the following tutorials.&lt;br /&gt;
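If you prefer &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt;, an invocation mirroring the &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; flags from the general instructions would look like this (a sketch; the &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; command shown earlier works equally well):&lt;br /&gt;
 $ salloc -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g&lt;br /&gt;
Within the allocation, launch parallel steps with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt;, and type &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; to release the resources when you are done.&lt;br /&gt;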
&lt;br /&gt;
* [[Calculating optical spectra including excitonic effects: a step-by-step guide|Perform a BSE calculation from beginning to end ]]&lt;br /&gt;
* [[How to analyse excitons - ICTP 2022 school|Analyse your results (exciton wavefunctions in real and reciprocal space, etc.) ]]&lt;br /&gt;
* [[BSE solvers overview|Solve the BSE eigenvalue problem with different numerical methods]]&lt;br /&gt;
* [[How to choose the input parameters|Choose the input parameters for a meaningful converged calculation]]&lt;br /&gt;
&lt;br /&gt;
Now, go into the yambopy tutorial directory to learn about python analysis tools for the BSE:&lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS/databases_yambopy&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#Excitons| Modena 2025 : Yambopy part 2 (BSE calculations)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:30 - 16:00 Bethe-Salpeter (part 2)&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:30 - 17:30 Nonlinear response with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (Queen&#039;s University Belfast), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
For the tutorials we will use first the &amp;lt;code&amp;gt;hBN-2D-RT&amp;lt;/code&amp;gt; folder (k-sampling 10x10x1) and then the &amp;lt;code&amp;gt;hBN-2D&amp;lt;/code&amp;gt; folder (k-sampling 6x6x1).&lt;br /&gt;
You may already have them in the &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt; folder:&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN-2D-RT&#039;&#039;&#039; hBN-2D.tar.gz  hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
If you need to download the tutorial files again, follow these steps (or see the instructions above):&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo (5.3)|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Dielectric function from Bloch-states dynamics (5.3)|Dielectric function from Bloch-states dynamics]]&lt;br /&gt;
* [[Second-harmonic generation of 2D-hBN (5.3)|Second-harmonic generation of 2D-hBN]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May ===&lt;br /&gt;
&lt;br /&gt;
* D. Varsano, [https://media.yambo-code.eu/educational/Schools/MODENA2025/d1_1_scuola_intro.pdf Description and goal of the school].&lt;br /&gt;
* C. Franchini, [https://drive.google.com/file/d/1Z6GCjP4K1dM28ULsyYg2eckgUdYUSRph/view?usp=share_link First principles and data-driven correlated materials]&lt;br /&gt;
* F. Mohamed, [https://drive.google.com/file/d/1ITddkGTM12Gw5QxnZjAQpfZgYH0FvJL1/view?usp=share_link A tour on Density Functional Theory]&lt;br /&gt;
* E. Cannuccia, [https://drive.google.com/file/d/1mBTcPrnfoqwcA5wXE8gXQMO_qttClHAd/view?usp=share_link Electronic screening and linear response theory]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
* A. Marini, [https://drive.google.com/file/d/1HTIPHkH2sBaVDLFwwS34T-fJ9x8FhVPq/view?usp=share_link Introduction to Many-Body Perturbation Theory]&lt;br /&gt;
* C. Cardoso, [https://drive.google.com/file/d/1SR9BtFKgz6Y1gaHSKF1s8xzb42D5C1Xg/view?usp=share_link Quasiparticles and the GW Approximation]&lt;br /&gt;
* A. Guandalini, [https://drive.google.com/file/d/1dgcdHMfA0b7jjyrCs4r9OrG6qpiu1v39/view?usp=share_link GW in practice: algorithms and approximations]&lt;br /&gt;
* G. Sesti, [https://drive.google.com/file/d/1te_85k9jgSymr3Av86rKOu0-tA-7sGWq/view?usp=sharing  GW advanced algorithms]&lt;br /&gt;
* M. Govoni, [https://drive.google.com/file/d/1XBa5RgmwKdYSy4mj_COXwbUQd3DPgRe4/view?usp=share_link GW without empty states and investigation of neutral excitations by embedding full configuration interaction in DFT+GW]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
* M. Palummo, [https://drive.google.com/file/d/1pQ491hqpETVLchL92QPy4f_jWqfMK5xf/view?usp=share_link Optical absorption and excitons via the Bethe-Salpeter Equation]&lt;br /&gt;
* D. Sangalli, [https://drive.google.com/file/d/1QC9YBmLIFkIQ-GA_YuC43qTtytCckGAj/view?usp=share_link Real Time Spectroscopy]&lt;br /&gt;
* F. Paleari, [https://drive.google.com/file/d/1LodCoVF9N-11GCSTs-uPfdCzJ86WDX5E/view?usp=share_link Introduction to YamboPy (automation and post-processing)]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
* E. Luppi, An introduction to Non-linear spectroscopy&lt;br /&gt;
* M. Grüning, [https://drive.google.com/file/d/1bZF0f3AD-WL3M3vCtvrnA_1W94SKt-Gf/view?usp=sharing Non-linear spectroscopy in Yambo]&lt;br /&gt;
* F. Affinito, [https://drive.google.com/file/d/103SIfHmvCFmT3QIOs6BWxm96H3J0fDrM/view?usp=share_link Frontiers in High-Performance Computing]&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
*N. Spallanzani, [https://drive.google.com/file/d/1_fVJa7lkUr5FyrAxiPwrb5EVZRi7DdxR/view?usp=share_link Yambo in HPC environment]&lt;br /&gt;
*A. Marini, [https://drive.google.com/file/d/1-X5MABoNH9-KSHso92Y0gmEkNW99Kjdp/view?usp=share_link Electron-Phonon interaction]&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Next_steps:_RPA_calculations_(standalone)&amp;diff=8789</id>
		<title>Next steps: RPA calculations (standalone)</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Next_steps:_RPA_calculations_(standalone)&amp;diff=8789"/>
		<updated>2025-05-20T14:33:23Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* LFEs in periodic direction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Optical absorption in hBN: independent particle approximation ==&lt;br /&gt;
&lt;br /&gt;
[[File:HBN-bulk-3x3-annotated.png|x200px|Atomic structure of bulk hBN]]&lt;br /&gt;
&lt;br /&gt;
=== Background ===&lt;br /&gt;
The dielectric function in the long-wavelength limit, at the independent particle level (RPA without local fields), is essentially given by the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\epsilon_{\alpha, \alpha}(\omega)=1+\frac{16 \pi}{\Omega} \sum_{c, v} \sum_{\mathbf{k}} \frac{1}{E_{c \mathbf{k}}-E_{v \mathbf{k}}} \frac{\left|\left\langle v \mathbf{k}\left|\mathbf{p}_{\alpha}+\mathrm i\left[V^{\mathrm{NL}}, \mathbf{r}_{\alpha}\right]\right| c \mathbf{k}\right\rangle\right|^{2}}{\left(E_{c \mathbf{k}}-E_{v \mathbf{k}}\right)^{2}-(\omega+\mathrm i \gamma)^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In practice, Yambo does not use this expression directly but solves the Dyson equation for the susceptibility &amp;lt;math&amp;gt;\chi&amp;lt;/math&amp;gt;, which is described in the [[Local fields]] module.&lt;br /&gt;
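&lt;br /&gt;
Schematically, the RPA Dyson equation reads &amp;lt;math&amp;gt;\chi = \chi^{0} + \chi^{0} v \chi&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\chi^{0}&amp;lt;/math&amp;gt; is the independent-particle response function and &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is the Coulomb interaction (see the [[Local fields]] module for the precise form used by Yambo).&lt;br /&gt;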
&lt;br /&gt;
=== Choosing input parameters ===&lt;br /&gt;
Enter the folder for bulk hBN that contains the &#039;&#039;SAVE&#039;&#039; directory, run the initialization and generate the input file.&lt;br /&gt;
You can type &amp;lt;code&amp;gt;yambo -h&amp;lt;/code&amp;gt; to see the available options for the different run-levels.  For an RPA optical spectrum calculation the correct option is &amp;lt;code&amp;gt;yambo -optics c&amp;lt;/code&amp;gt; (or &amp;lt;code&amp;gt;yambo -o c&amp;lt;/code&amp;gt;). Let&#039;s add some command line options:&lt;br /&gt;
&lt;br /&gt;
 $ cd YAMBO_TUTORIALS/hBN/YAMBO&lt;br /&gt;
 $ yambo               &#039;&#039;(initialization)&#039;&#039;&lt;br /&gt;
 $ yambo -F yambo.in_IP -o c&lt;br /&gt;
This corresponds to optical properties in G-space at the independent particle level: in the input file this is indicated by (&amp;lt;code&amp;gt;Chimod= &amp;quot;IP&amp;quot;&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
===Optics runlevel===&lt;br /&gt;
For optical properties we are interested just in the long-wavelength limit &amp;lt;math&amp;gt;\mathbf q = 0&amp;lt;/math&amp;gt;. This always corresponds to the &#039;&#039;first&#039;&#039; &amp;lt;math&amp;gt;\mathbf q&amp;lt;/math&amp;gt;-point in the set of possible &amp;lt;math&amp;gt;\mathbf q =\mathbf k - \mathbf k&#039;&amp;lt;/math&amp;gt;-points. &lt;br /&gt;
Change the following variables in the input file to:&lt;br /&gt;
 % [[Variables#QpntsRX|QpntsRXd]]&lt;br /&gt;
  1 |  &#039;&#039;&#039;1&#039;&#039;&#039; |                   # [Xd] Transferred momenta&lt;br /&gt;
 %&lt;br /&gt;
 [[Variables#ETStpsX|ETStpsXd]]= &#039;&#039;&#039;1001&#039;&#039;&#039;               # [Xd] Total Energy steps&lt;br /&gt;
in order to select just the first &amp;lt;math&amp;gt;\mathbf q&amp;lt;/math&amp;gt;. The last variable ensures we generate a smooth spectrum. &lt;br /&gt;
Save the input file and launch the code, this time dropping the lower-case runlevel options and labelling the run with &amp;lt;code&amp;gt;-J Full&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ yambo -F yambo.in_IP -J Full&lt;br /&gt;
 ...&lt;br /&gt;
 &amp;lt;---&amp;gt; [05] Optics&lt;br /&gt;
 &amp;lt;---&amp;gt; [LA] SERIAL linear algebra&lt;br /&gt;
 &amp;lt;---&amp;gt; [DIP] Checking dipoles header&lt;br /&gt;
 &amp;lt;---&amp;gt; [X-CG] R(p) Tot o/o(of R):    5499   52992     100&lt;br /&gt;
 &amp;lt;02s&amp;gt; Xo@q[1] |########################################| [100%] --(E) --(X)&lt;br /&gt;
 &amp;lt;02s&amp;gt; [06] Timing Overview&lt;br /&gt;
 &amp;lt;02s&amp;gt; [07] Memory Overview&lt;br /&gt;
 &amp;lt;02s&amp;gt; [08] Game Over &amp;amp; Game summary&lt;br /&gt;
  &lt;br /&gt;
 $ ls&lt;br /&gt;
 Full   SAVE  yambo.in_IP   r_setup&lt;br /&gt;
 o-Full.eel_q1_ip  o-Full.eps_q1_ip  r-Full_optics_chi&lt;br /&gt;
Let&#039;s take a moment to understand what Yambo has done inside the Optics runlevel:&lt;br /&gt;
# Compute the &amp;lt;math&amp;gt;[\mathbf r, V^\mathrm{NL}]&amp;lt;/math&amp;gt; term&lt;br /&gt;
# Read the wavefunctions from disk [WF]&lt;br /&gt;
# Compute the &#039;&#039;dipoles&#039;&#039;, i.e. matrix elements of &amp;lt;math&amp;gt;\mathbf p&amp;lt;/math&amp;gt;&lt;br /&gt;
# Write the dipoles to disk as &#039;&#039;Full/ndb.dip*&#039;&#039; databases. You can see this in the report file:&lt;br /&gt;
 $ grep -A20 &amp;quot;WR&amp;quot; r-Full_optics_*&lt;br /&gt;
 [WR./Full//ndb.dipoles]---------------------------------------------------------&lt;br /&gt;
  Brillouin Zone Q/K grids (IBZ/BZ)                :   14   72   14   72&lt;br /&gt;
  RL vectors                                       :  1491 [WF]&lt;br /&gt;
  Fragmentation                                    : yes&lt;br /&gt;
  Electronic Temperature                           :  0.000000 [K]&lt;br /&gt;
  Bosonic    Temperature                           :  0.000000 [K]&lt;br /&gt;
  DIP band range                                   :    1  100&lt;br /&gt;
  DIP band range limits                            :   8   9&lt;br /&gt;
  DIP e/h energy range                             : -1.000000 -1.000000 [eV]&lt;br /&gt;
  RL vectors in the sum                            :  1491&lt;br /&gt;
  [r,Vnl] included                                 : yes&lt;br /&gt;
  ...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;5&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Finally, Yambo computes the non-interacting susceptibility &amp;lt;math&amp;gt;\chi&amp;lt;/math&amp;gt; for this &amp;lt;math&amp;gt;\mathbf q&amp;lt;/math&amp;gt;, and writes the dielectric function inside the &#039;&#039;o-Full.eps_q1_ip&#039;&#039; file for plotting&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Energy cut off===&lt;br /&gt;
&lt;br /&gt;
Before plotting the output, let&#039;s change a few more variables. The previous calculation expanded the wavefunctions using &#039;&#039;all&#039;&#039; 1491 G-vectors. This corresponds roughly to the cut-off energy of 40 Ry we used in the DFT calculation. Generally, however, we can use a smaller value. We use the verbosity option to switch on this variable, and a new &#039;&#039;-J&#039;&#039; flag to avoid reading the previous database:&lt;br /&gt;
 $ yambo -F yambo.in_IP &#039;&#039;&#039;-V RL&#039;&#039;&#039; -o c&lt;br /&gt;
Change the &#039;&#039;&#039;value&#039;&#039;&#039; of &amp;lt;code&amp;gt;[[Variables#FFTGvecs|FFTGvecs]]&amp;lt;/code&amp;gt; and also its &#039;&#039;&#039;unit&#039;&#039;&#039; from &amp;lt;code&amp;gt;RL&amp;lt;/code&amp;gt; (number of G-vectors) to &amp;lt;code&amp;gt;Ry&amp;lt;/code&amp;gt; (energy in Rydberg):&lt;br /&gt;
 [[Variables#FFTGvecs|FFTGvecs]]= &#039;&#039;&#039;6&#039;&#039;&#039;           &#039;&#039;&#039;Ry&#039;&#039;&#039;    # [FFT] Plane-waves&lt;br /&gt;
Save the input file and launch the code again:&lt;br /&gt;
 $ yambo -F yambo.in_IP &#039;&#039;&#039;-J 6Ry&#039;&#039;&#039;&lt;br /&gt;
and then plot the &#039;&#039;o-Full.eps_q1_ip&#039;&#039; and &#039;&#039;o-6Ry.eps_q1_ip&#039;&#039; files:&lt;br /&gt;
 $ gnuplot&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;quot;o-Full.eps_q1_ip&amp;quot; w l,&amp;quot;o-6Ry.eps_q1_ip&amp;quot; w p&lt;br /&gt;
&lt;br /&gt;
[[File:CH-hBN-6Ry.png|none|500px|Yambo tutorial image]]&lt;br /&gt;
&lt;br /&gt;
There is very little difference between the two spectra. This highlights an important point in calculating excited-state properties: generally, fewer G-vectors are needed than in DFT calculations. Regarding the spectrum itself, the first peak occurs at about 4.4 eV. This is consistent with the minimum direct gap reported by Yambo: 4.28 eV. However, the comparison with experiment (not shown) is very poor. &lt;br /&gt;
&lt;br /&gt;
If you made a mistake and cannot reproduce this figure, check the value of &amp;lt;code&amp;gt;[[Variables#FFTGvecs|FFTGvecs]]&amp;lt;/code&amp;gt; in the input file, delete the &#039;&#039;6Ry&#039;&#039; folder, and try again, taking care to plot the right file! (e.g. &#039;&#039;o-6Ry.eps_q1_ip_01&#039;&#039;: the &amp;quot;_01&amp;quot; suffix means that while writing the output Yambo found an existing file named &amp;quot;o-6Ry.eps_q1_ip&amp;quot;).&lt;br /&gt;
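As a side note, the rough correspondence between an energy cut-off and a number of G-vectors can be estimated by counting the reciprocal-lattice points inside the sphere &amp;lt;math&amp;gt;|\mathbf G|^2 \le E_\mathrm{cut}&amp;lt;/math&amp;gt; (in Rydberg atomic units). The sketch below uses a purely illustrative cell volume of 500 bohr&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;, not the actual hBN value:&lt;br /&gt;

```shell
# Rough estimate of the number of plane waves below a kinetic-energy cut-off:
# N ~ V_cell * (4/3) pi G_cut^3 / (2 pi)^3, with G_cut = sqrt(E_cut[Ry]) bohr^-1.
# V=500 bohr^3 and E_cut=6 Ry are illustrative numbers, not the hBN values.
echo "500 6" | awk '{V=$1; E=$2; G=sqrt(E); pi=atan2(0,-1);
  printf "%.0f plane waves\n", V*(4.0/3.0)*pi*G^3/(2*pi)^3}'
```

Yambo performs the actual conversion internally when the unit is &amp;lt;code&amp;gt;Ry&amp;lt;/code&amp;gt;; the number it reports depends on the real cell volume and on closing complete G-shells.&lt;br /&gt;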
&lt;br /&gt;
===q-direction===&lt;br /&gt;
Now let&#039;s select a different component of the dielectric tensor:&lt;br /&gt;
 $ yambo -F yambo.in_IP -V RL -o c&lt;br /&gt;
 ...&lt;br /&gt;
 % [[Variables#LongDrXd|LongDrXd]]&lt;br /&gt;
 &#039;&#039;&#039;0.000000&#039;&#039;&#039; | 0.000000 | &#039;&#039;&#039;1.000000&#039;&#039;&#039; |        # [Xd] [cc] Electric Field&lt;br /&gt;
 %&lt;br /&gt;
 ...&lt;br /&gt;
 $ yambo -F yambo.in_IP -J 6Ry&lt;br /&gt;
This time yambo reads from the &#039;&#039;6Ry&#039;&#039; folder, so it does not need to compute the dipole matrix elements again, and the calculation is fast. Plotting gives:&lt;br /&gt;
 $ gnuplot&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;quot;o-6Ry.eps_q1_ip&amp;quot; t &amp;quot;q || x-axis&amp;quot; w l,&amp;quot;o-6Ry.eps_q1_ip_01&amp;quot; t &amp;quot;q || c-axis&amp;quot; w l&lt;br /&gt;
&lt;br /&gt;
[[File:CH-hBN-ac.png|none|500px|Yambo tutorial image]]&lt;br /&gt;
The absorption is suppressed in the stacking direction. As the interplanar spacing is increased, we would eventually arrive at the absorption of the BN sheet (see [[Local fields]] tutorial).&lt;br /&gt;
&lt;br /&gt;
===Non-local commutator===&lt;br /&gt;
Last, we show the effect of switching off the non-local commutator term (the term with &amp;lt;math&amp;gt; V^\mathrm{NL} &amp;lt;/math&amp;gt; in the equation at the start of this tutorial) due to the pseudopotential. As there is no option to do this inside yambo, you need to hide the database file. Change back to the &#039;&#039;q || (1 0 0)&#039;&#039; direction, and launch yambo with a different &amp;lt;code&amp;gt;-J&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ mv SAVE/ns.kb_pp_pwscf SAVE/ns.kb_pp_pwscf_&#039;&#039;&#039;OFF&#039;&#039;&#039;&lt;br /&gt;
 $ yambo -F yambo.in_IP -J &#039;&#039;&#039;6Ry_NoVnl&#039;&#039;&#039; -o c             &lt;br /&gt;
&lt;br /&gt;
Change &lt;br /&gt;
&lt;br /&gt;
  %LongDrXd&lt;br /&gt;
&lt;br /&gt;
back to &lt;br /&gt;
&lt;br /&gt;
  &#039;&#039;&#039;1.000000&#039;&#039;&#039; | 0.000000 | &#039;&#039;&#039;0.000000&#039;&#039;&#039; | &lt;br /&gt;
&lt;br /&gt;
and run&lt;br /&gt;
 &lt;br /&gt;
 $ yambo -F yambo.in_IP -J 6Ry_NoVnl&lt;br /&gt;
&lt;br /&gt;
Note the warning in the output:&lt;br /&gt;
 &amp;lt;---&amp;gt; [WARNING] [r,Vnl^pseudo] not included in position and velocity dipoles&lt;br /&gt;
which also appears in the report file &amp;lt;code&amp;gt;r-6Ry_NoVnl_optics_dipoles_chi&amp;lt;/code&amp;gt; as &amp;lt;code&amp;gt;[r,Vnl] included       :no&amp;lt;/code&amp;gt;. The difference is tiny:&lt;br /&gt;
[[File:CH-hBN-Vnl.png|none|500px|Yambo tutorial image]]&lt;br /&gt;
&lt;br /&gt;
However, when your system is larger, with more projectors in the pseudopotential or more k-points (see the BSE tutorial), the inclusion of &amp;lt;math&amp;gt;V^\mathrm{NL}&amp;lt;/math&amp;gt; can add a huge computational load, so it is always worth checking whether this term is important in your system.&lt;br /&gt;
&lt;br /&gt;
==Optical absorption in 2D BN: local field effects ==&lt;br /&gt;
&lt;br /&gt;
[[File:HBN2.png|x200px|Atomic structure of 2D hBN]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Background ===&lt;br /&gt;
[[File:Yambo-Cheatsheet-5.0_P7.png|thumb|Cheatsheet on LFE|150px]]&lt;br /&gt;
The macroscopic dielectric function is obtained by including the so-called local field effects (LFE) in the calculation of the response function. Within the time-dependent DFT formalism this is achieved by solving the Dyson equation for the susceptibility &amp;lt;math&amp;gt;\chi&amp;lt;/math&amp;gt;. In reciprocal space this is given by:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi_{\mathbf{G}, \mathbf{G}^{\prime}}(\mathbf{q}, \omega) = \chi_{\mathbf{G}, \mathbf{G}^{\prime}}^{0}(\mathbf{q}, \omega)+\sum_{\mathbf{G}_{1}, \mathbf{G}_{2}} \chi_{\mathbf{G}, \mathbf{G}_{1}}^{0}(\mathbf{q}, \omega)\left[v_{\mathbf{G}_{1}}(\mathbf{q}) \delta_{\mathbf{G}_{1}, \mathbf{G}_{2}}+f_{\mathbf{G}_{1}, \mathbf{G}_{2}}^{x c}\right] \chi_{\mathbf{G}_{2}, \mathbf{G}^{\prime}}(\mathbf{q}, \omega)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The microscopic dielectric function is related to &amp;lt;math&amp;gt;\chi&amp;lt;/math&amp;gt; by:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\epsilon_{\mathbf{G}, \mathbf{G}^{\prime}}^{-1}(\mathbf{q}, \omega)=\delta_{\mathbf{G}, \mathbf{G}^{\prime}}+v_{\mathbf{G}}(\mathbf{q}) \chi_{\mathbf{G}, \mathbf{G}^{\prime}}(\mathbf{q}, \omega)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the macroscopic dielectric function is obtained by taking the (0,0) component of the inverse microscopic one:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\epsilon_{M}(\omega)=\lim _{\mathrm{q} \rightarrow 0} \frac{1}{\epsilon_{\mathrm{G}=0, \mathrm{G}^{\prime}=0}^{-1}(\mathbf{q}, \omega)}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Experimental observables like the optical absorption and the electron energy loss can be obtained from the macroscopic dielectric function:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\operatorname{Abs}(\omega)=\operatorname{Im} \epsilon_{M}(\omega) \quad \operatorname{EELS}(\omega)=-\operatorname{Im} \frac{1}{\epsilon_{M}(\omega)}&amp;lt;/math&amp;gt;&lt;br /&gt;
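These two observables differ only by the factor &amp;lt;math&amp;gt;1/|\epsilon_M|^2&amp;lt;/math&amp;gt;, since &amp;lt;math&amp;gt;-\operatorname{Im}(1/\epsilon_M) = \operatorname{Im}\epsilon_M/|\epsilon_M|^2&amp;lt;/math&amp;gt;. A quick numerical sketch (illustrative numbers only, not computed hBN values):&lt;br /&gt;

```shell
# Toy sketch: from eps_M = eps1 + i*eps2, Abs = eps2 and
# EELS = -Im(1/eps_M) = eps2/(eps1^2 + eps2^2).
# eps_M = 2.0 + 0.5i is an illustrative value, not a computed result.
echo "2.0 0.5" | awk '{e1=$1; e2=$2;
  printf "Abs=%.6f EELS=%.6f\n", e2, e2/(e1*e1+e2*e2)}'
```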
&lt;br /&gt;
In the following we will neglect the &amp;lt;math&amp;gt;f^{xc}&amp;lt;/math&amp;gt; term: we perform the calculation at the RPA level and consider just the Hartree term (from &amp;lt;math&amp;gt;v_G&amp;lt;/math&amp;gt;) in the kernel. If we also neglect the Hartree term, we arrive back at the independent particle approximation, since there is no kernel and &amp;lt;math&amp;gt;\chi = \chi_0&amp;lt;/math&amp;gt;.&lt;br /&gt;
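Dropping the &amp;lt;math&amp;gt;f^{xc}&amp;lt;/math&amp;gt; term and all G-vector indices, the Dyson equation reduces to the scalar relation &amp;lt;math&amp;gt;\chi = \chi_0/(1 - v\chi_0)&amp;lt;/math&amp;gt;, which makes the two limits explicit: setting &amp;lt;math&amp;gt;v=0&amp;lt;/math&amp;gt; recovers the independent-particle &amp;lt;math&amp;gt;\chi_0&amp;lt;/math&amp;gt;. A toy numerical sketch (illustrative numbers only):&lt;br /&gt;

```shell
# Scalar toy Dyson equation at the RPA level (no f_xc):
# chi = chi0 / (1 - v*chi0); with v=0 one gets back the IP result chi = chi0.
# chi0 = -0.1 and v = 2.0 are illustrative numbers only.
echo "-0.1 2.0" | awk '{c0=$1; v=$2;
  printf "chi0=%.4f chi=%.4f\n", c0, c0/(1-v*c0)}'
```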
&lt;br /&gt;
=== Choosing input parameters ===&lt;br /&gt;
Enter the folder for 2D hBN that contains the SAVE directory, and generate the input file. To include the local-field variables in the input file the correct option is &amp;lt;code&amp;gt;yambo -o c -k hartree&amp;lt;/code&amp;gt; (once again you can check it with &amp;lt;code&amp;gt;yambo -h&amp;lt;/code&amp;gt;). Let&#039;s start by running the calculation for light polarization &#039;&#039;q&#039;&#039; in the plane of the BN sheet:&lt;br /&gt;
 $ cd YAMBO_TUTORIALS/hBN-2D/YAMBO&lt;br /&gt;
 $ yambo        &#039;&#039;(Initialization)&#039;&#039;&lt;br /&gt;
 $ yambo -F yambo.in_RPA -V RL -o c -k hartree&lt;br /&gt;
We thus use a new input file &#039;&#039;yambo.in_RPA&#039;&#039;, switch on the &amp;lt;code&amp;gt;FFTGvecs&amp;lt;/code&amp;gt; variable, and will label all outputs/databases with a &#039;&#039;q100&#039;&#039; tag at run time. Make sure to set/modify all of the following variables:&lt;br /&gt;
 [[Variables#FFTGvecs|FFTGvecs]]=     &#039;&#039;&#039;6        Ry&#039;&#039;&#039;    # [FFT] Plane-waves&lt;br /&gt;
 [[Variables#Chimod|Chimod]]= &amp;quot;Hartree&amp;quot;            # [X] IP/Hartree/ALDA/LRC/BSfxc&lt;br /&gt;
 [[Variables#NGsBlkXd|NGsBlkXd]]= &#039;&#039;&#039;    3        Ry&#039;&#039;&#039;    # [Xd] Response block size&lt;br /&gt;
 % [[Variables#QpntsRXd|QpntsRXd]]&lt;br /&gt;
  1 |  &#039;&#039;&#039;1&#039;&#039;&#039; |                   # [Xd] Transferred momenta&lt;br /&gt;
 %&lt;br /&gt;
 % [[Variables#EnRngeXd|EnRngeXd]]&lt;br /&gt;
  0.00000 | &#039;&#039;&#039;20.00000&#039;&#039;&#039; | eV    # [Xd] Energy range&lt;br /&gt;
 %&lt;br /&gt;
 % [[Variables#DmRngeXd|DmRngeXd]]&lt;br /&gt;
 &#039;&#039;&#039;0.200000&#039;&#039;&#039; | &#039;&#039;&#039;0.200000&#039;&#039;&#039; | eV    # [Xd] Damping range&lt;br /&gt;
 %&lt;br /&gt;
 [[Variables#ETStpsXd|ETStpsXd]]= 2001               # [Xd] Total Energy steps&lt;br /&gt;
 % [[Variables#LongDrXd|LongDrXd]]&lt;br /&gt;
 1.000000 | 0.000000 | 0.000000 |        # [Xd] [cc] Electric Field&lt;br /&gt;
 %&lt;br /&gt;
In this input file:&lt;br /&gt;
* We evaluate the &amp;lt;math&amp;gt;\mathbf q \rightarrow 0&amp;lt;/math&amp;gt; response function, choosing the direction of the limit parallel to the plane of the hBN sheet; &lt;br /&gt;
* We set a wider energy range than before, and a larger broadening;&lt;br /&gt;
* We select the Hartree kernel, and expand the G-vectors in the screening up to 3 Ry (about 85 G-vectors).&lt;br /&gt;
&lt;br /&gt;
===LFEs in periodic direction===&lt;br /&gt;
Now let&#039;s run the code with this new input file (on the CECAM machines: about 2 minutes in serial; about 50 s on 4 MPI tasks)&lt;br /&gt;
 $ yambo -F yambo.in_RPA  -J q100 &lt;br /&gt;
and let&#039;s compare the absorption with and without the local fields included. By inspecting the &#039;&#039;o-q100.eps_q1_inv_rpa_dyson&#039;&#039; file we find that this information is given in the 2&amp;lt;math&amp;gt;^\mathrm{nd}&amp;lt;/math&amp;gt; and 4&amp;lt;math&amp;gt;^\mathrm{th}&amp;lt;/math&amp;gt; columns, respectively:&lt;br /&gt;
 $ head -n 40 o-q100.eps_q1_inv_rpa_dyson&lt;br /&gt;
 # Absorption @ Q(1) [q-&amp;gt;0 direction] : 1.0000000  0.0000000  0.0000000&lt;br /&gt;
 #  E/ev[1]     EPS-Im[2]   EPS-Re[3]   EPSo-Im[4]  EPSo-Re[5]&lt;br /&gt;
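To read off the peak position without plotting, a one-line awk scan over the second column is enough. The sketch below runs on a tiny mock file standing in for the real output:&lt;br /&gt;

```shell
# Find the energy (column 1) at which Im eps with LFE (column 2) is maximal.
# mock.eps is a stand-in for the real o-q100.eps_q1_inv_rpa_dyson output file.
printf '# header\n1.0 0.2 1.1 0.1 1.0\n2.0 0.9 1.2 0.5 1.0\n3.0 0.4 1.0 0.3 1.0\n' > mock.eps
awk '!/^#/ && $2 > max {max=$2; e=$1} END {printf "peak at %.1f eV\n", e}' mock.eps
```

The &amp;lt;code&amp;gt;!/^#/&amp;lt;/code&amp;gt; pattern skips the comment header lines of the Yambo output.&lt;br /&gt;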
Plot the result:&lt;br /&gt;
 $ gnuplot&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;quot;o-q100.eps_q1_inv_rpa_dyson&amp;quot; u 1:2 w l t &amp;quot;RPA-LFE&amp;quot;,&amp;quot;o-q100.eps_q1_inv_rpa_dyson&amp;quot; u 1:4 w l t &amp;quot;noLFE&amp;quot;, &amp;quot;o-q100.eel_q1_inv_rpa_dyson&amp;quot; u 1:4 w l ls 7 dt 2 t &amp;quot;EELS&amp;quot;&lt;br /&gt;
[[File:CH-LFE4.png|none|600px|Yambo tutorial image]]&lt;br /&gt;
Local fields have little influence here, as is typical for semiconductors and materials with a smoothly varying electronic density. We have also shown the EELS spectrum (&#039;&#039;o-q100.eel_q1_inv_rpa_dyson&#039;&#039;) for comparison.&lt;br /&gt;
&lt;br /&gt;
===LFEs in non-periodic direction===&lt;br /&gt;
Now let&#039;s switch to &#039;&#039;q&#039;&#039; perpendicular to the BN plane:&lt;br /&gt;
 $ yambo -F yambo.in_RPA -V RL -o c -k hartree        &#039;&#039;and set&#039;&#039;&lt;br /&gt;
 ...&lt;br /&gt;
 % [[Variables#LongDrXd|LongDrXd]]&lt;br /&gt;
 0.000000 | 0.000000 | &#039;&#039;&#039;1.000000&#039;&#039;&#039; |        # [Xd] [cc] Electric Field&lt;br /&gt;
 %&lt;br /&gt;
 &lt;br /&gt;
You can try out the default parallel usage now, or run again in serial, i.e.&lt;br /&gt;
 $ yambo -F yambo.in_RPA  -J &#039;&#039;&#039;q001&#039;&#039;&#039;       &#039;&#039;(serial)&#039;&#039;&lt;br /&gt;
 $ mpirun -np 4 yambo -F yambo.in_RPA  -J &#039;&#039;&#039;q001&#039;&#039;&#039; &amp;amp;      &#039;&#039;(parallel, MPI only, 4 tasks)&#039;&#039;&lt;br /&gt;
As noted previously, in parallel runs the &#039;&#039;log&#039;&#039; files appear in the LOG folder; you can follow the execution with &amp;lt;code&amp;gt;tail -F LOG/l-q001_optics_chi_CPU_1&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Plotting the output file:&lt;br /&gt;
 $ gnuplot&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;quot;o-q001.eps_q1_inv_rpa_dyson&amp;quot; u 1:2 w l,&amp;quot;o-q001.eps_q1_inv_rpa_dyson&amp;quot; u 1:4 w l&lt;br /&gt;
[[File:CH-LFE6.png|none|600px|Yambo tutorial image]]&lt;br /&gt;
In this case, the absorption is strongly blueshifted with respect to the in-plane absorption. Furthermore, the influence of local fields is striking, strongly quenching the spectrum. This is the well-known depolarization effect. Local field effects are much stronger in the perpendicular direction because the charge inhomogeneity is dramatic: many G-vectors are needed to account for the sharp change in the potential across the BN-vacuum interface.&lt;br /&gt;
&lt;br /&gt;
===Absorption versus EELS===&lt;br /&gt;
In order to understand this further, we plot the electron energy loss spectrum for this component and compare with the absorption:&lt;br /&gt;
 $ gnuplot&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;quot;o-q001.eps_q1_inv_rpa_dyson&amp;quot; w l,&amp;quot;o-q001.eel_q1_inv_rpa_dyson&amp;quot; w l&lt;br /&gt;
[[File:CH-LFE5.png|none|600px|Yambo tutorial image]]&lt;br /&gt;
The conclusion is that the absorption and EELS coincide for isolated systems. &lt;br /&gt;
To understand why this is, you need to consider the role of the &#039;&#039;macroscopic&#039;&#039; screening in the response function and the long-range part of the Coulomb potential. &lt;br /&gt;
See e.g.&amp;lt;ref&amp;gt;TDDFT from molecules to solids: The role of long‐range interactions, F. Sottile et al, International journal of quantum chemistry 102 (5), 684-701 (2005)&amp;lt;/ref&amp;gt;&lt;br /&gt;
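This can be seen directly from the definitions: EELS &amp;lt;math&amp;gt;= \operatorname{Im}\epsilon_M/|\epsilon_M|^2&amp;lt;/math&amp;gt;, and for an isolated system &amp;lt;math&amp;gt;\epsilon_M \rightarrow 1&amp;lt;/math&amp;gt;, so the denominator tends to 1 and EELS tends to Abs. A sketch with an illustrative near-vacuum value (not a computed BN number):&lt;br /&gt;

```shell
# For an isolated system eps_M -> 1, so EELS = eps2/(eps1^2+eps2^2) -> eps2 = Abs.
# eps_M = 1.02 + 0.05i is an illustrative near-vacuum value, not a computed one.
echo "1.02 0.05" | awk '{e1=$1; e2=$2;
  printf "Abs=%.4f EELS=%.4f\n", e2, e2/(e1*e1+e2*e2)}'
```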
&lt;br /&gt;
==Links==&lt;br /&gt;
* Back to [[ICTP 2022#Tutorials]]&lt;br /&gt;
* Back to [[CECAM VIRTUAL 2021#Tutorials]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{| style=&amp;quot;width:100%&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
|style=&amp;quot;width:25%; text-align:left&amp;quot;|Prev: CECAM School Home -&amp;gt; [[First_steps:_walk_through_from_DFT(standalone)|First steps]] &lt;br /&gt;
|style=&amp;quot;width:40%; text-align:center&amp;quot;|Now: CECAM School Home -&amp;gt; [[Next steps: RPA calculations (standalone)|Next steps]]&lt;br /&gt;
|style=&amp;quot;width:35%; text-align:right&amp;quot;|Back to: [[CECAM_VIRTUAL_2021#Tutorials|CECAM School Home]] &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8784</id>
		<title>Modena 2025</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8784"/>
		<updated>2025-05-20T12:18:28Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2025/01/17/yambo-school-modena-2025/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the Leonardo-DCGP partition. You can find info about Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section here].&lt;br /&gt;
In order to access the computational resources provided by CINECA you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access Leonardo via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in several ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 ssh username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you connect to Leonardo. To do so, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 ssh-keygen -t rsa -b 4096 -f ~/.ssh/leonardo_rsa&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Created directory &#039;/home/username/.ssh&#039;.&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in /home/username/.ssh/leonardo_rsa&lt;br /&gt;
 Your public key has been saved in /home/username/.ssh/leonardo_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 [...]&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to Leonardo. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 ssh-copy-id -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Be aware that when running the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command, after typing &amp;quot;yes&amp;quot; at the prompt, you might see an error message like the one shown below. Don&#039;t worry: just follow the instructions provided in this CINECA [https://wiki.u-gov.it/confluence/display/SCAIUS/FAQ#FAQ-Ikeepreceivingtheerrormessage%22WARNING:REMOTEHOSTIDENTIFICATIONHASCHANGED!%22evenifImodifyknown_hostfile guide to resolve the issue]. Once done, run the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command again.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
 /usr/bin/ssh-copy-id: &lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @&lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to Leonardo without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things even more, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username:&lt;br /&gt;
 Host leonardo &lt;br /&gt;
  HostName login.leonardo.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile ~/.ssh/leonardo_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on Leonardo, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username; &lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/4%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 echo $SCRATCH&lt;br /&gt;
 /leonardo_scratch/large/userexternal/username&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on Leonardo are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial; a few need to be executed on multiple processors. Generally, Slurm batch jobs are submitted using a script, but the tutorials here are better understood if run interactively. The two procedures that we will use to submit non-interactive and interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra25_yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=dcgp_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --gres=tmpfs:10g                # List of generic consumable resources&lt;br /&gt;
 #SBATCH --qos=normal                    # Quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;n&amp;gt;           # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;n/2&amp;gt;       # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;c&amp;gt;             # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
 &lt;br /&gt;
 mpirun -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that the instructions in the batch script must be compatible with the specific Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section#DCGPSection-SLURMpartitions resources]. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 sbatch job.sh&lt;br /&gt;
 Submitted batch job 15696508&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 squeue --me&lt;br /&gt;
            JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
         15696508 dcgp_usr_   job.sh username  R       0:01      1 lrdn4135&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since most of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open an interactive session of 4 hours:&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 srun: job 15694182 queued and waiting for resources&lt;br /&gt;
 srun: job 15694182 has been allocated resources&lt;br /&gt;
&lt;br /&gt;
We ask for 4 cpus-per-task (&amp;lt;code&amp;gt;-c&amp;lt;/code&amp;gt;) so that we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above:&lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 using the appropriate Slurm environment variable:&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on Leonardo, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to Leonardo enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -X leonardo&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Set up yambopy &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to run yambopy on Leonardo, you must first activate the python environment:&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 source /leonardo_work/tra25_yambo/env_yambopy/bin/activate&lt;br /&gt;
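&lt;br /&gt;
As a quick check that the environment is active, &amp;lt;code&amp;gt;which python&amp;lt;/code&amp;gt; should point inside the environment directory (the path below assumes the environment location given above):&lt;br /&gt;
 $ which python&lt;br /&gt;
 /leonardo_work/tra25_yambo/env_yambopy/bin/python&lt;br /&gt;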
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
Quick recap: before every tutorial, if you run on Leonardo, follow these steps&lt;br /&gt;
&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBO_TUTORIALS &#039;&#039;&#039;#(Only if you didn&#039;t before)&#039;&#039;&#039;&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
Since the compute nodes are not connected to the external network, the tarballs must be downloaded before starting the interactive session.&lt;br /&gt;
Alternatively, once the interactive session has started, it is possible to access the tarballs by copying them from the following directories:&lt;br /&gt;
&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBO_TUTORIALS&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS&lt;br /&gt;
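&lt;br /&gt;
For example, to copy and extract a single archive (assuming the directory contains the same &amp;lt;code&amp;gt;hBN.tar.gz&amp;lt;/code&amp;gt; tarball used in the tutorials below):&lt;br /&gt;
 cp /leonardo_work/tra25_yambo/YAMBO_TUTORIALS/hBN.tar.gz .&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;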
&lt;br /&gt;
After that, you can start the interactive session:&lt;br /&gt;
&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
set the environment variable for OpenMP&lt;br /&gt;
&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
and load yambo or yambopy as explained above in the general instructions.&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:30 - 16:30 Linear response&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Introduction to Yambopy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
At this point, you may learn about the python pre- and post-processing capabilities offered by yambopy, our python interface to yambo and QE. First of all, let&#039;s create a dedicated directory, then download and extract the related files.&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ mkdir -p YAMBOPY_TUTORIALS&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS&lt;br /&gt;
 $ rsync -avzP /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS/yambopy_tutorial_Modena_2025.tar.gz .&lt;br /&gt;
 $ tar --strip-components=1 -xvzf yambopy_tutorial_Modena_2025.tar.gz&lt;br /&gt;
&lt;br /&gt;
Then, follow part 1 of the tutorial, which is related to DFT band structures, YAMBO initialization and linear response.&lt;br /&gt;
* [[Modena 2025 : Yambopy part 1]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;11:30 - 12:30 | 14:30 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get all the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ tar -xvf MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 $ cd hBN&lt;br /&gt;
&lt;br /&gt;
Now you can start the first tutorial:&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
If you have gone through the first tutorial, move on to the second one:&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ cd MoS2_HPC_tutorial&lt;br /&gt;
&lt;br /&gt;
* [[Quasi-particles of a 2D system | Quasi-particles of a 2D system ]]&lt;br /&gt;
&lt;br /&gt;
As for yambopy, the tutorial related to GW calculations is contained in the first section of Part 2.&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#GW calculations| Modena 2025 : Yambopy part 2 (GW calculations)]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Bethe-Salpeter equation (BSE)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz # NOTE: YOU SHOULD ALREADY HAVE THIS FROM DAY 1&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-convergence-kpoints.tar.gz &lt;br /&gt;
 $ tar -xvf hBN-convergence-kpoints.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the following tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[Calculating optical spectra including excitonic effects: a step-by-step guide|Perform a BSE calculation from beginning to end ]]&lt;br /&gt;
* [[How to analyse excitons - ICTP 2022 school|Analyse your results (exciton wavefunctions in real and reciprocal space, etc.) ]]&lt;br /&gt;
* [[BSE solvers overview|Solve the BSE eigenvalue problem with different numerical methods]]&lt;br /&gt;
* [[How to choose the input parameters|Choose the input parameters for a meaningful converged calculation]]&lt;br /&gt;
&lt;br /&gt;
Now, go into the yambopy tutorial directory to learn about python analysis tools for the BSE:&lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS/databases_yambopy&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#Excitons| Modena 2025 : Yambopy part 2 (BSE calculations)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 22 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:30 - 16:00 Bethe-Salpeter (part 2)&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:30 - 17:30 Nonlinear response with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (Queen&#039;s University Belfast), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
For the tutorials we will use first the &amp;lt;code&amp;gt;hBN-2D-RT&amp;lt;/code&amp;gt; folder (k-sampling 10x10x1) and then the &amp;lt;code&amp;gt;hBN-2D&amp;lt;/code&amp;gt; folder (k-sampling 6x6x1).&lt;br /&gt;
You may already have them in the &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt; folder:&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN-2D-RT&#039;&#039;&#039; hBN-2D.tar.gz  hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
If you need to download the tutorial files again, follow these steps (or see the above instructions):&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo (5.3)|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Dielectric function from Bloch-states dynamics (5.3)|Dielectric function from Bloch-states dynamics]]&lt;br /&gt;
* [[Second-harmonic generation of 2D-hBN (5.3)|Second-harmonic generation of 2D-hBN]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May ===&lt;br /&gt;
&lt;br /&gt;
* D. Varsano, [https://drive.google.com/file/d/1lbY6zF04WCcvZZhQy4TAIBca9wXHVmGG/view?usp=share_link Description and goal of the school].&lt;br /&gt;
* C. Franchini, [https://drive.google.com/file/d/1Z6GCjP4K1dM28ULsyYg2eckgUdYUSRph/view?usp=share_link First principles and data-driven correlated materials]&lt;br /&gt;
* F. Mohamed, [https://drive.google.com/file/d/1ITddkGTM12Gw5QxnZjAQpfZgYH0FvJL1/view?usp=share_link A tour on Density Functional Theory]&lt;br /&gt;
* E. Cannuccia, [https://drive.google.com/file/d/1mBTcPrnfoqwcA5wXE8gXQMO_qttClHAd/view?usp=share_link Electronic screening and linear response theory]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
* A. Marini, [https://drive.google.com/file/d/1HTIPHkH2sBaVDLFwwS34T-fJ9x8FhVPq/view?usp=share_link Introduction to Many-Body Perturbation Theory]&lt;br /&gt;
* C. Cardoso, [https://drive.google.com/file/d/1SR9BtFKgz6Y1gaHSKF1s8xzb42D5C1Xg/view?usp=share_link Quasiparticles and the GW Approximation]&lt;br /&gt;
* A. Guandalini, [https://drive.google.com/file/d/1dgcdHMfA0b7jjyrCs4r9OrG6qpiu1v39/view?usp=share_link GW in practice: algorithms and approximations]&lt;br /&gt;
* G. Sesti, [https://drive.google.com/file/d/1te_85k9jgSymr3Av86rKOu0-tA-7sGWq/view?usp=sharing  GW advanced algorithms]&lt;br /&gt;
* M. Govoni, GW without empty states and investigation of neutral excitations by embedding full configuration interaction in DFT+GW&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
* M. Palummo, [https://drive.google.com/file/d/1pQ491hqpETVLchL92QPy4f_jWqfMK5xf/view?usp=share_link Optical absorption and excitons via the Bethe-Salpeter Equation]&lt;br /&gt;
* D. Sangalli, Real-time simulations&lt;br /&gt;
* F. Paleari, Introduction to YamboPy (automation and post-processing)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
* E. Luppi, An introduction to Non-linear spectroscopy&lt;br /&gt;
* M. Grüning, [https://drive.google.com/file/d/1bZF0f3AD-WL3M3vCtvrnA_1W94SKt-Gf/view?usp=sharing Non-linear spectroscopy in Yambo]&lt;br /&gt;
* F. Affinito, Frontiers in High-Performance Computing&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8723</id>
		<title>Modena 2025</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8723"/>
		<updated>2025-05-19T07:55:27Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* DAY 4 - Thursday, May 22 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2025/01/17/yambo-school-modena-2025/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the Leonardo-DCGP partition. You can find info about Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section here].&lt;br /&gt;
In order to access computational resources provided by CINECA you need your personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access Leonardo via &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 ssh username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to Leonardo. To do so, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 ssh-keygen -t rsa -b 4096 -f ~/.ssh/leonardo_rsa&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Created directory &#039;/home/username/.ssh&#039;.&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in /home/username/.ssh/leonardo_rsa&lt;br /&gt;
 Your public key has been saved in /home/username/.ssh/leonardo_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 [...]&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to Leonardo. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 ssh-copy-id -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Be aware that when running the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command, after typing &amp;quot;yes&amp;quot; at the prompt, you might see an error message like the one shown below. Don&#039;t worry, just follow the instructions provided in this CINECA [https://wiki.u-gov.it/confluence/display/SCAIUS/FAQ#FAQ-Ikeepreceivingtheerrormessage%22WARNING:REMOTEHOSTIDENTIFICATIONHASCHANGED!%22evenifImodifyknown_hostfile guide to resolve the issue]. Once done, run the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command again.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
 /usr/bin/ssh-copy-id: &lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @&lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to Leonardo without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify even more, you can paste the following lines in a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; located inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username:&lt;br /&gt;
 Host leonardo &lt;br /&gt;
  HostName login.leonardo.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile ~/.ssh/leonardo_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on Leonardo, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/4%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 echo $SCRATCH&lt;br /&gt;
 /leonardo_scratch/large/userexternal/username&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on Leonardo are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials during this school can be run in serial, except for some that need to be executed on multiple processors. Generally, Slurm batch jobs are submitted using a script, but the tutorials here are better understood if run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra25_yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=dcgp_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --gres=tmpfs:10g                # List of generic consumable resources&lt;br /&gt;
 #SBATCH --qos=normal                    # Quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;n&amp;gt;           # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;n/2&amp;gt;       # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;c&amp;gt;             # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
 &lt;br /&gt;
 mpirun -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that the instructions in the batch script must be compatible with the specific Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section#DCGPSection-SLURMpartitions resources]. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 sbatch job.sh&lt;br /&gt;
 Submitted batch job 15696508&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 squeue --me&lt;br /&gt;
            JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
         15696508 dcgp_usr_   job.sh username  R       0:01      1 lrdn4135&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since most of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open an interactive session of 4 hours:&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 srun: job 15694182 queued and waiting for resources&lt;br /&gt;
 srun: job 15694182 has been allocated resources&lt;br /&gt;
&lt;br /&gt;
We ask for 4 cpus-per-task (&amp;lt;code&amp;gt;-c&amp;lt;/code&amp;gt;) so that we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above:&lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 using the appropriate Slurm environment variable:&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on Leonardo, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to Leonardo enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -X leonardo&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Set up yambopy &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to run yambopy on Leonardo, you must first activate the python environment:&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 source /leonardo_work/tra25_yambo/env_yambopy/bin/activate&lt;br /&gt;
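&lt;br /&gt;
As a quick check that the environment is active, &amp;lt;code&amp;gt;which python&amp;lt;/code&amp;gt; should point inside the environment directory (the path below assumes the environment location given above):&lt;br /&gt;
 $ which python&lt;br /&gt;
 /leonardo_work/tra25_yambo/env_yambopy/bin/python&lt;br /&gt;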
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
Quick recap: before every tutorial, if you run on Leonardo, follow these steps&lt;br /&gt;
&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBO_TUTORIALS &#039;&#039;&#039;#(Only if you didn&#039;t before)&#039;&#039;&#039;&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
Since the compute nodes are not connected to the external network, the tarballs must be downloaded before starting the interactive session.&lt;br /&gt;
Alternatively, once the interactive session has started, it is possible to access the tarballs by copying them from the following directories:&lt;br /&gt;
&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBO_TUTORIALS&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS&lt;br /&gt;
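&lt;br /&gt;
For example, to copy and extract a single archive (assuming the directory contains the same &amp;lt;code&amp;gt;hBN.tar.gz&amp;lt;/code&amp;gt; tarball used in the tutorials below):&lt;br /&gt;
 cp /leonardo_work/tra25_yambo/YAMBO_TUTORIALS/hBN.tar.gz .&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;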
&lt;br /&gt;
After that, you can start the interactive session:&lt;br /&gt;
&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
set the environment variable for OpenMP&lt;br /&gt;
&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
and load yambo or yambopy as explained above in the general instructions.&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
&lt;br /&gt;
At this point, you may learn about the python pre- and post-processing capabilities offered by yambopy, our python interface to yambo and QE. First of all, let&#039;s create a dedicated directory, then download and extract the related files.&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ mkdir -p YAMBOPY_TUTORIALS&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS&lt;br /&gt;
 $ rsync -avzP /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS/yambopy_tutorial_Modena_2025.tar.gz .&lt;br /&gt;
 $ tar --strip-components=1 -xvzf yambopy_tutorial_Modena_2025.tar.gz&lt;br /&gt;
&lt;br /&gt;
Then, follow part 1 of the tutorial, which is related to DFT band structures, YAMBO initialization and linear response.&lt;br /&gt;
* [[Modena 2025 : Yambopy part 1]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get all the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ tar -xvf MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 $ cd hBN&lt;br /&gt;
&lt;br /&gt;
Now you can start the first tutorial:&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
If you have gone through the first tutorial, move on to the second one:&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ cd MoS2_HPC_tutorial&lt;br /&gt;
&lt;br /&gt;
* [[Quasi-particles of a 2D system | Quasi-particles of a 2D system ]]&lt;br /&gt;
&lt;br /&gt;
As for yambopy, the tutorial related to GW calculations is contained in the first section of Part 2.&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#GW calculations| Modena 2025 : Yambopy part 2 (GW calculations)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Bethe-Salpeter equation (BSE)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz # NOTE: YOU SHOULD ALREADY HAVE THIS FROM DAY 1&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-convergence-kpoints.tar.gz &lt;br /&gt;
 $ tar -xvf hBN-convergence-kpoints.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the following tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[Calculating optical spectra including excitonic effects: a step-by-step guide|Perform a BSE calculation from beginning to end ]]&lt;br /&gt;
* [[How to analyse excitons - ICTP 2022 school|Analyse your results (exciton wavefunctions in real and reciprocal space, etc.) ]]&lt;br /&gt;
* [[BSE solvers overview|Solve the BSE eigenvalue problem with different numerical methods]]&lt;br /&gt;
* [[How to choose the input parameters|Choose the input parameters for a meaningful converged calculation]]&lt;br /&gt;
&lt;br /&gt;
Now, go into the yambopy tutorial directory to learn about python analysis tools for the BSE:&lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS/databases_yambopy&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#Excitons| Modena 2025 : Yambopy part 2 (BSE calculations)]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Bethe-Salpeter equation in real time (TD-HSEX)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
The files needed for the following tutorials can be downloaded following these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Introduction_to_Real_Time_propagation_in_Yambo#Time_Dependent_Equation_for_the_Reduced_One--Body_Density--Matrix|Read the introductory section on real-time propagation for the one-body density matrix]] (the part about the time-dependent Schrödinger equation will be covered on DAY 4 and you can skip it for now)&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Linear response from real time simulations (density matrix only)|Calculate the linear response in real time]]&lt;br /&gt;
* [[Real time Bethe-Salpeter Equation (density matrix only)|Calculate the BSE in real time]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 22 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (Queen&#039;s University Belfast), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the tutorials we will use first the &amp;lt;code&amp;gt;hBN-2D-RT&amp;lt;/code&amp;gt; folder (k-sampling 10x10x1) and then the &amp;lt;code&amp;gt;hBN-2D&amp;lt;/code&amp;gt; folder (k-sampling 6x6x1).&lt;br /&gt;
You may already have them in the &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt; folder:&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN-2D-RT&#039;&#039;&#039; hBN-2D.tar.gz  hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
If you need to download the tutorial files again, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo (5.3)|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Dielectric function from Bloch-states dynamics (5.3)|Dielectric function from Bloch-states dynamics]]&lt;br /&gt;
* [[Second-harmonic generation of 2D-hBN (5.3)|Second-harmonic generation of 2D-hBN]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May ===&lt;br /&gt;
&lt;br /&gt;
* D. Varsano, Description and goal of the school&lt;br /&gt;
* C. Franchini, First principles and data-driven correlated materials&lt;br /&gt;
* F. Mohamed, A tour on Density Functional Theory&lt;br /&gt;
* E. Cannuccia, Electronic screening and linear response theory&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
* A. Marini, Introduction to Many-Body Perturbation Theory&lt;br /&gt;
* C. Cardoso, Quasiparticles and the GW Approximation&lt;br /&gt;
* A. Guandalini, G. Sesti, GW in practice: algorithms, approximations and W-averaged GW in metals&lt;br /&gt;
* M. Govoni, GW without empty states and investigation of neutral excitations by embedding full configuration interaction in DFT+GW&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
* M. Palummo, Optical absorption and excitons via the Bethe-Salpeter Equation&lt;br /&gt;
* D. Sangalli, Real-time simulations&lt;br /&gt;
* F. Paleari, Introduction to YamboPy (automation and post-processing)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
* E. Luppi, An introduction to Non-linear spectroscopy&lt;br /&gt;
* M. Grüning, Non-linear spectroscopy in Yambo&lt;br /&gt;
* F. Affinito, Frontiers in High-Performance Computing&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8722</id>
		<title>Modena 2025</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8722"/>
		<updated>2025-05-19T07:55:03Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* DAY 3 - Wednesday, 21 May */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2025/01/17/yambo-school-modena-2025/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the Leonardo-DCGP partition. You can find info about Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section here].&lt;br /&gt;
In order to access computational resources provided by CINECA you need your personal username and password, which were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access Leonardo via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 ssh username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an SSH key pair to avoid typing the password each time you connect to Leonardo. To do so, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 ssh-keygen -t rsa -b 4096 -f ~/.ssh/leonardo_rsa&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Created directory &#039;/home/username/.ssh&#039;.&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in /home/username/.ssh/leonardo_rsa&lt;br /&gt;
 Your public key has been saved in /home/username/.ssh/leonardo_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 [...]&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
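As a side note, the same key generation can be scripted non-interactively; the sketch below writes into a throwaway directory so no existing key is touched (the path and the empty passphrase are illustrative only, not recommended settings).&lt;br /&gt;

```shell
# Sketch: non-interactive key generation, mirroring the ssh-keygen call
# above. The scratch path and the empty passphrase (-N "") are purely
# illustrative; use a real passphrase and ~/.ssh in practice.
mkdir -p /tmp/ssh_demo
rm -f /tmp/ssh_demo/leonardo_rsa /tmp/ssh_demo/leonardo_rsa.pub
ssh-keygen -t rsa -b 4096 -N "" -f /tmp/ssh_demo/leonardo_rsa -q
ls /tmp/ssh_demo
```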
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to Leonardo. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 ssh-copy-id -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Be aware that when running the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command, after typing &amp;quot;yes&amp;quot; at the prompt, you might see an error message like the one shown below. Don&#039;t worry: just follow the instructions provided in this CINECA [https://wiki.u-gov.it/confluence/display/SCAIUS/FAQ#FAQ-Ikeepreceivingtheerrormessage%22WARNING:REMOTEHOSTIDENTIFICATIONHASCHANGED!%22evenifImodifyknown_hostfile guide to resolve the issue]. Once done, run the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command again.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
 /usr/bin/ssh-copy-id: &lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @&lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to Leonardo without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things even further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username:&lt;br /&gt;
 Host leonardo &lt;br /&gt;
  HostName login.leonardo.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile ~/.ssh/leonardo_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on Leonardo, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/4%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 echo $SCRATCH&lt;br /&gt;
 /leonardo_scratch/large/userexternal/username&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on Leonardo are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials during this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are better understood if run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra25_yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=dcgp_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --gres=tmpfs:10g                # List of generic consumable resources&lt;br /&gt;
 #SBATCH --qos=normal                    # Quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;n&amp;gt;           # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;n/2&amp;gt;       # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;c&amp;gt;             # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
 &lt;br /&gt;
 mpirun -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that the instructions in the batch script must be compatible with the specific Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section#DCGPSection-SLURMpartitions resources]. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in locations specified during the tutorials.&lt;br /&gt;
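Before submitting, it can help to check that the placeholders in the script (nodes, tasks per node, CPUs per task) are mutually consistent; a minimal sketch, where all the numbers, including the per-node core count, are illustrative assumptions and not school-provided values:&lt;br /&gt;

```shell
# Sketch: sanity-check a Slurm resource request. All values below are
# illustrative assumptions; verify the real node layout with sinfo or
# the CINECA documentation before submitting.
nodes=2              # --nodes
ntasks_per_node=28   # --ntasks-per-node
cpus_per_task=4      # --cpus-per-task
cores_per_node=112   # assumed cores on one node (an assumption here)

total_tasks=$(( nodes * ntasks_per_node ))             # what mpirun -np receives
cores_used=$(( ntasks_per_node * cpus_per_task ))      # cores needed per node

echo "total MPI tasks: ${total_tasks}"
if [ "${cores_used}" -le "${cores_per_node}" ]; then
    echo "layout fits: ${cores_used}/${cores_per_node} cores per node"
else
    echo "oversubscribed: ${cores_used} > ${cores_per_node} cores per node"
fi
```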
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 sbatch job.sh&lt;br /&gt;
 Submitted batch job 15696508&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 squeue --me&lt;br /&gt;
            JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
         15696508 dcgp_usr_   job.sh username  R       0:01      1 lrdn4135&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since they are generally meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open a 4-hour interactive session:&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 srun: job 15694182 queued and waiting for resources&lt;br /&gt;
 srun: job 15694182 has been allocated resources&lt;br /&gt;
&lt;br /&gt;
We ask for 4 cpus-per-task (-c) because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above:&lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 using the appropriate Slurm environment variable:&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
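Note that outside a Slurm allocation &amp;lt;code&amp;gt;SLURM_CPUS_PER_TASK&amp;lt;/code&amp;gt; is not set, so the export above would leave the variable empty; a hedged sketch of a safer variant using a shell default value (the fallback of 1 thread is an assumption, not a school setting):&lt;br /&gt;

```shell
# Sketch: fall back to a single thread when SLURM_CPUS_PER_TASK is unset
# (e.g. when testing commands outside of a Slurm allocation).
# The fallback value 1 is an illustrative assumption.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
echo "OMP_NUM_THREADS=${OMP_NUM_THREADS}"
```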
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. To do so on Leonardo, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect again, enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -X leonardo&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Set up yambopy &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to run yambopy on Leonardo, you must first activate the python environment:&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 source /leonardo_work/tra25_yambo/env_yambopy/bin/activate&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
Quick recap.&lt;br /&gt;
Before every tutorial, if you are running on Leonardo, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBO_TUTORIALS &#039;&#039;&#039;# (only if you haven&#039;t done this already)&#039;&#039;&#039;&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
Since the compute nodes are not connected to the external network, the tarballs must be downloaded before starting the interactive session.&lt;br /&gt;
Alternatively, once the interactive session has started, it is possible to access the tarballs by copying them from the following directories:&lt;br /&gt;
&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBO_TUTORIALS&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
After that, you can start the interactive session&lt;br /&gt;
&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
set the environment variable for OpenMP&lt;br /&gt;
&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
and load yambo or yambopy as explained above in the general instructions.&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
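If you want to see what the &amp;lt;code&amp;gt;tar&amp;lt;/code&amp;gt; commands do before touching the tutorial archives, you can try them on a toy tarball; a self-contained sketch where all file and directory names are illustrative only:&lt;br /&gt;

```shell
# Sketch: create, list and extract a small gzipped tarball, mirroring the
# tar usage above. File and directory names are illustrative only.
mkdir -p demo_src
echo "hello" > demo_src/file.txt
tar -czf demo.tar.gz demo_src    # create a gzipped archive
tar -tf demo.tar.gz              # list contents without extracting
rm -rf demo_src                  # remove the originals...
tar -xvf demo.tar.gz             # ...and restore them by extracting
cat demo_src/file.txt            # prints "hello"
```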
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
&lt;br /&gt;
At this point, you may learn about the python pre- and post-processing capabilities offered by yambopy, our python interface to yambo and QE. First of all, let&#039;s create a dedicated directory, then download and extract the related files.&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ mkdir -p YAMBOPY_TUTORIALS&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS&lt;br /&gt;
 $ rsync -avzP /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS/yambopy_tutorial_Modena_2025.tar.gz .&lt;br /&gt;
 $ tar --strip-components=1 -xvzf yambopy_tutorial_Modena_2025.tar.gz&lt;br /&gt;
&lt;br /&gt;
Then, follow part 1 of the tutorial, which is related to DFT band structures, YAMBO initialization and linear response.&lt;br /&gt;
* [[Modena 2025 : Yambopy part 1]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get all the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ tar -xvf MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 $ cd hBN&lt;br /&gt;
&lt;br /&gt;
Now you can start the first tutorial:&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
Once you have gone through the first tutorial, move on to the second one:&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ cd MoS2_HPC_tutorial&lt;br /&gt;
&lt;br /&gt;
* [[Quasi-particles of a 2D system | Quasi-particles of a 2D system ]]&lt;br /&gt;
&lt;br /&gt;
As for yambopy, the tutorial related to GW calculations is contained in the first section of Part 2:&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#GW calculations| Modena 2025 : Yambopy part 2 (GW calculations)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Bethe-Salpeter equation (BSE)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz # NOTE: YOU SHOULD ALREADY HAVE THIS FROM DAY 1&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-convergence-kpoints.tar.gz &lt;br /&gt;
 $ tar -xvf hBN-convergence-kpoints.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the following tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[Calculating optical spectra including excitonic effects: a step-by-step guide|Perform a BSE calculation from beginning to end ]]&lt;br /&gt;
* [[How to analyse excitons - ICTP 2022 school|Analyse your results (exciton wavefunctions in real and reciprocal space, etc.) ]]&lt;br /&gt;
* [[BSE solvers overview|Solve the BSE eigenvalue problem with different numerical methods]]&lt;br /&gt;
* [[How to choose the input parameters|Choose the input parameters for a meaningful converged calculation]]&lt;br /&gt;
&lt;br /&gt;
Now, go into the yambopy tutorial directory to learn about python analysis tools for the BSE:&lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS/databases_yambopy&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#Excitons| Modena 2025 : Yambopy part 2 (BSE calculations)]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Bethe-Salpeter equation in real time (TD-HSEX)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
The files needed for the following tutorials can be downloaded following these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Introduction_to_Real_Time_propagation_in_Yambo#Time_Dependent_Equation_for_the_Reduced_One--Body_Density--Matrix|Read the introductory section on real-time propagation for the one-body density matrix]] (the part about the time-dependent Schrödinger equation will be covered on DAY 4 and you can skip it for now)&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Linear response from real time simulations (density matrix only)|Calculate the linear response in real time]]&lt;br /&gt;
* [[Real time Bethe-Salpeter Equation (density matrix only)|Calculate the BSE in real time]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 22 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time dependent berry phase&#039;&#039;&#039; Myrta Gruning (Queen&#039;s University Belfast), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the tutorials we will first use the &amp;lt;code&amp;gt;hBN-2D-RT&amp;lt;/code&amp;gt; folder (k-sampling 10x10x1) and then the &amp;lt;code&amp;gt;hBN-2D&amp;lt;/code&amp;gt; folder (k-sampling 6x6x1).&lt;br /&gt;
You may already have them in the &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt; folder:&lt;br /&gt;
 ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN-2D-RT&#039;&#039;&#039; hBN-2D.tar.gz  hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
If you need to download the tutorial files again, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo (5.3)|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Dielectric function from Bloch-states dynamics (5.3)|Dielectric function from Bloch-states dynamics]]&lt;br /&gt;
* [[Second-harmonic generation of 2D-hBN (5.3)|Second-harmonic generation of 2D-hBN]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May ===&lt;br /&gt;
&lt;br /&gt;
* D. Varsano, Description and goal of the school&lt;br /&gt;
* C. Franchini, First principles and data-driven correlated materials&lt;br /&gt;
* F. Mohamed, A tour on Density Functional Theory&lt;br /&gt;
* E. Cannuccia, Electronic screening and linear response theory&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
* A. Marini, Introduction to Many-Body Perturbation Theory&lt;br /&gt;
* C. Cardoso, Quasiparticles and the GW Approximation&lt;br /&gt;
* A. Guandalini, G. Sesti, GW in practice: algorithms, approximations and W-averaged GW in metals&lt;br /&gt;
* M. Govoni, GW without empty states and investigation of neutral excitations by embedding full configuration interaction in DFT+GW&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
* M. Palummo, Optical absorption and excitons via the Bethe-Salpeter Equation&lt;br /&gt;
* D. Sangalli, Real-time simulations&lt;br /&gt;
* F. Paleari, Introduction to YamboPy (automation and post-processing)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
* E. Luppi, An introduction to Non-linear spectroscopy&lt;br /&gt;
* M. Grüning, Non-linear spectroscopy in Yambo&lt;br /&gt;
* F. Affinito, Frontiers in High-Performance Computing&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8721</id>
		<title>Modena 2025</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8721"/>
		<updated>2025-05-19T07:54:23Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* DAY 3 - Wednesday, 21 May */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2025/01/17/yambo-school-modena-2025/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the Leonardo-DCGP partition. You can find info about Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section here].&lt;br /&gt;
In order to access computational resources provided by CINECA you need your personal username and password, which were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access Leonardo via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 ssh username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an SSH key pair to avoid typing the password each time you connect to Leonardo. To do so, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 ssh-keygen -t rsa -b 4096 -f ~/.ssh/leonardo_rsa&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Created directory &#039;/home/username/.ssh&#039;.&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in /home/username/.ssh/leonardo_rsa&lt;br /&gt;
 Your public key has been saved in /home/username/.ssh/leonardo_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 [...]&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to Leonardo. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 ssh-copy-id -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Be aware that when running the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command, after typing &amp;quot;yes&amp;quot; at the prompt, you might see an error message like the one shown below. Don&#039;t worry: just follow the instructions provided in this CINECA [https://wiki.u-gov.it/confluence/display/SCAIUS/FAQ#FAQ-Ikeepreceivingtheerrormessage%22WARNING:REMOTEHOSTIDENTIFICATIONHASCHANGED!%22evenifImodifyknown_hostfile guide to resolve the issue]. Once done, run the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command again.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
 /usr/bin/ssh-copy-id: &lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @&lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to Leonardo without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things even further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username:&lt;br /&gt;
 Host leonardo &lt;br /&gt;
  HostName login.leonardo.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile ~/.ssh/leonardo_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on Leonardo, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/4%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 echo $SCRATCH&lt;br /&gt;
 /leonardo_scratch/large/userexternal/username&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on Leonardo are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials during this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are better understood if run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra25_yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=dcgp_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --gres=tmpfs:10g                # List of generic consumable resources&lt;br /&gt;
 #SBATCH --qos=normal                    # Quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;n&amp;gt;           # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;n/2&amp;gt;       # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;c&amp;gt;             # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
 &lt;br /&gt;
 mpirun -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that the instructions in the batch script must be compatible with the specific Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section#DCGPSection-SLURMpartitions resources]. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in the locations specified during the tutorials.&lt;br /&gt;
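The way these Slurm fields combine can be sketched with plain shell arithmetic: &amp;lt;code&amp;gt;--nodes&amp;lt;/code&amp;gt; times &amp;lt;code&amp;gt;--ntasks-per-node&amp;lt;/code&amp;gt; gives the total number of MPI tasks passed to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, and each task spawns &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; OpenMP threads. The numbers below are purely illustrative, not a recommendation for any specific tutorial:&lt;br /&gt;

```shell
# Purely illustrative numbers: 1 node, 4 MPI tasks per node, 2 OpenMP threads per task
NODES=1
NTASKS_PER_NODE=4
CPUS_PER_TASK=2

TOTAL_TASKS=$((NODES * NTASKS_PER_NODE))      # this is what 'mpirun -np' receives
TOTAL_CORES=$((TOTAL_TASKS * CPUS_PER_TASK))  # physical cores the job occupies

echo "MPI tasks: ${TOTAL_TASKS}, total cores: ${TOTAL_CORES}"
# prints: MPI tasks: 4, total cores: 8
```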
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 sbatch job.sh&lt;br /&gt;
 Submitted batch job 15696508&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 squeue --me&lt;br /&gt;
            JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
         15696508 dcgp_usr_   job.sh username  R       0:01      1 lrdn4135&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial (as far as MPI parallelization is concerned) from the command line. Use the command below to open an interactive session of 4 hours:&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 srun: job 15694182 queued and waiting for resources&lt;br /&gt;
 srun: job 15694182 has been allocated resources&lt;br /&gt;
&lt;br /&gt;
We ask for 4 cpus-per-task (&amp;lt;code&amp;gt;-c&amp;lt;/code&amp;gt;) because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above:&lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 using the appropriate Slurm environment variable:&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
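If you experiment with the same export outside of a Slurm job (for example on a login node), &amp;lt;code&amp;gt;SLURM_CPUS_PER_TASK&amp;lt;/code&amp;gt; is not defined and &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; would be set to an empty value. A defensive variant, assuming a serial fallback of 1 thread is what you want in that case, is:&lt;br /&gt;

```shell
# Fall back to 1 OpenMP thread when SLURM_CPUS_PER_TASK is unset (e.g. outside a job)
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
echo "OpenMP threads: ${OMP_NUM_THREADS}"
```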
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on Leonardo, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to Leonardo enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -X leonardo&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Set up yambopy &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to run yambopy on Leonardo, you must first activate the python environment:&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 source /leonardo_work/tra25_yambo/env_yambopy/bin/activate&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
Quick recap.&lt;br /&gt;
Before every tutorial, if you are running on Leonardo, do the following steps:&lt;br /&gt;
&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBO_TUTORIALS &#039;&#039;&#039;#(Only if you didn&#039;t before)&#039;&#039;&#039;&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
Since the compute nodes are not connected to the external network, the tarballs must be downloaded before starting the interactive session.&lt;br /&gt;
Alternatively, once the interactive session has started, it is possible to access the tarballs by copying them from the following directories:&lt;br /&gt;
&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBO_TUTORIALS&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
After that, you can start the interactive session:&lt;br /&gt;
&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Set the environment variable for OpenMP:&lt;br /&gt;
&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
and load yambo or yambopy as explained above in the general instructions.&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
&lt;br /&gt;
At this point, you may learn about the Python pre- and post-processing capabilities offered by yambopy, our Python interface to yambo and QE. First, let&#039;s create a dedicated directory, then download and extract the related files.&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ mkdir -p YAMBOPY_TUTORIALS&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS&lt;br /&gt;
 $ rsync -avzP /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS/yambopy_tutorial_Modena_2025.tar.gz .&lt;br /&gt;
 $ tar --strip-components=1 -xvzf yambopy_tutorial_Modena_2025.tar.gz&lt;br /&gt;
&lt;br /&gt;
Then, follow part 1 of the tutorial, which is related to DFT band structures, YAMBO initialization and linear response.&lt;br /&gt;
* [[Modena 2025 : Yambopy part 1]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get all the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ tar -xvf MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 $ cd hBN&lt;br /&gt;
&lt;br /&gt;
Now you can start the first tutorial:&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
Once you have gone through the first tutorial, move on to the second one:&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ cd MoS2_HPC_tutorial&lt;br /&gt;
&lt;br /&gt;
* [[Quasi-particles of a 2D system | Quasi-particles of a 2D system ]]&lt;br /&gt;
&lt;br /&gt;
As for yambopy, the tutorial related to GW calculations is contained in the first section of Part 2:&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#GW calculations| Modena 2025 : Yambopy part 2 (GW calculations)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Bethe-Salpeter equation (BSE)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz # NOTE: YOU SHOULD ALREADY HAVE THIS FROM DAY 1&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-convergence-kpoints.tar.gz &lt;br /&gt;
 $ tar -xvf hBN-convergence-kpoints.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the following tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[Calculating optical spectra including excitonic effects: a step-by-step guide|Perform a BSE calculation from beginning to end ]]&lt;br /&gt;
* [[How to analyse excitons - ICTP 2022 school|Analyse your results (exciton wavefunctions in real and reciprocal space, etc.) ]]&lt;br /&gt;
* [[BSE solvers overview|Solve the BSE eigenvalue problem with different numerical methods]]&lt;br /&gt;
* [[How to choose the input parameters|Choose the input parameters for a meaningful converged calculation]]&lt;br /&gt;
&lt;br /&gt;
Now, go into the yambopy tutorial directory to learn about python analysis tools for the BSE:&lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS/databases_yambopy&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#Excitons| Modena 2025 : Yambopy part 2 (BSE calculations)]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Bethe-Salpeter equation in real time (TD-HSEX)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
The files needed for the following tutorials can be downloaded following these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Introduction_to_Real_Time_propagation_in_Yambo#Time_Dependent_Equation_for_the_Reduced_One--Body_Density--Matrix|Read the introductory section on real-time propagation for the one-body density matrix]] (the part about the time-dependent Schrödinger equation will be covered on DAY 4 and you can skip it for now)&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Linear response from real time simulations (density matrix only)|Calculate the linear response in real time]]&lt;br /&gt;
* [[Real time Bethe-Salpeter Equation (density matrix only)|Calculate the BSE in real time]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 22 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time dependent berry phase&#039;&#039;&#039; Myrta Gruning (Queen&#039;s University Belfast), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the tutorials we will first use the &amp;lt;code&amp;gt;hBN-2D-RT&amp;lt;/code&amp;gt; folder (k-sampling 10x10x1) and then the &amp;lt;code&amp;gt;hBN-2D&amp;lt;/code&amp;gt; folder (k-sampling 6x6x1).&lt;br /&gt;
You may already have them in the &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt; folder:&lt;br /&gt;
 ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN-2D-RT&#039;&#039;&#039; hBN-2D.tar.gz  hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
If you need to download the tutorial files again, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo (5.3)|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Dielectric function from Bloch-states dynamics (5.3)|Dielectric function from Bloch-states dynamics]]&lt;br /&gt;
* [[Second-harmonic generation of 2D-hBN (5.3)|Second-harmonic generation of 2D-hBN]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May ===&lt;br /&gt;
&lt;br /&gt;
* D. Varsano, [Description and goal of the school].&lt;br /&gt;
* C. Franchini, [First principles and data-driven correlated materials]&lt;br /&gt;
* F. Mohamed, [A tour on Density Functional Theory]&lt;br /&gt;
* E. Cannuccia, [Electronic screening and linear response theory]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
* A. Marini, Introduction to Many-Body Perturbation Theory&lt;br /&gt;
* C. Cardoso, Quasiparticles and the GW Approximation&lt;br /&gt;
* A. Guandalini, G. Sesti, GW in practice: algorithms, approximations and W-averaged GW in metals&lt;br /&gt;
* M. Govoni, GW without empty states and investigation of neutral excitations by embedding full configuration interaction in DFT+GW&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
* M. Palummo, Optical absorption and excitons via the Bethe-Salpeter Equation&lt;br /&gt;
* D. Sangalli, Real-time simulations&lt;br /&gt;
* F. Paleari, Introduction to YamboPy (automation and post-processing)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
* E. Luppi, An introduction to Non-linear spectroscopy&lt;br /&gt;
* M. Grüning, Non-linear spectroscopy in Yambo&lt;br /&gt;
* F. Affinito, Frontiers in High-Performance Computing&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8720</id>
		<title>Modena 2025</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8720"/>
		<updated>2025-05-19T07:53:33Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* DAY 2 - Tuesday, 20 May */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2025/01/17/yambo-school-modena-2025/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the Leonardo-DCGP partition. You can find info about Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section here].&lt;br /&gt;
In order to access the computational resources provided by CINECA you need your personal username and password, which were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access Leonardo via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command, replacing &amp;lt;code&amp;gt;username&amp;lt;/code&amp;gt; with your own:&lt;br /&gt;
 ssh username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to Leonardo. To do so, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 ssh-keygen -t rsa -b 4096 -f ~/.ssh/leonardo_rsa&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Created directory &#039;/home/username/.ssh&#039;.&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in /home/username/.ssh/leonardo_rsa&lt;br /&gt;
 Your public key has been saved in /home/username/.ssh/leonardo_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 [...]&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to Leonardo. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 ssh-copy-id -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Be aware that when running the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command, after typing &amp;quot;yes&amp;quot; at the prompt, you might see an error message like the one shown below. Don’t worry—just follow the instructions provided in this CINECA [https://wiki.u-gov.it/confluence/display/SCAIUS/FAQ#FAQ-Ikeepreceivingtheerrormessage%22WARNING:REMOTEHOSTIDENTIFICATIONHASCHANGED!%22evenifImodifyknown_hostfile guide to resolve the issue]. Once done, run the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command again.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
 /usr/bin/ssh-copy-id: &lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @&lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to Leonardo without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username:&lt;br /&gt;
 Host leonardo &lt;br /&gt;
  HostName login.leonardo.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile ~/.ssh/leonardo_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with:&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on Leonardo, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/4%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 echo $SCRATCH&lt;br /&gt;
 /leonardo_scratch/large/userexternal/username&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on Leonardo are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra25_yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=dcgp_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --gres=tmpfs:10g                # List of generic consumable resources&lt;br /&gt;
 #SBATCH --qos=normal                    # Quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;n&amp;gt;           # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;n/2&amp;gt;       # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;c&amp;gt;             # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
 &lt;br /&gt;
 mpirun -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that the instructions in the batch script must be compatible with the specific Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section#DCGPSection-SLURMpartitions resources]. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in the locations specified during the tutorials.&lt;br /&gt;
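The way these Slurm fields combine can be sketched with plain shell arithmetic: &amp;lt;code&amp;gt;--nodes&amp;lt;/code&amp;gt; times &amp;lt;code&amp;gt;--ntasks-per-node&amp;lt;/code&amp;gt; gives the total number of MPI tasks passed to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, and each task spawns &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; OpenMP threads. The numbers below are purely illustrative, not a recommendation for any specific tutorial:&lt;br /&gt;

```shell
# Purely illustrative numbers: 1 node, 4 MPI tasks per node, 2 OpenMP threads per task
NODES=1
NTASKS_PER_NODE=4
CPUS_PER_TASK=2

TOTAL_TASKS=$((NODES * NTASKS_PER_NODE))      # this is what 'mpirun -np' receives
TOTAL_CORES=$((TOTAL_TASKS * CPUS_PER_TASK))  # physical cores the job occupies

echo "MPI tasks: ${TOTAL_TASKS}, total cores: ${TOTAL_CORES}"
# prints: MPI tasks: 4, total cores: 8
```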
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 sbatch job.sh&lt;br /&gt;
 Submitted batch job 15696508&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 squeue --me&lt;br /&gt;
            JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
         15696508 dcgp_usr_   job.sh username  R       0:01      1 lrdn4135&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial (as far as MPI parallelization is concerned) from the command line. Use the command below to open an interactive session of 4 hours:&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 srun: job 15694182 queued and waiting for resources&lt;br /&gt;
 srun: job 15694182 has been allocated resources&lt;br /&gt;
&lt;br /&gt;
We ask for 4 cpus-per-task (&amp;lt;code&amp;gt;-c&amp;lt;/code&amp;gt;) because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above:&lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 using the appropriate Slurm environment variable:&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
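If you experiment with the same export outside of a Slurm job (for example on a login node), &amp;lt;code&amp;gt;SLURM_CPUS_PER_TASK&amp;lt;/code&amp;gt; is not defined and &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; would be set to an empty value. A defensive variant, assuming a serial fallback of 1 thread is what you want in that case, is:&lt;br /&gt;

```shell
# Fall back to 1 OpenMP thread when SLURM_CPUS_PER_TASK is unset (e.g. outside a job)
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
echo "OpenMP threads: ${OMP_NUM_THREADS}"
```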
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on Leonardo, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to Leonardo enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -X leonardo&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Set up yambopy &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to run yambopy on Leonardo, you must first activate the python environment:&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 source /leonardo_work/tra25_yambo/env_yambopy/bin/activate&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
Quick recap.&lt;br /&gt;
Before every tutorial, if you are running on Leonardo, do the following steps:&lt;br /&gt;
&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBO_TUTORIALS &#039;&#039;&#039;#(Only if you didn&#039;t before)&#039;&#039;&#039;&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
Since the compute nodes are not connected to the external network, the tarballs must be downloaded before starting the interactive session.&lt;br /&gt;
Alternatively, once the interactive session has started, it is possible to access the tarballs by copying them from the following directories:&lt;br /&gt;
&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBO_TUTORIALS&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
After that, you can start the interactive session:&lt;br /&gt;
&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Set the environment variable for OpenMP:&lt;br /&gt;
&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
and load yambo or yambopy as explained above in the general instructions.&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
&lt;br /&gt;
At this point, you may learn about the Python pre- and post-processing capabilities offered by yambopy, our Python interface to yambo and QE. First, let&#039;s create a dedicated directory, then download and extract the related files.&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ mkdir -p YAMBOPY_TUTORIALS&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS&lt;br /&gt;
 $ rsync -avzP /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS/yambopy_tutorial_Modena_2025.tar.gz .&lt;br /&gt;
 $ tar --strip-components=1 -xvzf yambopy_tutorial_Modena_2025.tar.gz&lt;br /&gt;
&lt;br /&gt;
Then, follow part 1 of the tutorial, which is related to DFT band structures, YAMBO initialization and linear response.&lt;br /&gt;
* [[Modena 2025 : Yambopy part 1]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get all the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ tar -xvf MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 $ cd hBN&lt;br /&gt;
&lt;br /&gt;
Now you can start the first tutorial:&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
Once you have gone through the first tutorial, move on to the second one:&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ cd MoS2_HPC_tutorial&lt;br /&gt;
&lt;br /&gt;
* [[Quasi-particles of a 2D system | Quasi-particles of a 2D system ]]&lt;br /&gt;
&lt;br /&gt;
As for yambopy, the tutorial related to GW calculations is contained in the first section of Part 2:&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#GW calculations| Modena 2025 : Yambopy part 2 (GW calculations)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Bethe-Salpeter equation (BSE)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz # NOTE: YOU SHOULD ALREADY HAVE THIS FROM DAY 1&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-convergence-kpoints.tar.gz &lt;br /&gt;
 tar -xvf hBN-convergence-kpoints.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained in the general instructions and proceed with the following tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[Calculating optical spectra including excitonic effects: a step-by-step guide|Perform a BSE calculation from beginning to end ]]&lt;br /&gt;
* [[How to analyse excitons - ICTP 2022 school|Analyse your results (exciton wavefunctions in real and reciprocal space, etc.) ]]&lt;br /&gt;
* [[BSE solvers overview|Solve the BSE eigenvalue problem with different numerical methods]]&lt;br /&gt;
* [[How to choose the input parameters|Choose the input parameters for a meaningful converged calculation]]&lt;br /&gt;
&lt;br /&gt;
Now, go into the yambopy tutorial directory to learn about python analysis tools for the BSE:&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 cd YAMBOPY_TUTORIALS/databases_yambopy&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#Excitons| Modena 2025 : Yambopy part 2 (BSE calculations)]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Bethe-Salpeter equation in real time (TD-HSEX)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
The files needed for the following tutorials can be downloaded following these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Introduction_to_Real_Time_propagation_in_Yambo#Time_Dependent_Equation_for_the_Reduced_One--Body_Density--Matrix|Read the introductory section on real-time propagation for the one-body density matrix]] (the part about the time-dependent Schrödinger equation will be covered on DAY 4, so you can skip it for now)&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Linear response from real time simulations (density matrix only)|Calculate the linear response in real time]]&lt;br /&gt;
* [[Real time Bethe-Salpeter Equation (density matrix only)|Calculate the BSE in real time]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 22 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Grüning (Queen&#039;s University Belfast), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the tutorials we will first use the &amp;lt;code&amp;gt;hBN-2D-RT&amp;lt;/code&amp;gt; folder (k-sampling 10x10x1) and then the &amp;lt;code&amp;gt;hBN-2D&amp;lt;/code&amp;gt; folder (k-sampling 6x6x1).&lt;br /&gt;
You may already have them in the &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt; folder:&lt;br /&gt;
 ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN-2D-RT&#039;&#039;&#039; hBN-2D.tar.gz  hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
If you need to download the tutorial files again, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo (5.3)|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Dielectric function from Bloch-states dynamics (5.3)|Dielectric function from Bloch-states dynamics]]&lt;br /&gt;
* [[Second-harmonic generation of 2D-hBN (5.3)|Second-harmonic generation of 2D-hBN]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May ===&lt;br /&gt;
&lt;br /&gt;
* D. Varsano, Description and goal of the school&lt;br /&gt;
* C. Franchini, First principles and data-driven correlated materials&lt;br /&gt;
* F. Mohamed, A tour on Density Functional Theory&lt;br /&gt;
* E. Cannuccia, Electronic screening and linear response theory&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
* A. Marini, Introduction to Many-Body Perturbation Theory&lt;br /&gt;
* C. Cardoso, Quasiparticles and the GW Approximation&lt;br /&gt;
* A. Guandalini, G. Sesti, GW in practice: algorithms, approximations and W-averaged GW in metals&lt;br /&gt;
* M. Govoni, GW without empty states and investigation of neutral excitations by embedding full configuration interaction in DFT+GW&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
* M. Palummo, Optical absorption and excitons via the Bethe-Salpeter Equation&lt;br /&gt;
* D. Sangalli, Real-time simulations&lt;br /&gt;
* F. Paleari, Introduction to YamboPy (automation and post-processing)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
* E. Luppi, An introduction to Non-linear spectroscopy&lt;br /&gt;
* M. Grüning, Non-linear spectroscopy in Yambo&lt;br /&gt;
* F. Affinito, Frontiers in High-Performance Computing&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8719</id>
		<title>Modena 2025</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8719"/>
		<updated>2025-05-19T07:52:20Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* DAY 1 - Monday, 19 May */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2025/01/17/yambo-school-modena-2025/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the Leonardo-DCGP partition. You can find info about Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access Leonardo via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command, replacing &amp;lt;code&amp;gt;username&amp;lt;/code&amp;gt; with your own:&lt;br /&gt;
 ssh username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to Leonardo. To do so, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 ssh-keygen -t rsa -b 4096 -f ~/.ssh/leonardo_rsa&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Created directory &#039;/home/username/.ssh&#039;.&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in /home/username/.ssh/leonardo_rsa&lt;br /&gt;
 Your public key has been saved in /home/username/.ssh/leonardo_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 [...]&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 [...]&lt;br /&gt;
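&lt;br /&gt;
If you later want to check which key a &amp;lt;code&amp;gt;.pub&amp;lt;/code&amp;gt; file corresponds to, &amp;lt;code&amp;gt;ssh-keygen -lf&amp;lt;/code&amp;gt; prints its fingerprint. A self-contained sketch using a throwaway key in a temporary directory (the same command works on your real &amp;lt;code&amp;gt;~/.ssh/leonardo_rsa.pub&amp;lt;/code&amp;gt;):&lt;br /&gt;

```shell
# Generate a throwaway demo key (no passphrase) in a temp dir.
tmp=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -f "$tmp/demo_rsa" -N "" -q
fp=$(ssh-keygen -lf "$tmp/demo_rsa.pub")   # prints bits, fingerprint, type
echo "$fp"
rm -rf "$tmp"
```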
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to Leonardo. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 ssh-copy-id -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Be aware that when running the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command, after typing &amp;quot;yes&amp;quot; at the prompt, you might see an error message like the one shown below. Don&#039;t worry: just follow the instructions provided in this CINECA [https://wiki.u-gov.it/confluence/display/SCAIUS/FAQ#FAQ-Ikeepreceivingtheerrormessage%22WARNING:REMOTEHOSTIDENTIFICATIONHASCHANGED!%22evenifImodifyknown_hostfile guide to resolve the issue]. Once done, run the &amp;lt;code&amp;gt;ssh-copy-id&amp;lt;/code&amp;gt; command again.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
 /usr/bin/ssh-copy-id: &lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @&lt;br /&gt;
 ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&lt;br /&gt;
 ERROR: IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to Leonardo without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things even more, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; located inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username:&lt;br /&gt;
 Host leonardo &lt;br /&gt;
  HostName login.leonardo.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile ~/.ssh/leonardo_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on Leonardo, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/4%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 echo $SCRATCH&lt;br /&gt;
 /leonardo_scratch/large/userexternal/username&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on Leonardo are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials during this school can be run serially, except for some that need to be executed on multiple processors. Generally, Slurm batch jobs are submitted using a script, but the tutorials here are better understood if run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra25_yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=dcgp_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --gres=tmpfs:10g                # List of generic consumable resources&lt;br /&gt;
 #SBATCH --qos=normal                    # Quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;n&amp;gt;           # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;n/2&amp;gt;       # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;c&amp;gt;             # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
 &lt;br /&gt;
 mpirun -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that the instructions in the batch script must be compatible with the specific Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section#DCGPSection-SLURMpartitions resources]. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in the locations specified during the tutorials.&lt;br /&gt;
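&lt;br /&gt;
Before submitting, it is also worth sanity-checking the resource math: Slurm launches nodes x ntasks-per-node MPI tasks, each running cpus-per-task OpenMP threads. A quick sketch (the numbers below are hypothetical examples, not recommended settings for the tutorials):&lt;br /&gt;

```shell
# Hypothetical example numbers; adapt them to your own job script.
nodes=2
ntasks_per_node=16
cpus_per_task=7
total_mpi=$((nodes * ntasks_per_node))        # tasks passed to mpirun
total_cores=$((total_mpi * cpus_per_task))    # cores actually occupied
echo "MPI tasks: $total_mpi, total cores: $total_cores"
```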
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 sbatch job.sh&lt;br /&gt;
 Submitted batch job 15696508&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 squeue --me&lt;br /&gt;
            JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
         15696508 dcgp_usr_   job.sh username  R       0:01      1 lrdn4135&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
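&lt;br /&gt;
A small convenience pattern combines submission and monitoring in one step. This is only a sketch to run on Leonardo (where &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; are available); &amp;lt;code&amp;gt;--parsable&amp;lt;/code&amp;gt; is a standard sbatch flag that prints just the job ID, which is handy for scripting:&lt;br /&gt;

```shell
# Sketch of a submit-and-monitor helper; run it on the cluster only,
# since sbatch/squeue exist only where Slurm is installed.
submit_and_watch() {
    jobid=$(sbatch --parsable "$1") || return 1
    echo "Submitted job $jobid"
    squeue -j "$jobid"
}
# Usage on the cluster: submit_and_watch job.sh
```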
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since most of them are meant to be run serially (in terms of MPI parallelization) from the command line. Use the command below to open an interactive session of 4 hours:&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 srun: job 15694182 queued and waiting for resources&lt;br /&gt;
 srun: job 15694182 has been allocated resources&lt;br /&gt;
&lt;br /&gt;
We ask for 4 cpus-per-task (-c) because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above:&lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 using the appropriate Slurm environment variable:&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
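&lt;br /&gt;
Note that outside a Slurm allocation &amp;lt;code&amp;gt;SLURM_CPUS_PER_TASK&amp;lt;/code&amp;gt; is unset, so the export above would leave &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; empty. If you experiment with commands on a login node, a defensive variant with a fallback value (4 here is an arbitrary example) is:&lt;br /&gt;

```shell
# Simulate a login shell where Slurm variables are not defined.
unset SLURM_CPUS_PER_TASK
# The :-4 fallback avoids exporting an empty value outside an allocation.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-4}
echo "$OMP_NUM_THREADS"
```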
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on Leonardo, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to Leonardo enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -X leonardo&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Set up yambopy &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to run yambopy on Leonardo, you must first activate the python environment:&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 source /leonardo_work/tra25_yambo/env_yambopy/bin/activate&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
Quick recap.&lt;br /&gt;
Before every tutorial, if you run on Leonardo, do the following steps:&lt;br /&gt;
&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBO_TUTORIALS &#039;&#039;&#039;#(Only if you didn&#039;t before)&#039;&#039;&#039;&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
Since the compute nodes are not connected to the external network, the tarballs must be downloaded before starting the interactive session.&lt;br /&gt;
Alternatively, once the interactive session has started, it is possible to access the tarballs by copying them from the following directories:&lt;br /&gt;
&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBO_TUTORIALS&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
After that, you can start the interactive session:&lt;br /&gt;
&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Set the environment variable for OpenMP:&lt;br /&gt;
&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
and load yambo or yambopy as explained above in the general instructions.&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
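&lt;br /&gt;
Before extracting, you can inspect a tarball without unpacking it using &amp;lt;code&amp;gt;tar -t&amp;lt;/code&amp;gt;, which shows where the files will land. A self-contained sketch with a throwaway demo archive (the names below are illustrative; on Leonardo you would run, e.g., &amp;lt;code&amp;gt;tar -tzf hBN.tar.gz&amp;lt;/code&amp;gt; on the real file):&lt;br /&gt;

```shell
# Build a demo archive, then list its contents without extracting.
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p hBN_demo
echo data > hBN_demo/README
tar -czf hBN_demo.tar.gz hBN_demo
listing=$(tar -tzf hBN_demo.tar.gz)   # paths as they will be extracted
echo "$listing"
```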
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
&lt;br /&gt;
At this point, you may learn about the python pre- and post-processing capabilities offered by yambopy, our python interface to yambo and QE. First of all, let&#039;s create a dedicated directory, then copy and extract the related files.&lt;br /&gt;
 &lt;br /&gt;
 $ cd $SCRATCH&lt;br /&gt;
 $ mkdir -p YAMBOPY_TUTORIALS&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS&lt;br /&gt;
 $ rsync -avzP /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS/yambopy_tutorial_Modena_2025.tar.gz .&lt;br /&gt;
 $ tar --strip-components=1 -xvzf yambopy_tutorial_Modena_2025.tar.gz&lt;br /&gt;
&lt;br /&gt;
Then, follow part 1 of the tutorial, which is related to DFT band structures, YAMBO initialization and linear response.&lt;br /&gt;
* [[Modena 2025 : Yambopy part 1]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get all the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
 tar -xvf MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 cd hBN&lt;br /&gt;
&lt;br /&gt;
Now you can start the first tutorial:&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
Once you have gone through the first tutorial, move on to the second one:&lt;br /&gt;
 &lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
 cd MoS2_HPC_tutorial&lt;br /&gt;
&lt;br /&gt;
* [[Quasi-particles of a 2D system | Quasi-particles of a 2D system ]]&lt;br /&gt;
&lt;br /&gt;
As for yambopy, the tutorial related to GW calculations is contained in the first section of Part 2:&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#GW calculations| Modena 2025 : Yambopy part 2 (GW calculations)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Bethe-Salpeter equation (BSE)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz # NOTE: YOU SHOULD ALREADY HAVE THIS FROM DAY 1&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-convergence-kpoints.tar.gz &lt;br /&gt;
 tar -xvf hBN-convergence-kpoints.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained in the general instructions and proceed with the following tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[Calculating optical spectra including excitonic effects: a step-by-step guide|Perform a BSE calculation from beginning to end ]]&lt;br /&gt;
* [[How to analyse excitons - ICTP 2022 school|Analyse your results (exciton wavefunctions in real and reciprocal space, etc.) ]]&lt;br /&gt;
* [[BSE solvers overview|Solve the BSE eigenvalue problem with different numerical methods]]&lt;br /&gt;
* [[How to choose the input parameters|Choose the input parameters for a meaningful converged calculation]]&lt;br /&gt;
&lt;br /&gt;
Now, go into the yambopy tutorial directory to learn about python analysis tools for the BSE:&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 cd YAMBOPY_TUTORIALS/databases_yambopy&lt;br /&gt;
&lt;br /&gt;
* [[Modena 2025 : Yambopy part 2#Excitons| Modena 2025 : Yambopy part 2 (BSE calculations)]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Bethe-Salpeter equation in real time (TD-HSEX)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
The files needed for the following tutorials can be downloaded following these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Introduction_to_Real_Time_propagation_in_Yambo#Time_Dependent_Equation_for_the_Reduced_One--Body_Density--Matrix|Read the introductory section on real-time propagation for the one-body density matrix]] (the part about the time-dependent Schrödinger equation will be covered on DAY 4, so you can skip it for now)&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Linear response from real time simulations (density matrix only)|Calculate the linear response in real time]]&lt;br /&gt;
* [[Real time Bethe-Salpeter Equation (density matrix only)|Calculate the BSE in real time]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 22 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Grüning (Queen&#039;s University Belfast), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the tutorials we will first use the &amp;lt;code&amp;gt;hBN-2D-RT&amp;lt;/code&amp;gt; folder (k-sampling 10x10x1) and then the &amp;lt;code&amp;gt;hBN-2D&amp;lt;/code&amp;gt; folder (k-sampling 6x6x1).&lt;br /&gt;
You may already have them in the &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt; folder:&lt;br /&gt;
 ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN-2D-RT&#039;&#039;&#039; hBN-2D.tar.gz  hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
If you need to download the tutorial files again, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo (5.3)|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Dielectric function from Bloch-states dynamics (5.3)|Dielectric function from Bloch-states dynamics]]&lt;br /&gt;
* [[Second-harmonic generation of 2D-hBN (5.3)|Second-harmonic generation of 2D-hBN]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May ===&lt;br /&gt;
&lt;br /&gt;
* D. Varsano, Description and goal of the school&lt;br /&gt;
* C. Franchini, First principles and data-driven correlated materials&lt;br /&gt;
* F. Mohamed, A tour on Density Functional Theory&lt;br /&gt;
* E. Cannuccia, Electronic screening and linear response theory&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
* A. Marini, Introduction to Many-Body Perturbation Theory&lt;br /&gt;
* C. Cardoso, Quasiparticles and the GW Approximation&lt;br /&gt;
* A. Guandalini, G. Sesti, GW in practice: algorithms, approximations and W-averaged GW in metals&lt;br /&gt;
* M. Govoni, GW without empty states and investigation of neutral excitations by embedding full configuration interaction in DFT+GW&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
* M. Palummo, Optical absorption and excitons via the Bethe-Salpeter Equation&lt;br /&gt;
* D. Sangalli, Real-time simulations&lt;br /&gt;
* F. Paleari, Introduction to YamboPy (automation and post-processing)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
* E. Luppi, An introduction to Non-linear spectroscopy&lt;br /&gt;
* M. Grüning, Non-linear spectroscopy in Yambo&lt;br /&gt;
* F. Affinito, Frontiers in High-Performance Computing&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8638</id>
		<title>Modena 2025</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8638"/>
		<updated>2025-05-16T15:55:12Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* Tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2025/01/17/yambo-school-modena-2025/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the Leonardo-DCGP partition. You can find info about Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access Leonardo via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command, replacing &amp;lt;code&amp;gt;username&amp;lt;/code&amp;gt; with your own:&lt;br /&gt;
 ssh username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to Leonardo. To do so, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 ssh-keygen -t rsa -b 4096 -f ~/.ssh/leonardo_rsa&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Created directory &#039;/home/username/.ssh&#039;.&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in /home/username/.ssh/leonardo_rsa&lt;br /&gt;
 Your public key has been saved in /home/username/.ssh/leonardo_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 [...]&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to Leonardo. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 ssh-copy-id -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to Leonardo without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things even more, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; located inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username:&lt;br /&gt;
 Host leonardo &lt;br /&gt;
  HostName login.leonardo.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile ~/.ssh/leonardo_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on Leonardo, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/4%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
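Since these are ordinary environment variables, you can inspect them directly from the shell. A minimal sketch (the actual paths are site-specific, and on machines other than Leonardo some of these variables will simply be unset):&lt;br /&gt;

```shell
# Print each workspace variable together with the path it points to.
# HOME, WORK and SCRATCH are assumed to be set by the cluster environment.
for v in HOME WORK SCRATCH; do
  eval "p=\$$v"
  echo "$v -> ${p:-(unset)}"
done
```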
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 echo $SCRATCH&lt;br /&gt;
 /leonardo_scratch/large/userexternal/username&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on Leonardo are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra25_yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=dcgp_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --gres=tmpfs:10g                # List of generic consumable resources&lt;br /&gt;
 #SBATCH --qos=normal                    # Quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;n&amp;gt;           # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;n/2&amp;gt;       # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;c&amp;gt;             # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
 mpirun -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;. Please note that the instructions in the batch script must be compatible with the specific Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section#DCGPSection-SLURMpartitions resources]. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in the locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 sbatch job.sh&lt;br /&gt;
 Submitted batch job 15696508&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 squeue --me&lt;br /&gt;
            JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
         15696508 dcgp_usr_   job.sh username  R       0:01      1 lrdn4135&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial (as far as MPI parallelization is concerned) from the command line. Use the command below to open a 4-hour interactive session:&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 srun: job 15694182 queued and waiting for resources&lt;br /&gt;
 srun: job 15694182 has been allocated resources&lt;br /&gt;
&lt;br /&gt;
We ask for 4 cpus-per-task (-c) because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above:&lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 using the appropriate Slurm environment variable:&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on Leonardo, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to Leonardo enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -X leonardo&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
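When X11 forwarding is slow or unavailable, gnuplot can also be run non-interactively and asked to write an image file that you then copy back with scp. A minimal sketch (the output file name and the sin(x) curve are placeholders, not tutorial data):&lt;br /&gt;

```shell
# Render a plot to a PNG file instead of opening an X11 window.
# Falls back to a message if gnuplot is not installed on this node.
if command -v gnuplot >/dev/null; then
  gnuplot -e "set terminal png; set output 'demo.png'; plot sin(x)"
else
  echo "gnuplot is not available in this environment"
fi
```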
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Set up yambopy &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to run yambopy on Leonardo, you must first activate the python environment:&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module load python/3.11.7&lt;br /&gt;
 source /leonardo_work/tra25_yambo/env_yambopy/bin/activate&lt;br /&gt;
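To verify that the environment is active, you can try importing the package (a hypothetical quick check, not part of the official instructions):&lt;br /&gt;

```shell
# Prints a status line depending on whether yambopy is importable
# in the currently active Python environment.
if python -c "import yambopy" 2>/dev/null; then
  echo "yambopy environment OK"
else
  echo "yambopy not found: activate the environment first"
fi
```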
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
Quick recap: before every tutorial, if you are running on Leonardo, follow these steps&lt;br /&gt;
&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBO_TUTORIALS &#039;&#039;&#039;#(Only if you didn&#039;t before)&#039;&#039;&#039;&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
Since the compute nodes are not connected to the external network, the tarballs must be downloaded before starting the interactive session.&lt;br /&gt;
Alternatively, once the interactive session has started, it is possible to access the tarballs by copying them from the following directories:&lt;br /&gt;
&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBO_TUTORIALS&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
After that, you can start the interactive session&lt;br /&gt;
&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod --reservation=s_tra_yambo -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
set the environment variable for OpenMP&lt;br /&gt;
&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
and load yambo or yambopy as explained above in the general instructions.&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
 ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
&lt;br /&gt;
At this point, you may learn about the Python pre- and post-processing capabilities offered by yambopy, our Python interface to yambo and QE. First of all, let&#039;s create a dedicated directory, then download and extract the related files.&lt;br /&gt;
 &lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBOPY_TUTORIALS&lt;br /&gt;
 cd YAMBOPY_TUTORIALS&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/databases_yambopy.tar.gz&lt;br /&gt;
 tar -xvf databases_yambopy.tar.gz&lt;br /&gt;
 cd databases_yambopy&lt;br /&gt;
&lt;br /&gt;
Then, follow &#039;&#039;&#039;the first three sections&#039;&#039;&#039; of the page linked below, which cover initialization and linear response.&lt;br /&gt;
* [[Yambopy tutorial: Yambo databases|Reading databases with yambopy]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get all the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
 tar -xvf MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 cd hBN&lt;br /&gt;
&lt;br /&gt;
Now you can start the first tutorial:&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
Once you have gone through the first tutorial, move on to the second one:&lt;br /&gt;
 &lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
 cd MoS2_HPC_tutorial&lt;br /&gt;
&lt;br /&gt;
* [[Quasi-particles of a 2D system | Quasi-particles of a 2D system ]]&lt;br /&gt;
&lt;br /&gt;
To conclude, you can learn another method to plot the band structure in Yambo:&lt;br /&gt;
&lt;br /&gt;
* [[Yambopy tutorial: band structures| Yambopy tutorial: band structures]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Bethe-Salpeter equation (BSE)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz # NOTE: YOU SHOULD ALREADY HAVE THIS FROM DAY 1&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-convergence-kpoints.tar.gz &lt;br /&gt;
 tar -xvf hBN-convergence-kpoints.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the following tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[Calculating optical spectra including excitonic effects: a step-by-step guide|Perform a BSE calculation from beginning to end ]]&lt;br /&gt;
* [[How to analyse excitons - ICTP 2022 school|Analyse your results (exciton wavefunctions in real and reciprocal space, etc.) ]]&lt;br /&gt;
* [[BSE solvers overview|Solve the BSE eigenvalue problem with different numerical methods]]&lt;br /&gt;
* [[How to choose the input parameters|Choose the input parameters for a meaningful converged calculation]]&lt;br /&gt;
Now, go into the yambopy tutorial directory to learn about python analysis tools for the BSE:&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 cd YAMBOPY_TUTORIALS/databases_yambopy&lt;br /&gt;
&lt;br /&gt;
* [[Yambopy_tutorial:_Yambo_databases#Exciton_intro_1:_read_and_sort_data|Visualization of excitonic properties with yambopy]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Bethe-Salpeter equation in real time (TD-HSEX)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
The files needed for the following tutorials can be downloaded following these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Introduction_to_Real_Time_propagation_in_Yambo#Time_Dependent_Equation_for_the_Reduced_One--Body_Density--Matrix|Read the introductory section on real-time propagation for the one-body density matrix]] (the part about the time-dependent Schrödinger equation will be covered on DAY 4 and you can skip it for now)&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Linear response from real time simulations (density matrix only)|Calculate the linear response in real time]]&lt;br /&gt;
* [[Real time Bethe-Salpeter Equation (density matrix only)|Calculate the BSE in real time]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 22 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (Queen&#039;s University Belfast), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the tutorials we will first use the &amp;lt;code&amp;gt;hBN-2D-RT&amp;lt;/code&amp;gt; folder (k-sampling 10x10x1) and then the &amp;lt;code&amp;gt;hBN-2D&amp;lt;/code&amp;gt; folder (k-sampling 6x6x1).&lt;br /&gt;
You may already have them in the &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt; folder:&lt;br /&gt;
 ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN-2D-RT&#039;&#039;&#039; hBN-2D.tar.gz  hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
If you need to download the tutorial files again, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Dielectric function from Bloch-states dynamics]]&lt;br /&gt;
* [[Second-harmonic generation of 2D-hBN]]&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (additional tutorial)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (additional tutorial)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May ===&lt;br /&gt;
&lt;br /&gt;
* D. Varsano, [Description and goal of the school].&lt;br /&gt;
* C. Franchini, [First principles and data-driven correlated materials]&lt;br /&gt;
* F. Mohamed, [A tour on Density Functional Theory]&lt;br /&gt;
* E. Cannuccia, [Electronic screening and linear response theory]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
* A. Marini, Introduction to Many-Body Perturbation Theory&lt;br /&gt;
* C. Cardoso, Quasiparticles and the GW Approximation&lt;br /&gt;
* A. Guandalini, G. Sesti, GW in practice: algorithms, approximations and W-averaged GW in metals&lt;br /&gt;
* M. Govoni, GW without empty states and investigation of neutral excitations by embedding full configuration interaction in DFT+GW&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
* M. Palummo, Optical absorption and excitons via the Bethe-Salpeter Equation&lt;br /&gt;
* D. Sangalli, Real-time simulations&lt;br /&gt;
* F. Paleari, Introduction to YamboPy (automation and post-processing)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
* E. Luppi, An introduction to Non-linear spectroscopy&lt;br /&gt;
* M. Grüning, Non-linear spectroscopy in Yambo&lt;br /&gt;
* F. Affinito, Frontiers in High-Performance Computing&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8632</id>
		<title>Modena 2025</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8632"/>
		<updated>2025-05-14T17:55:44Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2025/01/17/yambo-school-modena-2025/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the Leonardo-DCGP partition. You can find info about Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section here].&lt;br /&gt;
In order to access computational resources provided by CINECA you need your personal username and password that were sent you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access Leonardo via &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 ssh username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an SSH key pair to avoid typing the password each time you connect to Leonardo. To do so, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 ssh-keygen -t rsa -b 4096 -f ~/.ssh/leonardo_rsa&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Created directory &#039;/home/username/.ssh&#039;.&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in /home/username/.ssh/leonardo_rsa&lt;br /&gt;
 Your public key has been saved in /home/username/.ssh/leonardo_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 [...]&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to Leonardo. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 ssh-copy-id -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to Leonardo without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username:&lt;br /&gt;
 Host leonardo &lt;br /&gt;
  HostName login.leonardo.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile ~/.ssh/leonardo_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file in place, you can connect simply with&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on Leonardo, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/4%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 echo $SCRATCH&lt;br /&gt;
 /leonardo_scratch/large/userexternal/username&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on Leonardo are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra25_yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=dcgp_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --gres=tmpfs:10g                # List of generic consumable resources&lt;br /&gt;
 #SBATCH --qos=normal                    # Quality of service &lt;br /&gt;
 ###SBATCH --reservation=TBD             # Reservation specific to this school&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;n&amp;gt;           # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;n/2&amp;gt;       # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;c&amp;gt;             # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
 mpirun -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;. Please note that the instructions in the batch script must be compatible with the specific Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section#DCGPSection-SLURMpartitions resources]. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in the locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 sbatch job.sh&lt;br /&gt;
 Submitted batch job 15696508&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 squeue --me&lt;br /&gt;
            JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
         15696508 dcgp_usr_   job.sh username  R       0:01      1 lrdn4135&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial (as far as MPI parallelization is concerned) from the command line. Use the command below to open a 4-hour interactive session:&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod -q normal --reservation=TBD -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 srun: job 15694182 queued and waiting for resources&lt;br /&gt;
 srun: job 15694182 has been allocated resources&lt;br /&gt;
&lt;br /&gt;
We ask for 4 cpus-per-task (-c) because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above:&lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 using the appropriate Slurm environment variable:&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on Leonardo, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to Leonardo enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -X leonardo&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Set up yambopy &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to run yambopy on Leonardo, you must first set up the conda environment (to be done only once):&lt;br /&gt;
 cd&lt;br /&gt;
 module load anaconda3/2023.09-0&lt;br /&gt;
 conda init bash&lt;br /&gt;
 source .bashrc&lt;br /&gt;
&lt;br /&gt;
After this, every time you want to use yambopy you need to load its module and environment:&lt;br /&gt;
 module load anaconda3/2023.09-0&lt;br /&gt;
 conda activate !!!!!!!! TBD !!!!!!!!&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
Quick recap: before every tutorial, if you are running on Leonardo, follow these steps&lt;br /&gt;
&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBO_TUTORIALS &#039;&#039;&#039;#(Only if you didn&#039;t before)&#039;&#039;&#039;&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
Since the compute nodes are not connected to the external network, the tarballs must be downloaded before starting the interactive session.&lt;br /&gt;
Alternatively, once the interactive session has started, it is possible to access the tarballs by copying them from the following directories:&lt;br /&gt;
&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBO_TUTORIALS&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
After that, you can start the interactive session&lt;br /&gt;
&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod -q normal --reservation=TBD -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 [...]&lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
 cd $SCRATCH/YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
 ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
&lt;br /&gt;
At this point, you may learn about the Python pre- and post-processing capabilities offered by yambopy, our Python interface to Yambo and QE. First of all, let&#039;s create a dedicated directory, then download and extract the related files.&lt;br /&gt;
 &lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBOPY_TUTORIALS&lt;br /&gt;
 cd YAMBOPY_TUTORIALS&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/databases_yambopy.tar.gz&lt;br /&gt;
 tar -xvf databases_yambopy.tar.gz&lt;br /&gt;
 cd databases_yambopy&lt;br /&gt;
&lt;br /&gt;
Then, follow &#039;&#039;&#039;the first three sections&#039;&#039;&#039; of the tutorial linked below, which cover initialization and linear response.&lt;br /&gt;
* [[Yambopy tutorial: Yambo databases|Reading databases with yambopy]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get all the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
 tar -xvf MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 cd hBN&lt;br /&gt;
&lt;br /&gt;
Now you can start the first tutorial:&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
Once you have completed the first tutorial, move on to the second one:&lt;br /&gt;
 &lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
 cd MoS2_HPC_tutorial&lt;br /&gt;
&lt;br /&gt;
* [[Quasi-particles of a 2D system | Quasi-particles of a 2D system ]]&lt;br /&gt;
&lt;br /&gt;
To conclude, you can learn another method to plot the band structure in Yambo:&lt;br /&gt;
&lt;br /&gt;
* [[Yambopy tutorial: band structures| Yambopy tutorial: band structures]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Bethe-Salpeter equation (BSE)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz # NOTE: YOU SHOULD ALREADY HAVE THIS FROM DAY 1&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-convergence-kpoints.tar.gz &lt;br /&gt;
 tar -xvf hBN-convergence-kpoints.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the following tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[Calculating optical spectra including excitonic effects: a step-by-step guide|Perform a BSE calculation from beginning to end ]]&lt;br /&gt;
* [[How to analyse excitons - ICTP 2022 school|Analyse your results (exciton wavefunctions in real and reciprocal space, etc.) ]]&lt;br /&gt;
* [[BSE solvers overview|Solve the BSE eigenvalue problem with different numerical methods]]&lt;br /&gt;
* [[How to choose the input parameters|Choose the input parameters for a meaningful converged calculation]]&lt;br /&gt;
Now, go into the yambopy tutorial directory to learn about python analysis tools for the BSE:&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 cd YAMBOPY_TUTORIALS/databases_yambopy&lt;br /&gt;
&lt;br /&gt;
* [[Yambopy_tutorial:_Yambo_databases#Exciton_intro_1:_read_and_sort_data|Visualization of excitonic properties with yambopy]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Bethe-Salpeter equation in real time (TD-HSEX)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
The files needed for the following tutorials can be downloaded following these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Introduction_to_Real_Time_propagation_in_Yambo#Time_Dependent_Equation_for_the_Reduced_One--Body_Density--Matrix|Read the introductory section on real-time propagation for the one-body density matrix]] (the part about the time-dependent Schrödinger equation will be covered on DAY 4, so you can skip it for now)&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Linear response from real time simulations (density matrix only)|Calculate the linear response in real time]]&lt;br /&gt;
* [[Real time Bethe-Salpeter Equation (density matrix only)|Calculate the BSE in real time]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Grüning (Queen&#039;s University Belfast), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the tutorials we will first use the &amp;lt;code&amp;gt;hBN-2D-RT&amp;lt;/code&amp;gt; folder (k-sampling 10x10x1) and then the &amp;lt;code&amp;gt;hBN-2D&amp;lt;/code&amp;gt; folder (k-sampling 6x6x1).&lt;br /&gt;
You may already have them in the &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt; folder:&lt;br /&gt;
 ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN-2D-RT&#039;&#039;&#039; hBN-2D.tar.gz  hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
If you need to download the tutorial files again, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Dielectric function from Bloch-states dynamics]]&lt;br /&gt;
* [[Second-harmonic generation of 2D-hBN]]&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (additional tutorial)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (additional tutorial)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May ===&lt;br /&gt;
&lt;br /&gt;
* D. Varsano, [Description and goal of the school].&lt;br /&gt;
* C. Franchini, [First principles and data-driven correlated materials]&lt;br /&gt;
* F. Mohamed, [A tour on Density Functional Theory]&lt;br /&gt;
* E. Cannuccia, [Electronic screening and linear response theory]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
* A. Marini, Introduction to Many-Body Perturbation Theory&lt;br /&gt;
* C. Cardoso, Quasiparticles and the GW Approximation&lt;br /&gt;
* A. Guandalini, G. Sesti, GW in practice: algorithms, approximations and W-averaged GW in metals&lt;br /&gt;
* M. Govoni, GW without empty states and investigation of neutral excitations by embedding full configuration interaction in DFT+GW&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
* M. Palummo, Optical absorption and excitons via the Bethe-Salpeter Equation&lt;br /&gt;
* D. Sangalli, Real-time simulations&lt;br /&gt;
* F. Paleari, Introduction to YamboPy (automation and post-processing)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
* E. Luppi, An introduction to Non-linear spectroscopy&lt;br /&gt;
* M. Grüning, Non-linear spectroscopy in Yambo&lt;br /&gt;
* F. Affinito, Frontiers in High-Performance Computing&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8631</id>
		<title>Modena 2025</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8631"/>
		<updated>2025-05-14T17:54:41Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2025/01/17/yambo-school-modena-2025/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the Leonardo-DCGP partition. You can find info about Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access Leonardo via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in several ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 ssh username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you connect to Leonardo. To do so, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 ssh-keygen -t rsa -b 4096 -f ~/.ssh/leonardo_rsa&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Created directory &#039;/home/username/.ssh&#039;.&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in /home/username/.ssh/leonardo_rsa&lt;br /&gt;
 Your public key has been saved in /home/username/.ssh/leonardo_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 [...]&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to Leonardo. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 ssh-copy-id -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to Leonardo without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things even further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username:&lt;br /&gt;
 Host leonardo &lt;br /&gt;
  HostName login.leonardo.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile ~/.ssh/leonardo_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with:&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on Leonardo, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/4%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 echo $SCRATCH&lt;br /&gt;
 /leonardo_scratch/large/userexternal/username&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on Leonardo are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Generally, Slurm batch jobs are submitted using a script, but the tutorials here are better understood if run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra25_yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=dcgp_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --gres=tmpfs:10g                # List of generic consumable resources&lt;br /&gt;
 #SBATCH --qos=normal                    # Quality of service &lt;br /&gt;
 ###SBATCH --reservation=TBD             # Reservation specific to this school&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;n&amp;gt;           # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;n/2&amp;gt;       # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;c&amp;gt;             # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
 mpirun -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;. Please note that the instructions in the batch script must be compatible with the specific Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section#DCGPSection-SLURMpartitions resources]. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in the locations specified during the tutorials.&lt;br /&gt;
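As a minimal sketch of such a script (the resource values below are illustrative assumptions, not the official settings for the school, and yambo.in / run1 are made-up names; prefer the ready-to-use scripts provided during the tutorials), a job.sh could be generated like this:

```shell
# Write a minimal, hypothetical job.sh; account, partition and resource
# values are placeholders to adapt to your own allocation.
printf '%s\n' \
  '#!/bin/bash' \
  '#SBATCH --account=tra25_yambo' \
  '#SBATCH --time=00:10:00' \
  '#SBATCH --partition=dcgp_usr_prod' \
  '#SBATCH --qos=normal' \
  '#SBATCH --nodes=1' \
  '#SBATCH --ntasks-per-node=4' \
  '#SBATCH --cpus-per-task=2' \
  'export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}' \
  'mpirun -np ${SLURM_NTASKS} yambo -F yambo.in -J run1' \
  | tee job.sh
```

On the cluster you would then add the module load lines shown elsewhere on this page before the mpirun line and submit the script with sbatch.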
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 sbatch job.sh&lt;br /&gt;
 Submitted batch job 15696508&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 squeue --me&lt;br /&gt;
            JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
         15696508 dcgp_usr_   job.sh username  R       0:01      1 lrdn4135&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open a 4-hour interactive session:&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod -q normal --reservation=TBD -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 srun: job 15694182 queued and waiting for resources&lt;br /&gt;
 srun: job 15694182 has been allocated resources&lt;br /&gt;
&lt;br /&gt;
We request 4 CPUs per task (&amp;lt;code&amp;gt;-c 4&amp;lt;/code&amp;gt;) because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above:&lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 using the appropriate Slurm environment variable:&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
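If you experiment outside a Slurm allocation (for instance on a login node), SLURM_CPUS_PER_TASK is undefined and the export above leaves OMP_NUM_THREADS empty. A defensive variant, which is our suggestion rather than part of the official instructions, falls back to 4 threads:

```shell
# Use SLURM_CPUS_PER_TASK when available, otherwise default to 4.
# "${VAR:-4}" is POSIX parameter expansion with a default value.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-4}
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```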
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on Leonardo, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to Leonardo enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -X leonardo&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Set up yambopy &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to run yambopy on Leonardo, you must first set up the conda environment (to be done only once):&lt;br /&gt;
 cd&lt;br /&gt;
 module load anaconda3/2023.09-0&lt;br /&gt;
 conda init bash&lt;br /&gt;
 source .bashrc&lt;br /&gt;
&lt;br /&gt;
After this, every time you want to use yambopy you need to load its module and environment:&lt;br /&gt;
 module load anaconda3/2023.09-0&lt;br /&gt;
 conda activate !!!!!!!! TBD !!!!!!!!&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
Quick recap.&lt;br /&gt;
Before every tutorial, if you run on Leonardo, do the following steps:&lt;br /&gt;
&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBO_TUTORIALS &#039;&#039;&#039;#(Only if you didn&#039;t before)&#039;&#039;&#039;&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
Since the compute nodes are not connected to the external network, the tarballs must be downloaded before starting the interactive session.&lt;br /&gt;
Alternatively, once the interactive session has started, it is possible to access the tarballs by copying them from the following directories:&lt;br /&gt;
&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBO_TUTORIALS&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
After that, you can start the interactive session:&lt;br /&gt;
&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod -q normal --reservation=TBD -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 [...]&lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
 cd $SCRATCH/YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
 ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
&lt;br /&gt;
At this point, you may learn about the Python pre- and post-processing capabilities offered by yambopy, our Python interface to Yambo and QE. First of all, let&#039;s create a dedicated directory, then download and extract the related files.&lt;br /&gt;
 &lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBOPY_TUTORIALS&lt;br /&gt;
 cd YAMBOPY_TUTORIALS&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/databases_yambopy.tar.gz&lt;br /&gt;
 tar -xvf databases_yambopy.tar.gz&lt;br /&gt;
 cd databases_yambopy&lt;br /&gt;
&lt;br /&gt;
Then, follow &#039;&#039;&#039;the first three sections&#039;&#039;&#039; of the tutorial linked below, which cover initialization and linear response.&lt;br /&gt;
* [[Yambopy tutorial: Yambo databases|Reading databases with yambopy]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get all the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
 tar -xvf MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 cd hBN&lt;br /&gt;
&lt;br /&gt;
Now you can start the first tutorial:&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
Once you have completed the first tutorial, move on to the second one:&lt;br /&gt;
 &lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
 cd MoS2_HPC_tutorial&lt;br /&gt;
&lt;br /&gt;
* [[Quasi-particles of a 2D system | Quasi-particles of a 2D system ]]&lt;br /&gt;
&lt;br /&gt;
To conclude, you can learn another method to plot the band structure in Yambo:&lt;br /&gt;
&lt;br /&gt;
* [[Yambopy tutorial: band structures| Yambopy tutorial: band structures]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Bethe-Salpeter equation (BSE)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz # NOTE: YOU SHOULD ALREADY HAVE THIS FROM DAY 1&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-convergence-kpoints.tar.gz &lt;br /&gt;
 tar -xvf hBN-convergence-kpoints.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the following tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[Calculating optical spectra including excitonic effects: a step-by-step guide|Perform a BSE calculation from beginning to end ]]&lt;br /&gt;
* [[How to analyse excitons - ICTP 2022 school|Analyse your results (exciton wavefunctions in real and reciprocal space, etc.) ]]&lt;br /&gt;
* [[BSE solvers overview|Solve the BSE eigenvalue problem with different numerical methods]]&lt;br /&gt;
* [[How to choose the input parameters|Choose the input parameters for a meaningful converged calculation]]&lt;br /&gt;
Now, go into the yambopy tutorial directory to learn about python analysis tools for the BSE:&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 cd YAMBOPY_TUTORIALS/databases_yambopy&lt;br /&gt;
&lt;br /&gt;
* [[Yambopy_tutorial:_Yambo_databases#Exciton_intro_1:_read_and_sort_data|Visualization of excitonic properties with yambopy]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Bethe-Salpeter equation in real time (TD-HSEX)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
The files needed for the following tutorials can be downloaded following these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Introduction_to_Real_Time_propagation_in_Yambo#Time_Dependent_Equation_for_the_Reduced_One--Body_Density--Matrix|Read the introductory section on real-time propagation for the one-body density matrix]] (the part about the time-dependent Schrödinger equation will be covered on DAY 4, so you can skip it for now)&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Linear response from real time simulations (density matrix only)|Calculate the linear response in real time]]&lt;br /&gt;
* [[Real time Bethe-Salpeter Equation (density matrix only)|Calculate the BSE in real time]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Grüning (Queen&#039;s University Belfast), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the tutorials we will first use the &amp;lt;code&amp;gt;hBN-2D-RT&amp;lt;/code&amp;gt; folder (k-sampling 10x10x1) and then the &amp;lt;code&amp;gt;hBN-2D&amp;lt;/code&amp;gt; folder (k-sampling 6x6x1).&lt;br /&gt;
You may already have them in the &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt; folder:&lt;br /&gt;
 ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN-2D-RT&#039;&#039;&#039; hBN-2D.tar.gz  hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
If you need to download the tutorial files again, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Dielectric function from Bloch-states dynamics]]&lt;br /&gt;
* [[Second-harmonic generation of 2D-hBN]]&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (additional tutorial)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (additional tutorial)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May ===&lt;br /&gt;
&lt;br /&gt;
* D. Varsano, [Description and goal of the school].&lt;br /&gt;
* C. Franchini, [First principles and data-driven correlated materials]&lt;br /&gt;
* F. Mohamed, [A tour on Density Functional Theory]&lt;br /&gt;
* E. Cannuccia, [Electronic screening and linear response theory]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
* A. Marini, Introduction to Many-Body Perturbation Theory&lt;br /&gt;
* C. Cardoso, Quasiparticles and the GW Approximation&lt;br /&gt;
* A. Guandalini, G. Sesti, GW in practice: algorithms, approximations and W-averaged GW in metals&lt;br /&gt;
* M. Govoni, GW without empty states and investigation of neutral excitations by embedding full configuration interaction in DFT+GW&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
* M. Palummo, Optical absorption and excitons via the Bethe-Salpeter Equation&lt;br /&gt;
* D. Sangalli, Real-time simulations&lt;br /&gt;
* F. Paleari, Introduction to YamboPy (automation and post-processing)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
* E. Luppi, An introduction to Non-linear spectroscopy&lt;br /&gt;
* M. Grüning, Non-linear spectroscopy in Yambo&lt;br /&gt;
* F. Affinito, Frontiers in High-Performance Computing&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8630</id>
		<title>Modena 2025</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8630"/>
		<updated>2025-05-14T17:53:37Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2025/01/17/yambo-school-modena-2025/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the Leonardo-DCGP partition. You can find info about Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access Leonardo via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in several ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 ssh username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you connect to Leonardo. To do so, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 ssh-keygen -t rsa -b 4096 -f ~/.ssh/leonardo_rsa&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Created directory &#039;/home/username/.ssh&#039;.&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in /home/username/.ssh/leonardo_rsa&lt;br /&gt;
 Your public key has been saved in /home/username/.ssh/leonardo_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 [...]&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 [...]&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to Leonardo. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 ssh-copy-id -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to Leonardo without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -i ~/.ssh/leonardo_rsa username@login.leonardo.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username:&lt;br /&gt;
 Host leonardo &lt;br /&gt;
  HostName login.leonardo.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile ~/.ssh/leonardo_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file in place, you can connect simply with&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
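The same &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; entry can also be created from the command line; a minimal sketch (&amp;lt;code&amp;gt;username&amp;lt;/code&amp;gt; is a placeholder for your personal account name):&lt;br /&gt;

```shell
# Sketch: append the Leonardo host entry to ~/.ssh/config and
# tighten permissions (OpenSSH refuses keys/configs readable by other users).
mkdir -p ~/.ssh && chmod 700 ~/.ssh
printf '%s\n' \
    'Host leonardo' \
    '  HostName login.leonardo.cineca.it' \
    '  User username' \
    '  IdentityFile ~/.ssh/leonardo_rsa' >> ~/.ssh/config
chmod 600 ~/.ssh/config
```

The &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt; calls matter: OpenSSH will ignore a &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file or private key with overly permissive permissions.&lt;br /&gt;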
&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on Leonardo, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and file systems [https://wiki.u-gov.it/confluence/display/SCAIUS/4%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 echo $SCRATCH&lt;br /&gt;
 /leonardo_scratch/large/userexternal/username&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on Leonardo are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra25_yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=dcgp_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --gres=tmpfs:10g                # List of generic consumable resources&lt;br /&gt;
 #SBATCH --qos=normal                    # Quality of service &lt;br /&gt;
 ###SBATCH --reservation=TBD             # Reservation specific to this school&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;n&amp;gt;           # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;n/2&amp;gt;       # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;c&amp;gt;             # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
 mpirun -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;. Please note that the instructions in the batch script must be compatible with the specific Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section#DCGPSection-SLURMpartitions resources]. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 sbatch job.sh&lt;br /&gt;
 Submitted batch job 15696508&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 squeue --me&lt;br /&gt;
            JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
         15696508 dcgp_usr_   job.sh nspallan  R       0:01      1 lrdn4135&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since most of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open a 4-hour interactive session:&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod -q normal --reservation=TBD -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 srun: job 15694182 queued and waiting for resources&lt;br /&gt;
 srun: job 15694182 has been allocated resources&lt;br /&gt;
&lt;br /&gt;
We ask for 4 cpus-per-task (-c) because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above:&lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 using the appropriate Slurm environment variable:&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on Leonardo, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to Leonardo enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 ssh -X leonardo&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Set up yambopy &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to run yambopy on Leonardo, you must first set up the conda environment (to be done only once):&lt;br /&gt;
 cd&lt;br /&gt;
 module load anaconda3/2023.09-0&lt;br /&gt;
 conda init bash&lt;br /&gt;
 source .bashrc&lt;br /&gt;
&lt;br /&gt;
After this, every time you want to use yambopy you need to load its module and environment:&lt;br /&gt;
 module load anaconda3/2023.09-0&lt;br /&gt;
 conda activate !!!!!!!! TBD !!!!!!!!&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
Quick recap: before every tutorial, if you are running on Leonardo, do the following steps:&lt;br /&gt;
&lt;br /&gt;
 ssh leonardo&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBO_TUTORIALS &#039;&#039;&#039;#(Only if you didn&#039;t before)&#039;&#039;&#039;&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
Since the compute nodes are not connected to the external network, the tarballs must be downloaded before starting the interactive session.&lt;br /&gt;
Alternatively, once the interactive session has started, it is possible to access the tarballs by copying them from the following directories:&lt;br /&gt;
&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBO_TUTORIALS&lt;br /&gt;
 /leonardo_work/tra25_yambo/YAMBOPY_TUTORIALS&lt;br /&gt;
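As a sketch of the copy route (the &amp;lt;code&amp;gt;SRC&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;SCRATCH&amp;lt;/code&amp;gt; defaults are assumptions matching the paths above; on Leonardo &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt; is already defined, and the guard makes the copy a no-op on machines where the shared directory is not visible):&lt;br /&gt;

```shell
# Sketch: copy a tutorial tarball from the shared work area into scratch.
# SRC and SCRATCH defaults are assumptions matching the paths above.
SRC=${SRC:-/leonardo_work/tra25_yambo/YAMBO_TUTORIALS}
SCRATCH=${SCRATCH:-$HOME/scratch}
mkdir -p "$SCRATCH/YAMBO_TUTORIALS"
if [ -f "$SRC/hBN.tar.gz" ]; then
    cp "$SRC/hBN.tar.gz" "$SCRATCH/YAMBO_TUTORIALS/"
    tar -xf "$SCRATCH/YAMBO_TUTORIALS/hBN.tar.gz" -C "$SCRATCH/YAMBO_TUTORIALS"
fi
```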
&lt;br /&gt;
After that, you can start the interactive session:&lt;br /&gt;
&lt;br /&gt;
 srun -A tra25_yambo -p dcgp_usr_prod -q normal --reservation=TBD -N 1 -n 1 -c 4 -t 04:00:00 --gres=tmpfs:10g --pty /bin/bash&lt;br /&gt;
 [...]&lt;br /&gt;
 module purge&lt;br /&gt;
 module load profile/candidate&lt;br /&gt;
 module use /leonardo/pub/userexternal/nspallan/spack-0.22.2-06/modules&lt;br /&gt;
 module load yambo/5.3.0--intel-oneapi-mpi--2021.12.1--oneapi--2024.1.0&lt;br /&gt;
 export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
 cd $SCRATCH/YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
 ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
&lt;br /&gt;
At this point, you may learn about the Python pre- and post-processing capabilities offered by yambopy, our Python interface to Yambo and QE. First, let&#039;s create a dedicated directory, then download and extract the related files.&lt;br /&gt;
 &lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 mkdir -p YAMBOPY_TUTORIALS&lt;br /&gt;
 cd YAMBOPY_TUTORIALS&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/databases_yambopy.tar.gz&lt;br /&gt;
 tar -xvf databases_yambopy.tar.gz&lt;br /&gt;
 cd databases_yambopy&lt;br /&gt;
&lt;br /&gt;
Then, follow &#039;&#039;&#039;the first three sections&#039;&#039;&#039; of the page linked below, which cover initialization and linear response.&lt;br /&gt;
* [[Yambopy tutorial: Yambo databases|Reading databases with yambopy]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get all the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
 tar -xvf MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 cd hBN&lt;br /&gt;
&lt;br /&gt;
Now you can start the first tutorial:&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations on practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
Once you have gone through the first tutorial, move on to the second one:&lt;br /&gt;
 &lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
 cd MoS2_HPC_tutorial&lt;br /&gt;
&lt;br /&gt;
* [[Quasi-particles of a 2D system | Quasi-particles of a 2D system ]]&lt;br /&gt;
&lt;br /&gt;
To conclude, you can learn another method to plot the band structure in Yambo:&lt;br /&gt;
&lt;br /&gt;
* [[Yambopy tutorial: band structures| Yambopy tutorial: band structures]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Bethe-Salpeter equation (BSE)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz # NOTE: YOU SHOULD ALREADY HAVE THIS FROM DAY 1&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-convergence-kpoints.tar.gz &lt;br /&gt;
 tar -xvf hBN-convergence-kpoints.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now, you may open the interactive job session with &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; as explained above and proceed with the following tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[Calculating optical spectra including excitonic effects: a step-by-step guide|Perform a BSE calculation from beginning to end ]]&lt;br /&gt;
* [[How to analyse excitons - ICTP 2022 school|Analyse your results (exciton wavefunctions in real and reciprocal space, etc.) ]]&lt;br /&gt;
* [[BSE solvers overview|Solve the BSE eigenvalue problem with different numerical methods]]&lt;br /&gt;
* [[How to choose the input parameters|Choose the input parameters for a meaningful converged calculation]]&lt;br /&gt;
Now, go into the yambopy tutorial directory to learn about python analysis tools for the BSE:&lt;br /&gt;
 cd $SCRATCH&lt;br /&gt;
 cd YAMBOPY_TUTORIALS/databases_yambopy&lt;br /&gt;
&lt;br /&gt;
* [[Yambopy_tutorial:_Yambo_databases#Exciton_intro_1:_read_and_sort_data|Visualization of excitonic properties with yambopy]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Bethe-Salpeter equation in real time (TD-HSEX)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
The files needed for the following tutorials can be downloaded following these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Introduction_to_Real_Time_propagation_in_Yambo#Time_Dependent_Equation_for_the_Reduced_One--Body_Density--Matrix|Read the introductory section on real-time propagation of the one-body density matrix]] (the part about the time-dependent Schrödinger equation will be covered on DAY 4; you can skip it for now)&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Linear response from real time simulations (density matrix only)|Calculate the linear response in real time]]&lt;br /&gt;
* [[Real time Bethe-Salpeter Equation (density matrix only)|Calculate the BSE in real time]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 22 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Grüning (Queen&#039;s University Belfast), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the tutorials we will first use the &amp;lt;code&amp;gt;hBN-2D-RT&amp;lt;/code&amp;gt; folder (k-sampling 10x10x1) and then the &amp;lt;code&amp;gt;hBN-2D&amp;lt;/code&amp;gt; folder (k-sampling 6x6x1).&lt;br /&gt;
You may already have them in the &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt; folder:&lt;br /&gt;
 ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN-2D-RT&#039;&#039;&#039; hBN-2D.tar.gz  hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
If you need to download the tutorial files again, follow these steps:&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Dielectric function from Bloch-states dynamics]]&lt;br /&gt;
* [[Second-harmonic generation of 2D-hBN]]&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (additional tutorial)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (additional tutorial)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May ===&lt;br /&gt;
&lt;br /&gt;
* D. Varsano, [Description and goal of the school].&lt;br /&gt;
* C. Franchini, [First principles and data-driven correlated materials]&lt;br /&gt;
* F. Mohamed, [A tour on Density Functional Theory]&lt;br /&gt;
* E. Cannuccia, [Electronic screening and linear response theory]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
* A. Marini, Introduction to Many-Body Perturbation Theory&lt;br /&gt;
* C. Cardoso, Quasiparticles and the GW Approximation&lt;br /&gt;
* A. Guandalini, G. Sesti, GW in practice: algorithms, approximations and W-averaged GW in metals&lt;br /&gt;
* M. Govoni, GW without empty states and investigation of neutral excitations by embedding full configuration interaction in DFT+GW&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
* M. Palummo, Optical absorption and excitons via the Bethe-Salpeter Equation&lt;br /&gt;
* D. Sangalli, Real-time simulations&lt;br /&gt;
* F. Paleari, Introduction to YamboPy (automation and post-processing)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
* E. Luppi, An introduction to Non-linear spectroscopy&lt;br /&gt;
* M. Grüning, Non-linear spectroscopy in Yambo&lt;br /&gt;
* F. Affinito, Frontiers in High-Performance Computing&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8598</id>
		<title>Modena 2025</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8598"/>
		<updated>2025-05-07T17:41:19Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2025/01/17/yambo-school-modena-2025/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the Leonardo-DCGP. You can find info about Leonardo-DCGP [https://wiki.u-gov.it/confluence/display/SCAIUS/DCGP+Section here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access Leonardo-DCGP via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in several ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command, replacing your username:&lt;br /&gt;
 $ ssh ...&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;FROM HERE IS A PLACEHOLDER: NICOLA WILL FILL THIS PART&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an SSH key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in your &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file in place, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and file systems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_sys_test       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --qos=qos_test                  # qos = quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school &lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;. Please note that the instructions in the batch script must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt;  m100_...      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since most of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open a 1-hour interactive session (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc -A tra23_Yambo -p m100_sys_test -q qos_test --reservation=s_tra_yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 -t 01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
We ask for 4 cpus-per-task because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 username@&#039;&#039;&#039;login02&#039;&#039;&#039;$ ssh r256n01&lt;br /&gt;
 ...&lt;br /&gt;
 username@&#039;&#039;&#039;r256n01&#039;&#039;&#039;$ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-cpu/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 (as in the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option):&lt;br /&gt;
 $ export OMP_NUM_THREADS=4&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on M100, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to M100 enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -X m100&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
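If you prefer scripted plots over the interactive prompt, the plot commands can also be collected in a small script file and replayed with &amp;lt;code&amp;gt;gnuplot -persist&amp;lt;/code&amp;gt; (a sketch only; the data filename below is illustrative, not a file produced by these tutorials):&lt;br /&gt;

```shell
# Sketch: write a reusable gnuplot script; run it later with
#   gnuplot -persist plot_spectrum.gp
# "o-optics.eps" is a placeholder output-file name.
printf '%s\n' \
  'set xlabel "Energy (eV)"' \
  'set ylabel "Im(eps)"' \
  'plot "o-optics.eps" using 1:2 with lines' > plot_spectrum.gp
echo "wrote plot_spectrum.gp"
```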
&lt;br /&gt;
&#039;&#039;&#039; - Set up yambopy &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to run yambopy on M100, you must first set up the conda environment (to be done only once):&lt;br /&gt;
 $ cd&lt;br /&gt;
 $ module load anaconda/2020.11&lt;br /&gt;
 $ conda init bash&lt;br /&gt;
 $ source .bashrc&lt;br /&gt;
&lt;br /&gt;
After this, every time you want to use yambopy you need to load its module and environment:&lt;br /&gt;
 $ module load anaconda/2020.11&lt;br /&gt;
 $ conda activate /m100_work/tra23_Yambo/softwares/YAMBO/env_yambopy&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
Quick recap: before every tutorial, if you run on M100, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 ssh m100&lt;br /&gt;
 cd $CINECA_SCRATCH&lt;br /&gt;
 mkdir YAMBO_TUTORIALS &#039;&#039;&#039;(only if it doesn&#039;t exist yet)&#039;&#039;&#039;&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
At this point you can download the files needed for the tutorial.&lt;br /&gt;
After that, open the interactive session and log in to the node:&lt;br /&gt;
&lt;br /&gt;
 salloc -A tra23_Yambo -p m100_sys_test -q qos_test --reservation=s_tra_yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 -t 04:00:00&lt;br /&gt;
 ssh &#039;&#039;&#039;PUT HERE THE ASSIGNED NODE NAME AFTER salloc COMMAND&#039;&#039;&#039;&lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-cpu/bin:$PATH&lt;br /&gt;
 cd $CINECA_SCRATCH&lt;br /&gt;
 cd YAMBO_TUTORIALS &lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
&lt;br /&gt;
At this point, you may learn about the Python pre- and post-processing capabilities offered by yambopy, our Python interface to yambo and QE. First of all, let&#039;s create a dedicated directory, then download and extract the related files.&lt;br /&gt;
 &lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBOPY_TUTORIALS&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/databases_yambopy.tar&lt;br /&gt;
 $ tar -xvf databases_yambopy.tar&lt;br /&gt;
 $ cd databases_yambopy&lt;br /&gt;
&lt;br /&gt;
Then, follow &#039;&#039;&#039;the first three sections&#039;&#039;&#039; of the tutorial linked below, which cover initialization and linear response.&lt;br /&gt;
* [[Yambopy tutorial: Yambo databases|Reading databases with yambopy]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get all the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
 tar -xvf MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 cd hBN&lt;br /&gt;
&lt;br /&gt;
Now you can start the first tutorial:&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
Once you have gone through the first tutorial, move on to the second one:&lt;br /&gt;
 &lt;br /&gt;
 cd $CINECA_SCRATCH&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
 cd MoS2_HPC_tutorial&lt;br /&gt;
&lt;br /&gt;
* [[Quasi-particles of a 2D system | Quasi-particles of a 2D system ]]&lt;br /&gt;
&lt;br /&gt;
To conclude, you can learn another method to plot the band structure in Yambo:&lt;br /&gt;
&lt;br /&gt;
* [[Yambopy tutorial: band structures| Yambopy tutorial: band structures]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Bethe-Salpeter equation (BSE)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz # NOTE: YOU SHOULD ALREADY HAVE THIS FROM DAY 1&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-convergence-kpoints.tar.gz &lt;br /&gt;
 $ tar -xvf hBN-convergence-kpoints.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; and proceed with the following tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[Calculating optical spectra including excitonic effects: a step-by-step guide|Perform a BSE calculation from beginning to end ]]&lt;br /&gt;
* [[How to analyse excitons - ICTP 2022 school|Analyse your results (exciton wavefunctions in real and reciprocal space, etc.) ]]&lt;br /&gt;
* [[BSE solvers overview|Solve the BSE eigenvalue problem with different numerical methods]]&lt;br /&gt;
* [[How to choose the input parameters|Choose the input parameters for a meaningful converged calculation]]&lt;br /&gt;
Now, go into the yambopy tutorial directory to learn about python analysis tools for the BSE:&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS/databases_yambopy&lt;br /&gt;
&lt;br /&gt;
* [[Yambopy_tutorial:_Yambo_databases#Exciton_intro_1:_read_and_sort_data|Visualization of excitonic properties with yambopy]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Bethe-Salpeter equation in real time (TD-HSEX)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
The files needed for the following tutorials can be downloaded following these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Introduction_to_Real_Time_propagation_in_Yambo#Time_Dependent_Equation_for_the_Reduced_One--Body_Density--Matrix|Read the introductory section on real-time propagation for the one-body density matrix]] (the part about the time-dependent Schrödinger equation will be covered on DAY 4 and you can skip it for now)&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Linear response from real time simulations (density matrix only)|Calculate the linear response in real time]]&lt;br /&gt;
* [[Real time Bethe-Salpeter Equation (density matrix only)|Calculate the BSE in real time]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 22 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Grüning (Queen&#039;s University Belfast), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the tutorials we will first use the &amp;lt;code&amp;gt;hBN-2D-RT&amp;lt;/code&amp;gt; folder (k-sampling 10x10x1) and then the &amp;lt;code&amp;gt;hBN-2D&amp;lt;/code&amp;gt; folder (k-sampling 6x6x1).&lt;br /&gt;
You may already have them in the &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt; folder:&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN-2D-RT&#039;&#039;&#039; hBN-2D.tar.gz  hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
If you need to download the tutorial files again, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Dielectric function from Bloch-states dynamics]]&lt;br /&gt;
* [[Second-harmonic generation of 2D-hBN]]&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (additional tutorial)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (additional tutorial)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May ===&lt;br /&gt;
&lt;br /&gt;
* D. Varsano, [Description and goal of the school].&lt;br /&gt;
* C. Franchini, [TBD]&lt;br /&gt;
* F. Mohamed, [A tour on Density Functional Theory]&lt;br /&gt;
* E. Cannuccia, [Electronic screening and linear response theory]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
* A. Marini, Introduction to Many-Body Perturbation Theory&lt;br /&gt;
* C. Cardoso, Quasiparticles and the GW Approximation&lt;br /&gt;
* A. Guandalini, G. Sesti, GW in practice: algorithms, approximations and W-averaged GW in metals&lt;br /&gt;
* M. Govoni, GW without empty states and investigation of neutral excitations by embedding full configuration interaction in DFT+GW&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
* M. Palummo, Optical absorption and excitons via the Bethe-Salpeter Equation&lt;br /&gt;
* D. Sangalli, Real-time simulations&lt;br /&gt;
* F. Paleari, Introduction to YamboPy (automation and post-processing)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
* E. Luppi, An introduction to Non-linear spectroscopy&lt;br /&gt;
* M. Grüning, Non-linear spectroscopy in Yambo&lt;br /&gt;
* F. Affinito, Frontiers in High-Performance Computing&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8597</id>
		<title>Modena 2025</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8597"/>
		<updated>2025-05-07T17:31:22Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2025/01/17/yambo-school-modena-2025/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the Leonardo-DCGP partition. You can find info about Leonardo-DCGP [https://www.hpc.cineca.it/systems/hardware/leonardo/ here].&lt;br /&gt;
In order to access the computational resources provided by CINECA you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access Leonardo-DCGP via &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh ...&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;FROM HERE IS A PLACEHOLDER: NICOLA WILL FILL THIS PART&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in your &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; located inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are better understood when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_sys_test       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --qos=qos_test                  # qos = quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school &lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;. Please note that the instructions in the batch script must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in the locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
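To make the later &amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; step easier, you can capture the job ID when submitting. The sketch below only demonstrates the parsing idiom; the message string is a stand-in, since &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; is not actually run here:&lt;br /&gt;

```shell
# Sketch: sbatch prints one line ending with the job ID; keep the last word.
# In real use you would do:  msg=$(sbatch job.sh)
msg="Submitted batch job 10164647"   # stand-in for the real sbatch output
jobid="${msg##* }"                   # strip everything up to the last space
echo "JOBID=$jobid"                  # later: scancel "$jobid"
```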
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt;  m100_...      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open an interactive session of 1 hour (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc -A tra23_Yambo -p m100_sys_test -q qos_test --reservation=s_tra_yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 -t 01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
We request 4 CPUs per task because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 username@&#039;&#039;&#039;login02&#039;&#039;&#039;$ ssh r256n01&lt;br /&gt;
 ...&lt;br /&gt;
 username@&#039;&#039;&#039;r256n01&#039;&#039;&#039;$ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-cpu/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 (as in the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option):&lt;br /&gt;
 $ export OMP_NUM_THREADS=4&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on M100, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to M100 enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -X m100&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Set up yambopy &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to run yambopy on M100, you must first set up the conda environment (to be done only once):&lt;br /&gt;
 $ cd&lt;br /&gt;
 $ module load anaconda/2020.11&lt;br /&gt;
 $ conda init bash&lt;br /&gt;
 $ source .bashrc&lt;br /&gt;
&lt;br /&gt;
After this, every time you want to use yambopy you need to load its module and environment:&lt;br /&gt;
 $ module load anaconda/2020.11&lt;br /&gt;
 $ conda activate /m100_work/tra23_Yambo/softwares/YAMBO/env_yambopy&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
Quick recap: before every tutorial, if you run on M100, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 ssh m100&lt;br /&gt;
 cd $CINECA_SCRATCH&lt;br /&gt;
 mkdir YAMBO_TUTORIALS &#039;&#039;&#039;(only if it doesn&#039;t exist yet)&#039;&#039;&#039;&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
At this point you can download the files needed for the tutorial.&lt;br /&gt;
After that, open the interactive session and log in to the node:&lt;br /&gt;
&lt;br /&gt;
 salloc -A tra23_Yambo -p m100_sys_test -q qos_test --reservation=s_tra_yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 -t 04:00:00&lt;br /&gt;
 ssh &#039;&#039;&#039;PUT HERE THE ASSIGNED NODE NAME AFTER salloc COMMAND&#039;&#039;&#039;&lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-cpu/bin:$PATH&lt;br /&gt;
 cd $CINECA_SCRATCH&lt;br /&gt;
 cd YAMBO_TUTORIALS &lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
&lt;br /&gt;
At this point, you may learn about the Python pre- and post-processing capabilities offered by yambopy, our Python interface to yambo and QE. First of all, let&#039;s create a dedicated directory, then download and extract the related files.&lt;br /&gt;
 &lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBOPY_TUTORIALS&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/databases_yambopy.tar&lt;br /&gt;
 $ tar -xvf databases_yambopy.tar&lt;br /&gt;
 $ cd databases_yambopy&lt;br /&gt;
&lt;br /&gt;
Then, follow &#039;&#039;&#039;the first three sections&#039;&#039;&#039; of the tutorial linked below, which cover initialization and linear response.&lt;br /&gt;
* [[Yambopy tutorial: Yambo databases|Reading databases with yambopy]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get all the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
 tar -xvf MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 cd hBN&lt;br /&gt;
&lt;br /&gt;
Now you can start the first tutorial:&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
Once you have gone through the first tutorial, move on to the second one:&lt;br /&gt;
 &lt;br /&gt;
 cd $CINECA_SCRATCH&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
 cd MoS2_HPC_tutorial&lt;br /&gt;
&lt;br /&gt;
* [[Quasi-particles of a 2D system | Quasi-particles of a 2D system ]]&lt;br /&gt;
&lt;br /&gt;
To conclude, you can learn another method to plot the band structure in Yambo:&lt;br /&gt;
&lt;br /&gt;
* [[Yambopy tutorial: band structures| Yambopy tutorial: band structures]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Bethe-Salpeter equation (BSE)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz # NOTE: YOU SHOULD ALREADY HAVE THIS FROM DAY 1&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-convergence-kpoints.tar.gz &lt;br /&gt;
 $ tar -xvf hBN-convergence-kpoints.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; and proceed with the following tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[Calculating optical spectra including excitonic effects: a step-by-step guide|Perform a BSE calculation from beginning to end ]]&lt;br /&gt;
* [[How to analyse excitons - ICTP 2022 school|Analyse your results (exciton wavefunctions in real and reciprocal space, etc.) ]]&lt;br /&gt;
* [[BSE solvers overview|Solve the BSE eigenvalue problem with different numerical methods]]&lt;br /&gt;
* [[How to choose the input parameters|Choose the input parameters for a meaningful converged calculation]]&lt;br /&gt;
Now, go into the yambopy tutorial directory to learn about python analysis tools for the BSE:&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS/databases_yambopy&lt;br /&gt;
&lt;br /&gt;
* [[Yambopy_tutorial:_Yambo_databases#Exciton_intro_1:_read_and_sort_data|Visualization of excitonic properties with yambopy]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Bethe-Salpeter equation in real time (TD-HSEX)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
The files needed for the following tutorials can be downloaded following these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Introduction_to_Real_Time_propagation_in_Yambo#Time_Dependent_Equation_for_the_Reduced_One--Body_Density--Matrix|Read the introductory section on real-time propagation for the one-body density matrix]] (the part about the time-dependent Schrödinger equation will be covered on DAY 4 and you can skip it for now)&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Linear response from real time simulations (density matrix only)|Calculate the linear response in real time]]&lt;br /&gt;
* [[Real time Bethe-Salpeter Equation (density matrix only)|Calculate the BSE in real time]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 22 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Grüning (Queen&#039;s University Belfast), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the tutorials we will first use the &amp;lt;code&amp;gt;hBN-2D-RT&amp;lt;/code&amp;gt; folder (k-sampling 10x10x1) and then the &amp;lt;code&amp;gt;hBN-2D&amp;lt;/code&amp;gt; folder (k-sampling 6x6x1).&lt;br /&gt;
You may already have them in the &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt; folder:&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN-2D-RT&#039;&#039;&#039; hBN-2D.tar.gz  hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
If you need to download the tutorial files again, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Dielectric function from Bloch-states dynamics]]&lt;br /&gt;
* [[Second-harmonic generation of 2D-hBN]]&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (additional tutorial)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (additional tutorial)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May ===&lt;br /&gt;
&lt;br /&gt;
* D. Varsano, [Description and goal of the school].&lt;br /&gt;
* C. Franchini, [TBD]&lt;br /&gt;
* F. Mohamed, [A tour on Density Functional Theory]&lt;br /&gt;
* E. Cannuccia, [Electronic screening and linear response theory]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
* A. Marini, Introduction to Many-Body Perturbation Theory&lt;br /&gt;
* C. Cardoso, Quasiparticles and the GW Approximation&lt;br /&gt;
* A. Guandalini, G. Sesti, GW in practice: algorithms, approximations and W-averaged GW in metals&lt;br /&gt;
* M. Govoni, GW without empty states and investigation of neutral excitations by embedding full configuration interaction in DFT+GW&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
* M. Palummo, Optical absorption and excitons via the Bethe-Salpeter Equation&lt;br /&gt;
* D. Sangalli, Real-time simulations&lt;br /&gt;
* F. Paleari, Introduction to YamboPy (automation and post-processing)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
* E. Luppi, An introduction to Non-linear spectroscopy&lt;br /&gt;
* M. Grüning, Non-linear spectroscopy in Yambo&lt;br /&gt;
* F. Affinito, Frontiers in High-Performance Computing&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8596</id>
		<title>Modena 2025</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Modena_2025&amp;diff=8596"/>
		<updated>2025-05-05T16:57:19Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2025/01/17/yambo-school-modena-2025/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the Leonardo-DCGP partition. You can find info about Leonardo-DCGP [https://www.hpc.cineca.it/systems/hardware/leonardo/ here].&lt;br /&gt;
In order to access the computational resources provided by CINECA you need your personal username and password, which were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access Leonardo-DCGP via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh ...&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;FROM HERE IS A PLACEHOLDER: NICOLA WILL FILL THIS PART&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; located inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Generally, Slurm batch jobs are submitted using a script, but the tutorials here are better understood if run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_sys_test       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --qos=qos_test                  # qos = quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school &lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;. Please note that the instructions in the batch script must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in the locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt;  m100_...      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open an interactive session of 1 hour (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc -A tra23_Yambo -p m100_sys_test -q qos_test --reservation=s_tra_yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 -t 01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
We ask for 4 &amp;lt;code&amp;gt;cpus-per-task&amp;lt;/code&amp;gt; so that we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 username@&#039;&#039;&#039;login02&#039;&#039;&#039;$ ssh r256n01&lt;br /&gt;
 ...&lt;br /&gt;
 username@&#039;&#039;&#039;r256n01&#039;&#039;&#039;$ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-cpu/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 (as in the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option):&lt;br /&gt;
 $ export OMP_NUM_THREADS=4&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on M100, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to M100 enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -X m100&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
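&lt;br /&gt;
As a concrete example, a session might look like the following (the data file name and column choice are placeholders, not a specific Yambo output file; adapt them to the files produced by your calculation):&lt;br /&gt;
 gnuplot&amp;gt; plot &#039;o.spectrum&#039; using 1:2 with lines title &#039;column 2&#039;&lt;br /&gt;
 gnuplot&amp;gt; replot &#039;o.spectrum&#039; using 1:3 with lines title &#039;column 3&#039;&lt;br /&gt;
The &amp;lt;code&amp;gt;using 1:2&amp;lt;/code&amp;gt; clause selects which columns of the text file go on the x and y axes.&lt;br /&gt;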
&lt;br /&gt;
&#039;&#039;&#039; - Set up yambopy &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to run yambopy on M100, you must first set up the conda environment (to be done only once):&lt;br /&gt;
 $ cd&lt;br /&gt;
 $ module load anaconda/2020.11&lt;br /&gt;
 $ conda init bash&lt;br /&gt;
 $ source .bashrc&lt;br /&gt;
&lt;br /&gt;
After this, every time you want to use yambopy you need to load its module and environment:&lt;br /&gt;
 $ module load anaconda/2020.11&lt;br /&gt;
 $ conda activate /m100_work/tra23_Yambo/softwares/YAMBO/env_yambopy&lt;br /&gt;
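&lt;br /&gt;
As a quick sanity check (our suggestion, not part of the official setup; it only verifies that the package can be imported), you can try loading yambopy from the activated environment:&lt;br /&gt;
 $ python -c &#039;import yambopy&#039;&lt;br /&gt;
If the command returns without an &amp;lt;code&amp;gt;ImportError&amp;lt;/code&amp;gt;, the environment is ready.&lt;br /&gt;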
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
Quick recap.&lt;br /&gt;
Before every tutorial, if you run on M100, do the following steps:&lt;br /&gt;
&lt;br /&gt;
 ssh m100&lt;br /&gt;
 cd $CINECA_SCRATCH&lt;br /&gt;
 mkdir YAMBO_TUTORIALS &#039;&#039;&#039;(only if you have not done so already)&#039;&#039;&#039;&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
&lt;br /&gt;
At this point you can download the files needed for the tutorial.&lt;br /&gt;
Afterwards, you can open the interactive session and log into the node:&lt;br /&gt;
&lt;br /&gt;
 salloc -A tra23_Yambo -p m100_sys_test -q qos_test --reservation=s_tra_yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 -t 04:00:00&lt;br /&gt;
 ssh &#039;&#039;&#039;PUT HERE THE ASSIGNED NODE NAME AFTER salloc COMMAND&#039;&#039;&#039;&lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-cpu/bin:$PATH&lt;br /&gt;
 cd $CINECA_SCRATCH&lt;br /&gt;
 cd YAMBO_TUTORIALS &lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
&lt;br /&gt;
At this point, you may learn about the Python pre- and post-processing capabilities offered by yambopy, our Python interface to yambo and QE. First of all, let&#039;s create a dedicated directory, then download and extract the related files.&lt;br /&gt;
 &lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBOPY_TUTORIALS&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/databases_yambopy.tar&lt;br /&gt;
 $ tar -xvf databases_yambopy.tar&lt;br /&gt;
 $ cd databases_yambopy&lt;br /&gt;
&lt;br /&gt;
Then, follow &#039;&#039;&#039;the first three sections&#039;&#039;&#039; of the tutorial linked below, which cover initialization and linear response.&lt;br /&gt;
* [[Yambopy tutorial: Yambo databases|Reading databases with yambopy]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get all the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 wget https://media.yambo-code.eu/educational/tutorials/files/MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 tar -xvf hBN.tar.gz&lt;br /&gt;
 tar -xvf MoS2_2Dquasiparticle_tutorial.tar.gz&lt;br /&gt;
 cd hBN&lt;br /&gt;
&lt;br /&gt;
Now you can start the first tutorial:&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
Once you have gone through the first tutorial, move on to the second one:&lt;br /&gt;
 &lt;br /&gt;
 cd $CINECA_SCRATCH&lt;br /&gt;
 cd YAMBO_TUTORIALS&lt;br /&gt;
 cd MoS2_HPC_tutorial&lt;br /&gt;
&lt;br /&gt;
* [[Quasi-particles of a 2D system | Quasi-particles of a 2D system ]]&lt;br /&gt;
&lt;br /&gt;
To conclude, you can learn another method to plot the band structure in Yambo:&lt;br /&gt;
&lt;br /&gt;
* [[Yambopy tutorial: band structures| Yambopy tutorial: band structures]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Bethe-Salpeter equation (BSE)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz # NOTE: YOU SHOULD ALREADY HAVE THIS FROM DAY 1&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-convergence-kpoints.tar.gz &lt;br /&gt;
 $ tar -xvf hBN-convergence-kpoints.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; and proceed with the following tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[Calculating optical spectra including excitonic effects: a step-by-step guide|Perform a BSE calculation from beginning to end ]]&lt;br /&gt;
* [[How to analyse excitons - ICTP 2022 school|Analyse your results (exciton wavefunctions in real and reciprocal space, etc.) ]]&lt;br /&gt;
* [[BSE solvers overview|Solve the BSE eigenvalue problem with different numerical methods]]&lt;br /&gt;
* [[How to choose the input parameters|Choose the input parameters for a meaningful converged calculation]]&lt;br /&gt;
Now, go into the yambopy tutorial directory to learn about python analysis tools for the BSE:&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ cd YAMBOPY_TUTORIALS/databases_yambopy&lt;br /&gt;
&lt;br /&gt;
* [[Yambopy_tutorial:_Yambo_databases#Exciton_intro_1:_read_and_sort_data|Visualization of excitonic properties with yambopy]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Bethe-Salpeter equation in real time (TD-HSEX)&#039;&#039;&#039; Fulvio Paleari (CNR-Nano, Italy), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
The files needed for the following tutorials can be downloaded following these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Introduction_to_Real_Time_propagation_in_Yambo#Time_Dependent_Equation_for_the_Reduced_One--Body_Density--Matrix|Read the introductory section on real-time propagation for the one-body density matrix]] (the part on the time-dependent Schrödinger equation will be covered on DAY 4; you can skip it for now)&lt;br /&gt;
* [[Prerequisites for Real Time propagation with Yambo|Perform the setup for a real-time calculation]]&lt;br /&gt;
* [[Linear response from real time simulations (density matrix only)|Calculate the linear response in real time]]&lt;br /&gt;
* [[Real time Bethe-Salpeter Equation (density matrix only)|Calculate the BSE in real time]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 22 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Grüning (Queen&#039;s University Belfast), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the tutorials we will first use the &amp;lt;code&amp;gt;hBN-2D-RT&amp;lt;/code&amp;gt; folder (k-sampling 10x10x1) and then the &amp;lt;code&amp;gt;hBN-2D&amp;lt;/code&amp;gt; folder (k-sampling 6x6x1).&lt;br /&gt;
You may already have them in the &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt; folder:&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN-2D-RT&#039;&#039;&#039; hBN-2D.tar.gz  hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
If you need to download the tutorial files again, follow these steps:&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D-RT.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D-RT.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Dielectric function from Bloch-states dynamics]]&lt;br /&gt;
* [[Second-harmonic generation of 2D-hBN]]&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (additional tutorial)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (additional tutorial)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 19 May ===&lt;br /&gt;
&lt;br /&gt;
* D. Varsano, [Description and goal of the school].&lt;br /&gt;
* C. Franchini, [TBD]&lt;br /&gt;
* F. Mohamed, [A tour on Density Functional Theory]&lt;br /&gt;
* E. Cannuccia, [Electronic screening and linear response theory]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 20 May ===&lt;br /&gt;
&lt;br /&gt;
* A. Marini, Introduction to Many-Body Perturbation Theory&lt;br /&gt;
* C. Cardoso, Quasiparticles and the GW Approximation&lt;br /&gt;
* A. Guandalini, G. Sesti, GW in practice: algorithms, approximations and W-averaged GW in metals&lt;br /&gt;
* M. Govoni, GW without empty states and investigation of neutral excitations by embedding full configuration interaction in DFT+GW&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 21 May ===&lt;br /&gt;
&lt;br /&gt;
* M. Palummo, Optical absorption and excitons via the Bethe-Salpeter Equation&lt;br /&gt;
* D. Sangalli, Real-time simulations&lt;br /&gt;
* F. Paleari, Introduction to YamboPy (automation and post-processing)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 22 May ===&lt;br /&gt;
&lt;br /&gt;
* E. Luppi, An introduction to Non-linear spectroscopy&lt;br /&gt;
* M. Grüning, Non-linear spectroscopy in Yambo&lt;br /&gt;
* F. Affinito, Frontiers in High-Performance Computing&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6676</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6676"/>
		<updated>2023-05-19T10:15:42Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA you need your personal username and password, which were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; located inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_sys_test       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --qos=qos_test                  # qos = quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school &lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;. Please note that the instructions in the batch script must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt;  m100_...      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since most of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open an interactive session of 1 hour (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc -A tra23_Yambo -p m100_sys_test -q qos_test --reservation=s_tra_yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 -t 01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
We request 4 CPUs per task because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 username@&#039;&#039;&#039;login02&#039;&#039;&#039;$ ssh r256n01&lt;br /&gt;
 ...&lt;br /&gt;
 username@&#039;&#039;&#039;r256n01&#039;&#039;&#039;$ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-cpu/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 (as in the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option):&lt;br /&gt;
 $ export OMP_NUM_THREADS=4&lt;br /&gt;
&lt;br /&gt;
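Instead of hard-coding the value, the thread count can be read from Slurm&#039;s own environment. A sketch under the assumption that &amp;lt;code&amp;gt;SLURM_CPUS_PER_TASK&amp;lt;/code&amp;gt; is only set inside an allocation; the fallback of 4 mirrors the &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; command above.&lt;br /&gt;

```shell
# Sketch: set OMP_NUM_THREADS from SLURM_CPUS_PER_TASK when available,
# falling back to 4 (the value requested with --cpus-per-task above).
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-4}"
echo "$OMP_NUM_THREADS"
```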
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on M100, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to M100 enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -X m100&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 ...&lt;br /&gt;
 Terminal type is now &#039;...&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
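The two &amp;lt;code&amp;gt;tar&amp;lt;/code&amp;gt; commands above can also be folded into one loop. The sketch below builds a small dummy archive in &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; only to demonstrate the pattern; the sample names are illustrative, and in the tutorial you would run the loop inside &amp;lt;code&amp;gt;YAMBO_TUTORIALS&amp;lt;/code&amp;gt;.&lt;br /&gt;

```shell
# Sketch: unpack every .tar.gz in the current directory in one loop.
# A dummy archive is created in /tmp only so the loop has something
# to act on; in the tutorial, run the loop in YAMBO_TUTORIALS instead.
mkdir -p /tmp/yambo_untar_demo/sample
cd /tmp/yambo_untar_demo
echo data > sample/file.txt
tar -czf sample.tar.gz sample
rm -r sample
for f in *.tar.gz; do
  tar -xf "$f"    # add -v for the verbose file listing shown in the text
done
ls sample/file.txt
```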
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 25 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we do also AlAS part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we do also AlAS part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6675</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6675"/>
		<updated>2023-05-19T10:14:28Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access computational resources provided by CINECA you need your personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces available to you on M100, which can be accessed through environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_sys_test       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --qos=qos_test                  # qos = quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school &lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;. Please note that the instructions in the batch script must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt;  m100_...      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since most of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open an interactive session of 1 hour (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc -A tra23_Yambo -p m100_sys_test -q qos_test --reservation=s_tra_yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 -t 01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
We request 4 CPUs per task because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 username@&#039;&#039;&#039;login02&#039;&#039;&#039;$ ssh r256n01&lt;br /&gt;
 username@&#039;&#039;&#039;r256n01&#039;&#039;&#039;$ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-cpu/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 (as in the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option):&lt;br /&gt;
 $ export OMP_NUM_THREADS=4&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on M100, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to M100 enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -X m100&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@&#039;&#039;&#039;login01&#039;&#039;&#039;$ gnuplot&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 Terminal type is now &#039;&amp;lt;...&amp;gt;&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 25 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we do also AlAS part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we do also AlAS part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6674</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6674"/>
		<updated>2023-05-19T10:11:41Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access computational resources provided by CINECA you need your personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
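The same &amp;lt;code&amp;gt;Host&amp;lt;/code&amp;gt; alias also works for &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;sftp&amp;lt;/code&amp;gt;. As a sketch (a standard &amp;lt;code&amp;gt;ssh_config&amp;lt;/code&amp;gt; option, not part of the school instructions), adding one more line to the same block enables X11 forwarding on every connection, so the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; flag is no longer needed:&lt;br /&gt;

```
Host m100
  HostName login.m100.cineca.it
  User username
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa
  ForwardX11 yes
```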
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces available to you on M100, which can be accessed through environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account on which the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial; only a few need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_sys_test       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --qos=qos_test                  # qos = quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school &lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;. Please note that the instructions in the batch script must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in the locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
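As a small sketch (not from the school material; the numeric ID below is example data), the job ID printed by &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; can be captured in a shell variable so that the later &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; calls do not need it typed by hand:&lt;br /&gt;

```shell
# On a real cluster you would capture the ID directly with the real
# sbatch flag --parsable:  JOBID=$(sbatch --parsable job.sh)
# Here we parse the generic "Submitted batch job N" line instead.
out="Submitted batch job 10164647"   # example sbatch output (hypothetical ID)
JOBID=${out##* }                     # keep the last whitespace-separated field
echo "$JOBID"
```

You can then run &amp;lt;code&amp;gt;squeue -j $JOBID&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;scancel $JOBID&amp;lt;/code&amp;gt; without copying the number by hand.&lt;br /&gt;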
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt;  m100_...      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since most of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open an interactive session of 1 hour (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc -A tra23_Yambo -p m100_sys_test -q qos_test --reservation=s_tra_yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 -t 01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
We ask for 4 cpus-per-task because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; manually, as in the batch script above. Please note that the serial version of the code is installed in a different directory (&amp;lt;code&amp;gt;5.2-cpu&amp;lt;/code&amp;gt; rather than &amp;lt;code&amp;gt;5.2-mpi&amp;lt;/code&amp;gt;):&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-cpu/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 (as in the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option):&lt;br /&gt;
 $ export OMP_NUM_THREADS=4&lt;br /&gt;
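Instead of hard-coding the thread count, it can be derived from the allocation itself: Slurm exports &amp;lt;code&amp;gt;SLURM_CPUS_PER_TASK&amp;lt;/code&amp;gt; inside the job (4 in this case). A small sketch, with a fallback of 1 so the line is also safe outside an allocation:&lt;br /&gt;

```shell
# Use the Slurm-provided CPU count when inside a job allocation,
# otherwise fall back to a single thread (e.g. on a login node).
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
echo "$OMP_NUM_THREADS"
```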
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on M100, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to M100 enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -X m100&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes:&lt;br /&gt;
 username@login01$ cd &amp;lt;directory_with_data&amp;gt;&lt;br /&gt;
 username@login01$ gnuplot&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 Terminal type is now &#039;&amp;lt;...&amp;gt;&#039;&lt;br /&gt;
 gnuplot&amp;gt; plot &amp;lt;...&amp;gt;&lt;br /&gt;
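If X11 forwarding is unavailable or slow, a batch alternative is to render the plot to a PNG and copy it to your machine. A sketch (&#039;o.spectrum&#039; is a hypothetical data file name; adjust it to the file produced in the tutorial):&lt;br /&gt;

```shell
# Write a minimal gnuplot batch script that renders column 2 vs column 1
# of a data file to a PNG, so no X11 connection is needed.
printf '%s\n' \
  'set terminal pngcairo size 800,600' \
  'set output "spectrum.png"' \
  'plot "o.spectrum" using 1:2 with lines title "absorption"' > plot.gp
```

Run it with &amp;lt;code&amp;gt;gnuplot plot.gp&amp;lt;/code&amp;gt; on a login node, then fetch &amp;lt;code&amp;gt;spectrum.png&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt;.&lt;br /&gt;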
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
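The download-and-extract steps above can be sketched as a loop, so that a further archive only needs to be appended to the list. The sketch below just prints the commands instead of running them (no network access is assumed):&lt;br /&gt;

```shell
# Print one wget and one tar command per tutorial archive;
# replace "echo" with direct execution to actually download and unpack.
base=https://media.yambo-code.eu/educational/tutorials/files
for f in hBN hBN-2D; do
  echo "wget $base/$f.tar.gz"
  echo "tar -xvf $f.tar.gz"
done
```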
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we do also AlAS part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we do also AlAS part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6673</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6673"/>
		<updated>2023-05-19T10:08:46Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in several ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an SSH key pair to avoid typing your password each time you connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in your &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces available to you on M100, which can be accessed through environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account on which the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial; only a few need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_sys_test       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --qos=qos_test                  # qos = quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school &lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;. Please note that the instructions in the batch script must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in the locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt;  m100_...      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, do:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since most of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open an interactive session of 1 hour (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc -A tra23_Yambo -p m100_sys_test -q qos_test --reservation=s_tra_yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 -t 01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
We ask for 4 cpus-per-task because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; manually, as in the batch script above. Please note that the serial version of the code is installed in a different directory (&amp;lt;code&amp;gt;5.2-cpu&amp;lt;/code&amp;gt; rather than &amp;lt;code&amp;gt;5.2-mpi&amp;lt;/code&amp;gt;):&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-cpu/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 (as in the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option):&lt;br /&gt;
 $ export OMP_NUM_THREADS=4&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Plot results with gnuplot &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
During the tutorials you will often need to plot the results of the calculations. In order to do so on M100, &#039;&#039;&#039;open a new terminal window&#039;&#039;&#039; and connect to M100 enabling X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -X m100&lt;br /&gt;
&lt;br /&gt;
Please note that &amp;lt;code&amp;gt;gnuplot&amp;lt;/code&amp;gt; can be used in this way only from the login nodes.&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we do also AlAS part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we do also AlAS part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6662</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6662"/>
		<updated>2023-05-19T09:50:48Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in several ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an SSH key pair to avoid typing your password each time you connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in your &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
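As a quick check, you can print these variables once logged in. This is only a sketch: &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt; are defined only on CINECA machines, so elsewhere they expand to empty strings:&lt;br /&gt;

```shell
# Print the main workspace variables; on machines other than M100 the
# CINECA-specific ones simply expand to empty strings.
echo "HOME=$HOME"
echo "WORK=$WORK"
echo "CINECA_SCRATCH=$CINECA_SCRATCH"
```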
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, but some need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, although the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_sys_test       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --qos=qos_test                  # qos = quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school &lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;. Please note that the instructions in the batch script must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt;  m100_...      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open an interactive session of 1 hour (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc -A tra23_Yambo -p m100_sys_test -q qos_test --reservation=s_tra_yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 -t 01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
We request 4 cpus-per-task so that we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-cpu/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 (as in the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option):&lt;br /&gt;
 $ export OMP_NUM_THREADS=4&lt;br /&gt;
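Alternatively, you can read the value back from the scheduler instead of hard-coding it: Slurm exports &amp;lt;code&amp;gt;SLURM_CPUS_PER_TASK&amp;lt;/code&amp;gt; inside the allocation when &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; is set. A minimal sketch, assuming a bash-like shell:&lt;br /&gt;

```shell
# Match OMP_NUM_THREADS to the --cpus-per-task value exported by Slurm,
# falling back to 4 when the variable is unset (e.g. outside a job).
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-4}
echo "$OMP_NUM_THREADS"
```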
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To download the files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we do also AlAS part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we do also AlAS part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6661</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6661"/>
		<updated>2023-05-19T09:50:16Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in several ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command, replacing username with your own:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
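As a quick check, you can print these variables once logged in. This is only a sketch: &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt; are defined only on CINECA machines, so elsewhere they expand to empty strings:&lt;br /&gt;

```shell
# Print the main workspace variables; on machines other than M100 the
# CINECA-specific ones simply expand to empty strings.
echo "HOME=$HOME"
echo "WORK=$WORK"
echo "CINECA_SCRATCH=$CINECA_SCRATCH"
```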
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, but some need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, although the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_sys_test       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --qos=qos_test                  # qos = quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school &lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;. Please note that the instructions in the batch script must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt;  m100_...      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open an interactive session of 1 hour (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc -A tra23_Yambo -p m100_sys_test -q qos_test --reservation=s_tra_yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 -t 01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
We request 4 cpus-per-task so that we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-cpu/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 (as in the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option):&lt;br /&gt;
 $ export OMP_NUM_THREADS=4&lt;br /&gt;
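Alternatively, you can read the value back from the scheduler instead of hard-coding it: Slurm exports &amp;lt;code&amp;gt;SLURM_CPUS_PER_TASK&amp;lt;/code&amp;gt; inside the allocation when &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; is set. A minimal sketch, assuming a bash-like shell:&lt;br /&gt;

```shell
# Match OMP_NUM_THREADS to the --cpus-per-task value exported by Slurm,
# falling back to 4 when the variable is unset (e.g. outside a job).
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-4}
echo "$OMP_NUM_THREADS"
```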
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To download the files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we do also AlAS part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we do also AlAS part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6659</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6659"/>
		<updated>2023-05-19T09:49:07Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in several ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command, replacing username with your own:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
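As a quick sanity check, you can print these variables to see where each workspace points. This is a generic POSIX-shell sketch, not a CINECA-specific command; $WORK and $CINECA_SCRATCH are only defined on the cluster, so elsewhere they will show as unset.

```shell
# Print the main workspace variables; ones not defined in the current
# environment are shown as "(unset)". On the cluster all three should
# expand to real paths.
for v in HOME WORK CINECA_SCRATCH; do
  eval "val=\${$v:-(unset)}"
  printf '%s=%s\n' "$v" "$val"
done
```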
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_sys_test       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --qos=qos_test                  # qos = quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school &lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;. Please note that the instructions in the batch script must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
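If you script this submit/monitor/cancel cycle, the job ID can be recovered from sbatch's confirmation line; with the standard sbatch --parsable option you would get the bare ID directly. A minimal sketch of the parsing step (the message text below is an example, not a real submission):

```shell
# Take the last whitespace-separated field of the confirmation line,
# which is the job ID in the default "Submitted batch job NNN" message.
line='Submitted batch job 10164647'   # example sbatch output
JOBID=${line##* }
echo "$JOBID"                         # prints 10164647
```

You can then pass $JOBID to squeue or scancel.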
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since most of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open an interactive session of 1 hour (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc -A tra23_Yambo -p m100_sys_test -q qos_test --reservation=s_tra_yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 -t 01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
We ask for 4 cpus-per-task because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-cpu/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 (as in the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option):&lt;br /&gt;
 $ export OMP_NUM_THREADS=4&lt;br /&gt;
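Inside a Slurm allocation you can avoid hard-coding the thread count: Slurm exports SLURM_CPUS_PER_TASK, mirroring the --cpus-per-task option. A small sketch (the fallback value 4 is just this tutorial's choice):

```shell
# Match OpenMP threads to the CPUs allocated per task, falling back to 4
# when SLURM_CPUS_PER_TASK is not set (e.g. outside an allocation).
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-4}
echo "$OMP_NUM_THREADS"
```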
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
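If you need to rerun the setup, the extraction step can be made idempotent so it does not clobber directories you have already worked in. A generic sketch using the archive names above:

```shell
# Unpack each tutorial archive only if its directory is not already
# present; rerunning the loop is then harmless.
for tarball in hBN.tar.gz hBN-2D.tar.gz; do
  dir=${tarball%.tar.gz}
  if [ ! -d "$dir" ]; then
    tar -xf "$tarball"
  fi
done
```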
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 25 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we do also AlAS part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we do also AlAS part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6658</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6658"/>
		<updated>2023-05-19T09:48:44Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA you need your personal username and password, which were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_sys_test       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --qos=qos_test                  # qos = quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school &lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;. Please note that the instructions in the batch script must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find &#039;&#039;&#039;ready-to-use&#039;&#039;&#039; batch scripts in locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since most of them are meant to be run in serial (with respect to MPI parallelization) from the command line. Use the command below to open an interactive session of 1 hour (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc -A tra23_Yambo -p m100_sys_test -q qos_test --reservation=s_tra_yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 -t 01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
We ask for 4 cpus-per-task because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-cpu/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 (as in the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option):&lt;br /&gt;
 $ export OMP_NUM_THREADS=4&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 25 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we do also AlAS part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we do also AlAS part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6619</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6619"/>
		<updated>2023-05-18T17:05:31Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA you need your personal username and password, which were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial; only a few need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_sys_test       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --qos=qos_test                  # qos = quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school &lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
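To make the placeholder geometry concrete, here is a small sketch with purely illustrative (hypothetical) values for the N, nt and nc placeholders, showing how the derived quantities in the script relate to each other (assuming, as on M100, two sockets per node):

```shell
#!/bin/bash
# Hypothetical Slurm geometry (illustrative values, not a recommendation):
NODES=2              # the <N> placeholder:  --nodes
NTASKS_PER_NODE=8    # the <nt> placeholder: --ntasks-per-node
CPUS_PER_TASK=4      # the <nc> placeholder: --cpus-per-task

NTASKS=$(( NODES * NTASKS_PER_NODE ))                  # what ${SLURM_NTASKS} would be
NTASKS_PER_SOCKET=$(( NTASKS_PER_NODE / 2 ))           # the <nt/2> placeholder (2 sockets/node)
CORES_PER_NODE=$(( NTASKS_PER_NODE * CPUS_PER_TASK ))  # cores actually occupied per node

echo "total MPI tasks:     $NTASKS"              # 16
echo "tasks per socket:    $NTASKS_PER_SOCKET"   # 4
echo "cores used per node: $CORES_PER_NODE"      # 32
```

With these numbers, mpirun would launch 16 MPI ranks, each spawning 4 OpenMP threads, filling 32 cores on each of the 2 nodes.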
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since most of them are meant to be run in serial (as far as MPI parallelization is concerned) from the command line. Use the command below to open a 1-hour interactive session (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc -A tra23_Yambo -p m100_sys_test -q qos_test --reservation=s_tra_yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 -t 01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
We ask for 4 cpus-per-task because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 (as in the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option):&lt;br /&gt;
 $ export OMP_NUM_THREADS=4&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we also do AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we also do AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6618</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6618"/>
		<updated>2023-05-18T17:02:13Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA you need your personal username and password, which were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things even more, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory of the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial; only a few need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_sys_test       # Request a specific partition for the resource allocation&lt;br /&gt;
 #SBATCH --qos=qos_test                  # qos = quality of service &lt;br /&gt;
 #SBATCH --reservation=s_tra_yambo       # Reservation specific to this school &lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since most of them are meant to be run in serial (as far as MPI parallelization is concerned) from the command line. Use the command below to open a 1-hour interactive session (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 --partition=m100_sys_test --qos=qos_test --reservation=s_tra_yambo --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
We ask for 4 cpus-per-task because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 (as in the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option):&lt;br /&gt;
 $ export OMP_NUM_THREADS=4&lt;br /&gt;
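As a sketch of an alternative to hard-coding the value, inside a Slurm allocation the thread count can also be derived from Slurm's own SLURM_CPUS_PER_TASK variable (set when --cpus-per-task is given); the fallback of 4 below matches the salloc command above:

```shell
#!/bin/bash
# Sketch: derive the OpenMP thread count from Slurm instead of hard-coding it.
# SLURM_CPUS_PER_TASK is set by Slurm inside an allocation that requested
# --cpus-per-task; outside an allocation the fallback value 4 is used.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-4}
echo "$OMP_NUM_THREADS"
```

This keeps the thread count consistent with the allocation even if you later change the --cpus-per-task value.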
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we also do AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we also do AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6617</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6617"/>
		<updated>2023-05-18T16:55:26Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA you need your personal username and password, which were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things even more, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory of the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial; only a few need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
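By default, Slurm writes the standard output and error of a batch job to a file named &amp;lt;code&amp;gt;slurm-&amp;lt;JOBID&amp;gt;.out&amp;lt;/code&amp;gt; in the directory where &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; was run (unless the script overrides this with &amp;lt;code&amp;gt;--output&amp;lt;/code&amp;gt;), so you can follow a running job with:&lt;br /&gt;
 $ tail -f slurm-&amp;lt;JOBID&amp;gt;.out&lt;br /&gt;
&lt;br /&gt;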
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial (as far as MPI parallelization is concerned) from the command line. Use the command below to open an interactive session of 1 hour (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
We ask for 4 cpus-per-task because we can exploit OpenMP parallelization with the available resources.&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 (as in the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option):&lt;br /&gt;
 $ export OMP_NUM_THREADS=4&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To download the files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
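If you want to check what an archive contains, either before or after unpacking it, you can list its contents with the &amp;lt;code&amp;gt;-t&amp;lt;/code&amp;gt; flag of &amp;lt;code&amp;gt;tar&amp;lt;/code&amp;gt; without extracting anything, e.g.:&lt;br /&gt;
 $ tar -tzf hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;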
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 25 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide whether to also do AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide whether to also do AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6616</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6616"/>
		<updated>2023-05-18T16:50:06Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need your personal username and password, which were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
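To quickly verify that the alias and key work, you can run a single remote command through ssh; if everything is configured correctly, this should print the remote hostname without prompting for a password:&lt;br /&gt;
 $ ssh m100 hostname&lt;br /&gt;
&lt;br /&gt;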
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account to which the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that must be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
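By default, Slurm writes the standard output and error of a batch job to a file named &amp;lt;code&amp;gt;slurm-&amp;lt;JOBID&amp;gt;.out&amp;lt;/code&amp;gt; in the directory where &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; was run (unless the script overrides this with &amp;lt;code&amp;gt;--output&amp;lt;/code&amp;gt;), so you can follow a running job with:&lt;br /&gt;
 $ tail -f slurm-&amp;lt;JOBID&amp;gt;.out&lt;br /&gt;
&lt;br /&gt;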
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial from the command line. With the command below you can open an interactive session of 1 hour to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 4 (as in the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option):&lt;br /&gt;
 $ export OMP_NUM_THREADS=4&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To download the files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, 25 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide whether to also do AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide whether to also do AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6598</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6598"/>
		<updated>2023-05-18T13:21:08Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* DAY 1 - Monday, 22 May */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need your personal username and password, which were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
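To quickly verify that the alias and key work, you can run a single remote command through ssh; if everything is configured correctly, this should print the remote hostname without prompting for a password:&lt;br /&gt;
 $ ssh m100 hostname&lt;br /&gt;
&lt;br /&gt;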
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account to which the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that must be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since they are meant to be run in serial from the command line. With the command below, you can open an interactive session of 1 hour to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 1:&lt;br /&gt;
 $ export OMP_NUM_THREADS=1&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 $ ls&lt;br /&gt;
 &#039;&#039;&#039;hBN-2D&#039;&#039;&#039; &#039;&#039;&#039;hBN&#039;&#039;&#039; hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we do also AlAS part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we do also AlAS part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6597</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6597"/>
		<updated>2023-05-18T13:19:52Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* DAY 1 - Monday, 22 May */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in several ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in your &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
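The same key pair can also be generated non-interactively; the sketch below is a convenience variant of the dialogue above, written against a scratch directory so it can be tried safely (replace &amp;lt;code&amp;gt;keydir&amp;lt;/code&amp;gt; with your real &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory when using it):&lt;br /&gt;

```shell
# Non-interactive variant of the ssh-keygen dialogue shown above.
# -q: quiet; -N '': empty passphrase; -f: key file name.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 4096 -N '' -f "$keydir/m100_id_rsa"
ls "$keydir/m100_id_rsa" "$keydir/m100_id_rsa.pub"
```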
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that must be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures that we will use, batch and interactive, are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since they are meant to be run in serial from the command line. With the command below, you can open an interactive session of 1 hour to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 1:&lt;br /&gt;
 $ export OMP_NUM_THREADS=1&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
--&amp;gt; &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we do also AlAS part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we do also AlAS part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6596</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6596"/>
		<updated>2023-05-18T13:13:53Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in several ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in your &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that must be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures that we will use, batch and interactive, are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since they are meant to be run in serial from the command line. With the command below, you can open an interactive session of 1 hour to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
Finally, set the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environment variable to 1:&lt;br /&gt;
 $ export OMP_NUM_THREADS=1&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide whether we also do AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide whether we also do AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6594</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6594"/>
		<updated>2023-05-18T10:37:47Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* DAY 1 - Monday, 22 May */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; located inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials during this school can be run in serial; only some need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are better understood when run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial from the command line. With the command below, you can open a 1-hour interactive session to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide whether we also do AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide whether we also do AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6593</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6593"/>
		<updated>2023-05-18T10:36:55Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; located inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials during this school can be run in serial; only some need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are better understood when run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial from the command line. With the command below, you can open a 1-hour interactive session to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory and does not need &amp;lt;code&amp;gt;spectrum_mpi&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open the interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we do also AlAS part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we do also AlAS part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6592</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6592"/>
		<updated>2023-05-18T10:33:04Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* DAY 1 - Monday, 22 May */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command, replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, this way you have to type your password each time you connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing your password each time you connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials during this school can be run in serial, except for some that need to be executed on multiple processors. Generally, Slurm batch jobs are submitted using a script, but the tutorials here are better understood if run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial from the command line. With the command below, you can open a 1-hour interactive session to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Now that you have all the files, you may open an interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above and proceed with the tutorials.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we do also AlAS part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we do also AlAS part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6591</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6591"/>
		<updated>2023-05-18T10:32:17Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* DAY 1 - Monday, 22 May */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command, replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, this way you have to type your password each time you connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing your password each time you connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials during this school can be run in serial, except for some that need to be executed on multiple processors. Generally, Slurm batch jobs are submitted using a script, but the tutorials here are better understood if run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial from the command line. With the command below, you can open a 1-hour interactive session to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the tutorial files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Now that you have all the tutorial files, you may open an interactive job session with &amp;lt;code&amp;gt;salloc&amp;lt;/code&amp;gt; as explained above.&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we do also AlAS part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we do also AlAS part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6590</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6590"/>
		<updated>2023-05-18T10:30:07Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* DAY 1 - Monday, 22 May */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command, replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, this way you have to type your password each time you connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing your password each time you connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
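&lt;br /&gt;
The same alias also works with the other tools that read &amp;lt;code&amp;gt;.ssh/config&amp;lt;/code&amp;gt;, for example &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt; for copying files (the file name and remote path below are just placeholders):&lt;br /&gt;
 $ scp localfile.txt m100:&amp;lt;remote_path&amp;gt;&lt;br /&gt;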
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial; only a few need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow if run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority are meant to be run in serial from the command line. With the command below, you can open a 1-hour interactive session to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To download the files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we do also AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we do also AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6589</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6589"/>
		<updated>2023-05-18T10:28:56Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* DAY 1 - Monday, 22 May */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
To access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in several ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, with this method you have to type your password each time you connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an SSH key pair to avoid typing your password each time you connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial; only a few need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow if run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority are meant to be run in serial from the command line. With the command below, you can open a 1-hour interactive session to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To download the files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir YAMBO_TUTORIALS&lt;br /&gt;
 $ cd YAMBO_TUTORIALS&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 $ ls&lt;br /&gt;
 hBN-2D.tar.gz  hBN.tar.gz&lt;br /&gt;
 $ tar -xvf hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 $ tar -xvf hBN.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we do also AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we do also AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6588</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6588"/>
		<updated>2023-05-18T10:25:25Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* DAY 1 - Monday, 22 May */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
To access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in several ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, with this method you have to type your password each time you connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an SSH key pair to avoid typing your password each time you connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. In any case, you will find ready-to-use batch scripts in the locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial from the command line. With the command below you can open a one-hour interactive session to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; manually, as in the batch script above. Please note that the serial version of the code is in a different directory:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To get the files needed for the following tutorials, follow these steps:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir DAY1&lt;br /&gt;
 $ cd DAY1&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
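The archives are downloaded above but still need to be unpacked before starting. A minimal sketch follows; the local stand-in archive and the &amp;lt;code&amp;gt;_demo&amp;lt;/code&amp;gt; path are illustrative only, so the commands run anywhere, while on the cluster you would apply &amp;lt;code&amp;gt;tar -xzf&amp;lt;/code&amp;gt; directly to the downloaded files:&lt;br /&gt;

```shell
# Sketch: unpack a tutorial archive. A stand-in archive is created locally
# so the commands are runnable anywhere; on the cluster, only the
# `tar -xzf` step is needed, applied to hBN.tar.gz and hBN-2D.tar.gz.
mkdir -p _demo/hBN && echo "placeholder" > _demo/hBN/README
tar -czf hBN.tar.gz -C _demo hBN   # stand-in for the downloaded archive
tar -xzf hBN.tar.gz                # extracts the contents into ./hBN/
ls hBN
```

Each archive unpacks into its own directory (here &amp;lt;code&amp;gt;hBN/&amp;lt;/code&amp;gt;), from which the tutorial steps are then run.&lt;br /&gt;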
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we also do AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we also do AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6587</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6587"/>
		<updated>2023-05-18T10:24:06Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* DAY 1 - Monday, 22 May */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access computational resources provided by CINECA you need your personal username and password that were sent you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command, replacing the username with your own:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, this way you have to type your password each time you connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an SSH key pair to avoid typing the password each time you connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in your &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces available to you on M100, which can be accessed through environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account to which the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. In any case, you will find ready-to-use batch scripts in the locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial from the command line. With the command below you can open a one-hour interactive session to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; manually, as in the batch script above. Please note that the serial version of the code is in a different directory:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First, get the files needed for the following tutorials:&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
 $ mkdir DAY1&lt;br /&gt;
 $ cd DAY1&lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN.tar.gz                         100%[================================================================&amp;gt;]  10.81M  52.6MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 $ wget https://media.yambo-code.eu/educational/tutorials/files/hBN-2D.tar.gz&lt;br /&gt;
 ...&lt;br /&gt;
 Saving to: ‘hBN-2D.tar.gz’&lt;br /&gt;
 &lt;br /&gt;
 hBN-2D.tar.gz                      100%[================================================================&amp;gt;]   8.56M  46.7MB/s    in 0.2s    &lt;br /&gt;
 ...&lt;br /&gt;
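The archives are downloaded above but still need to be unpacked before starting. A minimal sketch follows; the local stand-in archive and the &amp;lt;code&amp;gt;_demo&amp;lt;/code&amp;gt; path are illustrative only, so the commands run anywhere, while on the cluster you would apply &amp;lt;code&amp;gt;tar -xzf&amp;lt;/code&amp;gt; directly to the downloaded files:&lt;br /&gt;

```shell
# Sketch: unpack a tutorial archive. A stand-in archive is created locally
# so the commands are runnable anywhere; on the cluster, only the
# `tar -xzf` step is needed, applied to hBN.tar.gz and hBN-2D.tar.gz.
mkdir -p _demo/hBN && echo "placeholder" > _demo/hBN/README
tar -czf hBN.tar.gz -C _demo hBN   # stand-in for the downloaded archive
tar -xzf hBN.tar.gz                # extracts the contents into ./hBN/
ls hBN
```

Each archive unpacks into its own directory (here &amp;lt;code&amp;gt;hBN/&amp;lt;/code&amp;gt;), from which the tutorial steps are then run.&lt;br /&gt;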
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we also do AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we also do AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6586</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6586"/>
		<updated>2023-05-18T10:09:22Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access computational resources provided by CINECA you need your personal username and password that were sent you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command, replacing the username with your own:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, this way you have to type your password each time you connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an SSH key pair to avoid typing the password each time you connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in your &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces available to you on M100, which can be accessed through environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account to which the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. In any case, you will find ready-to-use batch scripts in the locations specified during the tutorials.&lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial from the command line. With the command below you can open a one-hour interactive session to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to manually load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide whether we also do AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide whether we also do AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6585</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6585"/>
		<updated>2023-05-17T15:57:09Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
To access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, this way you have to type your password each time you connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials during this school can be run in serial, except for some that need to be executed on multiple processors. Generally, Slurm batch jobs are submitted using a script, but the tutorials here are better understood if run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial from the command line. With the command below, you can open a one-hour interactive session to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
To close the interactive session when you have finished, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide whether we also do AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide whether we also do AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6584</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6584"/>
		<updated>2023-05-17T15:56:39Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
To access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, this way you have to type your password each time you connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials during this school can be run in serial, except for some that need to be executed on multiple processors. Generally, Slurm batch jobs are submitted using a script, but the tutorials here are better understood if run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need to cancel your job, use:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial from the command line. With the command below, you can open a one-hour interactive session to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
To close the interactive session, log out of the compute node with the &amp;lt;code&amp;gt;exit&amp;lt;/code&amp;gt; command, and then cancel the job:&lt;br /&gt;
 $ exit&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide whether we also do AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide whether we also do AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6583</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6583"/>
		<updated>2023-05-17T15:53:29Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
To access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, this way you have to type your password each time you connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing your password each time you connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces available to you on M100, which can be accessed through environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
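These variables can be inspected directly on the login node. The snippet below is a minimal sketch, not tied to any specific account (the actual values depend on your username, and a variable that is not defined in your session is flagged as &amp;lt;code&amp;gt;(unset)&amp;lt;/code&amp;gt;):&lt;br /&gt;

```shell
# Print the main workspace variables; unset ones are flagged explicitly.
for v in HOME WORK CINECA_SCRATCH; do
  printf '%s=%s\n' "$v" "$(printenv "$v" || echo '(unset)')"
done
```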
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, but some need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial from the command line. With the command below, you can open a 1-hour interactive session to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide whether we also do AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide whether we also do AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6582</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6582"/>
		<updated>2023-05-17T15:49:24Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command, replacing &amp;lt;code&amp;gt;username&amp;lt;/code&amp;gt; with your own:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, this way you have to type your password each time you connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing your password each time you connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces available to you on M100, which can be accessed through environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, but some need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial from the command line. With the command below, you can open a 1-hour interactive session to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide whether we also do AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide whether we also do AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6581</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6581"/>
		<updated>2023-05-17T15:49:03Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* General instructions to run tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access the computational resources provided by CINECA, you need the personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command, replacing &amp;lt;code&amp;gt;username&amp;lt;/code&amp;gt; with your own:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, this way you have to type your password each time you connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing your password each time you connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces available to you on M100, which can be accessed through environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, but some need to be executed on multiple processors. Slurm batch jobs are generally submitted using a script, but the tutorials here are easier to follow when run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                 # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial from the command line. With the command below, you can open a 1-hour interactive session to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide whether we also do AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide whether we also do AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6580</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6580"/>
		<updated>2023-05-17T15:48:29Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* Connect to the cluster using ssh */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access computational resources provided by CINECA you need your personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; located inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: it&#039;s the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: it&#039;s the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: it&#039;s the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials during this school can be run in serial, except for some that need to be executed on multiple processors. Generally, Slurm batch jobs are submitted using a script, but the tutorials here are better understood if run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                  # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial from the command line. With the command below, you can open a 1-hour interactive session to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide whether we also do AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide whether we also do AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6579</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6579"/>
		<updated>2023-05-17T15:48:11Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* Connect to the cluster using ssh */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access computational resources provided by CINECA you need your personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; located inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces you have available on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: it&#039;s the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: it&#039;s the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: it&#039;s the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and FileSystems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials during this school can be run in serial, except for some that need to be executed on multiple processors. Generally, Slurm batch jobs are submitted using a script, but the tutorials here are better understood if run interactively. The two procedures that we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                  # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. However, you will find ready-to-use batch scripts in locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial from the command line. With the command below, you can open a 1-hour interactive session to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide whether we also do AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide whether we also do AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
	<entry>
		<id>https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6578</id>
		<title>Rome 2023</title>
		<link rel="alternate" type="text/html" href="https://wiki.yambo-code.eu/wiki/index.php?title=Rome_2023&amp;diff=6578"/>
		<updated>2023-05-17T15:48:01Z</updated>

		<summary type="html">&lt;p&gt;Matteo.dalessio: /* Connect to the cluster using ssh */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A general description of the goal(s) of the school can be found on the [https://www.yambo-code.eu/2023/02/18/yambo-school-2023/ Yambo main website]&lt;br /&gt;
&lt;br /&gt;
== Use CINECA computational resources ==&lt;br /&gt;
Yambo tutorials will be run on the MARCONI100 (M100) accelerated cluster. You can find info about M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide here].&lt;br /&gt;
In order to access computational resources provided by CINECA you need your personal username and password that were sent to you by the organizers.&lt;br /&gt;
&lt;br /&gt;
=== Connect to the cluster using ssh ===&lt;br /&gt;
&lt;br /&gt;
You can access M100 via the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; protocol in different ways.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using username and password &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Use the following command replacing your username:&lt;br /&gt;
 $ ssh username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
However, in this way you have to type your password each time you want to connect.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Connect using ssh key &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can set up an ssh key pair to avoid typing the password each time you want to connect to M100. To do so, go to your &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory (usually located in the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory):&lt;br /&gt;
 $ cd $HOME/.ssh&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have this directory, you can create it with &amp;lt;code&amp;gt;mkdir $HOME/.ssh&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you are in the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, run the &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; command to generate a private/public key pair:&lt;br /&gt;
 $ ssh-keygen&lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key: m100_id_rsa&lt;br /&gt;
 Enter passphrase (empty for no passphrase): &lt;br /&gt;
 Enter same passphrase again: &lt;br /&gt;
 Your identification has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
 Your public key has been saved in &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
 The key&#039;s randomart image is:&lt;br /&gt;
 &amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
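Note that ssh clients typically refuse to use a private key that is readable by other users, so it is good practice to restrict its permissions (the exact requirements depend on your ssh client):&lt;br /&gt;
 $ chmod 600 m100_id_rsa&lt;br /&gt;
&lt;br /&gt;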
Now you need to copy the &#039;&#039;&#039;public&#039;&#039;&#039; key to M100. You can do that with the following command (for this step you need to type your password):&lt;br /&gt;
 $ ssh-copy-id -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa.pub &amp;lt;username&amp;gt;@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
Once the public key has been copied, you can connect to M100 without having to type the password using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option:&lt;br /&gt;
 $ ssh -i &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa username@login.m100.cineca.it&lt;br /&gt;
&lt;br /&gt;
To simplify things even further, you can paste the following lines into a file named &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; inside the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; directory, adjusting the username and path:&lt;br /&gt;
 Host m100 &lt;br /&gt;
  HostName login.m100.cineca.it&lt;br /&gt;
  User username&lt;br /&gt;
  IdentityFile &amp;lt;your_.ssh_dir&amp;gt;/m100_id_rsa&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;config&amp;lt;/code&amp;gt; file set up, you can connect simply with&lt;br /&gt;
 $ ssh m100&lt;br /&gt;
&lt;br /&gt;
=== General instructions to run tutorials ===&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is useful to know the different workspaces available to you on M100, which can be accessed using environment variables. The main ones are:&lt;br /&gt;
* &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;home&amp;lt;/code&amp;gt; directory associated with your username;&lt;br /&gt;
* &amp;lt;code&amp;gt;$WORK&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; directory associated with the account where the computational resources dedicated to this school are allocated;&lt;br /&gt;
* &amp;lt;code&amp;gt;$CINECA_SCRATCH&amp;lt;/code&amp;gt;: the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory associated with your username.&lt;br /&gt;
You can find more details about storage and file systems [https://wiki.u-gov.it/confluence/display/SCAIUS/UG2.5%3A+Data+storage+and+FileSystems here].&lt;br /&gt;
&lt;br /&gt;
Please don&#039;t forget to &#039;&#039;&#039;run all tutorials in your scratch directory&#039;&#039;&#039;:&lt;br /&gt;
 $ echo $CINECA_SCRATCH&lt;br /&gt;
 /m100_scratch/userexternal/username&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Computational resources on M100 are managed by the job scheduling system [https://slurm.schedmd.com/overview.html Slurm]. Most of the Yambo tutorials in this school can be run in serial, except for a few that need to be executed on multiple processors. Generally, Slurm batch jobs are submitted using a script, but the tutorials here are easier to follow if run interactively. The two procedures we will use to submit interactive and non-interactive jobs are explained below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Run a job using a batch script &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for the tutorials and examples that need to be run in parallel. In these cases you need to submit the job using a batch script &amp;lt;code&amp;gt;job.sh&amp;lt;/code&amp;gt;, whose generic structure is the following:&lt;br /&gt;
 $ more job.sh&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --account=tra23_Yambo           # Charge resources used by this job to specified account&lt;br /&gt;
 #SBATCH --time=00:10:00                  # Set a limit on the total run time of the job allocation in hh:mm:ss&lt;br /&gt;
 #SBATCH --job-name=JOB                  # Specify a name for the job allocation&lt;br /&gt;
 #SBATCH --partition=m100_usr_prod       # Request a specific partition for the resource allocation&lt;br /&gt;
 #          &lt;br /&gt;
 #SBATCH --nodes=&amp;lt;N&amp;gt;                     # Number of nodes to be allocated for the job&lt;br /&gt;
 #SBATCH --ntasks-per-node=&amp;lt;nt&amp;gt;          # Number of MPI tasks invoked per node&lt;br /&gt;
 #SBATCH --ntasks-per-socket=&amp;lt;nt/2&amp;gt;      # Tasks invoked on each socket&lt;br /&gt;
 #SBATCH --cpus-per-task=&amp;lt;nc&amp;gt;            # Number of OMP threads per task&lt;br /&gt;
 &lt;br /&gt;
 module purge&lt;br /&gt;
 module load hpc-sdk/2022--binary&lt;br /&gt;
 module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-mpi/bin:$PATH&lt;br /&gt;
 &lt;br /&gt;
 export OMP_NUM_THREADS=&amp;lt;nc&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mpirun --rank-by core -np ${SLURM_NTASKS} \&lt;br /&gt;
        yambo -F &amp;lt;input&amp;gt; -J &amp;lt;output&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please note that these instructions must be compatible with the specific M100 [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-SystemArchitecture architecture] and [https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+MARCONI100+UserGuide#UG3.2:MARCONI100UserGuide-Accounting accounting] systems. The complete list of Slurm options can be found [https://slurm.schedmd.com/sbatch.html here]. In any case, you will find ready-to-use batch scripts in the locations specified during the tutorials. &lt;br /&gt;
&lt;br /&gt;
To submit the job, use the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ sbatch job.sh&lt;br /&gt;
 Submitted batch job &amp;lt;JOBID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To check the job status, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
              &amp;lt;...&amp;gt; m100_usr_      JOB username  R       0:01    &amp;lt;N&amp;gt; &amp;lt;...&amp;gt;&lt;br /&gt;
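&lt;br /&gt;
If you need to cancel a submitted job (for example, after a mistake in the input), you can use the standard Slurm command &amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; with the job ID reported by &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt;:&lt;br /&gt;
 $ scancel &amp;lt;JOBID&amp;gt;&lt;br /&gt;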
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; - Open an interactive session &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This procedure is suggested for most of the tutorials, since the majority of them are meant to be run in serial from the command line. With the command below, you can open a one-hour interactive session to execute commands in serial (complete documentation [https://slurm.schedmd.com/salloc.html here]):&lt;br /&gt;
 $ salloc --account=tra23_Yambo --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --partition=m100_usr_prod --qos=m100_qos_dbg --time=01:00:00&lt;br /&gt;
 salloc: Granted job allocation 10164647&lt;br /&gt;
 salloc: Waiting for resource configuration&lt;br /&gt;
 salloc: Nodes r256n01 are ready for job&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; you can see that there is now a job running:&lt;br /&gt;
 $ squeue -u username&lt;br /&gt;
              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
           10164647 m100_usr_ interact username  R       0:02      1 r256n01&lt;br /&gt;
&lt;br /&gt;
To run the tutorial, &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; into the node specified by the job allocation and &amp;lt;code&amp;gt;cd&amp;lt;/code&amp;gt; to your scratch directory:&lt;br /&gt;
 $ ssh r256n01&lt;br /&gt;
 $ cd $CINECA_SCRATCH&lt;br /&gt;
&lt;br /&gt;
Then, you need to load &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; as in the batch script above. Please note that the serial version of the code is in a different directory:&lt;br /&gt;
 $ module purge&lt;br /&gt;
 $ module load hpc-sdk/2022--binary&lt;br /&gt;
 $ module load spectrum_mpi/10.4.0--binary&lt;br /&gt;
 $ export PATH=/m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin:$PATH&lt;br /&gt;
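&lt;br /&gt;
As an optional check, you can ask the shell which &amp;lt;code&amp;gt;yambo&amp;lt;/code&amp;gt; executable it finds first in the &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt;; it should point to the serial directory set above:&lt;br /&gt;
 $ which yambo&lt;br /&gt;
 /m100_work/tra23_Yambo/softwares/YAMBO/5.2-ser/bin/yambo&lt;br /&gt;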
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May === &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;16:15 - 18:30 From the DFT ground state to the complete setup of a Many Body calculation using Yambo&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[First steps: walk through from DFT(standalone)|First steps: Initialization and more ]]&lt;br /&gt;
* [[Next steps: RPA calculations (standalone)|Next steps: RPA calculations ]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 1 (BN)]]&lt;br /&gt;
* [[Yambopy tutorial: band structures|Band structures with yambopy: Tutorial 2 (Iron)]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 2 - Tuesday, 23 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 A tour through GW simulation in a complex material (from the blackboard to numerical computation: convergence, algorithms, parallel usage)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[GW tutorial Rome 2023 | GW computations in practice: how to obtain the quasi-particle band structure of a bulk material ]]&lt;br /&gt;
&lt;br /&gt;
=== DAY 3 - Wednesday, 24 May ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;17:00 - 18:30 Real-time Bethe-Salpeter equation&#039;&#039;&#039; Fulvio Paleari (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_rt)&lt;br /&gt;
&lt;br /&gt;
=== DAY 4 - Thursday, May 25 ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;14:00 - 16:30 Real-time approach with the time-dependent Berry phase&#039;&#039;&#039; Myrta Gruning (), Davide Sangalli (CNR-ISM, Italy)&lt;br /&gt;
&lt;br /&gt;
* [[Linear response from real time simulations]] (extract from this the part with yambo_nl)&lt;br /&gt;
* [[Non-linear response TD-HSEX: hBN]] (to be created, inputs from Ignacio)&lt;br /&gt;
&lt;br /&gt;
* [[Real time approach to non-linear response]] (to decide if we also do AlAs part 1)&lt;br /&gt;
* [[Correlation effects in the non-linear response]] (to decide if we also do AlAs part 2)&lt;br /&gt;
&lt;br /&gt;
=== DAY 5 - Friday, 26 May ===&lt;br /&gt;
&lt;br /&gt;
== Lectures ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DAY 1 - Monday, 22 May ===&lt;/div&gt;</summary>
		<author><name>Matteo.dalessio</name></author>
	</entry>
</feed>