Tutorials


Tutorial files and use of CECAM cluster

To follow the tutorials, you must first download or copy the data files for each system. Files are distributed as gzipped tarballs; always extract them in the same place, since they unpack into a common directory tree.
The available systems are hBN.tar.gz and hBN-2D.tar.gz; you will need both.
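
If you are following the tutorials on your own machine, the steps are sketched below (the download URL is a placeholder; take the real one from the Yambo web site):

 $ wget http://www.yambo-code.org/PLACEHOLDER/hBN.tar.gz       (placeholder URL)
 $ wget http://www.yambo-code.org/PLACEHOLDER/hBN-2D.tar.gz    (placeholder URL)
 $ tar -zxvf hBN.tar.gz
 $ tar -zxvf hBN-2D.tar.gz
 $ ls
 YAMBO_TUTORIALS

Both tarballs unpack into the same YAMBO_TUTORIALS tree, which is why they must be extracted in the same place.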

CECAM students: The tutorials will be run on the CECAM Linux cluster.

  • If connecting from the CECAM iMac, your username is indicated on the terminal (tutoXY).

Standard tutorials: cecampc4 cluster

Log into the cluster via:

ssh -Y tutoXY@cecampc4.epfl.ch

replacing XY with the appropriate number.

Next you must log into the Linux cluster directly, using the node node0RS associated with your username, and set up the tutorial as follows:

$ ssh -Y node0RS 
$ pwd
/nfs_home/tutoXY
$ which pw.x yambo
/nfs_home/tutoadmin/bin/pw.x
/nfs_home/tutoadmin/bin/yambo
$ cd /home/scratch/                 (NB: do not run on the /nfs_home partition!)
$ mkdir yambo_YOUR_NAME             (there are more participants than accounts!)
$ cd yambo_YOUR_NAME
$ cp /nfs_home/tutoadmin/yambo-2017/tutorials/hBN.tar.gz .
$ cp /nfs_home/tutoadmin/yambo-2017/tutorials/hBN-2D.tar.gz  .
$ tar -zxvf hBN.tar.gz 
$ tar -zxvf hBN-2D.tar.gz   
$ ls 
YAMBO_TUTORIALS

If you logged in with "ssh -Y", X-forwarding should work and you can plot with gnuplot. If not, try setting the DISPLAY variable on your local machine (e.g. export DISPLAY=:0.0); it may also help to keep one terminal open for plotting and another for running the codes. If all else fails, try the cool gnuplot trick gnuplot> set terminal dumb.
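
For instance, a quick text-mode plot looks like this (the data file name is only an illustration; use one of the output files produced during the tutorial):

 gnuplot> set terminal dumb
 gnuplot> plot 'data.dat' using 1:2 with lines

This renders the plot as ASCII art directly in the terminal, with no X-forwarding needed.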

Parallel tutorial: bellatrix cluster

This cluster is equipped with 16-core nodes based on Intel processors. A tutorial-dedicated queue (cecam_course) allows participants to access up to 20 nodes.

First log into the cecampc4 cluster via:

ssh -Y tutoXY@cecampc4.epfl.ch

replacing XY with the appropriate number.

Next, move into the bellatrix cluster via:

ssh -Y cecam.schoolXY@bellatrix.epfl.ch

replacing XY with the appropriate number.


The Unix environment (already set up by default) can be obtained by loading the following modules:

 module load gcc/5.3.0
 module load openmpi/1.10.2
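
To check that the environment is in place, you can for instance list the loaded modules and verify that the compiler and MPI launcher are found:

 $ module list
 $ which gcc mpirun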


In order to submit to the cecam_course queue use:

 srun  -N 2 -n 8 --reservation=cecam_course  job.sh

where job.sh is a submission script. In the example above, -N requests 2 nodes and -n requests 8 MPI tasks in total (i.e. 4 per node); OMP_NUM_THREADS is then set to 4, so that 4 tasks x 4 threads fill the 16 cores of each node.
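
As a reference, here is a minimal sketch of what job.sh might contain (the yambo input file name is a placeholder; adapt it to your actual run):

 #!/bin/bash
 # Minimal sketch: each of the 8 MPI tasks launched by srun runs this script.
 module load gcc/5.3.0
 module load openmpi/1.10.2
 export OMP_NUM_THREADS=4          # 4 tasks/node x 4 threads = 16 threads per node
 exec yambo -F yambo.in            # input file name is a placeholder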


Full tutorials

Whether you are just starting out with Yambo or are an experienced user, we recommend that you complete the following tutorials before using Yambo on your own system. Each tutorial is fairly self-contained, although some require that you have completed previous ones.

Introduction

Quasiparticles in the GW approximation

Using Yambo in Parallel

  • Parallel GW: strategies for running Yambo in parallel
  • GW on diamond: use Yambo in parallel to converge a GW calculation for diamond

Excitons and the Bethe-Salpeter Equation

Yambo-python driver



Modules

An alternative way to learn Yambo is through a more detailed look at our documentation modules. These focus on the input parameters, run-time behaviour, and underlying physics of each yambo task or runlevel. Although they can be followed separately, they are best followed as part of the more structured tutorials given above. The modules are grouped as follows:

Other stuff and old stuff


Now: Tutorials Home | Next: First steps