Tutorials

Tutorial files and use of CECAM cluster

To follow the tutorials, you must first download or copy the data files for each system. Files are distributed as gzipped tarballs; always extract them in the same place. The available systems are hBN (hBN.tar.gz) and hBN-2D (hBN-2D.tar.gz), and you will need both (the copy-and-extract commands are shown in the session below).

CECAM students: The tutorials will be run on the CECAM Linux cluster.

  • If connecting from the CECAM iMac, your username is indicated on the terminal (tutoXY).

Standard tutorials: cecampc4 cluster

Log into the cluster via:

ssh -Y tutoXY@cecampc4.epfl.ch

replacing XY with the appropriate number.

Next, log into the Linux cluster directly, using the node node0RS associated with your username (link), and set up the tutorial as follows:

$ ssh -Y node0RS                    (use the node number assigned to you)
$ pwd
/nfs_home/tutoXY
$ which pw.x yambo                  (check that the codes are in your PATH)
/nfs_home/tutoadmin/bin/pw.x
/nfs_home/tutoadmin/bin/yambo
$ cd /home/scratch/                 (NB: do not run on the /nfs_home partition!)
$ mkdir yambo_YOUR_NAME             (there are more participants than accounts!)
$ cd yambo_YOUR_NAME
$ cp /nfs_home/tutoadmin/yambo-2017/tutorials/hBN.tar.gz .
$ cp /nfs_home/tutoadmin/yambo-2017/tutorials/hBN-2D.tar.gz  .
$ tar -zxvf hBN.tar.gz 
$ tar -zxvf hBN-2D.tar.gz   
$ ls 
YAMBO_TUTORIALS

If you used "ssh -Y", X-forwarding (needed for plotting with gnuplot) should work. If not, try setting the DISPLAY variable on your local machine (e.g. export DISPLAY=:0.0); it may also help to keep one terminal open for plotting and another for running the codes. If all else fails, try the cool gnuplot trick of plotting directly in the terminal with set terminal dumb.
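
For example, a quick way to inspect a two-column output file without X-forwarding (the filename here is hypothetical, for illustration only):

gnuplot> set terminal dumb                    # render plots as ASCII art in the terminal
gnuplot> plot 'o.eps' using 1:2 with lines    # hypothetical two-column output file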

Parallel tutorial: bellatrix cluster

This cluster is equipped with 16-core nodes based on Intel processors. A tutorial-dedicated queue (cecam_course) allows participants to access up to 20 nodes.

First log into the cecampc4 cluster via:

ssh -Y tutoXY@cecampc4.epfl.ch

replacing XY with the appropriate number.

Next, log into the bellatrix cluster via:

ssh -Y cecam.schoolXY@bellatrix.epfl.ch

replacing XY with the appropriate number.

The Intel Unix environment can be obtained by loading the following modules:

module purge
module load intel/16.0.3
module load intelmpi/5.1.3
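
To check that the environment is set up as expected, you can for instance run:

module list                  # should list intel/16.0.3 and intelmpi/5.1.3
which mpirun                 # the MPI launcher should now come from the loaded Intel MPI module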

To submit jobs to this queue, use the script job.sh you will find in the tarball provided for the tutorials. You just need to run

sbatch job.sh
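
For reference, here is a minimal sketch of what such a submission script might contain; the walltime, input file name, and job label are hypothetical, while the node and task counts illustrate one way to fill the 16-core nodes (2 nodes, 8 MPI tasks, 4 OpenMP threads per task):

#!/bin/bash
#SBATCH --nodes=2                    # nodes requested
#SBATCH --ntasks=8                   # total MPI tasks (4 per node)
#SBATCH --cpus-per-task=4            # OpenMP threads per task (4 tasks x 4 threads = 16 cores)
#SBATCH --reservation=cecam_course   # tutorial-dedicated queue
#SBATCH --time=01:00:00              # hypothetical walltime

module purge
module load intel/16.0.3
module load intelmpi/5.1.3

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun yambo -F yambo.in -J parallel_run    # hypothetical input file and job label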

Full tutorials

Whether you are just starting out with Yambo or are an experienced user, we recommend that you complete the following tutorials before trying to use Yambo on your own system. Each tutorial is fairly self-contained, although some require that you have completed previous ones.

Day 1: Introduction

Day 2: Quasiparticles in the GW approximation

Day 3: Using Yambo in Parallel

  • Parallel GW: strategies for running Yambo in parallel
  • GW on diamond: use Yambo in parallel to converge a GW calculation for diamond

Day 4: Excitons and the Bethe-Salpeter Equation

Day 5: Yambo-python driver

Modules

An alternative way to learn Yambo is through a more detailed look at our documentation modules. These focus on the input parameters, run-time behaviour, and underlying physics behind each yambo task or runlevel. Although they can be consulted separately, they are best followed as part of the more structured tutorials given above.

Other stuff and old stuff


Prev: | Now: Tutorials Home | Next: First steps