Using the IRIDIA Cluster

Revision as of 11:23, 4 December 2006

Cluster composition

Currently the IRIDIA cluster is composed of 2 servers (majorana and polyphemus) and 32 rack units (computational nodes). Each rack unit has 2 AMD Opteron244 CPUs running at 1.75GHz and 2GB of RAM (nodes r02 to r17 have 4 modules of 512MB each, nodes r18 to r33 have 8 modules of 256MB each; all modules are 400MHz DDR ECC REG DIMMs). In total the cluster has 64 CPUs dedicated to computations and 3 CPUs for administrative purposes.


COMPLEX_NAME: opteron244

- AMD Opteron244 (2 CPUs @ 1.75GHz)

nodes: r02, r03, r04, r05, r06, r07, r08, r09, r10, r11, r12, r13, r14, r15, r16, r17, r18, r19, r20, r21, r22, r23, r24, r25, r26, r27, r28, r29, r30, r31, r32, r33


Queues

Each computational node has the following 4 queues:


  • <machine>.short: at most 2 jobs can run concurrently in this queue, at nice level 2. Each job may use at most 24h of CPU time (actual execution time of the program, not counting the time the system spends on multitasking, etc.). A job that is still running past the 24h limit first receives a SIGUSR1 signal and, some time later, a SIGKILL that terminates it.
  • <machine>.medium: at most 2 jobs can run concurrently in this queue, at nice level 3 (lower priority than the short jobs). Each job may use at most 72h of CPU time (measured as above). A job that is still running past the 72h limit first receives a SIGUSR1 signal and, some time later, a SIGKILL that terminates it.
  • <machine>.long: only 1 job at a time can run in this queue, at nice level 3 (lower priority than the short jobs). The job may use at most 168h of CPU time (measured as above). A job that is still running past the 168h limit first receives a SIGUSR1 signal and, some time later, a SIGKILL that terminates it.
  • <machine>.par: only 1 job at a time can run in this queue, at nice level 3 (lower priority than the short jobs).

Summarizing: up to 6 jobs can run concurrently on each node (distributed over its 2 CPUs), with an average of about 341MB of RAM available per job. The queueing system can run at most 192 concurrent jobs on the whole cluster. A sketch of how a job script can catch the SIGUSR1 warning before the SIGKILL arrives is shown below.
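
Catching SIGUSR1 makes it possible to save partial results before the SIGKILL arrives. The following is only a minimal sketch: my_program and partial_results are placeholder names, not part of the cluster setup.

#!/bin/bash
#$ -N test_trap                  # placeholder job name
#$ -l opteron244
#$ -l shorttime
#$ -cwd

# When the queue sends SIGUSR1 (CPU-time limit almost reached),
# copy whatever partial results exist back to the home directory.
trap 'cp -r partial_results /home/$USER/ ; exit 1' USR1

# Run the computation in the background and wait for it, so that
# the shell can react to the signal while the program is running.
./my_program &                   # placeholder command
wait $!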


YOU HAVE TO DESIGN YOUR COMPUTATIONS IN SUCH A WAY THAT EACH SINGLE JOB DOESN'T RUN FOR MORE THAN 7 DAYS (of CPU time).


THE SCHEDULER WILL NOT PUT INTO EXECUTION MORE THAN 64 JOBS OF THE SAME USER AT THE SAME TIME. IF YOU SUBMIT MORE THAN 64 JOBS, AT MOST 64 OF THEM WILL BE RUNNING AT ANY GIVEN TIME.

How to submit a job

To submit a job that lasts up to 1 day you have to specify -l COMPLEX_NAME -l shorttime in the shell script passed to the qsub command, as in this example:

#!/bin/bash
#$ -N test_short        # job name
#$ -l opteron244        # complex (node type) to run on
#$ -l shorttime         # queue: max 24h of CPU time
#$ -cwd                 # run in the directory from which the job was submitted

# the command(s) to execute go here, e.g.:
# ./my_program
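
Assuming the script above is saved as test_short.sh (a placeholder name), it can be submitted and monitored like this:

qsub test_short.sh      # submit the job to the queueing system
qstat                   # list your jobs and their state
qdel JOB_ID             # remove a job if needed (JOB_ID is the id printed by qsub)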


To submit a job that lasts up to 3 days you have to specify -l COMPLEX_NAME -l mediumtime in the shell script passed to the qsub command, as in this example:

#!/bin/bash
#$ -N test_medium
#$ -l opteron244
#$ -l mediumtime
#$ -cwd


To submit a job that lasts up to 7 days you have to specify -l COMPLEX_NAME -l longtime in the shell script passed to the qsub command, as in this example:

#!/bin/bash
#$ -N test_long
#$ -l opteron244
#$ -l longtime
#$ -cwd


To submit a job that runs in the parallel environment you have to specify -l COMPLEX_NAME -l parallel -pe PARALLEL_ENV NUM_PROCESS in the shell script passed to the qsub command, as in this example:

#!/bin/bash
#$ -N test_parallel
#$ -l opteron244
#$ -l parallel          # run in the parallel queue (<machine>.par)
#$ -pe pvm 10           # request 10 slots in the "pvm" parallel environment
#$ -cwd
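
Inside a parallel job, Sun Grid Engine normally exposes the granted allocation through environment variables such as NSLOTS (number of granted slots) and PE_HOSTFILE (file listing the assigned nodes). A small sketch of how the script body could use them, with my_pvm_program as a placeholder:

echo "running on $NSLOTS slots"
cat "$PE_HOSTFILE"                 # one line per node with the slots granted there
./my_pvm_program -np $NSLOTS       # placeholder: pass the slot count to the program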

Submission tips for the cluster

If your job lasts less than 1 day, it does not matter which queue it ends up in, because no time constraint will be violated. In this case you may want it to go to the first available queue, whichever that is. To do so, simply remove the -l queue_name line from your script.


If your job lasts less than 1 day and you want it to run in either the short or the medium queue, no matter which of the two, specify -l shortmedium as the queue type.


If your job lasts less than 3 days and you want it to run in either the medium or the long queue, no matter which of the two, specify -l mediumlong as the queue type. An example script using one of these combined queue types is shown below.
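
For instance, a minimal sketch of a script for a sub-24h job that accepts either the short or the medium queue (test_either is a placeholder name):

#!/bin/bash
#$ -N test_either
#$ -l opteron244
#$ -l shortmedium       # accept either the short or the medium queue
#$ -cwd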

Programming tips for the cluster

If a job needs to read/write a lot and often, it is better to copy the input files to the /tmp directory (which is on the local hard drive of the node), write the output files there as well, and move them to the /home/user_name directory only when the computation is over. This way your job does not go through NFS for each read/write operation, which relieves majorana of some load (the /home partition is exported from there to all the nodes) and makes the job faster (Prasanna measured a speedup of 2-3x on his code).
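
A minimal sketch of this pattern is given below. Here my_program, input.dat and output.dat are placeholder names; JOB_ID is an environment variable that the queueing system normally sets for each job.

#!/bin/bash
#$ -N test_localio               # placeholder job name
#$ -l opteron244
#$ -l shorttime
#$ -cwd

# remember where the job was started (the -cwd directory under /home)
SUBMIT_DIR=$(pwd)

# job-private scratch directory on the node's local disk
WORKDIR=/tmp/$USER/$JOB_ID
mkdir -p "$WORKDIR"

# copy the input from NFS to the local disk and run there
cp "$SUBMIT_DIR/input.dat" "$WORKDIR/"
cd "$WORKDIR"
"$SUBMIT_DIR/my_program" input.dat > output.dat

# move the results back to the home directory and clean up
mv output.dat "$SUBMIT_DIR/"
cd "$SUBMIT_DIR"
rm -rf "$WORKDIR"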