Slurm sbatch Command and Slurm Job Submission Scripts

The command "sbatch" should be the default command for running batch jobs. 

With "sbatch" you can run simple batch jobs from a command line, or you can execute complicated jobs from a prepared batch script.

First, you should understand the basic options you can add to the sbatch command in order to request the right allocation of resources for your jobs:

Commonly used options for srun, sbatch and salloc:

-p partitionName

Submit the job to the partition partitionName

-o output.log

Write the job's output to output.log instead of slurm-%j.out in the current directory

-e error.log

Write the job's STDERR to error.log instead of the job output file (see -o above)


--mail-type=TYPE

Email the submitter on job state changes. Valid TYPE values are BEGIN, END, FAIL, REQUEUE and ALL (any state change).


--mail-user=userName

User to receive email notification of state changes (see --mail-type above)

-n N

--ntasks N

Set the number of processors (cores) to N (default 1); the cores will be chosen by SLURM

-N N

--nodes N

Set the number of nodes that will be part of the job. On each node, --ntasks-per-node processes will be started. If the option --ntasks-per-node is not given, 1 process per node will be started.

--ntasks-per-node N

How many tasks per allocated node to start (see -N above)

--cpus-per-task N

Needed for multithreaded (e.g. OpenMP) jobs. This option tells SLURM to allocate N cores per task; typically N should equal the number of threads the program spawns, i.e. it should be set to the same value as OMP_NUM_THREADS



-J jobName

--job-name jobName

Set the job name shown in the queue. The job name (limited to the first 24 characters) is also used in the emails sent to the user (see --mail-type above)


-w node1,node2,...

Restrict job to run on specific nodes only

-x node1,node2,...

Exclude specific nodes from job
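To show how several of the options above combine, here is a sketch of a submission script for a multithreaded (OpenMP) job. The job name, email address and program name are illustrative placeholders, not anything defined on Hive:

```shell
#!/bin/bash
#SBATCH -N 1                          # 1 node
#SBATCH --ntasks 1                    # a single task...
#SBATCH --cpus-per-task 8             # ...with 8 cores for its threads
#SBATCH -J omp_example                # hypothetical job name
#SBATCH --mail-type=END,FAIL          # email when the job ends or fails
#SBATCH --mail-user=user@example.com  # hypothetical email address

# Match the thread count to the cores SLURM allocated
# (falls back to 8 when run outside of SLURM)
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-8}
echo "running with $OMP_NUM_THREADS threads"
# ./my_openmp_program                 # hypothetical multithreaded program
```

When the file is submitted with sbatch, the #SBATCH lines are read as directives; when the file is executed directly, they are ordinary shell comments.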

Below is an example of running a simple batch job directly from the command line/terminal of Hive with an allocation of minimal resources (1 node and 1 CPU/core):

$ sbatch -N1 -n1 --wrap '. /etc/profile.d/ ; module load blast/2.2.30 ; blastn -query sinv_traA.fa -db sinv_genome.fa -out sinv_traA.blastn'
Submitted batch job 214726


-N1 is a request for 1 node

-n1 is a request for 1 CPU/core

. /etc/profile.d/ ; module load blast/2.2.30 is the part of the command that loads the software your job uses (in this example, the blast program)

blastn -query sinv_traA.fa -db sinv_genome.fa -out sinv_traA.blastn is your job command

* The job will start on the default partition, named hive1d. If your job is going to run for more than 1 day, you should add the -p option to your command and specify the hive7d or hiveunlim partition, which allows your job to run for up to 7 or 31 days, respectively.

Below is an example of running a simple job on the hive7d partition with a time limit of 5 days:

$ sbatch -N1 -n1 -p hive7d --time=5-00:00:00 --wrap '. /etc/profile.d/ ; module load blast/2.2.30 ; blastn -query sinv_traA.fa -db sinv_genome.fa -out sinv_traA.blastn'
Submitted batch job 214726

So basically you should execute the job according to this formula: [sbatch command] [allocation of resources] [--wrap] ['your job command']

The --wrap option has to come AFTER the allocation of the needed resources, not before. You have to use the --wrap option in order to execute jobs from the command line, because by default the sbatch command runs batch jobs from a batch script, not from the command line.
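One way to check that formula before actually submitting is to assemble its pieces into shell variables and echo the resulting command. This sketch uses echo instead of sbatch, so it can be run safely anywhere:

```shell
#!/bin/sh
# [allocation of resources] part of the formula
resources="-N1 -n1 -p hive7d --time=5-00:00:00"
# ['your job command'] part: one quoted string, as --wrap expects
jobcmd='module load blast/2.2.30 ; blastn -query sinv_traA.fa -db sinv_genome.fa -out sinv_traA.blastn'
# print the command that would be submitted; note that --wrap comes last
echo "sbatch $resources --wrap '$jobcmd'"
```

Once the printed command looks right, replace echo with nothing and run it for real on Hive.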

Running jobs from a prepared Slurm submission script:

The following is a typical Slurm submission script example.

* Please note that there are a few types of thin compute nodes (bees) on Hive. Dell compute nodes (bee-001-032) have 20 cores/CPUs, while HP compute nodes (bee-033-063) have 24 cores/CPUs.

#SBATCH --ntasks 20           # use 20 cores
#SBATCH --ntasks-per-node=20  # use 20 cpus per each node
#SBATCH --time 1-03:00:00     # set job time limit to 1 day and 3 hours
#SBATCH --partition hive1d    # partition name
#SBATCH -J my_job_name        # sensible name for the job

# load up the correct modules, if required
. /etc/profile.d/
module load openmpi/1.8.4 RAxML/8.1.15

# launch the code
mpirun ...
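For comparison, the command-line BLAST example shown earlier could also be written as a submission script. The job name and the -o log file below are illustrative choices, not required names:

```shell
#!/bin/bash
#SBATCH -N 1                 # 1 node
#SBATCH -n 1                 # 1 CPU/core
#SBATCH --partition hive7d   # allow the job to run longer than 1 day
#SBATCH --time 5-00:00:00    # 5-day time limit
#SBATCH -J blast_job         # illustrative job name
#SBATCH -o blast_job.log     # illustrative output file (instead of slurm-%j.out)

# load the software the job uses
. /etc/profile.d/
module load blast/2.2.30

# the job command itself
blastn -query sinv_traA.fa -db sinv_genome.fa -out sinv_traA.blastn
```

The file is then handed to sbatch rather than executed directly, as the next section explains.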

How to submit a job from a batch script

To submit this, run the following command (where myscript.sh stands for whatever you named your submission script):

$ sbatch myscript.sh
Warning: do not execute the script

The job submission script file is written to look like a bash shell script. However, you do NOT submit the job to the queue by executing the script.

In particular, the following is INCORRECT:

# this is the INCORRECT way to submit a job
$ ./myscript.sh  # wrong! this will not submit the job!

The correct way is noted above (sbatch followed by the script file name).

Please refer to the sbatch manual (man sbatch) for more helpful information about how to use the sbatch command the right way.