How to scatter jobs over vnodes?


#1

Hi folks,

How can I scatter (i.e. round-robin) multiple jobs over the vnodes?

With our current configuration, when I submit 5 jobs that each use only a small amount of a shared resource, they all get assigned to a single vnode:

  • vnode1: job1, job2, job3, job4, job5
  • vnode2: (vacant)
  • vnode3: (vacant)

I want them to be assigned to different vnodes, so as to avoid the slowdown caused by resource contention:

  • vnode1: job1, job4
  • vnode2: job2, job5
  • vnode3: job3

I know I can scatter multiple chunks within a single job, but I could not find a way to scatter multiple independent jobs.
Any comments or suggestions would be greatly appreciated.
Thank you,


#2

Hi @Ikki,

I think node_sort_key can help you. This option can be set in $PBS_HOME/sched_priv/sched_config:

node_sort_key: "<resource> LOW assigned" ALL

Do not forget to kill -HUP the scheduler after saving the file. More info on node_sort_key can be found in the admin guide: “4.8.50.1 node_sort_key Syntax”.
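
For example, assuming the contended resource is ncpus (just an assumption; substitute whichever resource your jobs actually share), a minimal sketch would be:

# in $PBS_HOME/sched_priv/sched_config:
# consider vnodes with the fewest assigned ncpus first
node_sort_key: "ncpus LOW assigned" ALL

# then make the scheduler re-read its configuration
# (assuming a single scheduler running as pbs_sched)
kill -HUP $(pgrep pbs_sched)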

Vasek


#3

Hi @vchlum,

That’s exactly what I wanted!
Thank you for your kind support :smiley:

Regards,


#4

I think adding a place line after your select line in the job script, i.e.
#PBS -l place=scatter
would also do this, if you want to control specific jobs rather than the global configuration.
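
For instance, a sketch of a job script with both lines (the select values and program name are just placeholders):

#!/bin/bash
#PBS -l select=1:ncpus=1
#PBS -l place=scatter
./my_program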


#5

Hi @source,

Thank you for your reply.

When I tried, e.g., running
$ qsub -lselect=ncpus=1 -lplace=scatter test.sh
three times, those jobs were still placed on a single machine, which was not what I wanted.

Regards,


#6

Yes, you are right, I made a mistake. place=scatter only affects chunks within a single job, not independent jobs. :frowning:


#7

Use place=scatter:excl
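
For example, adapting your earlier submission (just a sketch, reusing the same test.sh):

$ qsub -lselect=ncpus=1 -lplace=scatter:excl test.sh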


#8

Hi @pcebull,

Thank you for your comment.
“:excl” prevents multiple jobs from being assigned to the same node (even when the node still has enough of the resource left), which becomes a problem when I submit more jobs than there are nodes.

Regards,