PP-337: Multiple schedulers servicing the PBS cluster



We have added a couple more error messages to Interface 5: Changes to PBS Nodes objects. Requesting the community to review the updated EDD and provide your feedback.



Hi All,

Added notes on the Server’s backfill_depth behavior in the Notes section of the EDD.
Requesting the community to review the updated EDD and provide your feedback.



@visheshh as far as I remember, backfill_depth will be associated with the policy object; it will not be part of the server anymore. I understand that what you have written will be the behavior until we get a policy object, but it will change once the design for PP-748 is published.



I have modified the Multisched EDD to reflect the interface changes suggested during the review. Requesting the community to review the updated EDD and provide your feedback.



For easy access, here is the link to the design document. I find it hard to scroll to the top of the page to get the link.


The current design doesn’t talk about the pbsfs command. Each scheduler has its own fairshare tree, so how does pbsfs know which scheduler to modify?

Right now the pbsfs command doesn’t care whether or not PBS is running. It modifies the scheduler’s usage database directly. It can do this because the sched_priv directory is well known.

I see two ways to go about doing this.

The first is to have the admin supply a scheduler name to pbsfs. pbsfs would then ask the server for that scheduler’s sched_priv path, which means the server must be running in order to run pbsfs.

The second is to give pbsfs the sched_priv path you want to modify directly. This keeps the freedom of running pbsfs without PBS running, but isn’t nearly as user-friendly.
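To make the trade-off concrete, the two approaches might look something like the sketch below. Nothing here is final design: -I and -P are placeholder option letters, and the scheduler name, path, and usage value are made up. (-s entity usage is pbsfs’s existing set-usage syntax.)

```shell
# Approach 1 (hypothetical): name the scheduler; pbsfs asks the server for
# that scheduler's sched_priv path, so the server must be running.
pbsfs -I sched_2 -s user1 1000

# Approach 2 (hypothetical): point pbsfs at a sched_priv directory directly;
# this works with PBS down, but the admin must know the path.
pbsfs -P /var/spool/pbs/sched_priv_sched_2 -s user1 1000
```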




I think requiring the server to be running is reasonable. We require this for many of our commands (qstat, qsub, qmgr, qalter, etc.).


I’ve added Interface 10, on fairshare, to the design document. There is a new -I option to pbsfs to specify a scheduler. I chose -I because it’s the same option pbs_sched uses to specify a scheduler name; I figured being consistent is good.
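For example, the new option might be used like this (the scheduler name, entity, and usage value are made up; -p and -s are pbsfs’s existing print and set-usage options):

```shell
# Print the fairshare usage tree of one specific scheduler.
pbsfs -I multisched_1 -p

# Set user1's usage to 100 in that scheduler's fairshare database only;
# the trees of other schedulers are untouched.
pbsfs -I multisched_1 -s user1 100
```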

Please review.



Hi @bhroam ,

How would “Configuring Entity shares” work now that fairshare tree entity allocations are on a per-scheduler basis?
Will “Sorting jobs by entity shares” no longer apply to the whole PBS complex?



Interface 2 lists resource_group as one of the files in the scheduler’s sched_priv directory. This is how an admin defines the fairshare tree, and it is per-scheduler. Nothing has changed fairshare-wise: before, there was a resource_group file in the scheduler’s sched_priv directory, and that is still true. It just so happens that now there is more than one sched_priv directory.

If you think it is needed, I can add another bullet point to Interface 10 saying as much.

Yes, anything fairshare-related will no longer be per-complex; it will be per-scheduler. I tried to cover this in Interface 10, bullet 1.
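As a sketch of the resulting layout (the directory names are illustrative, not the actual naming scheme; "usage" is the fairshare usage database pbsfs operates on):

```
PBS_HOME/
    sched_priv/                   default scheduler's directory
        resource_group            its fairshare tree definition
        usage                     its fairshare usage database
    sched_priv_multisched_1/      a second scheduler's directory
        resource_group            an independent fairshare tree
        usage                     an independent usage database
```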



Any other comments about Interface 10 on Fairshare?



Interface 10 looks good to me. If it turns out to be a problem that the server needs to be running (which I don’t think it will be) we can always add a new option to specify a specific sched’s sched_priv directory since the server is not there to be queried.


Thanks for posting the changes. Are you recommending -I (uppercase i) or -l (lowercase L)? My suggestion is that we use an option such as -N instead, since -l and -I are easy to confuse. But it is only a recommendation.


It’s an uppercase ‘i’; I believe it stands for “id”. I used it for consistency with the new option to pbs_sched. I don’t really care which option we choose, but I think it should be consistent between binaries.

If we keep the options consistent, then -N is already taken. It leaves the scheduler running in the foreground (it’s actually an option to all daemons).


Enhancing pbs_snapshot for multi-sched

Hi All,

A couple of minor changes have been made to the following EDD. Can you please have a look and share any comments?



Looks fine, Suresh. Thanks for updating the page with accurate information.