Using Variable_List to store hook info on jobs?


Renowned Wizards!

I am (still) writing several hooks to deal with jobs in a reservation-like system. (BTW, all hooks will be OSS /GPL2 at end-of-work, if any are interested).

Presently, I have a runjob hook, which will force jobs onto a low-priority node group (nodetype=io) if the high-prio group (nodetype=compute) is “reserved” and the job does not explicitly select nodegroup (if a low-prio job selects nodetype=compute under a reservation, then the job will be held). That works fine.
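For illustration, here is a pure-string sketch of that rewrite logic, simulated outside PBS (the function name is mine; in a real runjob hook the select statement would come from `str(pbs.event().job.Resource_List["select"])` and the rewritten value assigned back as a `pbs.select` object):

```python
# Sketch of the runjob-hook rewrite described above: if no chunk of the
# select statement names a nodetype, force nodetype=io on every chunk.
# "nodetype" is the custom resource from the post; everything else here
# (function name, return convention) is my own illustration.

def force_low_prio(select_str, resource="nodetype", value="io"):
    chunks = select_str.split("+")
    if any(resource + "=" in c for c in chunks):
        return select_str, False          # user chose explicitly; leave it
    forced = "+".join(c + ":%s=%s" % (resource, value) for c in chunks)
    return forced, True                   # True: remember we changed it

print(force_low_prio("2:ncpus=8+1:ncpus=4"))
# -> ('2:ncpus=8:nodetype=io+1:ncpus=4:nodetype=io', True)
```

The second element of the return value is exactly the "did I change it?" flag that the rest of this thread is about persisting.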

However, some jobs will not be able to run on the io nodes. That is fine; they will simply remain queued (or I will move them somewhere), and I will later remove the forced nodetype again. However, when the scheduler figures out that the forced nodetype prevents the job from running, the server/scheduler changes the job comment to something like:

Can Never Run: can't fit in the largest placement set, and can't span psets

That means I cannot rely on the job comment to check whether the (runjob) hook has changed the select statement (and thus whether I should later “revert” the change). So I am looking for somewhere else to store/flag the information that the job's select was modified. Right now the Variable_List (exposed as a Python dict) looks promising, although I know that this is not the intended use of the list.

I realize that if I stick a marker variable in the dict, then I may later test for it (in hooks and/or cron-based scripts) and deal with the job accordingly. I also realize that such a variable may later be exported to the running job's environment, but that should not pose a problem for me. I just wonder whether this is altogether a bad way to do it. Are there obvious side effects, or have I overlooked some “more correct” way to store this kind of information?
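As a minimal sketch of the idea, simulated outside PBS (the flag name RESV_SELECT_FORCED is purely hypothetical; in a real hook the dict below would be `pbs.event().job.Variable_List`):

```python
# Variable_List behaves like a dict of strings, so a hook can drop a
# marker into it and a later hook (or a cron script) can test for it.
# The flag name RESV_SELECT_FORCED is made up for this sketch.

FLAG = "RESV_SELECT_FORCED"

def mark_select_modified(variable_list):
    """Record that the hook rewrote the job's select statement."""
    variable_list[FLAG] = "1"

def select_was_modified(variable_list):
    """Later test (in another hook or a cron script) for the marker."""
    return variable_list.get(FLAG) == "1"

# Stand-in for a job's Variable_List.
vl = {"PBS_O_WORKDIR": "/home/user"}
mark_select_modified(vl)
print(select_was_modified(vl))   # -> True
```

The marker travels with the job itself, which is the whole attraction of this approach.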

Many thanks,



Hello Bjarne,

PBS Pro hooks are effectively stateless. Subsequent invocations of a hook have no knowledge of the prior state of jobs, nodes, etc. If you are trying to retain state across hook invocations, you might consider writing the data out to a file in a well known location, perhaps under PBS_HOME. Upon invocation, the hook would read in the data. Prior to exit, the hook could update the data for the next invocation. The format of the data is up to you… pickle, JSON, etc. That way you could retain the data without having to store it in the jobs themselves.
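A minimal sketch of that load/update/save pattern (the file path and key names below are placeholders I chose for the sketch, not PBS conventions):

```python
# Persist per-job hook state in a JSON file in a well-known location.
# Each hook invocation loads the state, updates it, and writes it back.
import json
import os
import tempfile

# Placeholder path; in practice this might live under PBS_HOME.
STATE_FILE = os.path.join(tempfile.gettempdir(), "hook_state.json")

def load_state():
    try:
        with open(STATE_FILE) as fh:
            return json.load(fh)
    except (OSError, ValueError):
        return {}   # first invocation, or unreadable file

def save_state(state):
    # Write to a temp file and rename, so a crash mid-write cannot
    # leave a truncated state file behind.
    tmp = STATE_FILE + ".tmp"
    with open(tmp, "w") as fh:
        json.dump(state, fh)
    os.replace(tmp, STATE_FILE)

# One hook invocation: flag a job as having had its select rewritten.
state = load_state()
state["1234.server"] = {"select_modified": True}
save_state(state)

# A later invocation reads the flag back.
assert load_state()["1234.server"]["select_modified"]
```

The atomic rename also sidesteps some of the partial-write concerns that come up later in this thread.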

Just a suggestion. Hope it helps.




Hi Mike,

Thank you for the input.

I am trying to avoid external storage of the info for several reasons, among them the speed (or lack thereof) of the disk system, possible race conditions between parallel hook invocations (I don’t know the core implementation of PBS Pro, nor how many hooks may run at once, but presumably only one per job), and the increased complexity of the entire solution.

Quite so. And I will seriously consider storing the info outside the job. Presently, it does seem feasible to store this kind of “simplistic info” in the job's Variable_List, so that it is just “part of the job”. I was only wondering whether this path could lead to problems down the road. I guess time will tell.

Thanks again!



Hi Mike,

I encountered a snag.

The obvious choice (I guess) would be to have one file per job and make the file name unique to the job ID (the job ID works as a file name).
However, for a queuejob hook the job ID is not set yet, so it is not at all clear what a “well known location” (including file name) would be. And if I do not use the job ID as the identifier linking the job to the file, then I need to store that identifier with the job, which brings me back to the original problem.

As a queuejob hook runs before the job even reaches the server, I guess there is no way for the hook to know what the job ID will be, should the job be accepted. (If there were, I could purge the data store for that job ID and then store the info for the job.)




Hi Bjarne,

If locking is a concern, I have created a small class to handle this in the yet-to-be-released cgroup hook update…

import fcntl

# CLASS lock
class lock:

    def __init__(self, path):
        self.path = path
        self.fd = None

    def __enter__(self):
        self.fd = open(self.path, 'w')
        fcntl.flock(self.fd, fcntl.LOCK_EX)

    def __exit__(self, exc, val, traceback):
        if self.fd:
            fcntl.flock(self.fd, fcntl.LOCK_UN)
            self.fd.close()

Enclose any critical sections of code as follows:

with lock(mylockfile):
    # Place code here

As far as the queuejob hook goes, there’s no way to know what the job ID will be. It may be possible for you to assign your own temporary job identifier to jobs that don’t have one yet. You would then need a way to correlate the temporary ID you assigned with the real job ID that PBS Pro assigns. I’m just trying to think of approaches here; I don’t have a complete solution to suggest. Hoping that gets the gears moving in the right direction.
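The correlation idea might look something like this, simulated outside PBS (all names here, such as HOOK_TMP_ID and the store layout, are assumptions of the sketch; the dicts stand in for Variable_List and the on-disk state):

```python
# A queuejob hook has no job ID yet, so it mints its own token, stores
# it with the job (a dict stands in for Variable_List) and keys the
# external state on that token.  Once the job exists, a later hook sees
# both the real ID and the token, and can re-key the state.
import uuid

store = {}                       # stands in for the on-disk state file

# --- queuejob hook: no job ID exists yet ---
variable_list = {}               # stands in for job.Variable_List
tmp_id = uuid.uuid4().hex
variable_list["HOOK_TMP_ID"] = tmp_id
store[tmp_id] = {"select_modified": True}

# --- later hook (e.g. runjob): the real ID is now known ---
real_id = "1234.server"
token = variable_list.get("HOOK_TMP_ID")
if token in store:
    store[real_id] = store.pop(token)   # correlate temp ID -> real ID

print(store[real_id])
```

Note that this still stores the token with the job itself, which is exactly the objection Bjarne raises next.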




Hello Bjarne,

Another approach to consider would be to add your own custom string resource that your hooks could use to store metadata within each job. While the scheduler will modify the job comment, it won’t touch the new custom resource.
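For example (the resource name is hypothetical, and you should check the resource-flag semantics in the PBS Pro Administrator's Guide for your version; my understanding is that "r" makes a resource read-only for users and "i" makes it invisible to unprivileged users):

```shell
# Create a custom string resource for hook bookkeeping.
# "select_forced" is a made-up name; the "ri" flags are intended to
# make it settable only by managers/hooks and invisible to ordinary
# users -- verify against your version's documentation.
qmgr -c "create resource select_forced type=string, flag=ri"
```

A hook could then set it with something like `pbs.event().job.Resource_List["select_forced"] = "1"`, and as long as the resource is not added to the scheduler's "resources:" line in sched_config, it should not influence placement.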




I don’t think that locking is a problem, as long as the server won’t execute two hooks for the same job at once (which would be bad form, and a scenario I won’t ponder upon).

Right. Thanks.
Really, I would have to get from the actual job (with the real ID) back to the “fake” ID in order to read the info. That means (I think) storing info with the job. And since I only want to store a very limited amount of information in the first place, going the extra mile to place it in an external file seems an unattractive solution for my particular problem.

That might be a better way than my current approach of putting it in the Variable_List, although the two methods will look a lot alike. I’ll have to read up on how to add custom string resources to jobs without affecting job placement.

I guess it would be best if the resource could not be set from user-land, but for things to work I need the resource to be visible via qstat -f <JOBID>, at least when executed by root. I’ll have a look at it, but I might not go that way, as my scripts/hooks presently seem to work. Moving to a custom resource would, however, make the system more robust.

Thanks again, Mike!