Kerberos support

OK, thank you @salvet and @subhasisb.

I will add cn_ready_func() and move the server-side part of the TCP handshake into it; PBS_BATCH_AuthExternal will not handle the handshake itself, it will just inform the server that the TCP connection is authenticated.

I will also start the work on comm connection encryption. Since we are in threads, the handshake will be in a tight loop on this layer (I don’t see another option anyway).


Hi Vaclav,

On the TPP encryption side, can we use/extend the TPP_CTL_AUTH part? We use that for Munge authentication; however, that case is straightforward and needs no more than one exchange. In the case of GSS encryption, we could add another subheader to TPP_CTL_AUTH and put some “STAGE” field in it, just as the server and clients do. Then we would not need a tight handshake loop.
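A minimal sketch of what such a staged TPP_CTL_AUTH subheader might look like; all of the names below are hypothetical, only TPP_CTL_AUTH itself is real:

```c
#include <assert.h>

enum gss_auth_stage {          /* hypothetical handshake stages */
	GSS_STAGE_INIT = 0,    /* first token from the connecting leaf */
	GSS_STAGE_CONTINUE,    /* more token exchanges still needed */
	GSS_STAGE_COMPLETE     /* context established, switch to wrapped data */
};

struct gss_auth_subhdr {       /* hypothetical subheader carried in a
				* TPP_CTL_AUTH packet */
	unsigned char stage;   /* one of enum gss_auth_stage */
	unsigned int tok_len;  /* length of the GSS token that follows */
};

/* Each side advances the stage based on whether the GSS library
 * reports that more token exchanges are required. */
int next_stage(int stage, int continue_needed)
{
	if (stage == GSS_STAGE_COMPLETE)
		return GSS_STAGE_COMPLETE;
	return continue_needed ? GSS_STAGE_CONTINUE : GSS_STAGE_COMPLETE;
}
```

Each TPP_CTL_AUTH packet would then carry one token plus the stage, so no side ever blocks waiting for the peer.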


Hi @subhasisb,

Yes, I think it is possible. As I dig deeper into the TPP code, I have realized that the tight loop is not necessary.


Awesome, thanks! Also, another question: we will need to do the handshake only when the leaf daemons connect to comm, right? TPP is designed to use persistent TCP connections, so unless the moms/server/network go down, the connections are established only once…

I think so; we only need to establish the GSS context when connecting to comm. After successful establishment, the GSS context remains valid until the mom/server/comm goes down.

Hi @subhasisb,

Concerning the TPP_CTL_AUTH part, we could use the handlers get_ext_auth_data() and validate_ext_auth_data() as with Munge. It should be possible to postpone TPP_CTL_JOIN until the handshake is finished. The issue with this solution is that the GSS layer would be mixed with the TPP layer.

The second solution would be to replace the handlers in tpp_transport_set_handlers() with gss_* handlers. The gss_* handlers would call the leaf_*() or router_*() handlers. This way the GSS layer will be nicely isolated in its own layer, which I prefer.
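A minimal sketch of the wrapping idea, assuming a simplified stand-in for the handler set (the real tpp_transport_set_handlers() interface differs):

```c
#include <assert.h>
#include <stddef.h>

typedef int (*post_data_fn)(void *data, int len);

struct tpp_handlers {               /* simplified stand-in for the real set */
	post_data_fn post_data;
};

static struct tpp_handlers inner;   /* the original leaf/router handlers */
static int leaf_calls;              /* counts deliveries to the leaf handler */

/* GSS wrapper: would first gss_unwrap() the buffer, then hand the
 * cleartext to the original handler. Unwrapping is elided here. */
static int gss_post_data(void *data, int len)
{
	/* ... gss_unwrap(data) would happen here ... */
	return inner.post_data(data, len);
}

/* Install the wrapper while remembering the original handler, mimicking
 * what a gss_* variant of the handler setup could do. */
void gss_set_handlers(struct tpp_handlers *h)
{
	inner = *h;
	h->post_data = gss_post_data;
}

static int leaf_post_data(void *data, int len)
{
	(void)data;
	leaf_calls++;
	return len;
}

/* Demo: wire up a leaf handler behind the GSS wrapper and deliver once. */
int demo(void)
{
	struct tpp_handlers h = { leaf_post_data };
	gss_set_handlers(&h);
	h.post_data(NULL, 8);       /* goes through gss_post_data */
	return leaf_calls;
}
```

The transport layer keeps calling the same function pointers, so the leaf/router code stays untouched and GSS lives entirely in the wrapper layer.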

What do you think?

Hi @vchlum,

The second solution sounds fine to me.


Hi @subhasisb,

I am back with some news. I have done the implementation as agreed. The new code can be found on the branch kerberos_support_3, and the design doc was updated accordingly. The following was done:

  • The GSS code was unified and generalized, and redundant code was removed. The same GSS code is now used for both TCP and TPP.

  • The TPP encryption was fully reworked as agreed. The connection with comm is now encrypted, and it should also work between routers: anything that connects to comm (server, moms, scheduler, other comms) will use encryption, so you need a host keytab on those nodes. With GSS code enabled, using cleartext with comm is now forbidden. The implementation replaces the regular tpp handlers with new gss_* handlers, and the gss_* handlers call the regular leaf or router handlers. The asynchronous handshake is always expected at the beginning of communication.

  • TCP was improved. If the client wants to connect to the server with encryption, the auth batch request is sent, which initiates the handshake. The new cn_ready_func notices that a handshake is in progress and processes the handshake tokens asynchronously. Once the handshake is finished, cn_ready_func returns true (after unwrapping the data) and the data are processed by the regular process_request(). The GSS layer is also isolated in its own layer here: the dis_* handlers are replaced with gss_dis_* handlers, and the interface was extended as needed (e.g. tcp_read was exposed to the gss_dis_* layer via a new handler).

  • The tool for renewing credentials, “renew-test”, was added to the unsupported directory.

  • Miscellaneous improvements.
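The cn_ready_func behavior on the TCP side could be sketched roughly like this, with simplified stand-in types (the real code would drive gss_accept_sec_context() and gss_unwrap() at the marked points):

```c
#include <assert.h>

struct conn {
	int handshake_done;   /* GSS context established? */
	int tokens_needed;    /* handshake tokens still expected (stand-in) */
};

/* Returns 1 when unwrapped payload is ready for process_request(),
 * 0 while the asynchronous handshake is still in progress. */
int cn_ready(struct conn *cn)
{
	if (!cn->handshake_done) {
		/* consume one handshake token; gss_accept_sec_context()
		 * would be called here and would tell us whether more
		 * exchanges are needed */
		if (--cn->tokens_needed == 0)
			cn->handshake_done = 1;
		return 0;     /* still handshaking, no request data yet */
	}
	/* ... gss_unwrap() of the incoming data would happen here ... */
	return 1;
}
```

The main loop just keeps calling the readiness check per incoming chunk; nothing blocks, which matches the asynchronous design.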

TCP still allows cleartext, which means it is possible to use regular clients with a GSS-enabled server. This is nice, and it also means you can move a job between a regular server and a GSS-enabled server. Peer scheduling should also work. Adding encryption on the TCP connection between the server and the scheduler should be quite easy now, but let’s keep it as a TODO for a future commit.

It is also possible to enable encryption from hooks; the code is well prepared for it and it actually already works. pbs_python just needs a valid Kerberos ticket in the default location. Let’s keep this as a TODO too, because hooks can also run as users, which should (maybe) be addressed with proper user credentials.

I am quite happy with the changes. Let me know what you think. I am ready to address more comments.


Hi @vchlum, I looked through the design changes and the code as well.

Many thanks. This is indeed a huge improvement, and quite exciting to me. The way it is structured now, I think it will be quite a bit simpler for us to add TLS, for example. I like the layering of the gss_dis_* functions and the gss_tpp_handlers. The negotiations are asynchronous all across, and that is great. And communication via comm is totally encrypted, end to end.

I think you are very close to raising a PR. I can’t think of anything else to ask you right now, and I agree completely with the items that you mentioned as TODOs for the short-term future.

Thank you @subhasisb for looking into it. I am very happy to be closer to a PR.

I will go through the code again carefully and try to find what can be improved/cleaned/commented/… I am quite happy with the code, and if you (or anybody else, of course) do not have major comments, I assume no major changes.

@vchlum I do not have any major changes to request in the code; after cleanup you may raise the PR. Then we can start the detailed code review, which will of course take a bit of time, but that is usual.

Since the Kerberos feature is merged (thank you), I have started to work on automated tests, and I have some more ideas on what to do next that I would like to share:

  • I started the work on the tests. As phase one, I would like to add Kerberos builds to Travis CI, and I realized there are 5 concurrent runs now. Is it OK to add new runs? My goal is to eventually have a smoke test with Kerberos support in Travis (ideally for both MIT and Heimdal). Another possibility would be to add only the Kerberos build, in order to keep the two extra runs short. What do you think? …or just let me know if a Kerberos build is not suitable for Travis CI because of the 5-concurrent-run limit.

  • If the server is not available, credentials are not renewed right now. If the unavailability of the PBS server lingers for a significant period of time (it is already configurable), jobs could fail. Kerberos allows issuing ‘renewable tickets’. The idea is to keep renewing a renewable ticket on the mom for as long as renewal is allowed (this is usually a few days at most). If the ticket is not renewable, this has no effect, of course. If the server is available, this feature may also have no effect, because the ticket will be renewed in time anyway (depending on configuration). This will increase the robustness of credential renewal.

  • The renew tool demands new credentials every time no suitable cached credentials are available. If renewal fails for a user, credentials are demanded again for another job, and that will likely fail again. The new idea is to demand credentials, after a failure, only once per renew-check period (which is 5 minutes right now): if it fails for some user, credentials will not be demanded for that user for the next 5 minutes. This eliminates unnecessary calls to the renew tool. Since the tool can e.g. access the KDC directly, this helps reduce the load on the KDC caused by failing demands, and if the demands time out for some reason, it helps prevent the server from getting stuck in renew timeouts.
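The per-user backoff could be sketched like this (a single-user record for brevity; the names are illustrative, not the real renew-tool code):

```c
#include <assert.h>
#include <time.h>

#define RENEW_CHECK_PERIOD 300   /* seconds, the current check period */

struct cred_state {
	time_t last_fail;        /* 0 = no recorded failure */
};

/* Returns 1 if it is OK to call the renew tool for this user now,
 * 0 if a recent failure means we should back off. */
int may_demand_creds(const struct cred_state *cs, time_t now)
{
	if (cs->last_fail != 0 && now - cs->last_fail < RENEW_CHECK_PERIOD)
		return 0;        /* failed recently, skip this period */
	return 1;
}

/* Record a failed demand so subsequent jobs do not retry at once. */
void demand_failed(struct cred_state *cs, time_t now)
{
	cs->last_fail = now;
}
```

One such record per user is enough; a successful demand would simply reset last_fail to 0.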

These features do not need new interfaces so far. If you have any comments, I would be happy to discuss them.


Hi @vchlum - thanks for taking on the Kerberos tests. About exactly how to add them to CI, we need to find out how much extra duration they will add. I think the community will want to keep the overall duration manageable, so that every PR does not take too long and PRs do not pile up.

Looks like we have just the build phase and no tests added so far. Currently, that extra run is taking about 5 minutes, since there are only 5 concurrent runs. Just 5 minutes might be quite acceptable, but if we added tests, how much longer would it take?

I understand the need to keep the Travis CI test duration reasonable, @subhasisb.

I don’t know the duration with tests yet; the tests are not implemented yet. We could add only simple tests to Travis CI, e.g. testing whether the ticket is supplied to the job. That could be quite fast. The rest would be run manually.

I would also love to test regularly whether the ticket is renewed. Right now, the main renew task runs only once every 5 minutes, and the actual renewal of a job is randomly scheduled within the next 5 minutes. That means up to 10 minutes for this test, which is probably not acceptable even for a manual test… This can be resolved by adding a new server attribute, cred_renew_task_period, to control this period. Then we can set it to 5 seconds or so.
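Assuming the cred_renew_task_period attribute gets added (it is only proposed above, not implemented yet), a test setup could shrink the period like this:

```shell
# Hypothetical: set the proposed renew-task period to 5 seconds so a
# test can observe the renewal quickly (value in seconds).
qmgr -c "set server cred_renew_task_period = 5"
```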

Hi @vchlum,

Yes, we need to keep the total execution time low. I like the idea of doing only some basic tests for now and doing the rest manually.

Actually, I see this message on the Travis CI details page:

Jobs and Stages

This build has seven jobs, running in parallel.

So it looks like all 7 run in parallel, not 5. If so, we are still okay.

The jobs run in parallel, but only in 5 slots. The first 5 jobs run in parallel from the beginning, but the last two (the new ones) stay in the queue until one of the first 5 finishes; then the 6th starts, and the 7th remains queued until another job finishes. Still, the 6th and 7th finished before the sanitized (5th) job did.

Ah, got it. In that case, perhaps we can reduce the last two to a single run. Instead of testing both MIT and Heimdal, can we test just one?

Yes, of course. That will help reduce the makespan. I suppose MIT would be preferred, right?

Yes, sounds good to me.