Can I perform MPI communication using a network interface to which the PBS server does not belong?


Please tell me how to separate the interface for MPI communication from the interface for PBS communication.

Five compute nodes belong to both the InfiniBand and 10GbE segments, while the PBS server belongs only to the 10GbE segment. (The PBS server also provides NFS to the compute nodes over 10GbE.)

pbsserver 10GbE:
(NFS Server for compute nodes)

node00 10GbE:
node01 10GbE:
node02 10GbE:
node03 10GbE:
node04 10GbE:

node00ib IB:
node01ib IB:
node02ib IB:
node03ib IB:
node04ib IB:

(All nodes have the same /etc/hosts.)

I submit jobs from the PBS server. In this case, can I use InfiniBand for MPI communication by rewriting the host names listed in PBS_NODEFILE in the job script to the host names assigned to the InfiniBand interfaces?

I use Intel MPI.

Initially, I thought that setting PBS_LEAF_NAME would make the execution node's host-name alias appear in PBS_NODEFILE, but I seem to have misunderstood…


A modern MPI implementation should pick up the fastest interface automatically; if it does not, you can ask it to via mpirun options or environment variables.
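For Intel MPI specifically, fabric selection is controlled through environment variables. A minimal sketch of forcing the OFI/verbs path in the job script (the provider name and the binary `./a.out` are placeholders; on newer Mellanox stacks the provider may be `mlx` instead of `verbs`):

```shell
# Ask Intel MPI to use shared memory intra-node and libfabric inter-node,
# with an InfiniBand-capable provider instead of TCP over the GbE interface.
export I_MPI_FABRICS=shm:ofi
export FI_PROVIDER=verbs     # libfabric provider for InfiniBand verbs (assumption: verbs-capable HCA)
export I_MPI_DEBUG=5         # prints which fabric/provider was actually selected
mpirun -np 4 ./a.out
```

With `I_MPI_DEBUG=5` set, the startup banner reports the chosen provider, so you can verify whether traffic is really going over IB rather than the 10GbE segment.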

The last resort is to create a hostfile containing the IB node names out of PBS_NODEFILE and pass that to mpirun.
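Given the naming pattern in your /etc/hosts (each IB alias is the 10GbE name plus an `ib` suffix), deriving such a hostfile inside the job script is a one-liner. A sketch, assuming that suffix convention holds for every compute node (the output path `hostfile.ib` is just an example):

```shell
#!/bin/sh
# Derive an InfiniBand hostfile from PBS_NODEFILE by appending the
# "ib" suffix used in /etc/hosts (node00 -> node00ib, etc.).
IB_HOSTFILE="$PBS_O_WORKDIR/hostfile.ib"
sed 's/$/ib/' "$PBS_NODEFILE" > "$IB_HOSTFILE"

# Launch with Intel MPI using the IB hostfile instead of PBS_NODEFILE.
mpirun -f "$IB_HOSTFILE" -np "$(wc -l < "$IB_HOSTFILE")" ./a.out
```

Since PBS_NODEFILE lists one line per allocated slot, the line count can double as the process count, and the duplicated host entries carry over to the IB hostfile unchanged.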

Henry Wu|吴光宇


Thank you for your immediate reply!

I’ll try to measure it once my work is over!