[Mpi-forum] use of communicator in MPI_Pack and MPI_Unpack

Michael Rezny michael.rezny at monash.edu
Wed Feb 17 18:09:57 CST 2016


Hi,
I have a question regarding the use of a communicator in MPI_Pack and
MPI_Unpack.
I have googled and searched the MPI mailing lists but haven't found an answer.

The Open MPI man page for MPI_Pack states:
       The comm argument is the communicator that will be
       subsequently used for sending the packed message.

The man page for MPI_Unpack says something similar, which seems to
reflect what is defined in the MPI standard.

Basically, what I would like to do is have an MPI rank act as a coupler:
that is, receive a packed message from one application and pass it on to
another application.

Comm_A would be the ranks of application A plus the coupler and
Comm_B would be the ranks of application B plus the coupler.

In this case, MPI_COMM_WORLD = Comm_A U Comm_B.
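
For concreteness, here is a rough sketch of how I imagine the two
communicators being built. The rank layout (coupler as the last world
rank, application A on the first half of the remaining ranks) is just an
assumption for illustration:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Illustrative layout, assuming at least 3 ranks: the last world
     * rank is the coupler, the first half of the remaining ranks run
     * application A, the rest run application B. */
    int coupler    = world_size - 1;
    int is_coupler = (world_rank == coupler);
    int in_A       = is_coupler || world_rank < coupler / 2;
    int in_B       = is_coupler || !in_A;

    /* Two splits, so the coupler (color 0 in both) ends up in both
     * communicators while every other rank joins exactly one. */
    MPI_Comm comm_A, comm_B;
    MPI_Comm_split(MPI_COMM_WORLD, in_A ? 0 : MPI_UNDEFINED,
                   world_rank, &comm_A);
    MPI_Comm_split(MPI_COMM_WORLD, in_B ? 0 : MPI_UNDEFINED,
                   world_rank, &comm_B);

    /* ... coupling work goes here ... */

    if (comm_A != MPI_COMM_NULL) MPI_Comm_free(&comm_A);
    if (comm_B != MPI_COMM_NULL) MPI_Comm_free(&comm_B);
    MPI_Finalize();
    return 0;
}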

I would like a solution that does not require the coupler to unpack the
buffer received from application A and then repack it to send it to
application B.

I just found a very nice explanation of the need for a communicator in
Pack/Unpack on page 135 of Using MPI: Portable Parallel Programming with
the Message-Passing Interface, second edition, by Gropp, Lusk, and
Skjellum.

If I understand this correctly, it would be legal to use MPI_COMM_WORLD
as the communicator for MPI_Pack and MPI_Unpack. The only downside is
that one might not get optimal performance.
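
So the scheme I have in mind looks roughly like the sketch below, where
MPI_COMM_WORLD is passed to both MPI_Pack and MPI_Unpack and the coupler
forwards the packed bytes untouched. The ranks, tag, and payload are
made up for illustration:

#include <mpi.h>
#include <stdlib.h>

#define TAG 42

/* On a rank of application A: pack a payload and send it to the coupler. */
static void send_to_coupler(int coupler_rank)
{
    double payload[8] = {0};
    int size, position = 0;

    MPI_Pack_size(8, MPI_DOUBLE, MPI_COMM_WORLD, &size);
    char *buf = malloc(size);
    MPI_Pack(payload, 8, MPI_DOUBLE, buf, size, &position, MPI_COMM_WORLD);
    /* Send only the bytes actually packed, typed as MPI_PACKED. */
    MPI_Send(buf, position, MPI_PACKED, coupler_rank, TAG, MPI_COMM_WORLD);
    free(buf);
}

/* On the coupler: receive the packed bytes and pass them on verbatim,
 * with no MPI_Unpack / MPI_Pack round trip. */
static void forward(int src, int dst)
{
    MPI_Status status;
    int nbytes;

    MPI_Probe(src, TAG, MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_PACKED, &nbytes);
    char *buf = malloc(nbytes);
    MPI_Recv(buf, nbytes, MPI_PACKED, src, TAG, MPI_COMM_WORLD, &status);
    MPI_Send(buf, nbytes, MPI_PACKED, dst, TAG, MPI_COMM_WORLD);
    free(buf);
}

/* On a rank of application B: receive from the coupler and unpack. */
static void recv_from_coupler(int coupler_rank)
{
    double payload[8];
    MPI_Status status;
    int nbytes, position = 0;

    MPI_Probe(coupler_rank, TAG, MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_PACKED, &nbytes);
    char *buf = malloc(nbytes);
    MPI_Recv(buf, nbytes, MPI_PACKED, coupler_rank, TAG,
             MPI_COMM_WORLD, &status);
    MPI_Unpack(buf, nbytes, &position, payload, 8, MPI_DOUBLE,
               MPI_COMM_WORLD);
    free(buf);
}

Since every hop uses the same communicator for packing, forwarding, and
unpacking, the packed representation should stay consistent end to end.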

The alternative we are using is MPI_Pack_external and MPI_Unpack_external,
which require no communicator. The downside is that on homogeneous
little-endian HPC systems this always entails byte swapping on both
packing and unpacking.
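
For comparison, a minimal sketch of the external variant, again with a
made-up payload:

#include <mpi.h>
#include <stdlib.h>

static void pack_external_example(void)
{
    double payload[8] = {0};
    MPI_Aint size, position = 0;

    MPI_Pack_external_size("external32", 8, MPI_DOUBLE, &size);
    char *buf = malloc((size_t)size);
    MPI_Pack_external("external32", payload, 8, MPI_DOUBLE,
                      buf, size, &position);
    /* buf now holds 'position' bytes in the big-endian external32
     * representation; on a little-endian machine this conversion is
     * exactly the byte swap mentioned above. The standard says such
     * data should be sent with type MPI_BYTE. */
    free(buf);
}

static void unpack_external_example(const char *buf, MPI_Aint nbytes)
{
    double payload[8];
    MPI_Aint position = 0;

    MPI_Unpack_external("external32", buf, nbytes, &position,
                        payload, 8, MPI_DOUBLE);
}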

So my question is: what is the correct and optimal way to pass packed
messages in the coupling example described above?

kindest regards
Mike