[Mpi3-hybridpm] SHMEM Interface in Draft External Interfaces
hritzdorf at hpce.nec.com
Mon Feb 21 05:17:43 CST 2011
It would be possible to create an attach function corresponding to the MPI_Intercomm_create interface.
MPI_Comm_shm_attach (MPI_Comm local_comm, int local_leader, MPI_Shm local_shm,
MPI_Comm bridge_comm, int remote_leader,
int tag, MPI_Comm *comm_out, MPI_Shm *shm_out)
The function would
(*) attach the shared memory region of the remote communicator,
(*) create an (intra or inter) communicator containing the remote
communicator and the local communicator, and
(*) return a new shmem descriptor "shm_out" referring to the shared
memory of the remote communicator.
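As a sketch of how the proposed call might be used (note: MPI_Shm, MPI_SHM_NULL, MPI_Comm_shm_attach and MPI_Comm_shm_free are draft proposals, not standard MPI; the leader ranks and tag below are arbitrary illustration values, and the MPI_Comm_shm_free signature is assumed):

```c
#include <mpi.h>

/* Hypothetical usage of the proposed attach interface; follows the
 * MPI_Intercomm_create pattern from the signature above. */
void attach_example(MPI_Comm local_comm, MPI_Comm bridge_comm,
                    MPI_Shm local_shm)
{
    MPI_Comm comm_out;   /* new (intra or inter) communicator        */
    MPI_Shm  shm_out;    /* descriptor for the remote shared memory  */

    /* Leaders and tag chosen as for MPI_Intercomm_create
     * (illustration values only). */
    MPI_Comm_shm_attach(local_comm, /* local_leader  */ 0,
                        local_shm,
                        bridge_comm, /* remote_leader */ 1,
                        /* tag */ 99, &comm_out, &shm_out);

    /* ... access the remote region via shm_out ... */

    /* Freeing stays collective; MPI can reference-count attachments. */
    MPI_Comm_shm_free(&shm_out);
}
```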
In this case, MPI_Comm_shm_free could still be used to free the shared memory. It is already collective, and MPI can internally count the number of attachments.
You would need a new constant MPI_SHM_NULL (the null handle for MPI_Shm), which is a valid value for the input argument "local_shm" of MPI_Comm_shm_attach. In addition, there should be a function
MPI_Shm_comm (MPI_Shm shm, MPI_Comm *comm)
which returns the communicator associated with an MPI_Shm handle, and a function
MPI_Shm_baseptr (MPI_Shm shm, void *baseptr)
which returns the base pointer associated with an MPI_Shm handle. These functions would be required by external libraries, and to obtain the "baseptr" of the handle "shm_out" above.
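A minimal sketch of how an external library might use these proposed accessors (again, MPI_Shm, MPI_Shm_comm and MPI_Shm_baseptr are draft proposals; the MPI_Alloc_mem-style convention, where a void * output argument receives the address, is assumed for baseptr):

```c
#include <stdio.h>
#include <mpi.h>

/* Hypothetical: query the communicator and base pointer behind an
 * MPI_Shm handle, e.g. the "shm_out" returned by MPI_Comm_shm_attach. */
void inspect_shm(MPI_Shm shm)
{
    MPI_Comm comm;
    void    *baseptr;
    int      size;

    MPI_Shm_comm(shm, &comm);        /* communicator owning the region */
    MPI_Shm_baseptr(shm, &baseptr);  /* local address of the region    */

    MPI_Comm_size(comm, &size);
    printf("region shared by %d processes, mapped at %p\n",
           size, baseptr);
}
```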
PS: The last argument of MPI_Comm_shm_alloc in the draft (version 1.0, Page 19, Line 17) must be MPI_Shm *shm (instead of MPI_Shm shm).
>From: mpi3-hybridpm-bounces at lists.mpi-forum.org [mailto:mpi3-hybridpm-
>bounces at lists.mpi-forum.org] On Behalf Of James Dinan
>Sent: Friday, February 18, 2011 12:14 AM
>To: mpi3-hybridpm at lists.mpi-forum.org
>Subject: Re: [Mpi3-hybridpm] SHMEM Interface in Draft External Interfaces
>Right, there is a mismatch between MPI handles and shared memory
>handles. IPC shared memory relies on having a portable handle
>(filename, etc) in order to support asynchronous attach/detach to/from
>shared allocations. In order to fully support this, we would also need
>to have a portable handle. There was an earlier suggestion of letting
>the programmer supply a handle. We didn't seem to like it, but that
>would get around the problem of needing to share an MPI handle.
>Another alternative would be to make attaching to a shmem region
>collective. This is tricky too since it would be collective across a
>new communicator that includes the added processes. We would also need
>a collective detach.
>On 2/17/11 2:27 PM, Jeff Squyres wrote:
>> In fact, chapter 2 explicitly says that handles only have meaning in the
>> process where they were created.
>> On Feb 17, 2011, at 3:19 PM, Hubert Ritzdorf wrote:
>>>>> MPI_Comm_shm_attach() is a little tricky - in the current proposal I
>>>>> don't think we've defined the shm handle to be portable. Can we make it
>>>>> valid to send this to another node-local process in order for them to
>>>>> call attach?
>>>> Is there an example in MPI where we allow sending an MPI handle to an
>>>> object between ranks? This seems like a bad idea to me.
>>> No, there isn't.
>>> Mpi3-hybridpm mailing list
>>> Mpi3-hybridpm at lists.mpi-forum.org