[Mpi3-hybridpm] SHMEM Interface in Draft External Interfaces

James Dinan dinan at mcs.anl.gov
Thu Feb 17 17:14:23 CST 2011


Right, there is a mismatch between MPI handles and shared memory
handles.  IPC shared memory relies on having a portable handle
(a filename, etc.) in order to support asynchronous attach/detach
to/from shared allocations.  To fully support this, we would also need
a portable handle.  There was an earlier suggestion of letting the
programmer supply a handle.  We didn't seem to like it, but that would
get around the problem of needing to share an MPI handle.

Another alternative would be to make attaching to a shmem region
collective.  This is also tricky, since the attach would have to be
collective across a new communicator that includes the newly added
processes, and we would need a collective detach as well.
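
Purely as a sketch of that alternative (the MPIX_ routine names, the
signatures, and the MPIX_Shm handle type are invented here, not taken
from any draft), a collective attach/detach might look like:

  /* Hypothetical interface sketch -- names and handle type are
   * placeholders for illustration. */
  #include <mpi.h>

  typedef struct MPIX_Shm_s *MPIX_Shm;      /* placeholder handle type */

  /* Collective over a communicator spanning exactly the processes that
   * join the region; every member gets a local base pointer. */
  int MPIX_Shm_attach_coll(MPI_Comm comm, MPIX_Shm shm, void **baseptr);

  /* Detach is collective over the same group. */
  int MPIX_Shm_detach_coll(MPI_Comm comm, MPIX_Shm shm);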

  ~Jim.

On 2/17/11 2:27 PM, Jeff Squyres wrote:
> In fact, chapter 2 explicitly says that handles only have meaning in the local process.
>
>
> On Feb 17, 2011, at 3:19 PM, Hubert Ritzdorf wrote:
>
>>>
>>>>
>>>> MPI_Comm_shm_attach() is a little tricky - in the current proposal I
>>>> don't think we've defined the shm handle to be portable.  Can we make it
>>>> valid to send this to another node-local process in order for them to
>>>> call attach?
>>>>
>>>
>>> Is there an example in MPI where we allow sending an MPI handle to
>>> an object between ranks? This seems like a bad idea to me.
>>>
>>
>> No, there isn't.
>>
>> Hubert
>>



