[Mpi3-hybridpm] SHMEM Interface in Draft External Interfaces

James Dinan dinan at mcs.anl.gov
Thu Feb 17 10:18:36 CST 2011


Hi Hubert,

I tend to agree with this - MPI_COMM_SHM seems like it's standing in for 
more sophisticated functionality to help the programmer build 
communicators based on system topology.  There was similar feedback in 
the forum as well (although the discussion digressed into "what if we 
have DSM and how should MPI deal with that" which also raises 
interesting questions).
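
For reference, the user-level usage Hubert proposes below would look
roughly like this (a minimal sketch; MPI_COLOR_SHM is only a proposed
constant, not part of any current draft):

    /* Split an arbitrary communicator into groups of processes that can
       share memory with each other (proposed MPI_COLOR_SHM semantics).  */
    MPI_Comm comm_shm;
    MPI_Comm_split(comm, MPI_COLOR_SHM /* proposed constant */,
                   0 /* key: keep old rank order */, &comm_shm);
    /* comm_shm now contains only the node-local, shared-memory-capable
       peers of the calling process.                                     */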

On current and especially future systems, the on-node topology is
becoming more complex.  We might, for example, want to map shared memory
on a per-socket basis rather than across all sockets on the node, to
avoid sending data over the intra-node interconnect.
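
For instance, a rough sketch (MPI_COMM_SHM is the draft's node-local
communicator, and the socket id would have to come from outside MPI,
e.g. hwloc or sched_getcpu() -- none of that is covered by the proposal):

    /* Split the node-local communicator further, per socket, so that a
       shared region never spans the intra-node interconnect.            */
    int socket_id = get_my_socket_id();  /* placeholder: hwloc, sched_getcpu(), ... */
    MPI_Comm comm_socket;
    MPI_Comm_split(MPI_COMM_SHM, socket_id, 0, &comm_socket);
    /* ...and allocate the shared-memory region over comm_socket.        */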

MPI_Comm_shm_attach() is a little tricky - in the current proposal I
don't think we've defined the shm handle to be portable.  Can we make it
valid to send the handle to another node-local process so that it can
call attach?
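
Concretely, I am thinking of something like the following (the handle
type and the MPI_Comm_shm_attach() signature are only guesses at what
the proposal might define; the question is whether step 2 can be made
valid):

    /* 1. Process A, which created the region, sends its shm handle to a
          node-local process B that did not take part in the creation.   */
    MPI_Send(&shm_handle, sizeof(shm_handle), MPI_BYTE, rank_B, 0, comm);
    /* 2. Process B attaches via the received handle -- is the handle
          guaranteed to be meaningful in B's address space?              */
    MPI_Comm_shm_attach(comm, shm_handle, &shm_region);  /* guessed signature */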

Best,
  ~Jim.


On 2/17/11 3:58 AM, Hubert Ritzdorf wrote:
> Yes, this is the problem. I am also working in a European project for
> climate modeling which uses coupled applications/models for simulation.
> They may start with a single process, which dynamically spawns new
> parallel applications/models depending on the simulation. The coupled
> applications exchange 3D volume data and could put common chemistry data
> into shared memory. Thus, it makes sense to place processes of different
> applications/models on the same node (or on another node, on systems
> which provide shared memory across several nodes).
> In order to create the proposed MPI_COMM_SHM for MPI_COMM_WORLD in
> MPI_Init(), the MPI implementer would probably also use MPI_Comm_split()
> with an appropriate color. Thus, we should expose the corresponding
> functionality to the user, who can then create the communicator directly.
>
> For dynamic applications, an additional MPI_Comm_shm_attach()
> function would be useful in order to attach to already existing shared
> memory regions.
>
> Hubert
>
>
>
>> -----Original Message-----
>> From: Snir, Marc [mailto:snir at illinois.edu]
>> Sent: Thursday, February 17, 2011 4:21 AM
>> To: Hubert Ritzdorf
>> Cc: mpi3-hybridpm at lists.mpi-forum.org; Hubert Ritzdorf
>> Subject: Re: [Mpi3-hybridpm] SHMEM Interface in Draft External Interfaces
>>
>> Could you explain the problem with dynamic process management? The only
>> potential problem would be if dynamic process management is used to add
>> new processes on existing nodes; if the allocation is in full nodes, I see
>> no problem.
>>
>> Marc Snir
>> 4323 Siebel Center, 201 N Goodwin, IL 61801
>> Tel (217) 244 6568
>> Web http://www.cs.uiuc.edu/homes/snir
>>
>> On 2/16/11 11:32 AM, "Hubert Ritzdorf"
>> <hritzdorf at hpce.nec.com> wrote:
>>
>> Hi,
>>
>> I have two comments on the chapter "MPI and Shared Memory" in the
>> External Interfaces draft "ei-2-v1.0.pdf":
>>
>> (*) I am missing a function MPI_Shmem_flush(shm)
>>       which flushes the shared memory on non-cache-coherent systems
>>       (corresponding to the SHMEM functions shmem_udcflush() or
>>       shmem_udcflush_line()).
>>
>> (*) The concept of the intracommunicator MPI_COMM_SHM, containing all
>>       processes with which the local process can share memory,
>>       doesn't work for dynamic process management. I would propose
>>
>>          (*) to define a special constant MPI_COLOR_SHM and
>>          (*) to create the communicator of all processes with which the
>>              local process can share memory in communicator comm with the
>>              function
>>
>>                 MPI_Comm_split (comm, MPI_COLOR_SHM, key, comm_shm)
>>
>> Best regards
>>
>> Hubert Ritzdorf
>> NEC Germany
>
> _______________________________________________
> Mpi3-hybridpm mailing list
> Mpi3-hybridpm at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-hybridpm



