[Mpi3-hybridpm] SHMEM Interface in Draft External Interfaces

William Gropp wgropp at illinois.edu
Tue Feb 22 16:43:26 CST 2011


I think that we should think more expansively - there are likely to be a
number of different constraints that we need to satisfy, particularly 5 or
10 years from now.  So I'd like to avoid both predefined names (like
MPI_COMM_SHM) and a routine that creates a communicator based on a single
constraint.  One simple extension would be

MPI_Comm_create_shm(MPI_Comm incomm, MPI_Info properties, MPI_Comm *outcomm)

with the desired properties (e.g., samenode=true, samesocket=true,
samel3cache=true, sametile=true, ...) given in the MPI_Info argument.
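
For concreteness, a rough usage sketch might look like the following.  The
routine name and the info keys are just the proposal above, not standard
MPI; only the MPI_Info and MPI_Comm_free calls exist today.

    MPI_Info props;
    MPI_Comm node_comm;

    /* Describe the desired grouping via (proposed, non-standard) info keys. */
    MPI_Info_create(&props);
    MPI_Info_set(props, "samenode", "true");

    /* Proposed routine: create a communicator containing only those
       processes of incomm that satisfy the requested properties. */
    MPI_Comm_create_shm(MPI_COMM_WORLD, props, &node_comm);

    MPI_Info_free(&props);
    /* ... node-local shared-memory work on node_comm ... */
    MPI_Comm_free(&node_comm);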

Bill

On Feb 22, 2011, at 2:24 PM, James Dinan wrote:

> Hi Ron,
>
> On 2/17/11 12:34 PM, Ron Brightwell wrote:
>> On 02/17/2011 09:18 AM, James Dinan wrote:
>>> I tend to agree with this - MPI_COMM_SHM seems like it's standing in
>>> for more sophisticated functionality to help the programmer build
>>> communicators based on system topology. There was similar feedback in
>>> the forum as well (although the discussion digressed into "what if we
>>> have DSM and how should MPI deal with that", which also raises
>>> interesting questions).
>>>
>>> On future and even current systems the on-node topology is becoming
>>> more complex. We could, for example, want to map shared memory on a
>>> per-socket basis rather than across all sockets on the node to avoid
>>> sending data over the intra-node interconnect.
>>
>> This is somewhat where the proposal started. We tossed around the idea
>> of providing MPI_COMM_SOCKET, MPI_COMM_NODE, MPI_COMM_CACHE, etc.
>> Given that you can't assume any kind of affinity - processes may move
>> freely between sockets - the complexity of figuring out a portable way
>> to expose hierarchy was significant. Our options seemed to be either to
>> say nothing about how the application discovers which processes can
>> share memory, or to say only as little as possible, since anything else
>> is just a potential optimization. People weren't happy with the former
>> since it impacted portability.
>
> Should we consider a function like MPI_Comm_split_shm(MPI_Comm parent,
> MPI_Comm *comm_shm) since a constant MPI_COMM_SHM will not be able to
> capture dynamic processes joining/leaving the computation?
>
>  ~Jim.
> _______________________________________________
> Mpi3-hybridpm mailing list
> Mpi3-hybridpm at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-hybridpm
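
For comparison, a rough sketch of the split-style suggestion above - the
routine is only Jim's proposed signature, not standard MPI - might be used
like this:

    MPI_Comm shm_comm;

    /* Suggested routine: split the parent communicator into groups of
       processes that can share memory with one another.  Because it takes
       an arbitrary parent communicator rather than being a predefined
       constant, it can also be reapplied after dynamic processes join or
       leave the computation. */
    MPI_Comm_split_shm(MPI_COMM_WORLD, &shm_comm);

    /* ... node-local shared-memory setup on shm_comm ... */
    MPI_Comm_free(&shm_comm);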

William Gropp
Deputy Director for Research
Institute for Advanced Computing Applications and Technologies
Paul and Cynthia Saylor Professor of Computer Science
University of Illinois Urbana-Champaign






