[Mpi3-hybridpm] SHMEM Interface in Draft External Interfaces
Ron Brightwell
rbbrigh at sandia.gov
Thu Feb 17 12:34:57 CST 2011
On 02/17/2011 09:18 AM, James Dinan wrote:
> Hi Hubert,
>
> I tend to agree with this - MPI_COMM_SHM seems like it's standing in for
> more sophisticated functionality to help the programmer build
> communicators based on system topology. There was similar feedback in
> the forum as well (although the discussion digressed into "what if we
> have DSM and how should MPI deal with that" which also raises
> interesting questions).
>
> On current and, even more so, future systems, the on-node topology is
> becoming more complex. We might, for example, want to map shared
> memory on a per-socket basis rather than across all sockets on the
> node, to avoid sending data over the intra-node interconnect.
This is somewhat where the proposal started. We tossed around the idea
of providing MPI_COMM_SOCKET, MPI_COMM_NODE, MPI_COMM_CACHE, etc. Given
that you can't assume any kind of affinity - processes may migrate
freely between sockets - the complexity of finding a portable way to
expose the hierarchy was significant. Our options seemed to be either
to say nothing about how the application discovers which processes can
share memory, or to say as little as possible, since anything beyond
that is just a potential optimization. People weren't happy with the
former, since it hurt portability.
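
For what it's worth, an application can already build these
communicators itself with plain MPI_Comm_split, if it is willing to use
platform-specific means to discover locality. A minimal sketch in C,
assuming a hypothetical cpu_to_socket() mapping (sched_getcpu() is
Linux-only, and the node color comes from a name hash that real code
would need to make collision-free). Note that a process that migrates
after the split leaves the socket communicator stale, which is exactly
the affinity problem above:

    #define _GNU_SOURCE
    #include <mpi.h>
    #include <sched.h>      /* sched_getcpu(), Linux-specific */
    #include <unistd.h>     /* gethostname() */

    /* Hypothetical, system-specific mapping from a CPU number to its
       socket, e.g. read from /sys/devices/system/cpu/cpuN/topology/
       physical_package_id on Linux. */
    extern int cpu_to_socket(int cpu);

    /* Derive an integer color from the host name so that processes on
       the same node choose the same color.  Real code would need a
       collision-free scheme. */
    static int node_color(void)
    {
        char name[256];
        unsigned int h = 0;
        gethostname(name, sizeof(name));
        for (int i = 0; name[i] != '\0'; i++)
            h = 31u * h + (unsigned char)name[i];
        return (int)(h & 0x7fffffff);  /* colors must be non-negative */
    }

    void build_locality_comms(MPI_Comm *node_comm, MPI_Comm *socket_comm)
    {
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* All processes on the same node land in one communicator. */
        MPI_Comm_split(MPI_COMM_WORLD, node_color(), rank, node_comm);

        /* Split further by the socket the process is on right now;
           only meaningful if processes are pinned. */
        MPI_Comm_split(*node_comm, cpu_to_socket(sched_getcpu()),
                       rank, socket_comm);
    }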
>
> MPI_Comm_shm_attach() is a little tricky - in the current proposal I
> don't think we've defined the shm handle to be portable. Can we make it
> valid to send this to another node-local process in order for them to
> call attach?
>
Is there a precedent in MPI for sending a handle to an MPI object
between ranks? This seems like a bad idea to me.
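
If the allocation were made collective over a node-local communicator
instead, no handle would ever need to travel between ranks: each
process gets its local pointer from the allocation call and can look up
any peer's slice by rank. A rough sketch of that alternative; the
MPI_Win_allocate_shared / MPI_Win_shared_query names and signatures are
assumptions for illustration, not part of the current draft:

    #include <mpi.h>

    /* Collective allocation over a node-local communicator.  The
       named calls are assumed interfaces, not part of the draft. */
    void shared_alloc_sketch(MPI_Comm node_comm)
    {
        double *mine;
        MPI_Win  win;

        /* Every process contributes 1024 doubles to one shared
           region and gets back a pointer to its own slice. */
        MPI_Win_allocate_shared(1024 * sizeof(double), sizeof(double),
                                MPI_INFO_NULL, node_comm, &mine, &win);

        /* Find rank 0's slice by asking the library, rather than by
           receiving any handle from rank 0. */
        MPI_Aint size;
        int      disp_unit;
        double  *base;
        MPI_Win_shared_query(win, 0, &size, &disp_unit, &base);

        /* ... direct load/store through mine and base ... */

        MPI_Win_free(&win);
    }

The only thing a process ever has to communicate is a rank, and the
library does the address translation, which sidesteps the
handle-portability question entirely.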
-Ron