[Mpi3-hybridpm] External interfaces chapter updates
Ron Brightwell
rbbrigh at sandia.gov
Mon Nov 1 11:20:10 CDT 2010
On 11/01/2010 07:11 AM, Douglas Miller wrote:
> [...]
>
> Page 17, section 12.5 Shared Memory. Rather than being collective,
> could these calls reflect the API of something like shm_open(), where
> they take a "key" parameter that uniquely identifies the segment of
> shared memory? Our experience with DCMF (where we did all shmem
> allocations in an ordered, synchronized "collective" manner) was that
> this approach is fraught with problems and restrictions. We're moving
> to an API that takes a string "key" so that we need not force such
> semantics. Are there any OS shmem APIs that require ordered,
> collective allocation? I know UPC does not use a "key", but wouldn't
> this allow for better implementations? Are there platforms where
> these semantics would NOT work? [probably a topic for our meeting]
Yes, this would be a good topic for discussion at the meeting. The
collective nature of shared memory allocation reflects the expected
usage model and offers several opportunities for optimization. I'd be
interested to hear more about the limitations you describe.
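
For reference, here is a minimal sketch of the key-based style Doug
describes, using POSIX shm_open(). The attach_segment() helper is a
made-up name, and error handling is reduced to returning NULL:

    /* Any rank that knows the key can attach, in any order; no
     * collective ordering is required. The key should begin with
     * '/' per POSIX convention. */
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    void *attach_segment(const char *key, size_t len)
    {
        /* O_CREAT is idempotent here: the first caller creates the
         * segment, later callers simply attach to it. */
        int fd = shm_open(key, O_CREAT | O_RDWR, 0600);
        if (fd < 0)
            return NULL;
        /* Setting the size to the same value on an existing segment
         * is harmless. */
        if (ftruncate(fd, len) < 0) { close(fd); return NULL; }
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        close(fd);
        return (p == MAP_FAILED) ? NULL : p;
    }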
>
> [another topic for the meeting] Should we say something about how to
> get a communicator of appropriate ranks for shmem allocation? Many
> platforms do not support global shared memory (only shmem local to a
> node), and I don't think there are any MPI mechanisms for testing or
> selecting ranks that are node-local.
>
I don't think we should say anything about this until we can say
something useful. As you point out, there is no standard mechanism for
discovering ranks that can allocate shared memory, yet every current
MPI implementation is able to do this internally. I think MPI needs
some way of exposing hierarchy, but this is a
hard problem. Until we have a solution, I think it's sufficient to say
that the discovery of the communicator for allocating shared memory is
outside the scope of MPI. This isn't very portable, but neither is the
existing MPI memory allocation capability.
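
For concreteness, one common but non-standard workaround is to split
MPI_COMM_WORLD using a hash of the hostname, so that ranks on the same
node end up in the same communicator. This is only a sketch: it
assumes POSIX gethostname() and does not verify that colliding hash
values really name the same node:

    #include <mpi.h>
    #include <string.h>
    #include <unistd.h>

    MPI_Comm node_local_comm(void)
    {
        char name[256];
        gethostname(name, sizeof(name));
        /* Cheap string hash; the color must be non-negative for
         * MPI_Comm_split, hence the mask. */
        int color = 0;
        for (size_t i = 0; i < strlen(name); i++)
            color = (color * 31 + (unsigned char)name[i]) & 0x7fffffff;
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        /* Ranks with the same color (same node) land in the same
         * communicator, ordered by their world rank. */
        MPI_Comm node;
        MPI_Comm_split(MPI_COMM_WORLD, color, rank, &node);
        return node;
    }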
-Ron