[Mpi3-hybridpm] SHMEM Interface in Draft External Interfaces
hritzdorf at hpce.nec.com
Thu Feb 17 03:58:47 CST 2011
Yes, this is the problem.
I'm also working on a European project for climate modeling which uses coupled applications/models for simulation.
They may start with a single process, which dynamically spawns new parallel applications/models depending on the simulation. Coupled applications exchange 3D volume data and could put common chemistry data into shared memory. Thus, it makes sense to place processes of different applications/models on the same node (or on another node on systems which provide shared memory across several nodes).
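For context, a minimal sketch in C of how such a driver process might spawn a parallel model at run time with the standard MPI_Comm_spawn; the command name "chemistry_model" and the process count are illustrative assumptions, not part of any actual project setup:

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Comm model_comm;   /* intercommunicator to the spawned model */
        int      errcodes[8];

        MPI_Init(&argc, &argv);
        /* Spawn 8 processes of an illustrative coupled model; the
         * applications could then exchange 3D volume data over
         * model_comm. */
        MPI_Comm_spawn("chemistry_model", MPI_ARGV_NULL, 8,
                       MPI_INFO_NULL, 0, MPI_COMM_SELF,
                       &model_comm, errcodes);
        MPI_Finalize();
        return 0;
    }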
In order to create the proposed MPI_COMM_SHM for MPI_COMM_WORLD in MPI_Init(), the MPI implementer would probably also use MPI_Comm_split() with an appropriate color. Thus, we should provide the corresponding functionality to the user, so that he/she can create the communicator himself/herself.
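A minimal sketch of the proposed user-side call, in C; note that MPI_COLOR_SHM is the constant suggested in this thread, not an existing MPI definition:

    MPI_Comm comm_shm;
    int rank;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* Proposed usage: all processes that can share memory with the
     * local process would receive the same color and end up together
     * in comm_shm; rank as the key preserves the original ordering.
     * MPI_COLOR_SHM is the proposed constant from this thread. */
    MPI_Comm_split(MPI_COMM_WORLD, MPI_COLOR_SHM, rank, &comm_shm);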
For dynamic applications, an additional MPI_Comm_shm_attach() function would be useful in order to attach to already existing shared memory regions.
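Only the function name appears in this mail; a purely hypothetical prototype, with an argument list that is entirely an assumption, might look like:

    /* Hypothetical prototype -- only the name MPI_Comm_shm_attach
     * appears in this mail; the arguments below are assumptions. */
    int MPI_Comm_shm_attach(MPI_Comm  comm,      /* e.g. intercommunicator
                                                    from MPI_Comm_spawn */
                            MPI_Comm *comm_shm); /* out: processes sharing
                                                    memory with the local
                                                    process */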
> -----Original Message-----
> From: Snir, Marc [mailto:snir at illinois.edu]
> Sent: Thursday, February 17, 2011 4:21 AM
> To: Hubert Ritzdorf
> Cc: mpi3-hybridpm at lists.mpi-forum.org; Hubert Ritzdorf
> Subject: Re: [Mpi3-hybridpm] SHMEM Interface in Draft External Interfaces
> Could you explain the problem with dynamic process management? The only
> potential problem would be if dynamic process management is used to add
> new processes on existing nodes; if the allocation is in full nodes, I see
> no problem.
> Marc Snir
> 4323 Siebel Center, 201 N Goodwin, IL 61801
> Tel (217) 244 6568
> Web http://www.cs.uiuc.edu/homes/snir
> On 2/16/11 11:32 AM, "Hubert Ritzdorf"
> <hritzdorf at hpce.nec.com> wrote:
> I have two notes on the chapter "MPI and Shared Memory" in the External
> Interfaces draft "ei-2-v1.0.pdf":
> (*) I am missing a function MPI_Shmem_flush(shm)
>     which flushes the shared memory on non-cache-coherent systems
>     (corresponding to the SHMEM functions shmem_udcflush() or
>     shmem_udcflush_line()).
> (*) The concept of the intracommunicator MPI_COMM_SHM of all
>     processes the local process can share memory with doesn't work
>     for dynamic process management. I would propose
>     (*) to define a special constant MPI_COLOR_SHM and
>     (*) to create a communicator of all processes the local process
>         can share memory with in communicator comm, using the function
>         MPI_Comm_split (comm, MPI_COLOR_SHM, key, comm_shm)
> Best regards
> Hubert Ritzdorf
> NEC Germany