[Mpi-22] determine if running in a heterogeneous environment

Richard Treumann treumann at [hidden]
Mon Mar 3 11:01:12 CST 2008


The real concern here is:

Does the MPI implementation need to provide data conversion services
between any pair of tasks in the communicator?  If I have some tasks on a
slow node and others on a fast one, we could debate whether that is
heterogeneous.

I think the proposal has merit, but we need to be specific that
"heterogeneous" here refers only to data representation conversion in the
data transfer routines.

Dick Treumann  -  MPI Team/TCEM
IBM Systems & Technology Group
Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846         Fax (845) 433-8363

mpi-22-bounces_at_[hidden] wrote on 03/03/2008 11:09:13 AM:

> I think that coming up with a precise definition for "heterogeneous"
> could be problematic...
>
>
> On Mar 2, 2008, at 7:01 AM, Dries Kimpe wrote:
>
> > Below: a proposal for MPI-2.2 that adds support for querying whether a
> > communicator is heterogeneous.
> >
> >
> > ----------------------------------------------------------------------------
> > Proposal:
> >
> > ----------------------------------------------------------------------------
> >
> > Provide some method to determine if a communicator is heterogeneous.
> >
> > There are a number of different possibilities to provide this
> > capability:
> >
> > 1) Provide a predefined integer-valued attribute with which a
> > communicator can be queried (for example, MPI_HETEROGENEOUS).
> >
> > -- or --
> >
> > 2) Create a separate function:
> > MPI_Comm_is_heterogeneous(MPI_Comm comm, int *flag);
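> >
> > A minimal sketch of how the attribute-style query (option 1) might look
> > from the application side. Since MPI_HETEROGENEOUS is only proposed
> > here, the sketch queries the existing predefined attribute MPI_TAG_UB
> > so that it actually compiles; the proposed key would be read the same
> > way:
> >
> > #include <mpi.h>
> > #include <stdio.h>
> >
> > int main(int argc, char **argv)
> > {
> >     int *val, flag;
> >     MPI_Init(&argc, &argv);
> >     /* If the proposal were accepted, MPI_TAG_UB below would become
> >      * MPI_HETEROGENEOUS, and *val would be nonzero on a heterogeneous
> >      * communicator. */
> >     MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &val, &flag);
> >     if (flag)
> >         printf("attribute set, value %d\n", *val);
> >     MPI_Finalize();
> >     return 0;
> > }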
> >
> >
> > ----------------------------------------------------------------------------
> > Rationale:
> >
> > ----------------------------------------------------------------------------
> >
> > Currently, MPI-2.0 does not provide a portable way for an application
> > to determine if it is running in a heterogeneous environment.
> >
> > Some applications are not written with heterogeneous environments in
> > mind; they do not (always) use correct datatype descriptions when
> > sending or receiving data, but instead treat the data as an array of
> > bytes, relying on all datatypes having the same memory representation
> > on both sender and receiver. Most often, this is done to avoid the
> > added complexity of creating the correct datatypes.
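> >
> > To make that pattern concrete, here is an illustrative sketch (the
> > struct and its fields are invented for the example) of the byte-array
> > shortcut next to the portable derived-datatype route:
> >
> > #include <mpi.h>
> > #include <stddef.h>
> >
> > struct particle { double pos[3]; int id; };
> >
> > /* Portable route: describe the struct with a derived datatype.
> >  * (Trailing padding is ignored here for brevity.) */
> > static MPI_Datatype make_particle_type(void)
> > {
> >     MPI_Datatype t;
> >     int          lens[2]   = { 3, 1 };
> >     MPI_Aint     displs[2] = { offsetof(struct particle, pos),
> >                                offsetof(struct particle, id) };
> >     MPI_Datatype types[2]  = { MPI_DOUBLE, MPI_INT };
> >     MPI_Type_create_struct(2, lens, displs, types, &t);
> >     MPI_Type_commit(&t);
> >     return t;
> > }
> >
> > /* Byte-array shortcut described above -- valid only when both sides
> >  * share one memory representation:
> >  *     MPI_Send(&p, sizeof p, MPI_BYTE, dest, tag, comm);
> >  * Portable equivalent using the derived type:
> >  *     MPI_Send(&p, 1, particle_type, dest, tag, comm);          */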
> >
> > Although the pack/unpack functions provide an alternative, they do not
> > have the same memory requirements (they need an extra buffer to receive
> > the data) or performance characteristics (they need an extra copy).
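> >
> > For comparison, the sending side of the pack/unpack route might look
> > like this (struct particle and particle_type as in the previous
> > sketch; note the staging buffer and the extra copy made by MPI_Pack):
> >
> > static void send_packed(struct particle *p, MPI_Datatype particle_type,
> >                         int dest, int tag, MPI_Comm comm)
> > {
> >     char buf[256];   /* the extra buffer */
> >     int  pos = 0;
> >     /* MPI_Pack copies *p into buf in a portable representation. */
> >     MPI_Pack(p, 1, particle_type, buf, (int)sizeof buf, &pos, comm);
> >     MPI_Send(buf, pos, MPI_PACKED, dest, tag, comm);
> > }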
> >
> > Adding a way for an application to test if it is currently running in
> > a heterogeneous environment enables it to take appropriate action
> > (aborting, switching to type-safe functions, ...).
> >
> > The MPI implementation -- if it supports heterogeneous environments --
> > already needs to determine this information, because it is responsible
> > for performing type conversions in heterogeneous communicators.
> >
> > Providing this information on a per-communicator basis, instead of
> > returning it for the whole MPI universe / MPI_COMM_WORLD, enables both
> > the MPI implementation and the user application to avoid overhead in
> > the case of a homogeneous communicator that is a subset of an
> > inhomogeneous MPI_COMM_WORLD.
> >
> >
> > ----------------------------------------------------------------------------
> > Alternative ways to get the same information without modifying the
> > standard:
> >
> > ----------------------------------------------------------------------------
> >
> > * Store information about the architecture when compiling the
> > application; compare this information at runtime with all other
> > members of the communicator.
> >
> > * (not 100% correct): calculate at run time the sizes of a number of
> > elementary datatypes and compare them with the other ranks in the
> > communicator (see the sketch after this list).
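> >
> > A sketch of that second check using only standard MPI calls (the
> > function name is invented; as noted, matching sizes do not guarantee
> > matching representations, so a crude byte-order probe is included):
> >
> > #include <mpi.h>
> >
> > /* Returns 1 if a few elementary type sizes and the byte order agree
> >  * on every rank of comm, 0 otherwise.  Using MPI_LONG (a typed
> >  * transfer) keeps the reduction itself correct even across
> >  * heterogeneous systems. */
> > static int probably_homogeneous(MPI_Comm comm)
> > {
> >     unsigned one = 1;
> >     long local[4] = { (long)sizeof(int), (long)sizeof(long),
> >                       (long)sizeof(double),
> >                       *(unsigned char *)&one };  /* 1 on little endian */
> >     long min[4], max[4];
> >
> >     MPI_Allreduce(local, min, 4, MPI_LONG, MPI_MIN, comm);
> >     MPI_Allreduce(local, max, 4, MPI_LONG, MPI_MAX, comm);
> >
> >     for (int i = 0; i < 4; i++)
> >         if (min[i] != max[i])
> >             return 0;   /* some rank differs */
> >     return 1;
> > }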
> >
>
>
> --
> Jeff Squyres
> Cisco Systems
>




