[Mpi-22] Memory footprint concern

Underwood, Keith D keith.d.underwood at [hidden]
Fri May 9 21:59:05 CDT 2008


I think Alexander hit on the answer to this:  how is the user to know
what is going to offer an implementation some advantage?  What may look
like it could offer a lot of benefit from a user's perspective (ready
send) could be considered low priority by the implementer.  Especially
in the constrained environments example, every implementer for a
constrained environment could have a different perception of what they
can make fast and small.  So, an application that runs fine on a Cell
SPE could blow up the code memory footprint on a Clearspeed.  This is
ultimately bad for the user.  That is to say nothing of the dynamic
memory footprint, which I am sure an implementer could trim with
appropriate guarantees.

 

I'm not sure official subsetting is the right answer or what the right
approach for getting that functionality would be, but I am pretty sure
that MPI on a Cell SPE is going to be a subset - whether it is official
or not :-)

 

Keith

 

________________________________

From: mpi-22-bounces_at_[hidden]
[mailto:mpi-22-bounces_at_[hidden]] On Behalf Of Richard
Treumann
Sent: Friday, May 09, 2008 3:48 PM
To: MPI 2.2
Subject: Re: [Mpi-22] Memory footprint concern

 

Hi Keith 

Say the MPI implementor for the Cell SPE version of MPI did a good job
of organizing his functionality (perhaps by making subsetting decisions
that work for the structure of his implementation). Say there is an MPI
application that only makes calls to the 6 essential MPI routines plus
MPI_Allreduce and MPI_Bcast. When that application is statically bound
to the MPI implementation by a smart linker that follows dependency
chains and only brings in needed subroutines, don't you get the desired
result? 
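
To make that concrete, here is a rough sketch of the kind of application
I have in mind (the program itself is illustrative, not taken from any
real code): it only touches MPI_Init, MPI_Comm_rank, MPI_Comm_size,
MPI_Bcast, MPI_Allreduce, and MPI_Finalize, so a dependency-following
static linker should pull in little beyond those entry points.

/* Illustrative minimal application: only a handful of MPI entry
   points appear, so a smart static linker following dependency
   chains brings in only the code those calls need. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, root_value = 0;
    double local = 1.0, global = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0)
        root_value = 42;
    /* Broadcast a value from rank 0 to everyone. */
    MPI_Bcast(&root_value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Sum one double per rank across the job. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("size=%d bcast=%d sum=%f\n", size, root_value, global);

    MPI_Finalize();
    return 0;
}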

What more do you get in this case from having the MPI standard dictate
the subsets and from asking the user to declare which subsets he will
need? If you leave it to the implementor to devise the subsets, and to
change them as some ways of slicing and dicing libmpi prove more useful
than others, that may be better than having the standard lock down the
dividing lines based on today's best guess. 

It does occur to me that the linker cannot tell if the MPI_Bcast call
will need the code that supports MPI_Bcast on intercommunicators even
though that is rarely used. That means that if the application has an
MPI_Bcast call, both the (probably needed) intracomm support and the
(probably not needed) intercomm support enter the footprint. 
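
To illustrate why, purely as a sketch and not any real implementation's
internals: the intra/inter decision is typically a run-time branch inside
the single MPI_Bcast entry point, so a dependency-following linker has to
treat both paths as reachable. The helper names below are hypothetical.

/* Sketch only: both broadcast paths are referenced from the one
   entry point, so neither can be dropped at link time. */
#include <mpi.h>

static int bcast_intracomm(void *b, int n, MPI_Datatype t, int root, MPI_Comm c)
{
    /* ... intracommunicator algorithm would live here ... */
    (void)b; (void)n; (void)t; (void)root; (void)c;
    return MPI_SUCCESS;
}

static int bcast_intercomm(void *b, int n, MPI_Datatype t, int root, MPI_Comm c)
{
    /* ... rarely used intercommunicator algorithm would live here ... */
    (void)b; (void)n; (void)t; (void)root; (void)c;
    return MPI_SUCCESS;
}

/* Hypothetical library entry point (named to avoid clashing with the
   real MPI_Bcast symbol). */
int sketch_MPI_Bcast(void *buf, int count, MPI_Datatype type, int root, MPI_Comm comm)
{
    int is_inter = 0;
    MPI_Comm_test_inter(comm, &is_inter);   /* run-time decision */
    if (is_inter)
        return bcast_intercomm(buf, count, type, root, comm);
    return bcast_intracomm(buf, count, type, root, comm);  /* common case */
}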

Dick 

Dick Treumann - MPI Team/TCEM 
IBM Systems & Technology Group
Dept 0lva / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363

mpi-22-bounces_at_[hidden] wrote on 05/09/2008 02:34:33 PM:

> Imagine, for example, MPI running on a Cell SPE.  Yes, it sounds 
> crazy, but people are working on it.  If you look at the state in 
> MPI-2 today, and assume it will grow proportionally with complexity,
> MPI-3 could be really nasty in that context.  So, while I agree that
> an arbitrarily large number of permutations is a terrible idea for 
> everyone involved (implementers wouldn't leverage all of the 
> options, ISVs wouldn't test them, people who write third party 
> libraries would have a huge headache dealing with the arbitrary 
> combination of features chosen by the application), it seems like it
> would be prudent to try to figure out how to provide some mechanisms
> in this direction. I don't know if this overlaps with your idea 
> about assertions or not, but they do seem to be related.
>  
> Keith
>  


