[Mpi3-bwcompat] Ticket 265 ready for review

Jeff Squyres jsquyres at [hidden]
Thu Feb 3 06:57:24 CST 2011



Just FYI, the debates surrounding how to do MPI_Count have been going around in circles for about 2 years.  Just ask Dave.  :-)

(MPI_Aint isn't really the right type -- it would need to be a new type: MPI_Count.)
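
A purely illustrative sketch of the distinction (nothing here is
standardized yet; the typedef is hypothetical):

    /* MPI_Aint is sized to hold an address/displacement; a count type
     * must be sized to hold a number of elements.  The two ranges
     * happen to coincide on common 64-bit systems, but the standard
     * can't rely on that, so a new type would be specified by its
     * meaning, e.g.: */
    typedef long long MPI_Count;   /* hypothetical: >= 64 bits */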

On Feb 1, 2011, at 11:24 AM, Geoffrey Paulsen wrote:

> Well, I agree with Rajeev, and would like to see both this approach AND
> a whole new set of MPI_<foo> functions with an MPI_Aint count (or some
> new, possibly implementation-specific, 64-bit integer MPI_Count
> datatype) argument for convenience.  I would think a new major MPI
> revision (such as MPI 3.0) is the time to introduce sweeping new
> classes of APIs; otherwise it'll be another ten years or so until the
> next major revision of the standard.
> 
> Of course providing backwards compatibility with MPI 2.2 (hopefully at
> the binary level of a particular implementation) is a priority as well.
> It would be nice if we could have both.
> 
> Geoff Paulsen
> Platform MPI - http://www.platform.com/cluster-computing/platform-mpi
> 
> -----Original Message-----
> From: mpi3-bwcompat-bounces_at_[hidden]
> [mailto:mpi3-bwcompat-bounces_at_[hidden]] On Behalf Of Jeff
> Squyres
> Sent: Friday, January 28, 2011 3:33 PM
> To: MPI-3 backwards compatibility WG
> Subject: Re: [Mpi3-bwcompat] Ticket 265 ready for review
> 
> Rajeev --
> 
> The problem with your proposal is that it very, very quickly becomes a
> slippery slope of making a new MPI_<Foo>() with an MPI_Count argument
> for every value of <Foo>.
> 
> The Forum has soundly rejected every form of that.  This proposal is a
> return to proposing just the absolute minimal functionality that is
> required for correctness.
> 
> 
> On Jan 28, 2011, at 12:07 PM, Fab Tillier wrote:
> 
>> Rajeev Thakur wrote on Fri, 28 Jan 2011 at 08:09:29
>> 
>>> Yes, it is not absolutely required, but becomes a convenience feature.
>> 
>> I believe Quincy will be bringing forth a proposal to address this,
>> but we wanted to get the minimum functionality to provide full support
>> for large datatypes captured in a single ticket without adding
>> convenience features.
>> 
>> -Fab
>> 
>>> 
>>> Rajeev
>>> 
>>> On Jan 27, 2011, at 11:33 AM, Fab Tillier wrote:
>>> 
>>>> Hi Rajeev,
>>>> 
>>>> Rajeev Thakur wrote on Wed, 26 Jan 2011 at 21:59:33
>>>> 
>>>>> OK, thanks for the explanation.
>>>>> 
>>>>> If count is encapsulated in derived datatypes, we might need new
>>>>> datatype constructor functions that take MPI_Count, or at least a new
>>>>> MPI_Type_contiguous. Let's say the user wants to send an array of size
>>>>> X integers, where X is some weird number greater than 2G. If there is a
>>>>> new Type_contiguous, we have to see how it affects Type_get_envelope
>>>>> and Type_get_contents.
>>>> 
>>>> We shouldn't need new datatype creator functions for this to work --
>>>> a user can nest types, for example by creating a struct type of
>>>> contiguous types to achieve the desired length (see the sketch
>>>> below).  In this case, MPI_Type_get_envelope/contents would still
>>>> work as currently defined.
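>>>> 
>>>> As a minimal sketch of that nesting (the 2^30-element chunk size and
>>>> the function name are my own illustrative choices, not part of the
>>>> ticket):
>>>> 
>>>>     #include <mpi.h>
>>>> 
>>>>     /* Describe "total" ints, where total may exceed 2^31, using
>>>>      * only existing MPI-2.2 constructors: a struct of (contiguous
>>>>      * blocks + leftover basic elements). */
>>>>     MPI_Datatype make_big_type(long long total)
>>>>     {
>>>>         long long chunk = 1024 * 1024 * 1024;      /* 2^30 ints */
>>>>         int nblocks  = (int)(total / chunk);
>>>>         int leftover = (int)(total % chunk);
>>>> 
>>>>         MPI_Datatype block, big;
>>>>         MPI_Type_contiguous((int)chunk, MPI_INT, &block);
>>>> 
>>>>         int          lens[2]  = { nblocks, leftover };
>>>>         MPI_Aint     disps[2] =
>>>>             { 0, (MPI_Aint)(total - leftover) * sizeof(int) };
>>>>         MPI_Datatype types[2] = { block, MPI_INT };
>>>>         MPI_Type_create_struct(2, lens, disps, types, &big);
>>>> 
>>>>         MPI_Type_commit(&big);
>>>>         MPI_Type_free(&block);   /* "big" keeps a reference */
>>>>         return big;              /* send/recv with count = 1 */
>>>>     }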
>>>> 
>>>> Does that make sense?
>>>> 
>>>> Do we want to capture this discussion as comments on the ticket?
>>>> 
>>>> Thanks for the feedback!
>>>> -Fab
>>>> 
>>>>> Rajeev
>>>>> 
>>>>> 
>>>>> On Jan 26, 2011, at 4:56 PM, Fab Tillier wrote:
>>>>> 
>>>>>> Hi Rajeev,
>>>>>> 
>>>>>> Rajeev Thakur wrote on Wed, 26 Jan 2011 at 14:31:06
>>>>>> 
>>>>>>> I wasn't at the last Forum meeting, so may have missed some of
>>>>>>> the background.
>>>>>>> 
>>>>>>> Is ticket #224 obsolete now? If so, you may want to indicate that
>>>>>>> in 224.
>>>>>> 
>>>>>> Sorry, I've resolved it as withdrawn, with a comment that it was
>>>>>> superseded by 265.
>>>>>> 
>>>>>>> Do MPI_Send/Recv etc. remain unchanged, i.e., no MPI_Count in them?
>>>>>>> If so, why do we need a new MPI_Get_count?
>>>>>> 
>>>>>> MPI_Send/Recv remain unchanged, and users are expected to create
>>>>>> derived datatypes to express data structures that are larger than
>>>>>> 2^31 basic elements.  Now that you point it out, though, I would
>>>>>> think MPI_Get_elements is sufficient, as MPI_Get_count should be
>>>>>> using the same datatype as was used in the operation that
>>>>>> transferred the data.  I'm hoping that Jeff, Quincy, or David can
>>>>>> chime in here and clarify why we need it.
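>>>>>> 
>>>>>> For reference, a minimal sketch of the distinction being drawn
>>>>>> (assuming a derived type "big" covering N basic ints, built as in
>>>>>> the earlier nesting example; the function name is illustrative):
>>>>>> 
>>>>>>     #include <mpi.h>
>>>>>> 
>>>>>>     void query_counts(void *buf, MPI_Datatype big)
>>>>>>     {
>>>>>>         MPI_Status status;
>>>>>>         int whole, elems;
>>>>>>         MPI_Recv(buf, 1, big, 0, 0, MPI_COMM_WORLD, &status);
>>>>>>         /* Counts whole instances of "big": small, fits an int. */
>>>>>>         MPI_Get_count(&status, big, &whole);     /* -> 1 */
>>>>>>         /* Counts *basic* elements: there is no way to return
>>>>>>          * N > 2^31 - 1 through an int out-parameter, which is
>>>>>>          * why this one needs attention in the ticket. */
>>>>>>         MPI_Get_elements(&status, big, &elems);
>>>>>>     }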
>>>>>> 
>>>>>>> Are the new "w" versions of the collectives specifically related
>>>>>>> to this ticket (i.e. to address the 64-bit count requirement) or
>>>>>>> are they a separate issue (i.e. a general need for array of
>>>>>>> datatypes instead of one datatype)?
>>>>>> 
>>>>>> By addressing the need for large counts via derived datatypes, we
>>>>>> effectively encapsulate the 'count' in the 'datatype' parameters.
>>>>>> As an example, if you want to gather different 'counts' from
>>>>>> different ranks where there is no common denominator, you would
>>>>>> need to derive a datatype for each source rank and specify those
>>>>>> individual datatypes.  That can't be done today: we can only
>>>>>> specify different counts, and we are limited by the 2^31 range of
>>>>>> the count fields.  So the missing 'w' functions allow the datatype
>>>>>> to be used to encapsulate the count.
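>>>>>> 
>>>>>> To make that concrete, a 'w' gather patterned after MPI-2.2's
>>>>>> existing MPI_Alltoallw (this exact prototype is only my sketch of
>>>>>> the idea, not the ticket's final text):
>>>>>> 
>>>>>>     /* One datatype (and byte displacement) per source rank, so
>>>>>>      * each per-rank "count" can be folded into its datatype. */
>>>>>>     int MPI_Gatherw(void *sendbuf, int sendcount,
>>>>>>                     MPI_Datatype sendtype, void *recvbuf,
>>>>>>                     int recvcounts[], int rdispls[],
>>>>>>                     MPI_Datatype recvtypes[], int root,
>>>>>>                     MPI_Comm comm);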
>>>>>> 
>>>>>>> Minor typos: Two of the prototypes for Scatterw say Scatterv
>>>>>>> instead.
>>>>>> 
>>>>>> Fixed, thanks.
>>>>>> 
>>>>>> -Fab
>>>>>> 
>>>>>>> 
>>>>>>> Rajeev
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> On Jan 26, 2011, at 3:38 PM, Fab Tillier wrote:
>>>>>>> 
>>>>>>>> Ok folks, not a whole lot of time before the meeting, so it
>>>>>>>> would be great if we could get everyone to read through the
>>>>>>>> ticket and make sure I didn't miss something.  I'd like to have
>>>>>>>> David Solt generate a PDF sometime next week, in time for me to
>>>>>>>> read it at the forum the following week (our time slot for this
>>>>>>>> is 'working lunch' on Tuesday).
>>>>>>>> 
>>>>>>>> https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/265
>>>>>>>> 
>>>>>>>> Thanks for your help,
>>>>>>>> -Fab
>>>>>>>> 


-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/



