[Mpi3-rma] notes from 5pm Tuesday meeting @ July 2012 Forum

Jeff Hammond jhammond at alcf.anl.gov
Wed Jul 18 10:50:12 CDT 2012


has x changed below?

{
double x = 0.3;
x *= 7;
x /= 7;
}

do you really want to add a paragraph on how floating point arithmetic
works just so that users understand what you mean by "has not
changed"?

jeff

On Wed, Jul 18, 2012 at 10:25 AM, Underwood, Keith D
<keith.d.underwood at intel.com> wrote:
> Because that would really suck for the user? :-)  The == operator is quite sensible from the "has this changed" perspective... even if it is mathematically meaningless for floating point.
>
> Keith
>
>> -----Original Message-----
>> From: mpi3-rma-bounces at lists.mpi-forum.org On Behalf Of Jeff Hammond
>> Sent: Wednesday, July 18, 2012 11:22 AM
>> To: MPI 3.0 Remote Memory Access working group
>> Subject: Re: [Mpi3-rma] notes from 5pm Tuesday meeting @ July 2012
>> Forum
>>
>> i meant: make the user cast.  i think you underestimate how often scientists
>> falsely think that == acts on floating point numbers in a mathematically
>> sensible way.
>>
>> On Wed, Jul 18, 2012 at 10:18 AM, Underwood, Keith D
>> <keith.d.underwood at intel.com> wrote:
>> > Yes, the MPI library could certainly make that choice in how it used
>> > the hardware...
>> >
>> >> -----Original Message-----
>> >> From: mpi3-rma-bounces at lists.mpi-forum.org On Behalf Of Jeff Hammond
>> >> Sent: Wednesday, July 18, 2012 11:09 AM
>> >> To: MPI 3.0 Remote Memory Access working group
>> >> Subject: Re: [Mpi3-rma] notes from 5pm Tuesday meeting @ July 2012
>> >> Forum
>> >>
>> >> So your A==B comparison is at the level of bits?  Couldn't you just
>> >> cast double to long then?
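>> >>
>> >> (to be clear, by "cast" i mean reinterpreting the bits, not a value
>> >> conversion -- roughly the sketch below, where the helper name is just
>> >> illustrative:)
>> >>
>> >> #include <stdint.h>
>> >> #include <string.h>
>> >>
>> >> /* expose a double's bit pattern as a 64-bit integer so an integer
>> >>    CAS can do the bitwise comparison; assumes sizeof(double) ==
>> >>    sizeof(int64_t).  note that (long)x would convert the value,
>> >>    which is not what we want here. */
>> >> static int64_t double_bits(double x) {
>> >>     int64_t b;
>> >>     memcpy(&b, &x, sizeof b);
>> >>     return b;
>> >> }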
>> >>
>> >> Jeff
>> >>
>> >> On Wed, Jul 18, 2012 at 10:03 AM, Underwood, Keith D
>> >> <keith.d.underwood at intel.com> wrote:
>> >> > success = 0;
>> >> > while (!success) {
>> >> >     A = B;                 // B is remote, obtained using RMA
>> >> >     C = (A*D + E*F) / G;   // Everything local, all are doubles
>> >> >     if (B == A) { B = C; success = 1; }  // This compare-and-set is the CAS
>> >> > }
>> >> >
>> >> > And now you have done a relatively complicated thing on B atomically,
>> >> > though you will suffer if there is contention...
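>> >> >
>> >> > In MPI-3 terms, that loop might look roughly like the sketch below.
>> >> > Everything here is a placeholder (win is assumed already locked for
>> >> > passive-target access; target and disp locate B; D, E, F, G are the
>> >> > local operands), and the double travels through its 64-bit bit
>> >> > pattern since the comparison is bitwise:
>> >> >
>> >> > #include <mpi.h>
>> >> > #include <stdint.h>
>> >> > #include <string.h>
>> >> >
>> >> > static void update_remote_double(MPI_Win win, int target,
>> >> >                                  MPI_Aint disp, double D, double E,
>> >> >                                  double F, double G)
>> >> > {
>> >> >     int64_t expected, desired, observed, dummy = 0;
>> >> >     double A, C;
>> >> >     do {
>> >> >         /* atomic read of B's bits (fetch with MPI_NO_OP) */
>> >> >         MPI_Fetch_and_op(&dummy, &expected, MPI_INT64_T, target,
>> >> >                          disp, MPI_NO_OP, win);
>> >> >         MPI_Win_flush(target, win);
>> >> >         memcpy(&A, &expected, sizeof A);      /* A = B */
>> >> >         C = (A*D + E*F) / G;                  /* everything local */
>> >> >         memcpy(&desired, &C, sizeof desired);
>> >> >         /* B = C only if B still holds the bits we read */
>> >> >         MPI_Compare_and_swap(&desired, &expected, &observed,
>> >> >                              MPI_INT64_T, target, disp, win);
>> >> >         MPI_Win_flush(target, win);
>> >> >     } while (observed != expected);           /* retry on conflict */
>> >> > }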
>> >> >
>> >> > Keith
>> >> >
>> >> >> -----Original Message-----
>> >> >> From: mpi3-rma-bounces at lists.mpi-forum.org On Behalf Of Jeff Hammond
>> >> >> Sent: Tuesday, July 17, 2012 11:03 PM
>> >> >> To: MPI 3.0 Remote Memory Access working group
>> >> >> Subject: Re: [Mpi3-rma] notes from 5pm Tuesday meeting @ July 2012
>> >> >> Forum
>> >> >>
>> >> >> What do you mean by "arbitrary atomics"?  Are you proposing to do
>> >> >> something evil with bitwise comparison on floating-point numbers
>> >> >> to realize something other than the obvious meaning of CAS on a
>> >> >> float type?  If so, just say it, then do it the right way.
>> >> >>
>> >> >> We should consider how users will interpret CAS on doubles, for
>> >> >> example, and not what some low-level network person can dream up
>> >> >> to do with this operation.
>> >> >>
>> >> >> What architecture implements remote CAS on floating-point types in
>> >> >> hardware right now?  Who is proposing it?  I invoke the standard
>> >> >> (and a bit worn out) argument about implementing MPI-3 entirely in
>> >> >> hardware and say that unless someone knows how to do CAS for
>> >> >> doubles, it cannot be in the standard.
>> >> >>
>> >> >> And to be perfectly honest, I have utterly no idea what you really
>> >> >> mean right now, so it would be very helpful if you could be very
>> >> >> explicit about what you mean by CAS on floating point types.  What
>> >> >> measure are you using for comparison?  Does this have any reasonable
>> >> >> meaning in the context of floating point arithmetic?
>> >> >>
>> >> >> Jeff
>> >> >>
>> >> >> On Tue, Jul 17, 2012 at 9:40 PM, Underwood, Keith D
>> >> >> <keith.d.underwood at intel.com> wrote:
>> >> >> > Floating-point CAS is valid as a way to implement "if this hasn't
>> >> >> > changed, put the results of this operation in place".  It gives you
>> >> >> > a (moderately expensive, not very fair) way to build a form of
>> >> >> > arbitrary atomics.
>> >> >> >
>> >> >> > Keith
>> >> >> >
>> >> >> >> -----Original Message-----
>> >> >> >> From: mpi3-rma-bounces at lists.mpi-forum.org On Behalf Of Jeff Hammond
>> >> >> >> Sent: Tuesday, July 17, 2012 7:26 PM
>> >> >> >> To: MPI 3.0 Remote Memory Access working group
>> >> >> >> Subject: Re: [Mpi3-rma] notes from 5pm Tuesday meeting @ July
>> >> >> >> 2012 Forum
>> >> >> >>
>> >> >> >> I don't know that floating-point compare is well-defined.  You
>> >> >> >> really have to ask "if abs(x-y)<tolerance" and not "if x==y".
>> >> >> >>
>> >> >> >> I think only fixed-point types should be valid for CAS.
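>> >> >> >>
>> >> >> >> (i.e. something like the usual relative-tolerance test, with eps
>> >> >> >> chosen by the application:)
>> >> >> >>
>> >> >> >> #include <math.h>
>> >> >> >>
>> >> >> >> /* relative-tolerance comparison; eps is application-specific */
>> >> >> >> int nearly_equal(double x, double y, double eps) {
>> >> >> >>     return fabs(x - y) <= eps * fmax(fabs(x), fabs(y));
>> >> >> >> }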
>> >> >> >>
>> >> >> >> Jeff
>> >> >> >>
>> >> >> >> On Tue, Jul 17, 2012 at 6:14 PM, Dave Goodell
>> >> >> >> <goodell at mcs.anl.gov>
>> >> >> wrote:
>> >> >> >> > Discussed the complex types in COMPARE_AND_SWAP issue.  "Fortran
>> >> >> >> > Integer" category is permitted, but "Complex" category is not,
>> >> >> >> > primarily because of width.  Since "Fortran Integer" contains wide
>> >> >> >> > types, shouldn't we just permit "Complex" and "Floating point" as
>> >> >> >> > well?  Consensus was to stick with the existing text, which
>> >> >> >> > permits only "C integer, Fortran integer, Logical, Multi-language
>> >> >> >> > types, or Byte".
>> >> >> >> >
>> >> >> >> > Group review (esp. Jim & Sreeram):
>> >> >> >> > https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/187
>> >> >> >> >
>> >> >> >> > incorporate Jim's suggested change (Torsten):
>> >> >> >> > https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/271
>> >> >> >> >
>> >> >> >> > we think we are unaffected, but need a second check (Jim):
>> >> >> >> > https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/278
>> >> >> >> >
>> >> >> >> > Double-check that C++ is not referenced in the RMA chapter (Pavan):
>> >> >> >> > https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/281
>> >> >> >> >
>> >> >> >> > Needs review (Dave):
>> >> >> >> > https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/300
>> >> >> >> >
>> >> >> >> > Think unaffected, but slim chance of Rput/Rget being affected (Pavan):
>> >> >> >> > https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/317
>> >> >> >> >
>> >> >> >> > Need to check implementation of various (4?) "flush is non-local"
>> >> >> >> > changes (Dave):
>> >> >> >> > https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/270
>> >> >> >> >
>> >> >> >> > Need to check disp_unit change (Jim & Sreeram):
>> >> >> >> > https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/284
>> >> >> >> >
>> >> >> >> > After the above items have all been dealt with, all chapter
>> >> >> >> > committee members should re-read the whole chapter in the *clean*
>> >> >> >> > document (the one _without_ the changebars and colored text) to
>> >> >> >> > look for obvious typos and inconsistencies.
>> >> >> >> >
>> >> >> >> > -Dave
>> >> >> >> >



-- 
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond


