[Mpi-forum] MPI_Count
Hubert Ritzdorf
hritzdorf at hpce.nec.com
Tue Jan 26 11:50:08 CST 2010
I can remember that we had Fortran users and C users
who had problems with 32-bit count arguments on NEC SX systems.
The Fortran users used the extended-width option
of the Fortran compiler and the corresponding MPI library
to overcome the problem.
The C users used the same (extended-width) version of the MPI library
and an extension in our MPI include file (there is a typedef for
the ints in the function interfaces).
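
(For illustration, a minimal sketch of what such an include-file
typedef could look like; the build flag and type names here are
hypothetical, not NEC's actual header:)

    /* Hypothetical sketch: the prototypes use a typedef instead of a
       bare int, so an extended-width build widens every count argument
       in one place. MPI_EXT_WIDTH and MPI_Cnt are made-up names. */
    #ifdef MPI_EXT_WIDTH
    typedef long long MPI_Cnt;   /* extended-width (64-bit) counts */
    #else
    typedef int MPI_Cnt;         /* standard 32-bit counts */
    #endif

    int MPI_Send(void *buf, MPI_Cnt count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm);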
Hubert
-----Original Message-----
From: mpi-forum-bounces at lists.mpi-forum.org [mailto:mpi-forum-bounces at lists.mpi-forum.org] On Behalf Of Rajeev Thakur
Sent: Monday, January 25, 2010 12:52 AM
To: 'Main MPI Forum mailing list'
Subject: Re: [Mpi-forum] MPI_Count
I don't know who wants to communicate >2GB at a time, but HP seemed to have enough users wanting it to cause Kannan to create and
pursue ticket #117 until he moved to a different project.
Rajeev
> -----Original Message-----
> From: mpi-forum-bounces at lists.mpi-forum.org
> [mailto:mpi-forum-bounces at lists.mpi-forum.org] On Behalf Of
> Underwood, Keith D
> Sent: Sunday, January 24, 2010 5:20 PM
> To: Main MPI Forum mailing list; mpi3 at lists.mpi-forum.org
> Subject: Re: [Mpi-forum] MPI_Count
>
> > Still seems a little odd to have a solution for I/O functions
> > and not for communication functions (the original ticket was for
> > communication:
> > https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/117).
> > If more than 2 GB can be written from a single process in a
> > single call, that much data can also be communicated. If it is
> > tedious to break up the write into smaller writes, the same
> > applies to communication.
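> > (An illustrative sketch of that chunking, not from the ticket;
> > the helper name and the use of a contiguous MPI_CHAR buffer are
> > assumptions:)
> >
> >     #include <limits.h>
> >     #include <stddef.h>
> >     #include <mpi.h>
> >
> >     /* Send a buffer larger than INT_MAX bytes by splitting it
> >        into chunks that fit the int count argument; the receiver
> >        has to post matching chunked receives, which is where the
> >        tedium comes in. */
> >     static void send_large(const char *buf, size_t nbytes,
> >                            int dest, int tag, MPI_Comm comm)
> >     {
> >         while (nbytes > 0) {
> >             int chunk = nbytes > INT_MAX ? INT_MAX : (int) nbytes;
> >             MPI_Send((void *) buf, chunk, MPI_CHAR,
> >                      dest, tag, comm);
> >             buf += chunk;
> >             nbytes -= (size_t) chunk;
> >         }
> >     }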
>
> "can be" and "will be" are very different phrases. The
> feeling within the group that was there Thursday morning was
> that the "way 64 bit is done" (TM) is to replicate all of the
> function calls. The group felt like a compelling case had
> been made for I/O, because you would want to write all of
> your memory to file (e.g. checkpointing) and memory per node
> is growing beyond 2 GB. However, it was much harder to make
> the case that you really wanted to send that much data from a
> contiguous location to any other node. In fact, someone made
> the statement "all of the application users understand that
> you can't do that and still scale on any network in the
> foreseeable future".
>
> Your interoperability comments may be a more motivating
> example, but as of Thursday morning, it was clear that there
> was no consensus among the forum members who were there -
> especially given the pain that is going to be involved in
> fixing this for the general case.
>
> Keith
>