[Mpi3-abi] For the April MPI Forum Meeting
Jeff Brown
jeffb at [hidden]
Thu Apr 24 13:41:10 CDT 2008
Do you have time to complete the Microsoft column?
At 12:14 PM 4/24/2008, Erez Haba wrote:
>Okay with me.
>
>From: mpi3-abi-bounces_at_[hidden]
>[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown
>Sent: Thursday, April 24, 2008 10:53 AM
>To: MPI 3.0 ABI working group; MPI 3.0 ABI working group
>Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting
>
>here's what I see on the wiki:
>Types            HP                     IBM      Microsoft    MPICH2       OpenMPI                       ABI
>MPI_Datatype     struct hpmp_dtype_s*   int      int          int          struct ompi_datatype_t*       TBD
>MPI_Op           struct hpmp_op_s*      int      int          int          struct ompi_op_t*             TBD
>MPI_Comm         struct hpmp_comm_s*    int      int          int          struct ompi_communicator_t*   TBD
>MPI_Errhandler   struct hpmp_err_s*     int      int          int          struct ompi_errhandler_t*     TBD
>
>Examples             HP                 IBM      Microsoft    MPICH2       OpenMPI                ABI
>Datatype MPI_CHAR    &hpmp_char         4        0x4c000101   0x4c000101   &ompi_mpi_char         TBD
>Op MPI_SUM           &hpmp_sum          enum 2   0x58000003   0x58000003   &ompi_mpi_op_sum       TBD
>MPI_COMM_WORLD       &hpmp_comm_world   enum 0   0x44000000   0x44000000   &ompi_mpi_comm_world   TBD
>Compare MPI_IDENT    0                  enum 0   0            0            enum 0                 TBD
>There's a lot more detail in the spreadsheet. To do this right, we
>need to cover the entire space. I'd prefer to stick with Excel
>(posted to the wiki) and add columns for the various implementations.
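>
>To make the type rows concrete, here is a hedged sketch (paraphrased,
>not verbatim, from the mpi.h files posted on the wiki) of how the same
>handle is declared in an int-based versus a pointer-based implementation:
>
>    /* MPICH2 / Microsoft style: handles are integer bit patterns,
>       so predefined handles are compile-time constants. */
>    typedef int MPI_Datatype;
>    #define MPI_CHAR ((MPI_Datatype)0x4c000101)
>
>    /* HP / OpenMPI style: handles are pointers to opaque structs,
>       so predefined handles are addresses of library globals. */
>    typedef struct ompi_datatype_t *MPI_Datatype;
>    extern struct ompi_datatype_t ompi_mpi_char;
>    #define MPI_CHAR (&ompi_mpi_char)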
>
>At 10:23 AM 4/24/2008, Erez Haba wrote:
>
>Jeff, are you aware that we started that table on the wiki pages?
>(Or do you just prefer it in Excel?)
>http://svn.mpi-forum.org/trac/mpi-forum-web/wiki/Compare_mpi_h
>
>On that same page you can also find the various mpi.h files.
>
>
>Thanks,
>.Erez
>
>-----Original Message-----
>From: mpi3-abi-bounces_at_[hidden]
>[mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown
>Sent: Thursday, April 24, 2008 9:18 AM
>To: MPI 3.0 ABI working group; MPI 3.0 ABI working group
>Subject: Re: [Mpi3-abi] For the April MPI Forum Meeting
>
>Just talked to our phone folks. Our trunks are down in New Mexico -
>so this is a bust. We are sort of a third world country out here.
>
>I don't think we have time to reschedule at this point.
>
>So ... if folks have the time please populate the matrix with your
>favorite MPI implementation and distribute to the group. We'll get
>into the guts of all this at the meeting.
>
>For my 5 minute briefing, I'll just show folks where we are and give
>a glimpse into the details.
>
>see you all at the meeting
>
>Jeff
>
>At 10:01 AM 4/24/2008, Terry Dontje wrote:
> >Am I the only one getting an "all circuits are busy" message from the
> >number below?
> >
> >--td
> >
> >Jeff Brown wrote:
> > > all,
> > >
> > > I scheduled a telecon to discuss status and get somewhat organized
> > > for the meeting:
> > >
> > > Thursday April 24, 10:00 MDT
> > > local number: 606-1201(6-1201)
> > > toll free number: 888 343-0702.
> > >
> > > I'll send out some slides for the 5 minute briefing for the group.
> > >
> > > I'm having a hard time finding time to devote to this, but I'll have
> > > a cut at the OpenMPI and LAMPI analysis prior to the telecon. We
> > > need someone to look at MPICH, and the vendor implementations need to
> > > be posted.
> > >
> > > Jeff
> > >
> > >
> > >
> > > At 10:03 AM 4/16/2008, Jeff Brown wrote:
> > >
> > >> Yes, it's time to put some cycles toward this. Let's start
> > >> populating the matrix and have a telecon toward the end of next
> > >> week. I'll schedule a WG working session at the meeting.
> > >>
> > >> I'll take a look at OpenMPI and LAMPI, the two primary MPI
> > >> implementations we use at LANL, and post to the wiki by the end of
> > >> the week. Others, please do the same for your MPI implementation
> > >> (especially the vendors). Overlap is OK.
> > >>
> > >> I'll send out specifics on the telecon. Let's shoot for Thursday
> > >> April 24, 9:00 A.M. MST.
> > >>
> > >> Jeff
> > >>
> > >> At 09:51 AM 4/16/2008, Narasimhan, Kannan wrote:
> > >>
> > >>> Folks,
> > >>>
> > >>> Are we planning on a WG update to report at the April 28-30 Forum
> > >>> meeting? We have started the process of identifying the mpi.h
> > >>> differences, but I don't think we have synthesized the data yet, or
> > >>> come to any conclusions/next steps... Or did I miss something here?
> > >>>
> > >>> Thanx!
> > >>> Kannan
> > >>>
> > >>> -----Original Message-----
> > >>> From: mpi3-abi-bounces_at_[hidden]
> > >>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Edric Ellis
> > >>> Sent: Monday, March 17, 2008 4:18 AM
> > >>> To: MPI 3.0 ABI working group
> > >>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March
> > >>>
> > >>>
> > >>> I'm not sure how best to express this, but there are a couple of
> > >>> things that occur to me that might be important:
> > >>>
> > >>> 1. The size of the handle types (compared with the size of a
> > >>> pointer, perhaps?)
> > >>>
> > >>> 2. Should we add some sort of table describing the current situation
> > >>> as to how applications pick up the value of e.g. MPI_COMM_WORLD? E.g.
> > >>> MPICH2 uses "#define MPI_COMM_WORLD 0x44000000", so that value is
> > >>> burned into the binary; whereas OpenMPI uses extern pointers - i.e.
> > >>> ompi_mpi_comm_world is in the initialized data section of libmpi.so,
> > >>> and the value is resolved at (dynamic) link time.
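> > >>>
> > >>> To illustrate both points, here's a minimal sketch (a hypothetical
> > >>> test program, not from any of the posted headers) that should build
> > >>> against any of these mpi.h files:
> > >>>
> > >>>     #include <mpi.h>
> > >>>     #include <stdio.h>
> > >>>
> > >>>     int main(int argc, char **argv)
> > >>>     {
> > >>>         MPI_Init(&argc, &argv);
> > >>>         /* Point 1: prints 4 where MPI_Comm is an int (MPICH2-style),
> > >>>            8 on LP64 systems where it is a pointer (OpenMPI-style). */
> > >>>         printf("sizeof(MPI_Comm) = %zu\n", sizeof(MPI_Comm));
> > >>>         /* Point 2: with MPICH2 the next line compiles to the literal
> > >>>            0x44000000 burned into the object file; with OpenMPI it
> > >>>            compiles to a relocation against ompi_mpi_comm_world that
> > >>>            the dynamic linker resolves at load time. */
> > >>>         MPI_Comm world = MPI_COMM_WORLD;
> > >>>         (void)world;
> > >>>         MPI_Finalize();
> > >>>         return 0;
> > >>>     }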
> > >>>
> > >>> Cheers,
> > >>>
> > >>> Edric.
> > >>>
> > >>>
> > >>>> -----Original Message-----
> > >>>> From: mpi3-abi-bounces_at_[hidden]
> > >>>> [mailto:mpi3-abi-bounces_at_[hidden]] On Behalf Of Jeff Brown
> > >>>> Sent: Thursday, March 13, 2008 10:11 PM
> > >>>> To: MPI 3.0 ABI working group; mpi3-abi_at_[hidden]
> > >>>> Subject: Re: [Mpi3-abi] Meeting notes from 10th March
> > >>>>
> > >>>> I propose a way we can make progress ...
> > >>>>
> > >>>> Let's start populating a matrix (Excel spreadsheet) with a column for
> > >>>> each MPI implementation, and rows for the various MPI datatypes,
> > >>>> constants, etc. where the internal implementations vary. I'll kick
> > >>>> it off for OpenMPI and send it out.
> > >>>>
> > >>>> The last column of the matrix can be "ABI" where we propose a common
> > >>>> approach across the implementations.
> > >>>>
> > >>>> A couple of driving principles:
> > >>>> 1. the ABI solution shouldn't negatively impact quality of
> > >>>> implementation
> > >>>> 2. minimize platform specific solutions
> > >>>>
> > >>>> I'd like to see if we can produce a single ABI that spans platforms.
> > >>>>
> > >>>> comments?
> > >>>>
> > >>>> Jeff
> > >>>>
> > >>>>
> > >>>> _______________________________________________
> > >>>> mpi3-abi mailing list
> > >>>> mpi3-abi_at_[hidden]
> > >>>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi
> > >>>>
> > >>> _______________________________________________
> > >>> mpi3-abi mailing list
> > >>> mpi3-abi_at_[hidden]
> > >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi
> > >>>
> > >>> _______________________________________________
> > >>> mpi3-abi mailing list
> > >>> mpi3-abi_at_[hidden]
> > >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi
> > >>>
> > >> _______________________________________________
> > >> mpi3-abi mailing list
> > >> mpi3-abi_at_[hidden]
> > >> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi
> > >>
> > >
> > >
> > > _______________________________________________
> > > mpi3-abi mailing list
> > > mpi3-abi_at_[hidden]
> > > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi
> > >
> >
> >_______________________________________________
> >mpi3-abi mailing list
> >mpi3-abi_at_[hidden]
> > http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi
>
>
>_______________________________________________
>mpi3-abi mailing list
>mpi3-abi_at_[hidden]
>http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-abi