<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=Content-Type content="text/html; charset=us-ascii">
<META content="MSHTML 6.00.2900.3354" name=GENERATOR></HEAD>
<BODY>
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2>Dear Dick,</FONT></SPAN></DIV>
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2></FONT></SPAN> </DIV>
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2>Thank you. Keeping the send buffer mapped in the
sender's address space is certainly legal. I wonder, however, what will happen
to a sender that has, say, thousands of outstanding MPI_Isends, all of which
have mapped their contents into the receiver's memory (see the sketch below).
Every such mapping consumes kernel memory and possibly hardware resources. Will
this scale, especially on a microkernel with very little memory to play with?
I'd rather map the send buffer out.</FONT></SPAN></DIV>
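<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2></FONT></SPAN>&nbsp;</DIV>
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2>To make the scenario concrete, here is a minimal sketch of
the usage pattern I have in mind; the request count and buffer size are made up
for illustration, and the point is only that every pending request would keep
its own mapping alive until the MPI_Waitall:</FONT></SPAN></DIV>
<PRE>
/* Sketch only: thousands of concurrently outstanding MPI_Isend requests.
 * If the implementation keeps each send buffer mapped into the receiver's
 * address space until the matching wait, it must hold N_REQ mappings (and
 * any associated kernel or hardware resources) alive at the same time.
 * N_REQ and BUF_LEN are illustrative values, not from the ticket. */
#include &lt;mpi.h&gt;
#include &lt;stdlib.h&gt;

#define N_REQ   4096
#define BUF_LEN 1024

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *bufs = malloc((size_t)N_REQ * BUF_LEN * sizeof(double));
    MPI_Request reqs[N_REQ];

    if (rank == 0) {
        for (int i = 0; i &lt; N_REQ; i++)   /* all N_REQ sends in flight at once */
            MPI_Isend(bufs + (size_t)i * BUF_LEN, BUF_LEN, MPI_DOUBLE,
                      1, i, MPI_COMM_WORLD, &reqs[i]);
        MPI_Waitall(N_REQ, reqs, MPI_STATUSES_IGNORE);
    } else if (rank == 1) {
        for (int i = 0; i &lt; N_REQ; i++)
            MPI_Recv(bufs + (size_t)i * BUF_LEN, BUF_LEN, MPI_DOUBLE,
                     0, i, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    free(bufs);
    MPI_Finalize();
    return 0;
}
</PRE>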
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2></FONT></SPAN> </DIV>
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2>As for doing the byte swapping, or for that matter any
other reversible transformation, on the send side versus the receive side: a
single receiver will certainly do it only once. Note, however, that proposal
#46 covers collectives as well. If there are several receivers in a Bcast-type
operation, every one of them will perform the transformation, instead of the
sender doing it once, there and back again, in its own buffer. The total
expense of the receive-side transformation is therefore higher as soon as more
than two receivers are involved (see the sketch below).</FONT></SPAN></DIV>
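<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2></FONT></SPAN>&nbsp;</DIV>
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2>As a rough illustration (my own sketch, not text from
either ticket), here are the two strategies for 32-bit data; the send-side
variant touches the sender's buffer twice in total, while the receive-side
variant costs one pass in every receiver:</FONT></SPAN></DIV>
<PRE>
/* Sketch: byte-swap cost for a broadcast to several receivers.
 * Send-side, in place: swap before transmitting, swap back afterwards
 * (two passes in total, but this needs write access to the send buffer,
 * which the proposals in #45/#46 would rule out for the implementation).
 * Receive-side: every receiver swaps its own copy (one pass each). */
#include &lt;stddef.h&gt;

/* Reverse the byte order of each 32-bit word in place. */
static void swap32(unsigned char *buf, size_t nbytes)
{
    for (size_t i = 0; i + 3 &lt; nbytes; i += 4) {
        unsigned char t;
        t = buf[i];     buf[i]     = buf[i + 3]; buf[i + 3] = t;
        t = buf[i + 1]; buf[i + 1] = buf[i + 2]; buf[i + 2] = t;
    }
}

/* Send-side variant: one swap to receiver order, one swap back. */
void bcast_swap_at_sender(unsigned char *sendbuf, size_t nbytes)
{
    swap32(sendbuf, nbytes);          /* pass 1: convert in place      */
    /* ... transmit sendbuf to all receivers ... */
    swap32(sendbuf, nbytes);          /* pass 2: restore sender order  */
}

/* Receive-side variant: executed once by each receiver. */
void bcast_swap_at_receiver(unsigned char *recvbuf, size_t nbytes)
{
    /* ... bytes arrive in sender order ... */
    swap32(recvbuf, nbytes);          /* one pass per receiver         */
}
</PRE>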
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2></FONT></SPAN> </DIV>
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2>A few more scenarios to
consider:</FONT></SPAN></DIV>
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2></FONT></SPAN> </DIV>
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2>3) Imagine send buffers have to be pinned in memory. To
avoid doing this too often, these registrations are normally cached. If more
than one send may use a buffer or, for that matter, overlapping portions of the
same buffer, say from different threads, the lookup-and-pin has to be made
atomic. This further complicates the implementation and puts a potentially
costly mutual exclusion primitive on the critical path (see the sketch
below).</FONT></SPAN></DIV>
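<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2>A minimal sketch of what I mean; the registration cache
here is entirely hypothetical (a single-entry cache with invented names, not
any particular interconnect API), but it shows where the lock ends
up:</FONT></SPAN></DIV>
<PRE>
/* Sketch only: a hypothetical pin-down (registration) cache.
 * The cache and its helper names are invented for illustration; the point
 * is that with concurrent sends over overlapping buffer regions the
 * lookup-and-pin must be atomic, so a mutex (or an equivalent) lands on
 * the critical path of every send. */
#include &lt;pthread.h&gt;
#include &lt;stddef.h&gt;

/* One-entry "cache", just enough to show where the lock has to sit. */
typedef struct { void *addr; size_t len; int refcnt; int valid; } reg_entry;

static reg_entry       cache;
static pthread_mutex_t reg_lock = PTHREAD_MUTEX_INITIALIZER;

static void pin_pages(void *addr, size_t len)
{
    (void)addr; (void)len;   /* mlock(), ibv_reg_mr(), etc. would go here */
}

reg_entry *reg_cache_lookup_and_pin(void *addr, size_t len)
{
    pthread_mutex_lock(&reg_lock);               /* serializes every send  */
    if (!(cache.valid && cache.addr == addr && cache.len >= len)) {
        pin_pages(addr, len);                    /* miss: pin and (re)fill */
        cache.addr = addr; cache.len = len; cache.refcnt = 0; cache.valid = 1;
    }
    cache.refcnt++;                              /* hit or fresh entry     */
    pthread_mutex_unlock(&reg_lock);
    return &cache;
}
</PRE>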
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2>4) I wonder what a const modifier will do for a buffer
identified by MPI_BOTTOM and/or a derived datatype, possibly with holes in it
(see the sketch below). How will this square with the C language sequence
association rules?</FONT></SPAN></DIV>
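<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2>To make the question concrete, here is a small sketch of
my own (not from the ticket) of the kind of buffer I mean: the buffer argument
is MPI_BOTTOM, and the data actually sent lives at absolute addresses recorded
in a derived datatype, with a hole between the pieces:</FONT></SPAN></DIV>
<PRE>
/* Sketch: a send "buffer" identified by MPI_BOTTOM plus a derived datatype
 * built from absolute addresses, with a hole between the two pieces.
 * What would a const qualifier on the buffer argument promise here, given
 * that the argument itself is just MPI_BOTTOM? */
#include &lt;mpi.h&gt;

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    double a[4];
    int    b[8];        /* unrelated memory (a "hole") lies between a and b */
    for (int i = 0; i &lt; 4; i++) a[i] = i;
    for (int i = 0; i &lt; 8; i++) b[i] = i;

    MPI_Aint addr_a, addr_b;
    MPI_Get_address(a, &addr_a);
    MPI_Get_address(b, &addr_b);

    int          blens[2] = { 4, 8 };
    MPI_Aint     disps[2] = { addr_a, addr_b };   /* absolute displacements */
    MPI_Datatype types[2] = { MPI_DOUBLE, MPI_INT };
    MPI_Datatype dtype;
    MPI_Type_create_struct(2, blens, disps, types, &dtype);
    MPI_Type_commit(&dtype);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        MPI_Send(MPI_BOTTOM, 1, dtype, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)  /* rank 1 built dtype from its own addresses */
        MPI_Recv(MPI_BOTTOM, 1, dtype, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Type_free(&dtype);
    MPI_Finalize();
    return 0;
}
</PRE>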
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2>5) Note also that if both #45 and #46 are introduced,
there will be no way to retract this later, even with the help of
MPI_INIT_ASSERTED, should we decide to introduce an assertion like
MPI_NO_SEND_BUFFER_READ_ACCESS. The const modifier from #46 would make such an
assertion syntactically useless.</FONT></SPAN></DIV>
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2>6) Finally, what will happen in the Fortran interface,
where copy-in/copy-out may occur at the MPI subroutine boundary for array
sections? If more than one send is allowed, the application can rather easily
exhaust virtual memory with a couple of sufficiently long
vectors.</FONT></SPAN></DIV>
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2></FONT></SPAN> </DIV>
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2>I'm looking forward to your and everybody else's opinions
on these scenarios.</FONT></SPAN></DIV>
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2></FONT></SPAN> </DIV>
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2>Best regards.</FONT></SPAN></DIV>
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2></FONT></SPAN> </DIV>
<DIV dir=ltr align=left><SPAN class=189302322-08122008><FONT face=Arial
color=#0000ff size=2>Alexander</FONT></SPAN></DIV><BR>
<DIV class=OutlookMessageHeader lang=en-us dir=ltr align=left>
<HR tabIndex=-1>
<FONT face=Tahoma size=2><B>From:</B> mpi-22-bounces@lists.mpi-forum.org
[mailto:mpi-22-bounces@lists.mpi-forum.org] <B>On Behalf Of </B>Richard
Treumann<BR><B>Sent:</B> Monday, December 08, 2008 10:56 PM<BR><B>To:</B> MPI
2.2<BR><B>Subject:</B> Re: [Mpi-22] please review - Send Buffer Access (ticket
#45)<BR></FONT><BR></DIV>
<DIV></DIV>
<P>On a single OS under AIX, IBM MPI does map the portion of send-side memory
that holds the send buffer into the receiver's address space. We do this for both
contiguous and non-contiguous buffers. The mapping lasts just long enough for
the receive-side CPU to do a memory copy from the send buffer to the receive buffer.
(see patent <B><FONT size=4>7,392,256</FONT></B><FONT
size=4>)</FONT><BR><BR>This optimization does not have any effect on the
addressability of the send buffer by the sending task. In our case, at least,
this optimization does not argue against the proposal.<BR><BR>Also, Robert and I
had a chat about the byte swap trick and it seems it should be both semantically
cleaner and require fewer CPU cycles to do it in the receive buffer. In the
receive buffer there is no question that the application must wait for the
communication to complete, and the swap only needs to be done once (the message
flows in with bytes in sender order and the MPI_Recv does one pass of byte swaps
if required). In the send buffer trick, the swaps to receiver order must be done
and then a second pass is needed to undo them.<BR><BR>Dick <BR><BR>Dick Treumann -
MPI Team <BR>IBM Systems & Technology Group<BR>Dept X2ZA / MS P963 -- 2455
South Road -- Poughkeepsie, NY 12601<BR>Tele (845) 433-7846 Fax (845)
433-8363<BR><BR><BR><TT>mpi-22-bounces@lists.mpi-forum.org wrote on 12/08/2008
04:11:45 PM:<BR><BR>> </TT><BR><TT>> <BR>> Re: [Mpi-22]
please review - Send Buffer Access (ticket #45)</TT><BR><TT>> <BR>> Erez
Haba </TT><BR><TT>> <BR>> to:</TT><BR><TT>> <BR>> MPI
2.2</TT><BR><TT>> <BR>> 12/08/2008 04:13 PM</TT><BR><TT>> <BR>> Sent
by:</TT><BR><TT>> <BR>>
mpi-22-bounces@lists.mpi-forum.org</TT><BR><TT>> <BR>> Please respond to
"MPI 2.2"</TT><BR><TT>> <BR>> Dear Alexander,</TT><BR><TT>>
</TT><BR><TT>> As far as I recall memory remapping from the main
processor to a <BR>> network device was discussed before (If I recall
correctly, in the <BR>> April meeting). I think that it’s close enough to
your scenario of <BR>> remapping to a different process for the purpose of
this discussion.</TT><BR><TT>> </TT><BR><TT>> Is your case real? Do
you know of systems that do that with MPI? Or <BR>> is it a hypothetical
case?</TT><BR><TT>> </TT><BR><TT>> </TT><BR><TT>> For the
review process, we do need people to review the text; we <BR>> added this
requirement in the last meeting. Regardless, it does not <BR>> prevent any
other person from giving feedback on the proposal. I’m <BR>> sure that
Jeff or Bill can give you a more formal definition of <BR>> the review
process.</TT><BR><TT>> </TT><BR><TT>> The wiki page states
that:</TT><BR><TT>> <A
href="https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/TicketWorkflow">https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/mpi22/TicketWorkflow</A></TT><BR><TT>>
To advance to the first reading, a proposal must be reviewed by the <BR>>
lead chapter author and three other reviewers. That review should <BR>> check
the change against the standard text to ensure that the change<BR>> in
context is correct; in addition, the change should be evaluated <BR>> for
completeness. For changes that involve multiple chapters (but <BR>> are
logically related and hence belong in a single ticket), the <BR>> respective
chapter authors must review the changes in their <BR>> chapters. These
reviews must be entered as comments on the ticket. <BR>> MPI 2.2 Chapter
Authors </TT><BR><TT>> </TT><BR><TT>> </TT><BR><TT>> From:
mpi-22-bounces@lists.mpi-forum.org [<A
href="mailto:mpi-22-">mailto:mpi-22-</A><BR>> bounces@lists.mpi-forum.org] On
Behalf Of Supalov, Alexander<BR>> Sent: Saturday, December 06, 2008 11:44
AM<BR>> To: MPI 2.2<BR>> Subject: Re: [Mpi-22] please review - Send Buffer
Access (ticket #45)</TT><BR><TT>> </TT><BR><TT>> Dear
Erez,</TT><BR><TT>> </TT><BR><TT>> Thank you. I agree to
respectfully disagree on this with you, for <BR>> the following two
reasons:</TT><BR><TT>> </TT><BR><TT>> 1) The memory remapping
scenario I brought up a couple of days ago <BR>> was not discussed before
the first voting as far as I can recall. If<BR>> you have proof to the
contrary, I would most kindly ask you to <BR>> present it. If this cannot be
done, I would say that a new issue has<BR>> been added to the discussion, and
we may need to review its substance.</TT><BR><TT>> </TT><BR><TT>> 2)
Next, I would like to see the definition of the ticket review <BR>> process
that states the reviewers are supposed to only check the <BR>> proposed text
for correspondence with the existing standard. My <BR>> opinion is that
reviewers are there to bring both textual and <BR>> substantial concerns up
when they see the need for this. So far I've<BR>> been acting on this
conviction.</TT><BR><TT>> </TT><BR><TT>> I'm looking forward to the
further discussion on the floor.</TT><BR><TT>> </TT><BR><TT>> Best
regards.</TT><BR><TT>> </TT><BR><TT>> Alexander</TT><BR><TT>>
</TT><BR><TT>> <BR>> From: mpi-22-bounces@lists.mpi-forum.org [<A
href="mailto:mpi-22-">mailto:mpi-22-</A><BR>> bounces@lists.mpi-forum.org] On
Behalf Of Erez Haba<BR>> Sent: Saturday, December 06, 2008 7:45 PM<BR>>
To: MPI 2.2<BR>> Subject: Re: [Mpi-22] please review - Send Buffer Access
(ticket #45)</TT><BR><TT>> Thanks Alexander,</TT><BR><TT>>
</TT><BR><TT>> I respectfully decline your proposal to suspend the
review of these tickets.</TT><BR><TT>> I don’t see any specific reference wrt
text in your comments; and <BR>> you don’t bring any new issue that has not
been discussed before the 1st<BR>> voting. Thus I don’t see any reason
to suspend their review.</TT><BR><TT>> </TT><BR><TT>>
Thanks,</TT><BR><TT>> .Erez</TT><BR><TT>> </TT><BR><TT>> From:
mpi-22-bounces@lists.mpi-forum.org [<A
href="mailto:mpi-22-">mailto:mpi-22-</A><BR>> bounces@lists.mpi-forum.org] On
Behalf Of Supalov, Alexander<BR>> Sent: Friday, December 05, 2008 4:19
PM<BR>> To: MPI 2.2<BR>> Subject: Re: [Mpi-22] please review - Send Buffer
Access (ticket #45)</TT><BR><TT>> </TT><BR><TT>> Dear
Erez,</TT><BR><TT>> </TT><BR><TT>> Thank you. I reviewed the text
and found that further polishing of <BR>> its textual aspects should be
suspended until the substance is <BR>> clarified. I put a comment to this
effect into the ticket, as well <BR>> as into the dependent ticket #46. In my
opinion, both tickets are <BR>> not yet ready to go into the standard and
should go into another <BR>> round of contemplation of their possible
repercussions.</TT><BR><TT>> </TT><BR><TT>> In both cases presumed
application friendliness is traded for less <BR>> freedom of implementation.
Application developers who disregard the <BR>> standard now will most likely
continue to do this in the future, <BR>> possibly in some other way.
Restricting the freedom of <BR>> implementation to make their life easier
does not seem to be an <BR>> attractive proposition to me.</TT><BR><TT>>
</TT><BR><TT>> If any of the issues identified so far, or comparable
issues we <BR>> cannot fathom at the moment, surface up in one of the future
HPC <BR>> platforms and hinder MPI adoption or transition to MPI-2.2 there,
we<BR>> will have done a disservice both to the MPI standard and to the
<BR>> community. I hope this will bear on the minds of those who're going
<BR>> to vote on these two items at the meeting.</TT><BR><TT>>
</TT><BR><TT>> Best regards.</TT><BR><TT>> </TT><BR><TT>>
Alexander</TT><BR><TT>> </TT><BR><TT>> <BR>> From:
mpi-22-bounces@lists.mpi-forum.org [<A
href="mailto:mpi-22-">mailto:mpi-22-</A><BR>> bounces@lists.mpi-forum.org] On
Behalf Of Erez Haba<BR>> Sent: Saturday, December 06, 2008 12:28 AM<BR>>
To: MPI 2.2<BR>> Subject: Re: [Mpi-22] please review - Send Buffer Access
(ticket #45)</TT><BR><TT>> Dear Alexander,</TT><BR><TT>>
</TT><BR><TT>> It is okay and encouraged for people to comment and
argue on the <BR>> proposals. You can add your comments to the ticket arguing
your <BR>> important points. The forum then considers the various
points and <BR>> votes on the proposal.</TT><BR><TT>>
</TT><BR><TT>> However for the voting process we need people to review
the text and<BR>> confirm that it does not conflict with the standard and it
is <BR>> reasonable (from language pov) to be included in the
standard.</TT><BR><TT>> </TT><BR><TT>> If we are willing to review
the text and state that it is valid for the<BR>> standard, that would be great.
If you have any comments on the text <BR>> please send them to
me.</TT><BR><TT>> </TT><BR><TT>> Thanks,</TT><BR><TT>>
.Erez</TT><BR><TT>> </TT><BR><TT>> From:
mpi-22-bounces@lists.mpi-forum.org [<A
href="mailto:mpi-22-">mailto:mpi-22-</A><BR>> bounces@lists.mpi-forum.org] On
Behalf Of Supalov, Alexander<BR>> Sent: Friday, December 05, 2008 2:26
PM<BR>> To: MPI 2.2<BR>> Subject: Re: [Mpi-22] please review - Send Buffer
Access (ticket #45)</TT><BR><TT>> </TT><BR><TT>> Dear
Erez,</TT><BR><TT>> </TT><BR><TT>> Thank you. I'm afraid I would
need to have it explained in more <BR>> detail why review may not include
arguments on the substance. If <BR>> something in the proposal makes one
think that the proposed matter <BR>> may be detrimental to the MPI standard
and its implementations, I <BR>> consider it one's duty to point this
out.</TT><BR><TT>> </TT><BR><TT>> Following up on your reply: the
segfault situation you described <BR>> will make an MPI compliant program
break. Thus, the implementation <BR>> will have to keep the send buffer
mapped into the sending process <BR>> address space. This is a limitation on
the MPI implementation that <BR>> should be taken into account during the
voting.</TT><BR><TT>> </TT><BR><TT>> Another possibility that has
been pointed out earlier was that the <BR>> proposed change disallows byte
swap and other send buffer <BR>> conversions to be done in place. At least
one historical MPI <BR>> implementation was doing this to great avail. Who
knows what is <BR>> going to happen in the future?</TT><BR><TT>>
</TT><BR><TT>> Best regards.</TT><BR><TT>> </TT><BR><TT>>
Alexander</TT><BR><TT>> </TT><BR><TT>> <BR>> From:
mpi-22-bounces@lists.mpi-forum.org [<A
href="mailto:mpi-22-">mailto:mpi-22-</A><BR>> bounces@lists.mpi-forum.org] On
Behalf Of Erez Haba<BR>> Sent: Friday, December 05, 2008 11:18 PM<BR>> To:
MPI 2.2<BR>> Subject: Re: [Mpi-22] please review - Send Buffer Access (ticket
#45)</TT><BR><TT>> I think that the idea is for the reviewers to check the
text for any<BR>> mistakes and compatibility with the existing text, rather
than check<BR>> for the validity of the proposal. The latter, as I recall, is
left for <BR>> the MPI forum assembly.</TT><BR><TT>>
</TT><BR><TT>> As for your question, I’m sure that you can answer it
yourself. :-) If<BR>> the memory is still also mapped to the original process
(as with <BR>> shared memory) then everything is fine. If the memory is
removed <BR>> from the original process, then the app will get an
access-violation fault.</TT><BR><TT>> (if this system works on a page
boundary, to take this action it <BR>> needs to make sure that there are no
other allocations on the same page)</TT><BR><TT>> </TT><BR><TT>>
Thanks,</TT><BR><TT>> .Erez</TT><BR><TT>> </TT><BR><TT>> From:
mpi-22-bounces@lists.mpi-forum.org [<A
href="mailto:mpi-22-">mailto:mpi-22-</A><BR>> bounces@lists.mpi-forum.org] On
Behalf Of Supalov, Alexander<BR>> Sent: Friday, December 05, 2008 2:05
PM<BR>> To: MPI 2.2<BR>> Subject: Re: [Mpi-22] please review - Send Buffer
Access (ticket #45)</TT><BR><TT>> </TT><BR><TT>>
Hi,</TT><BR><TT>> </TT><BR><TT>> I'd like to review this proposal.
Let's consider the following scenario:</TT><BR><TT>> </TT><BR><TT>>
- In the MPI_Isend, MPI maps the send buffer into the address space <BR>> of
the receiving process.</TT><BR><TT>> - In the matching MPI_Recv, the
receiving process makes a copy of <BR>> the mapped send buffer into the
receive buffer.</TT><BR><TT>> - Once the copy is complete, the send buffer is
mapped back into the<BR>> sender address space during the wait/test
call.</TT><BR><TT>> </TT><BR><TT>> What will happen if one tries to
access the send buffer in between?</TT><BR><TT>> </TT><BR><TT>> Best
regards.</TT><BR><TT>> </TT><BR><TT>> Alexander</TT><BR><TT>>
</TT><BR><TT>> <BR>> From: mpi-22-bounces@lists.mpi-forum.org [<A
href="mailto:mpi-22-">mailto:mpi-22-</A><BR>> bounces@lists.mpi-forum.org] On
Behalf Of Erez Haba<BR>> Sent: Friday, December 05, 2008 10:48 PM<BR>> To:
MPI 2.2<BR>> Subject: [Mpi-22] please review - Send Buffer Access (ticket
#45)</TT><BR><TT>> This proposal has passed 1st voting and needs reviewers.
We need 3 volunteers<BR>> to sign-off on this proposal, plus the 3
chapter authors to sign-off<BR>> on the text.</TT><BR><TT>>
</TT><BR><TT>> The Chapter Authors for</TT><BR><TT>>
</TT><BR><TT>> Chapter 3: Point-to-Point Communication</TT><BR><TT>>
Chapter 5: Collective Communication</TT><BR><TT>> Chapter 11: One-Sided
Communication</TT><BR><TT>> </TT><BR><TT>> Please add a comment to
the ticket saying that you reviewed the <BR>> proposal, or please send me
your comments.</TT><BR><TT>> </TT><BR><TT>> Send Buffer Access: <A
href="https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/45">https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/45</A></TT><BR><TT>>
</TT><BR><TT>> Thanks,</TT><BR><TT>> .Erez</TT><BR><TT>>
</TT><BR><TT>>
_______________________________________________<BR>> mpi-22 mailing
list<BR>> mpi-22@lists.mpi-forum.org<BR>> <A
href="http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22">http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-22</A><BR></TT></P><pre>---------------------------------------------------------------------
Intel GmbH
Dornacher Strasse 1
85622 Feldkirchen/Muenchen Germany
Sitz der Gesellschaft: Feldkirchen bei Muenchen
Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer
Registergericht: Muenchen HRB 47456 Ust.-IdNr.
VAT Registration No.: DE129385895
Citibank Frankfurt (BLZ 502 109 00) 600119052
This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.
</pre></BODY></HTML>