From vishal.iitian at [hidden]  Tue Jun  3 16:59:51 2008
From: vishal.iitian at [hidden] (Vishal Agarwal)
Date: Tue, 3 Jun 2008 17:59:51 -0400
Subject: [Mpi-21] running MPI only in a single function
Message-ID: 

Dear All,

I am trying to parallelize my code using MPI. My problem is that I want to
run several processes in only one of the functions of my C++ code (not in
main). Basically, my code is divided into several functions, and I want to
parallelize only one of them.

How can I do that using MPI?

-- 
Regards

***********************************************
Vishal Agarwal
Research Scholar
***********************************************

From jsquyres at [hidden]  Tue Jun  3 20:29:47 2008
From: jsquyres at [hidden] (Jeff Squyres)
Date: Tue, 3 Jun 2008 21:29:47 -0400
Subject: [Mpi-21] running MPI only in a single function
In-Reply-To: 
Message-ID: <200E974A-3320-4443-BEA6-D86295CC6D01@cisco.com>

Greetings Vishal.

This list is not really for user-level questions about MPI; it is for the
standards committee itself to discuss issues related to the upcoming
MPI-2.1 standard.

You might want to google around for various MPI tutorials and/or
textbooks. Many of them address exactly this kind of topic.

Good luck!

On Jun 3, 2008, at 5:59 PM, Vishal Agarwal wrote:

> Dear All
>
> I am trying to parallelize my code using MPI. My problem is that I want
> to run several processes in only one of the functions of my C++ code
> (not in main). Basically, my code is divided into several functions,
> and I want to parallelize only one of them.
>
> How can I do that using MPI?
>
> --
> Regards
>
> ***********************************************
> Vishal Agarwal
> Research Scholar
> ***********************************************
>
> _______________________________________________
> mpi-21 mailing list
> mpi-21_at_[hidden]
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-21

-- 
Jeff Squyres
Cisco Systems

From rabenseifner at [hidden]  Fri Jun 13 16:39:57 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Fri, 13 Jun 2008 23:39:57 +0200
Subject: [Mpi-21] MPI-2.1 Version June 13, 2008 is ready
Message-ID: 

Hi all - and dear reviewer,

it is Friday the 13th -- and MPI-2.1 Version June 13, 2008 is ready; see

  http://www.hlrs.de/organization/par/services/models/mpi/mpi21/doc/

(user and pw = mpi21)

Hopefully all bugs are fixed; see the bug list at

  http://www.hlrs.de/organization/par/services/models/mpi/mpi21/doc/MPI-2.1-reviews.pdf

Dear reviewer, please check whether your items are fixed and all is fine.
Only items that cannot wait until MPI-2.2 have a chance.

!!! Latest deadline is June 20, 2008. Last chance. !!!

Then I'll make the final MPI-2.1 for voting.

It's still Friday the 13th.

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From thakur at [hidden]  Wed Jun 18 11:46:04 2008
From: thakur at [hidden] (Rajeev Thakur)
Date: Wed, 18 Jun 2008 11:46:04 -0500
Subject: [Mpi-21] MPI-2.1 Version June 13, 2008 is ready
In-Reply-To: 
Message-ID: <003501c8d162$ccef63a0$860add8c@mcs.anl.gov>

Rolf,

A few small typos in mpi-report-2.1draft-2008-06-13-colored.pdf:

* pg 229, ln 1: "the the location" should be "the location"
* pg 229, ln 4: should be MPI_Comm_get_attr instead of set_attr;
  set_attr is covered on line 3.
* pg 415, end of ln 19: "If the they" should be "If they"

Rajeev

> -----Original Message-----
> From: mpi-21-bounces_at_[hidden]
> [mailto:mpi-21-bounces_at_[hidden]] On Behalf Of Rolf Rabenseifner
> Sent: Friday, June 13, 2008 4:40 PM
> To: MPI 2.1 Mailing List
> Subject: [Mpi-21] MPI-2.1 Version June 13, 2008 is ready
>
> Hi all - and dear reviewer,
>
> it is Friday the 13th -- and MPI-2.1 Version June 13, 2008 is ready, see
>
>   http://www.hlrs.de/organization/par/services/models/mpi/mpi21/doc/
>
> (user and pw = mpi21)
>
> Hopefully all bugs are fixed, see bug list on
>   http://www.hlrs.de/organization/par/services/models/mpi/mpi21/doc/MPI-2.1-reviews.pdf
>
> Dear reviewer, please check whether your items are fixed and all is fine.
> Only items that cannot wait until MPI 2.2 have a chance.
>
> !!! Latest deadline is June 20, 2008. Last chance. !!!
>
> Then I'll make the final MPI-2.1 for voting.
>
> It's still Friday the 13th.
>
> Best regards
> Rolf
>
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
> _______________________________________________
> mpi-21 mailing list
> mpi-21_at_[hidden]
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-21

From rabenseifner at [hidden]  Mon Jun 23 10:42:35 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Mon, 23 Jun 2008 17:42:35 +0200
Subject: [Mpi-21] Fwd: Final MPI-2.1 (June 23, 2008) is available on the web
Message-ID: 

It's hopefully done.

--- the forwarded message follows ---

Hi all,

the final MPI-2.1 (June 23, 2008) is available via

  http://www.hlrs.de/mpi/mpi21/doc/

(username and pw is mpi21)

This is the final version. All further corrections will go to MPI-2.2.
Thank you all for all the work you put into this common project!

Best regards
Rolf

PS: All open issues that are handed over from MPI-2.1 to MPI-2.2 can be
found at

  http://www.hlrs.de/mpi/mpi21/doc/MPI-2.1-reviews_open-issues.txt
and
  http://www.hlrs.de/mpi/mpi21/doc/MPI-2.1-reviews_open-issues.pdf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From rabenseifner at [hidden]  Thu Jun 26 08:22:19 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Thu, 26 Jun 2008 15:22:19 +0200
Subject: [Mpi-21] Preparation of MPI-2.1 at June 2008 meeting
In-Reply-To: 
Message-ID: 

Hi all,

all files for preparing the MPI-2.1 / MPI-1.3 sessions at the June 2008
MPI Forum meeting in San Francisco are done. Files are available at
(username and pw are mpi21)

  http://www.hlrs.de/mpi/mpi21/doc/

Only eleven slides:
-------------------

  http://www.hlrs.de/mpi/mpi21/doc/mpi_21_at_MPIforum_2008-06_overview.pdf

or the print version:

  http://www.hlrs.de/mpi/mpi21/doc/mpi_21_at_MPIforum_2008-06_overview_4to1.pdf

On the web page you will find the final documents of the MPI-2.1 and
MPI-1.3 standards - waiting for the first votes. And you will find all
issues that will be handed over to MPI-2.2:

  http://www.hlrs.de/mpi/mpi21/doc/MPI-2.1-reviews_open-issues.txt

or the print version:

  http://www.hlrs.de/mpi/mpi21/doc/MPI-2.1-reviews_open-issues.pdf

Best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany .
(Office: Allmandring 30)

From rabenseifner at [hidden]  Thu Jun 26 08:29:27 2008
From: rabenseifner at [hidden] (Rolf Rabenseifner)
Date: Thu, 26 Jun 2008 15:29:27 +0200
Subject: [Mpi-21] Implementation of MPI-2.1
Message-ID: 

Dear MPI implementor,

if you are not yet on the list on slide 8 of

  http://www.hlrs.de/mpi/mpi21/doc/mpi_21_at_MPIforum_2008-06_overview.pdf

or if you are on the list and have not yet given an estimate of when your
MPI-2.1 implementation will be available, then please give us this
estimate at the June meeting in San Francisco (or in a short reply to
this list before the meeting).

Thanks and best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)

From alexander.supalov at [hidden]  Thu Jun 26 09:09:22 2008
From: alexander.supalov at [hidden] (Supalov, Alexander)
Date: Thu, 26 Jun 2008 15:09:22 +0100
Subject: [Mpi-21] Implementation of MPI-2.1
In-Reply-To: 
Message-ID: <5ECAB1304A8B5B4CB3F9D6C01E4E21A2017CA8A4@swsmsx413.ger.corp.intel.com>

Hi everybody,

I'm going to miss this meeting, so let me use this reply as a chance to
express my opinion ahead of the vote:

- We need to produce official versions of the MPI-1.3 and MPI-2.1
  documents with change mark-up, either with change bars or with color
  coding. At the moment, as far as I understand, only the official
  documents are going to be published, and apart from the MPI-2.1 change
  log they will carry no clear indication of what was actually changed
  in the text. Many standards bodies I know routinely publish marked-up
  versions of their new documents to facilitate a smooth transition.
  If we don't follow this practice, we risk implementors overlooking
  some small but important changes and then claiming MPI-1.3 and
  MPI-2.1 compliance by mistake.

- The desire to have only Forum members edit the standards is a good
  one, but it may not be practical. In the particular case of the
  language binding Annex, we would not have been able to complete the
  massive changes on time without delegating this to someone else. That
  would have made the standards process slip, which was considered the
  bigger risk at the time. What is needed here is rather more careful
  supervision and review of the results, which I think we should take
  into account going forward.

- If any material changes are needed to the Intel MPI implementation to
  make it MPI-1.3 and MPI-2.1 compliant, we'll target 2009. If we can
  claim MPI-1.3 and MPI-2.1 compliance with the current code without
  noticeable changes, either to the code or to the current draft during
  the pending voting, we may decide to go for this year's fall release.

Best regards.

Alexander

-----Original Message-----
From: mpi-21-bounces_at_[hidden]
[mailto:mpi-21-bounces_at_[hidden]] On Behalf Of Rolf Rabenseifner
Sent: Thursday, June 26, 2008 3:29 PM
To: MPI 2.1 Mailing List
Subject: [Mpi-21] Implementation of MPI-2.1

Dear MPI implementor,

if you are not yet on the list on slide 8 of
http://www.hlrs.de/mpi/mpi21/doc/mpi_21_at_MPIforum_2008-06_overview.pdf
or if you are on the list and have not yet given an estimate of when your
MPI-2.1 implementation will be available, then please give us this
estimate at the June meeting in San Francisco (or in a short reply to
this list before the meeting).

Thanks and best regards
Rolf

Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner_at_[hidden]
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany .
(Office: Allmandring 30)

_______________________________________________
mpi-21 mailing list
mpi-21_at_[hidden]
http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-21

---------------------------------------------------------------------
Intel GmbH
Dornacher Strasse 1
85622 Feldkirchen/Muenchen, Germany
Sitz der Gesellschaft: Feldkirchen bei Muenchen
Geschaeftsfuehrer: Douglas Lusk, Peter Gleissner, Hannes Schwaderer
Registergericht: Muenchen HRB 47456
Ust.-IdNr./VAT Registration No.: DE129385895
Citibank Frankfurt (BLZ 502 109 00) 600119052

This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.