[Mpi3-hybridpm] MPI + threads / MPI + OpenMP
Rajeev Thakur
thakur at mcs.anl.gov
Thu Aug 13 13:40:36 CDT 2009
I was referring to MPI_COMM_TWORLD (with the T), the new communicator
introduced in the proposal, not the standard MPI_COMM_WORLD.
Rajeev
> -----Original Message-----
> From: mpi3-hybridpm-bounces at lists.mpi-forum.org
> [mailto:mpi3-hybridpm-bounces at lists.mpi-forum.org] On Behalf
> Of Anthony Skjellum
> Sent: Thursday, August 13, 2009 1:33 PM
> To: mpi3-hybridpm at lists.mpi-forum.org
> Subject: Re: [Mpi3-hybridpm] MPI + threads / MPI + OpenMP
>
> I think MPI_COMM_WORLD should have legacy behavior...
>
> ------Original Message------
> From: Rajeev Thakur
> Sender: mpi3-hybridpm-bounces at lists.mpi-forum.org
> To: mpi3-hybridpm at lists.mpi-forum.org
> ReplyTo: mpi3-hybridpm at lists.mpi-forum.org
> Sent: Aug 13, 2009 12:37 PM
> Subject: Re: [Mpi3-hybridpm] MPI + threads / MPI + OpenMP
>
> Some questions just for clarification:
>
> * What does MPI_Comm_size of MPI_COMM_TWORLD return -- the total
> number of end points on all processes?
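>
> To make the question concrete, a minimal sketch (MPI_COMM_TWORLD is the
> communicator from the proposal, not part of the ratified standard, and
> the helper name is illustrative; whether the second call reports the
> total end-point count is precisely the question):
>
>     #include <mpi.h>
>
>     void query_sizes(void)
>     {
>         int world_size, tworld_size;
>         MPI_Comm_size(MPI_COMM_WORLD, &world_size);   /* # of processes */
>         MPI_Comm_size(MPI_COMM_TWORLD, &tworld_size); /* total end points
>                                                          over all processes? */
>     }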
>
> * For collective operations on MPI_COMM_TWORLD, how many
> processes/threads must call them -- exactly one for each end point?
> If each process has, say, 11 threads and 5 end points, must the user
> ensure that exactly 5 threads on each process call the collective,
> one for each end point?
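>
> A sketch of that scenario under the stated assumption (11 OpenMP threads
> and 5 end points per process; how a thread binds to an end point is left
> as a comment, since the proposal's exact API for that is not quoted here):
>
>     #include <mpi.h>
>     #include <omp.h>
>
>     void do_collective(void)
>     {
>         #pragma omp parallel num_threads(11)
>         {
>             int t = omp_get_thread_num();
>             if (t < 5) {
>                 /* ... bind thread t to end point t, per the proposal ... */
>                 MPI_Barrier(MPI_COMM_TWORLD);  /* one call per end point */
>             }
>             /* threads 5..10 must not join the collective */
>         }
>     }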
>
> Rajeev
>
> Tony Skjellum, PhD
> RunTime Computing Solutions, LLC
> tony at runtimecomputing.com
> direct: +1-205-314-3595
> cell: +1-205-807-4968
>
> _______________________________________________
> Mpi3-hybridpm mailing list
> Mpi3-hybridpm at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-hybridpm
>