[mpiwg-hybridpm] Meeting Today

Jim Dinan james.dinan at gmail.com
Mon Jan 26 15:42:51 CST 2015


All,

Here is the diff of changes from the December meeting.  There is one spot
where a few options for the text are included, and the MPI_ALIASED changes
are still pending (thanks to Dan for leading this tricky task).

Thanks,
 ~Jim.

On Mon, Jan 26, 2015 at 10:53 AM, Jim Dinan <james.dinan at gmail.com> wrote:

> Hi All,
>
> Reminder that there will be a meeting at 11am CT today.
>
>  ~Jim.
>
> =+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
>
> Meeting Info:
>
>
> https://cisco.webex.com/ciscosales/j.php?ED=236535652&UID=0&PW=NOGE0NDk5MmVh&RT=MiMxMQ%3D%3D
>
> +1-866-432-9903
> Meeting ID: 206095536
>
> https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/MPI3Hybrid
>
> =+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
>
-------------- next part --------------
Index: chap-context/context.tex
===================================================================
--- chap-context/context.tex	(revision 1904)
+++ chap-context/context.tex	(revision 1905)
@@ -345,7 +345,7 @@
 
 \begin{funcdef}{MPI\_GROUP\_RANK(group, rank)}
 \funcarg{\IN}{group}{  group (handle)}
-\funcarg{\OUT}{rank}{  rank of the calling process in
+\funcarg{\OUT}{rank}{  rank of the \MPIreplace{3.1}{380}{calling}{MPI} process in
 group, or \flushline
 \const{MPI\_UNDEFINED} if the process is not a
 member (integer)}
@@ -386,6 +386,7 @@
 \mpifunc{MPI\_GROUP\_TRANSLATE\_RANKS}, which returns
 \constskip{MPI\_PROC\_NULL} as the translated rank.
  
+\label{subsec:context-grpacc-compare}
 \begin{funcdef}{MPI\_GROUP\_COMPARE(group1, group2, result)}
 \funcarg{\IN}{group1}{ first group (handle)}
 \funcarg{\IN}{group2}{ second group (handle)}
@@ -737,7 +738,7 @@
 
 \begin{funcdef}{MPI\_COMM\_RANK(comm, rank)}
 \funcarg{\IN}{comm}{ communicator (handle)}
-\funcarg{\OUT}{rank}{  rank of the calling process in group of
+\funcarg{\OUT}{rank}{  rank of the \MPIreplace{3.1}{380}{calling}{MPI} process in group of
 \mpiarg{ comm} (integer)}
 \end{funcdef}
 
@@ -768,6 +769,7 @@
 communicator.
 \end{users}
 %
+\label{subsec:context-intracommacc-compare}
 \begin{funcdef}{MPI\_COMM\_COMPARE(comm1, comm2, result)}
 \funcarg{\IN}{comm1}{ first communicator (handle)}
 \funcarg{\IN}{comm2}{ second communicator (handle)}
@@ -1191,9 +1193,10 @@
 
 %%% -- BEGIN ENDPOINTS CHANGES
 \MPIupdateBegin{3.1}{380}
+\label{subsec:context-intracomconst-endpoints}
 \begin{funcdef}{MPI\_COMM\_CREATE\_ENDPOINTS(parent\_comm, my\_num\_ep, info, new\_comm\_handles)}
 \funcarg{\IN}{parent\_comm}{communicator (handle)}
-\funcarg{\IN}{my\_num\_ep}{number of endpoints to be created at this process (integer)}
+\funcarg{\IN}{my\_num\_ep}{number of endpoints to be created at this process (non-negative integer)}
 \funcarg{\IN}{info}{info object (handle)}
 \funcarg{\OUT}{new\_comm\_handles}{array of handles to new communicator (array of handles)}
 \end{funcdef}
@@ -1212,11 +1215,12 @@
 Distinct handles for each associated rank in the output communicator are
 returned in the \mpiarg{new\_comm\_handles} array at the corresponding process in
 \mpiarg{parent\_comm}. Ranks associated with a process in \mpiarg{parent\_comm}
-are numbered contiguously in the output communicator, and the starting rank is
+are numbered contiguously in the output communicator and in the
+\mpiarg{new\_comm\_handles} array.  The starting rank is
 defined by the order of the associated rank of the process in the parent communicator.
-The communicator handle returned at index 0 in \mpiarg{new\_comm\_handles}
+The rank associated with the communicator handle at index 0 in \mpiarg{new\_comm\_handles}
 corresponds to the calling process in \mpiarg{parent\_comm}.  All other
-communicator handles returned in \mpiarg{new\_comm\_handles} represent new
+communicator handles returned in \mpiarg{new\_comm\_handles} are associated with new
 ranks that are not members of the group of \mpiarg{parent\_comm}.
 
 If \mpiarg{parent\_comm} is an intracommunicator, this function returns a new
@@ -1227,9 +1231,27 @@
 is also an intercommunicator where the local group consists of ranks
 associated with processes in the local group of \mpiarg{parent\_comm} and the
 remote group consists of ranks associated with processes in the remote
-group of \mpiarg{parent\_comm}. If either the local or remote group is empty,
+group of \mpiarg{parent\_comm}.
+%
+The local group of the output communicator has a size equal to the sum of the
+\mpiarg{my\_num\_ep} values provided by the processes in the local group of
+\mpiarg{parent\_comm} and the remote group of the output communicator has a
+size equal to the sum of the \mpiarg{my\_num\_ep} values provided by the
+processes in the remote group of \mpiarg{parent\_comm}.
+%
+If either the local or remote group is empty,
 \const{MPI\_COMM\_NULL} is returned in all entries of
 \mpiarg{new\_comm\_handles}.
+%
+The local groups of each output communicator handle are aliased.  It is
+erroneous to mix group or communicator handles associated with
+-- Pick:
+-- different ranks
+-- different ranks in the same communicator
+-- different processes
+--
+in a call to an MPI routine, with the exception of
+communicator and group comparison routines.
 
 No cached
 information propagates from \mpiarg{parent\_comm} to the new communicator. Each process in
@@ -1241,12 +1263,12 @@
 \mpiarg{new\_comm\_handles} argument is ignored.
 If a process provides a valid \mpiarg{my\_num\_ep} argument, but the MPI
 implementation is not able to create a new communicator because of the
-\mpiarg{my\_num\_ep} argument at this process, this function will raise an exception of type
+\mpiarg{my\_num\_ep} argument at this process, this function will raise an exception of class
 \error{MPI\_ERR\_ENDPOINTS}.
 
 Ranks in the new communicator behave as separate MPI processes, including 
 semantics specified in Section~\ref{sec:pt2pt-semantics}, which the application must guarantee for each rank.  For example, a
-collective operation on the new communicator must have the participation of every rank in this communicator.
+collective operation on the new communicator must have the participation of all members of the group of the new communicator.
 %An exception to this rule is made for
 %\mpifunc{MPI\_COMM\_FREE}, which must be called for every rank in the new
 %communicator, but must permit a single thread to perform these calls serially.
@@ -1272,7 +1294,8 @@
 \begin{description}
 \item{\infokey{same\_num\_ep}} --- If set to \constskip{true}, then the
 implementation may assume that the argument \mpiarg{my\_num\_ep} is identical
-on all processes.
+on all processes and that all processes have provided this info key with the
+same value.
 \end{description}
 
 
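(Not part of the attached diff: for illustration, a rough sketch of how the
proposed MPI_COMM_CREATE_ENDPOINTS call might be used from a hybrid
MPI+OpenMP program.  The C binding MPI_Comm_create_endpoints shown here is
only inferred from the funcdef above, so this will not compile against any
released MPI library, and the one-endpoint-per-thread mapping is just one
possible usage, not something the proposal requires.)

#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    /* One endpoint per OpenMP thread is just one possible mapping. */
    int num_ep = omp_get_max_threads();
    MPI_Comm *ep_comms = malloc(num_ep * sizeof(MPI_Comm));

    /* Proposed routine (hypothetical C binding): each process requests
     * num_ep ranks in the new communicator and receives one handle per
     * rank in ep_comms, ordered by its rank in the parent communicator. */
    MPI_Comm_create_endpoints(MPI_COMM_WORLD, num_ep, MPI_INFO_NULL, ep_comms);

    #pragma omp parallel num_threads(num_ep)
    {
        int tid = omp_get_thread_num();
        int ep_rank;

        /* Each thread drives its own handle and behaves as a separate
         * MPI process; per the proposal text, a collective on the new
         * communicator needs every member of its group to participate. */
        MPI_Comm_rank(ep_comms[tid], &ep_rank);
        MPI_Barrier(ep_comms[tid]);
        printf("thread %d is endpoint rank %d\n", tid, ep_rank);

        MPI_Comm_free(&ep_comms[tid]);
    }

    free(ep_comms);
    MPI_Finalize();
    return 0;
}
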
Index: chap-changes/changes.tex
===================================================================
--- chap-changes/changes.tex	(revision 1904)
+++ chap-changes/changes.tex	(revision 1905)
@@ -149,6 +149,20 @@
  
 \begin{enumerate}
 
+%% ENDPOINTS
+\MPIupdate{3.1}{380}{\item
+Section~\ref{subsec:context-grpacc} on
+page~\pageref{subsec:context-grpacc-compare},
+Section~\ref{subsec:context-intracommacc} on
+page~\pageref{subsec:context-intracommacc-compare}, and
+Section~\ref{subsec:context-intracomconst} on
+page~\pageref{subsec:context-intracomconst-endpoints}.
+\newline
+The \mpifunc{MPI\_COMM\_CREATE\_ENDPOINTS} function was added and the
+\const{MPI\_ALIASED} outcome was added for group and communicator comparison
+routines.
+}
+
 % 01.--- MPI-3.0 Ticket 281 
 \item 
 Section~\ref{sec:deprecated} on page~\pageref{sec:deprecated},
Index: chap-appLang/appLang-Const.tex
===================================================================
--- chap-appLang/appLang-Const.tex	(revision 1904)
+++ chap-appLang/appLang-Const.tex	(revision 1905)
@@ -457,6 +457,7 @@
 {\small Fortran type: \ftype{INTEGER}} \\
 \hline
 \const{MPI\_IDENT} \\
+\MPIupdate{3.1}{380}{\const{MPI\_ALIASED}} \\
 \const{MPI\_CONGRUENT} \\
 \const{MPI\_SIMILAR} \\
 \const{MPI\_UNEQUAL} \\

