[Mpi-forum] behavior of NBC for intracommunicators when root == MPI_PROC_NULL

Anthony Skjellum skjellum at auburn.edu
Thu Dec 11 18:42:44 CST 2014


A communication that does nothing is not the same as the initialization value of the type.  It also makes the mapping of operations to handles non-unique, for no good reason.

Imagine that I start my array with MPI_REQUEST_NULL as the default value.  Now I can have some of those in the middle of my array even though an actual (albeit empty) communication was posted there.  I now have to distinguish between MPI_REQUEST_NULL generated because the work was trivial and MPI_REQUEST_NULL left over from an uninitialized array element.

What if I want to test this request too?  Does it return completed? :-)

I think there is a subtle semantic difference between "a no-work operation" and MPI_REQUEST_NULL.  We need to look at all the possible dark corners of what can happen under the literal mapping.

I will go re-read 5.2.3 next.

It's just my opinion; it may be that we decide I am overly worried about a small edge case.

Tony

Anthony Skjellum, PhD
Professor of Computer Science and Software Engineering
COLSA Professor of Cybersecurity and Information Assurance
Director of the Auburn Cyber Research Center and Lead Cyber Scientist for Auburn
Samuel Ginn College of Engineering
Auburn University
skjellum at auburn.edu or skjellum at gmail.com
cell: +1-205-807-4968 ; office: +1-334-844-6360


CONFIDENTIALITY: This e-mail and any attachments are confidential and
may be privileged. If you are not a named recipient, please notify the
sender immediately and do not disclose the contents to another person,
use it for any purpose or store or copy the information in any medium.

________________________________
From: mpi-forum [mpi-forum-bounces at lists.mpi-forum.org] on behalf of George Bosilca [bosilca at icl.utk.edu]
Sent: Thursday, December 11, 2014 4:50 PM
To: Main MPI Forum mailing list
Subject: Re: [Mpi-forum] behavior of NBC for intracommunicators when root == MPI_PROC_NULL

What's wrong with 5.2.3? From a practical point of view, returning MPI_REQUEST_NULL makes sense (following the description about communication with MPI_PROC_NULL from 3.11).

  George.


On Thu, Dec 11, 2014 at 5:22 PM, Anthony Skjellum <skjellum at auburn.edu> wrote:
Shouldn't it just be a trivial request -- why map to null?  The shortcut should of course happen internally, but the request should be valid.  Undesirable side effects are possible, IMHO, with a literal mapping!

Anthony Skjellum, PhD
205-807-4968


> On Dec 11, 2014, at 3:30 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
>
> I would hope that the Ianything is matched with a corresponding
> Test/Wait* call on every process that calls it.  To allow otherwise
> seems pretty odd.
>
> Presumably, if Ibcast is a no-op when root=MPI_PROC_NULL, then the
> request can be set to MPI_REQUEST_NULL and thus completing it is
> trivial.
>
> If the standard says that completing a request known to be
> MPI_REQUEST_NULL is not required, then I suppose the user can avoid
> making the completion call, but I don't like that style.
>
> Jeff
>
>> On Thu, Dec 11, 2014 at 11:30 AM, Fab Tillier <ftillier at microsoft.com> wrote:
>> Hi Folks,
>>
>>
>>
>> I can’t find anything in the standard document that explains the behavior
>> for NBC requests on intercommunicators when root == MPI_PROC_NULL.  Taking
>> MPI_Ibcast as an example, what is the output value of request?  Is it
>> MPI_REQUEST_NULL?  Is it not set at all?  Is it a valid request that must be
>> completed?
>>
>>
>>
>> Thanks,
>>
>> -Fab
>>
>>
>> _______________________________________________
>> mpi-forum mailing list
>> mpi-forum at lists.mpi-forum.org
>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-forum
>
>
>
> --
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/


