[Mpi3-rma] Dynamic windows example

James Dinan dinan at mcs.anl.gov
Tue Dec 7 16:10:35 CST 2010


Hi All,

Another version of the linked list example, using atomic CAS in place of 
get-flush-put.  The code is still not heterogeneous-safe; if there's 
interest, I'd be happy to look into fixing this up.

  ~Jim.

On 12/6/10 7:30 AM, Torsten Hoefler wrote:
> On Sun, Dec 05, 2010 at 10:57:06PM -0600, James Dinan wrote:
>> On 12/05/2010 10:29 PM, Pavan Balaji wrote:
>>>
>>> On 12/05/2010 10:19 PM, James Dinan wrote:
>>>>> Using MPI_BYTE in the broadcast is also not working in heterogeneous
>>>>> environments (where you should also compare the sizes of MPI_Aint).
>>>>
>>>> Will dynamic windows be usable on such systems if processes don't agree
>>>> on the size of MPI_Aint?
>>>
>>> Yes. What part of the current draft is not heterogeneous-safe?
>>
>> The displacement into a dynamic window where a newly registered buffer
>> will be accessed is the address of the buffer cast to an MPI_Aint.  If
>> processes don't agree on the size of an address then it seems like it
>> will be challenging to handle dynamic window displacements in a
>> heterogeneous-safe way.
> Yes, that's what I meant by checking the sizes of MPI_Aint in my
> original comments :-).
>
>> Even if the MPI implementation handles this by
>> making the size of MPI_Aint large enough to accommodate all processes,
>> the address arithmetic I used to get the next field from an item pointer
>> will not be valid (consider a 32 bit process casting 64 bit MPI_Aint to
>> llist_item_t*):
> Well, sending the MPI_Aint to another process would convert it into the
> right format to be interpreted correctly (including offset calculations)
> on this process. This, of course, assumes a linear address space. I
> don't see why a process should ever cast a remote address into a local
> address. Remote addresses can only be accessed through MPI functions.
>
>> (MPI_Aint)&(((llist_item_t*)tail_ptr.disp)->next)
>>
>> So, it seems like for the above to be heterogeneous-safe I should be
>> using MPI types.
> Yes, generally.
>
> All the Best,
>    Torsten
>

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: mpi_rma_llist_cas.c
URL: <http://lists.mpi-forum.org/pipermail/mpiwg-rma/attachments/20101207/47dbf593/attachment-0001.c>


More information about the mpiwg-rma mailing list