[Mpi3-rma] Dynamic windows example

James Dinan dinan at mcs.anl.gov
Tue Dec 7 19:57:09 CST 2010


Oops... I thought CAS took a count (which I also forgot to include), but 
after looking at the proposal it seems I was mistaken.  If we have to 
make two calls to CAS, then there's no need for both components of the 
pointer to share the same type.  If the intent is for CAS to take a 
count, then add a count of 2 to the previous version's CAS call.  An 
updated version is attached.
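
For concreteness, here's a rough sketch of the two-call version.  The
llist_ptr_t layout and the cas_link helper below are illustrative, not
the code in the attached file:

    #include <stddef.h>   /* offsetof */
    #include <mpi.h>

    /* Illustrative remote-pointer layout: both components widened to
     * MPI_Aint so each CAS can use the same datatype. */
    typedef struct {
        MPI_Aint rank;   /* owning process, or a nil sentinel      */
        MPI_Aint disp;   /* displacement within the dynamic window */
    } llist_ptr_t;

    /* Try to swing the pointer at (target, ptr_disp) from nil to new_item.
     * CAS operates on a single element, so this takes two calls.  Note the
     * pair is not atomic as a whole: the first CAS claims the link via the
     * rank field, and only the winner publishes the displacement. */
    static int cas_link(llist_ptr_t nil, llist_ptr_t new_item,
                        int target, MPI_Aint ptr_disp, MPI_Win win)
    {
        llist_ptr_t old;

        MPI_Compare_and_swap(&new_item.rank, &nil.rank, &old.rank, MPI_AINT,
                             target, ptr_disp + offsetof(llist_ptr_t, rank),
                             win);
        MPI_Win_flush(target, win);
        if (old.rank != nil.rank)
            return 0;   /* another process claimed the link first */

        MPI_Compare_and_swap(&new_item.disp, &nil.disp, &old.disp, MPI_AINT,
                             target, ptr_disp + offsetof(llist_ptr_t, disp),
                             win);
        MPI_Win_flush(target, win);
        return 1;
    }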

   ~Jim.

On 12/7/10 5:08 PM, Rajeev Thakur wrote:
> Is a single MPI_AINT sufficient for the CAS? Those buffers contain a struct with 2 Aints.
>
> Rajeev
>
>
> On Dec 7, 2010, at 4:10 PM, James Dinan wrote:
>
>> Hi All,
>>
>> Another version of the linked list example, using atomic CAS in place of get-flush-put.  The code is still not heterogeneous-safe; if there's interest, I'd be happy to look into fixing this up.
>>
>> ~Jim.
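
For context, the get-flush-put pattern that the CAS version replaces might
look like the sketch below (illustrative, not the attached code; it reuses
the llist_ptr_t sketched above and assumes an open passive-target epoch,
e.g. via MPI_Win_lock_all).  The same MPI_BYTE issue Torsten flags for the
broadcast applies to these transfers too:

    /* Non-atomic append attempt: read the pointer, check that it is nil,
     * then write the new link.  Another process can install its own link
     * after the flush completes the get, and the put below would silently
     * overwrite it; that lost-update race is what a single atomic CAS
     * closes. */
    static int gfp_link(llist_ptr_t nil, llist_ptr_t new_item,
                        int target, MPI_Aint ptr_disp, MPI_Win win)
    {
        llist_ptr_t cur;

        MPI_Get(&cur, sizeof(cur), MPI_BYTE, target, ptr_disp,
                sizeof(cur), MPI_BYTE, win);
        MPI_Win_flush(target, win);      /* complete the get */

        if (cur.rank != nil.rank)
            return 0;                    /* link already taken */

        MPI_Put(&new_item, sizeof(new_item), MPI_BYTE, target, ptr_disp,
                sizeof(new_item), MPI_BYTE, win);
        MPI_Win_flush(target, win);
        return 1;
    }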
>>
>> On 12/6/10 7:30 AM, Torsten Hoefler wrote:
>>> On Sun, Dec 05, 2010 at 10:57:06PM -0600, James Dinan wrote:
>>>> On 12/05/2010 10:29 PM, Pavan Balaji wrote:
>>>>>
>>>>> On 12/05/2010 10:19 PM, James Dinan wrote:
>>>>>>> Using MPI_BYTE in the broadcast also does not work in heterogeneous
>>>>>>> environments (where you should also compare the sizes of MPI_Aint).
>>>>>>
>>>>>> Will dynamic windows be usable on such systems if processes don't agree
>>>>>> on the size of MPI_Aint?
>>>>>
>>>>> Yes. What part of the current draft is not heterogeneous-safe?
>>>>
>>>> The displacement into a dynamic window where a newly registered buffer
>>>> will be accessed is the address of the buffer cast to an MPI_Aint.  If
>>>> processes don't agree on the size of an address then it seems like it
>>>> will be challenging to handle dynamic window displacements in a
>>>> heterogeneous-safe way.
>>> Yes, that's what I meant by checking the sizes of MPI_Aint in my
>>> original comments :-).
>>>
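
A sketch of the registration step under discussion, using the proposed
dynamic-window calls (root and llist_item_t here are placeholders, not
names from the attached example).  Broadcasting the displacement as
MPI_AINT, rather than MPI_BYTE, is what lets the implementation convert
the address representation between processes:

    #include <stdlib.h>   /* malloc */
    #include <mpi.h>

    MPI_Win win;
    MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Attach a newly allocated list item to the window.  In a dynamic
     * window, the displacement at which a buffer is accessed is simply
     * its address. */
    llist_item_t *item = malloc(sizeof(*item));
    MPI_Win_attach(win, item, sizeof(*item));

    MPI_Aint disp;
    MPI_Get_address(item, &disp);
    MPI_Bcast(&disp, 1, MPI_AINT, root, MPI_COMM_WORLD);
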
>>>> Even if the MPI implementation handles this by
>>>> making the size of MPI_Aint large enough to accommodate all processes,
>>>> the address arithmetic I used to get the next field from an item pointer
>>>> will not be valid (consider a 32-bit process casting a 64-bit MPI_Aint
>>>> to llist_item_t*):
>>> Well, sending the MPI_Aint to another process would convert it into the
>>> right format to be interpreted correctly (including offset calculations)
>>> on that process. This, of course, assumes a linear address space. I
>>> don't see why a process should ever cast a remote address into a local
>>> address. Remote addresses can only be accessed through MPI functions.
>>>
>>>> (MPI_Aint)&(((llist_item_t*)tail_ptr.disp)->next)
>>>>
>>>> So, it seems like for the above to be heterogeneous-safe I should be
>>>> using MPI types.
>>> Yes, generally.
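
Under those assumptions (a linear address space and a matching struct
layout on the target), the problematic cast can at least be avoided by
keeping the arithmetic in MPI_Aint; a fully heterogeneous-safe version
would describe the item with an MPI datatype instead, as suggested:

    #include <stddef.h>   /* offsetof */

    /* Displacement of the remote item's next field, computed without
     * casting the (possibly 64-bit) MPI_Aint down to a local pointer
     * type that may be only 32 bits wide: */
    MPI_Aint next_disp = tail_ptr.disp + offsetof(llist_item_t, next);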
>>>
>>> All the Best,
>>>    Torsten
>>>
>>
>> <mpi_rma_llist_cas.c>
>


-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: mpi_rma_llist_cas.c
URL: <http://lists.mpi-forum.org/pipermail/mpiwg-rma/attachments/20101207/d817d9b0/attachment-0001.c>

