[Mpi3-hybridpm] New draft of the EI chapter uploaded

Douglas Miller dougmill at us.ibm.com
Wed Feb 9 09:23:32 CST 2011

Hi Pavan,

Maybe we really are saying the same thing. The talk about making all MPI
calls synchronize might be throwing me off. I was thinking of this (the
'balanced' INFO var) as information that the user gives to the
implementation: the code between JOIN and LEAVE takes a "deliberate"
approach towards progress, and the user understands that the performance
of their JOIN-LEAVE block will relate directly to how much and how
frequently they make MPI calls (or reach LEAVE). I'm not sure what the
right words are, but I feel we must make it more obvious to users why
they would want to use 'balanced' and how they should select what code
to put between JOIN and LEAVE in that case.
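For what it's worth, here is a sketch of the usage pattern I have in mind. Note this is not compilable: MPI_Helper_team_join/leave and the "balanced" info key are the draft-proposal names, and the exact signatures are my assumptions. The point is only that the user keeps MPI calls frequent and the per-thread work balanced inside the region:

```c
/* Sketch only -- helper-team calls and the "balanced" info key are from
 * the draft proposal, not the MPI standard; signatures are assumptions. */
MPI_Info info;
MPI_Info_create(&info);
MPI_Info_set(info, "balanced", "true");   /* work is balanced across threads */

#pragma omp parallel
{
    MPI_Helper_team_join(team, info);     /* hypothetical call from the draft */

    for (int i = 0; i < nblocks; i++) {
        compute_block(i);                 /* short, balanced chunk of work */
        MPI_Test(&req, &flag, &status);   /* frequent MPI calls let the
                                             implementation use this thread
                                             to drive progress */
    }

    MPI_Helper_team_leave(team);          /* hypothetical; each thread must be
                                             able to reach this independently */
}
MPI_Info_free(&info);
```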

Douglas Miller                  BlueGene Messaging Development
IBM Corp., Rochester, MN USA                     Bldg 030-2 A410
dougmill at us.ibm.com               Douglas Miller/Rochester/IBM

From: Pavan Balaji <balaji at mcs.anl.gov>
To: Douglas Miller/Rochester/IBM at IBMUS
Cc: mpi3-hybridpm at lists.mpi-forum.org
Date: 02/09/2011 09:05 AM
Subject: Re: [Mpi3-hybridpm] New draft of the EI chapter uploaded
Hi Doug,

On 02/09/2011 08:54 AM, Douglas Miller wrote:
> The problem is that none of these INFO options represent the mode I was
> discussing in our last meeting. These are different interpretations of
> how helper threads will be used and implemented.
> I am not at all comfortable with this concept of adding synchronization
> to all MPI calls. In fact, I'm not at all sure what it means to set
> balanced to true and then go into computation. How do computation
> threads get synchronized in those cases? It sounds the same as
> balanced=false.

This is based on what we had discussed in the call -- we even had a
discussion on why the term was called "balanced" and Bronis clarified
that the computation was balanced across threads allowing the MPI
implementation to use all threads for any MPI call (this "using all
threads" is what I'm calling "synchronizing").

Marc had also pointed out that each thread should independently be able
to make progress to MPI_HELPER_TEAM_LEAVE even if another thread is
expecting help from it, which rules out blocking synchronization between
them (e.g., you cannot wait on a pthread_barrier). Even this part is
captured in the description: the threads might synchronize with other
threads during an MPI call.

I don't think we are saying two different things -- it's just a
word-smithing issue. But I think your description restricts it more than
you require by saying that the application can only call MPI functions
between JOIN and LEAVE.

  -- Pavan
