[Mpi3-hybridpm] Reminder for telecon tomorrow

Joe Ratterman jratt0 at gmail.com
Wed Feb 17 11:57:53 CST 2010


If you have 8 endpoints doing communication in parallel, they would not each
need to be able to saturate the network; it may be most efficient to allow
them greater communication parallelism at the software level.

Allocating as many endpoints as possible without sacrificing FIFO space would
not, to the best of my understanding, result in performance issues.
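
To make the arithmetic concrete, here is a rough C sketch (purely
illustrative; the constants and the fifos_per_endpoint() helper below are my
own, not any real MPI or system API), using the numbers from my earlier
message: 32 FIFOs per node, with 6 per endpoint considered optimal.

#include <stdio.h>

/* Illustrative numbers taken from earlier in this thread; the real values
 * on the machine are higher. */
#define FIFOS_PER_NODE       32   /* injection FIFOs available on a node   */
#define OPTIMAL_FIFOS_PER_EP  6   /* FIFOs per endpoint considered optimal */

/* Hypothetical helper: FIFOs each endpoint gets with an even split. */
static int fifos_per_endpoint(int num_endpoints)
{
    return FIFOS_PER_NODE / num_endpoints;
}

int main(void)
{
    /* Allowing 8 endpoints: each gets only 32/8 = 4 FIFOs, below the
     * optimum, but 8-way software parallelism may still win overall. */
    printf("8 endpoints -> %d FIFOs each\n", fifos_per_endpoint(8));

    /* Capping the endpoint count so none drops below 6 FIFOs gives
     * 32/6 = 5 endpoints, the maximum Pavan computes below. */
    printf("at most %d endpoints keep %d FIFOs each\n",
           FIFOS_PER_NODE / OPTIMAL_FIFOS_PER_EP, OPTIMAL_FIFOS_PER_EP);

    return 0;
}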


Thanks,
Joe Ratterman
jratt at us.ibm.com


On Wed, Feb 17, 2010 at 10:21 AM, Pavan Balaji <balaji at mcs.anl.gov> wrote:

> Joe,
>
> On 02/17/2010 08:38 AM, Joe Ratterman wrote:
> > Let's assume that there are 32 FIFOs per node, and that using 6 per
> > process or endpoint is considered optimal (both numbers are actually
> > higher, but that isn't too important).  If we were to allow users to
> > create up to 8 endpoints per node, it would only be possible to allocate
> > 4 FIFOs for each endpoint.  Doing this in all cases would slow the
> > single-threaded case, which will probably be more common for quite some
> > time.
>
> Thanks. I'm missing some math here. I'd assume that you'd still keep 6
> FIFOs per endpoint, so each endpoint can fully saturate all the network
> links. So, you'd have a maximum of 5 endpoints. Is this correct?
>
> Suppose you always initialize the network with 5 endpoints, but in the
> single-threaded case just use 1 endpoint, and in the multi-threaded (or
> more specifically multi-threaded/multi-endpoint) case use as many
> endpoints as the user specifies, up to 5. Does this cause a performance
> problem? Or a resource usage problem?
>
>  -- Pavan
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
>