[mpi3-coll] array parameters in nonblocking collectives

Rajeev Thakur thakur at mcs.anl.gov
Sat Aug 1 10:54:20 CDT 2009


Torsten,
        What does libNBC do in this case?

Rajeev
 

> -----Original Message-----
> From: mpi3-coll-bounces at lists.mpi-forum.org 
> [mailto:mpi3-coll-bounces at lists.mpi-forum.org] On Behalf Of 
> Torsten Hoefler
> Sent: Saturday, August 01, 2009 10:46 AM
> To: MPI-3 Collective Subgroup Discussions
> Subject: Re: [mpi3-coll] array parameters in nonblocking collectives
> 
> Hello Bin,
> thanks for bringing this up; it seems like an important issue.
> 
> I can see both sides. In general, I would say that the vector 
> collectives are not scalable anyway, because they require 
> \Theta(P) memory per process and \Theta(P^2) in total. Shifting 
> this by a constant factor probably doesn't matter much for 
> asymptotic (large-scale) analysis. However, there is certainly 
> a border zone where something could be gained by transferring 
> ownership of the arrays to the MPI library. 
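> 
> (A minimal sketch of the case in question, assuming for 
> illustration the nonblocking vector-collective interface along 
> the lines of what was later standardized as MPI_Ialltoallv: the 
> four length-P count/displacement arrays are exactly the state 
> whose ownership is at issue, since the caller must keep them 
> valid until completion unless the library copies them.)
> 
>   #include <mpi.h>
>   #include <stdlib.h>
> 
>   int main(int argc, char **argv)
>   {
>       MPI_Init(&argc, &argv);
>       int rank, size;
>       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>       MPI_Comm_size(MPI_COMM_WORLD, &size);
> 
>       /* four \Theta(P) arrays per process -- the parameters under discussion */
>       int *scounts = malloc(size * sizeof(int)), *sdispls = malloc(size * sizeof(int));
>       int *rcounts = malloc(size * sizeof(int)), *rdispls = malloc(size * sizeof(int));
>       int *sendbuf = malloc(size * sizeof(int)), *recvbuf = malloc(size * sizeof(int));
>       for (int i = 0; i < size; i++) {
>           scounts[i] = rcounts[i] = 1;
>           sdispls[i] = rdispls[i] = i;
>           sendbuf[i] = rank;
>       }
> 
>       MPI_Request req;
>       MPI_Ialltoallv(sendbuf, scounts, sdispls, MPI_INT,
>                      recvbuf, rcounts, rdispls, MPI_INT,
>                      MPI_COMM_WORLD, &req);
>       /* ... overlapped computation; the count/displacement arrays must
>          stay alive (and unmodified) until completion, unless ownership
>          is transferred to the library */
>       MPI_Wait(&req, MPI_STATUS_IGNORE);
> 
>       free(sendbuf); free(recvbuf);
>       free(scounts); free(sdispls); free(rcounts); free(rdispls);
>       MPI_Finalize();
>       return 0;
>   }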
> 
> We will certainly discuss this issue in Helsinki and at the 
> next telecon!
> 
> For the future, I would generally expect that, at large 
> scale, such communication patterns are rather sparse. So we 
> should either talk about a sparse interface to the vector 
> collectives or its static variant, the graph topology 
> collectives. Those options will also be discussed as soon as 
> possible. Both would potentially reduce the total memory 
> consumption (assuming sparse communication) from \Theta(P^2) 
> to O(P) or O(P log(P)), depending on the problem.
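> 
> (Again only a sketch, assuming the distributed graph topology 
> and neighborhood collective interfaces along the lines of what 
> was later standardized (MPI_Dist_graph_create_adjacent, 
> MPI_Neighbor_alltoallv): with a sparse pattern -- here a ring -- 
> each process stores count/displacement arrays of length equal 
> to its neighborhood degree rather than P, which is where the 
> O(P) total comes from.)
> 
>   #include <mpi.h>
> 
>   int main(int argc, char **argv)
>   {
>       MPI_Init(&argc, &argv);
>       int rank, size;
>       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>       MPI_Comm_size(MPI_COMM_WORLD, &size);
> 
>       /* ring topology: every rank talks to exactly two neighbors */
>       int nbrs[2] = { (rank - 1 + size) % size, (rank + 1) % size };
>       MPI_Comm ring;
>       MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
>                                      2, nbrs, MPI_UNWEIGHTED,  /* sources */
>                                      2, nbrs, MPI_UNWEIGHTED,  /* destinations */
>                                      MPI_INFO_NULL, 0, &ring);
> 
>       /* arrays have length degree (2), not P */
>       int scounts[2] = {1, 1}, sdispls[2] = {0, 1};
>       int rcounts[2] = {1, 1}, rdispls[2] = {0, 1};
>       int sendbuf[2] = {rank, rank}, recvbuf[2];
> 
>       MPI_Neighbor_alltoallv(sendbuf, scounts, sdispls, MPI_INT,
>                              recvbuf, rcounts, rdispls, MPI_INT, ring);
> 
>       MPI_Comm_free(&ring);
>       MPI_Finalize();
>       return 0;
>   }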
> 
> All the Best,
>   Torsten
> 
> --
>  bash$ :(){ :|:&};: --------------------- http://www.unixer.de/ -----
> Torsten Hoefler       | Postdoctoral Fellow
> Open Systems Lab      | Indiana University    
> 150 S. Woodlawn Ave.  | Bloomington, IN 47405, USA
> Lindley Hall Room 135 | +01 (812) 855-3608 
> _______________________________________________
> mpi3-coll mailing list
> mpi3-coll at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-coll
> 



