On Mon, Dec 17, 2012 at 10:07 AM, Moore, Shirley V <svmoore@utep.edu> wrote:
> Jed and Torsten,
>
> Sorry not to have responded until now, but I have had a heavy travel schedule lately.
>
> I have previously discussed with Torsten a quantum mechanics application code that might benefit from an efficient implementation of neighborhood collectives. For this code, we have a distributed array in which processes store projector data for the ions with which they overlap. Processes that overlap the same ions store redundant copies of the data, but this is a vast improvement over earlier implementations in which the entire array was replicated on all the processes. After computing their contributions to the new projectors, the processes exchange and sum the data with the other processes that overlap the same ions. Currently this is handled with point-to-point communication. Although better than the earlier implementations that used MPI_Allreduce, the current implementation still has substantial communication overhead. The neighborhoods are irregular and are not entirely static, although they change fairly slowly. Whether the vector that is exchanged is the same for all neighbors depends on whether we do one communication phase per ion or all the communication in one call. Currently we do it in one call -- i.e., a process sends the data for all its ions to all the processes with which it has overlap, and processes just throw away the data they don't need.

This sounds like a similar structure to the mesh-based PDE applications I was explaining. If you determine which ions your neighbors have, you could reduce only those that will be used, but then you need the "sparse reduction" that I was suggesting. I don't foresee the approach of sending everything and doing key matching on the receiver being expressible as an MPI reduction, though you could still do a Gather with neighborhood collectives and reduce by key locally. (This would be impractical for PDE applications; I don't know about your case.)
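
For what it's worth, here is a rough sketch of that last variant (gather with a neighborhood collective, then reduce by key locally), using the standard MPI_Neighbor_allgatherv on a distributed-graph communicator over the overlapping ranks. All application-side names here (IonBlock, PROJ_LEN, find_local_ion, my_ions) are invented for illustration, and a fixed projector block length is assumed:

#include <mpi.h>
#include <stdlib.h>

#define PROJ_LEN 8   /* assumed (made-up) length of one ion's projector block */

typedef struct { int ion_id; double proj[PROJ_LEN]; } IonBlock;

/* Hypothetical lookup: position of ion_id in my_ions[], or -1 if I don't store it. */
static int find_local_ion(const int *my_ions, int nions, int ion_id)
{
  for (int i = 0; i < nions; i++)
    if (my_ions[i] == ion_id) return i;
  return -1;
}

/* Exchange every rank's ion blocks with its overlap neighbors in one
 * neighborhood collective, then keep and sum only the blocks for ions this
 * rank actually stores (the "reduce by key locally" step). */
void exchange_and_sum(MPI_Comm nbr_comm,      /* dist-graph comm over overlapping ranks */
                      int indegree,           /* number of incoming neighbors */
                      IonBlock *mine,         /* my blocks, already holding my contribution */
                      const int *my_ions, int nions)
{
  MPI_Datatype blocktype;
  MPI_Type_contiguous((int)sizeof(IonBlock), MPI_BYTE, &blocktype);
  MPI_Type_commit(&blocktype);

  /* Learn how many blocks each neighbor will send. */
  int *recvcounts = malloc(indegree * sizeof(int));
  int *displs     = malloc(indegree * sizeof(int));
  MPI_Neighbor_allgather(&nions, 1, MPI_INT, recvcounts, 1, MPI_INT, nbr_comm);
  int total = 0;
  for (int i = 0; i < indegree; i++) { displs[i] = total; total += recvcounts[i]; }

  /* One call replaces the current point-to-point phase: everyone sends all
   * of its ions to all of its neighbors. */
  IonBlock *incoming = malloc((size_t)total * sizeof(IonBlock));
  MPI_Neighbor_allgatherv(mine, nions, blocktype,
                          incoming, recvcounts, displs, blocktype, nbr_comm);

  /* Local reduce-by-key: add contributions for my ions, discard the rest. */
  for (int k = 0; k < total; k++) {
    int j = find_local_ion(my_ions, nions, incoming[k].ion_id);
    if (j < 0) continue;   /* data this rank doesn't need */
    for (int p = 0; p < PROJ_LEN; p++) mine[j].proj[p] += incoming[k].proj[p];
  }

  free(incoming); free(recvcounts); free(displs);
  MPI_Type_free(&blocktype);
}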

> But since the processes don't have a data location for data they don't store, this is not a straight reduction; rather, they go through the list and pick out what they need. The problem with doing one communication phase per ion is that the number of ions with which a process has overlap can be fairly large -- e.g., on the order of 50.
>
> Hope this helps. Please let me know if you have any questions or comments.
>
> Regards,
> Shirley Moore

________________________________________
From: five9a2@gmail.com [five9a2@gmail.com] On Behalf Of Jed Brown [jedbrown@mcs.anl.gov]
Sent: Sunday, December 16, 2012 10:53 PM
To: Torsten Hoefler
Cc: Moore, Shirley V; MPI-3 Collective Subgroup Discussions
Subject: Re: Neighborhood collectives round 2: reductions
<div class="HOEnZb"><div class="h5"><br>
On Sat, Dec 15, 2012 at 8:08 AM, Torsten Hoefler <<a href="mailto:htor@illinois.edu">htor@illinois.edu</a><mailto:<a href="mailto:htor@illinois.edu">htor@illinois.edu</a>>> wrote:<br>
> Those use cases ([3]<a href="http://lists.mpi-forum.org/mpi3-coll/2011/11/0239.php" target="_blank">http://lists.mpi-forum.org/mpi3-coll/2011/11/0239.php</a>)<br>
> were all dependent on being able to reduce to overlapping targets.<br>
Depends on your definition of target. If you mean processes by<br>
"targets", then the current interface proposal provides this; if you<br>
mean memory locations at one process by "targets", then this will not be<br>
possible within current MPI semantics.<br>
<br>
I mean that the memory overlaps on the processor accumulating the result of the reduction. Think of a bunch of subdomains of a regular grid with one or two cells of overlap. An example of a "reduction" is to add up the contribution from all copies of each given cell. Cells near the middle of a "face" are only shared by two processes, but corner cells are shared by several processes.<br>
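
As a toy illustration of that sharing pattern (all sizes here are made up): on a 2x2 process grid with one cell of overlap, a cell in the interior of a shared face has two copies, while a cell in the corner overlap region has four, and the "reduction" combines exactly those copies:

#include <stdio.h>

/* Toy illustration, not application code: list the ranks whose overlapping
 * subdomain contains a given global cell on a px*py process grid with one
 * cell of overlap.  A face cell gets two owners, a corner cell gets four. */
int main(void)
{
  const int px = 2, py = 2, n = 8;          /* 2x2 ranks, 8x8 global cells (assumed) */
  const int cells[2][2] = {{3, 1}, {3, 4}}; /* a face cell and a corner cell */
  for (int c = 0; c < 2; c++) {
    int i = cells[c][0], j = cells[c][1], nshare = 0;
    printf("cell (%d,%d) held by ranks:", i, j);
    for (int rj = 0; rj < py; rj++)
      for (int ri = 0; ri < px; ri++) {
        /* owned cells [r*n/p, (r+1)*n/p) per direction, plus one ghost cell each way */
        int ilo = ri * n / px - 1, ihi = (ri + 1) * n / px;
        int jlo = rj * n / py - 1, jhi = (rj + 1) * n / py;
        if (i >= ilo && i <= ihi && j >= jlo && j <= jhi) {
          printf(" %d", rj * px + ri);
          nshare++;
        }
      }
    printf("  -> %d copies to combine\n", nshare);
  }
  return 0;
}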

> As for defining "identity", the operation I would like is to reduce by
> combining with a local buffer (usually an in-place destination buffer). That
> is, if I have the local buffer
> mine = [1.1, 2.1, 3.1, 4.1, 5.1, 6.1]
This can be expressed as a self-edge (we can discuss in-place
arguments, but then you would need to guarantee that the local buffer is
larger than the largest neighbor buffer).

Useful application semantics would require the same.
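
A minimal sketch of the self-edge idea using the existing MPI_Dist_graph_create_adjacent call; the function and array names are made up for illustration:

#include <mpi.h>
#include <stdlib.h>

/* Build an adjacent distributed graph whose neighbor lists include this
 * rank itself, so the local buffer enters the reduction as one more
 * (self-)neighbor.  nbrs[] holds the real overlap neighbors. */
MPI_Comm neighborhood_with_self(MPI_Comm comm, const int *nbrs, int nnbrs)
{
  int rank;
  MPI_Comm_rank(comm, &rank);
  int n = nnbrs + 1;
  int *srcs = malloc(n * sizeof(int));
  int *dsts = malloc(n * sizeof(int));
  for (int i = 0; i < nnbrs; i++) { srcs[i] = nbrs[i]; dsts[i] = nbrs[i]; }
  srcs[nnbrs] = rank;                  /* the self-edge */
  dsts[nnbrs] = rank;
  MPI_Comm nbr_comm;
  MPI_Dist_graph_create_adjacent(comm, n, srcs, MPI_UNWEIGHTED,
                                 n, dsts, MPI_UNWEIGHTED,
                                 MPI_INFO_NULL, 0 /* reorder */, &nbr_comm);
  free(srcs);
  free(dsts);
  return nbr_comm;
}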

> and vector types for my two neighbors (defined by me)
> incoming_type[0] = [0, 3, 4]
> incoming_type[1] = [1, 4]
> with incoming data (actually getting sent)
> incoming_data[0] = [10.2, 20.2, 30.2]
> incoming_data[1] = [100.3, 200.3]
> the result would be
> [op(1.1, 10.2), op(2.1, 100.3), 3.1, op(4.1, 20.2), op(5.1, 30.2, 200.3), 6.1]
> This would be a natural expression of the operation I call "SFReduce" in
> [4] http://59A2.org/files/StarForest.pdf
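
As a purely local check of this example with op = +, making the indexing explicit (no MPI needed):

#include <stdio.h>

int main(void)
{
  double mine[6] = {1.1, 2.1, 3.1, 4.1, 5.1, 6.1};
  int    idx0[3] = {0, 3, 4},          idx1[2] = {1, 4};
  double in0[3]  = {10.2, 20.2, 30.2}, in1[2]  = {100.3, 200.3};

  for (int k = 0; k < 3; k++) mine[idx0[k]] += in0[k];   /* neighbor 0 */
  for (int k = 0; k < 2; k++) mine[idx1[k]] += in1[k];   /* neighbor 1 */

  /* Prints 11.3 102.4 3.1 24.3 235.6 6.1, i.e. op(1.1,10.2), op(2.1,100.3),
   * 3.1, op(4.1,20.2), op(5.1,30.2,200.3), 6.1 with op = sum. */
  for (int i = 0; i < 6; i++) printf("%g ", mine[i]);
  printf("\n");
  return 0;
}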

I see, this may be doable with the vector interface (if we remove the
restriction of equal vector sizes -- this would remove some optimization
opportunities). Can you confirm that the currently proposed
neighbor_reducev() interface can cover this case?

If you remove the restriction of equal vector sizes, are you going to add an MPI_Datatype describing where to put the result? (I'd expect that to be a neighbor_reducew.) Note that, in general, there would be some points shared by neighbors {1,2} and other points shared by neighbors {1,3} (and {2,3}, ...), thus we can't just sort such that the reduction is always applied to the "first N" elements.
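
For illustration only: the "where to put the result" description could be one indexed datatype per neighbor, built here with the existing MPI_Type_create_indexed_block for the two neighbors of the example above. The reduce-W call itself is only a proposal, so just the datatypes are sketched:

#include <mpi.h>

/* One datatype per neighbor, selecting the destination slots in the local
 * buffer mine[] that this neighbor's contribution should be combined into. */
void build_result_types(MPI_Datatype *type0, MPI_Datatype *type1)
{
  int idx0[3] = {0, 3, 4};   /* neighbor 0 contributes to these entries of mine[] */
  int idx1[2] = {1, 4};      /* neighbor 1 contributes to these entries */
  MPI_Type_create_indexed_block(3, 1, idx0, MPI_DOUBLE, type0);
  MPI_Type_create_indexed_block(2, 1, idx1, MPI_DOUBLE, type1);
  MPI_Type_commit(type0);
  MPI_Type_commit(type1);
}

Entries 2 and 5 of the local buffer never appear in any neighbor's type, which is where the packing question below comes from.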

One remaining question is whether you can always guarantee "packed" data,
i.e., that the "empty" elements are always at the tail. Well, I guess
you could always add identity elements in the middle to fill the gaps.

I could pack, but I thought the point of the W interfaces was to enable the user to avoid packing (with possible performance advantages relative to fully-portable user code).


Also, would the static communication topology work for your use cases
(i.e., the neighborhoods don't change for a while)?

Yes, the communication topology is typically static, and we wouldn't use the neighborhood routines if it were changing frequently.