[Mpi-forum] MPI_Mprobe workaround

Torsten Hoefler htor at illinois.edu
Fri Jul 13 14:39:49 CDT 2012


Jeff,

> Hmm, it seems I've stumbled upon a variant of Listing 1.1 in your paper :-)
Yes.
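
(For the archive: the pattern in question is the usual probe-then-allocate
receive of a message whose size is not known in advance. A minimal sketch,
assuming the MPI-3 matched-probe interface and MPI_BYTE for illustration;
this is not the exact code of Listing 1.1:)

  #include <stdlib.h>
  #include <mpi.h>

  /* Thread-safe receive of a message of unknown size: MPI_Mprobe returns
   * a message handle, and only an MPI_Mrecv on that handle can match the
   * message, so no other thread can steal it between the probe and the
   * receive (unlike the classic MPI_Probe + MPI_Recv pair). */
  void recv_unknown_size(int source, int tag, MPI_Comm comm)
  {
      MPI_Message msg;
      MPI_Status status;
      int count;

      MPI_Mprobe(source, tag, comm, &msg, &status);
      MPI_Get_count(&status, MPI_BYTE, &count);

      char *buf = malloc(count);          /* may trigger sbrk()/mmap() */
      MPI_Mrecv(buf, count, MPI_BYTE, &msg, &status);

      /* ... process buf ... */
      free(buf);
  }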

> I'm a big fan of buffer pools, so I am unfazed by the potential
> performance impact of sbrk().  If you need to malloc() so much memory
> that sbrk() is called, I suspect that the message transfer time is
> going to take longer than a system call.  
Keep in mind that you may (hopefully!) have multiple interfaces. If the
network bandwidth does not scale with the number of cores, then one may
run into all kinds of different issues.
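
(And for completeness, the kind of pool Jeff means: receive buffers are
reused so that the common case never reaches malloc()/sbrk(). A minimal
sketch, purely illustrative, single-threaded, and not from the paper:)

  #include <stdlib.h>

  #define POOL_SLOTS 64
  #define SLOT_BYTES (64 * 1024)

  static void *free_list[POOL_SLOTS];    /* stack of idle buffers */
  static int free_top = 0;

  /* Return a buffer of at least 'bytes'; reuse a pooled one if possible. */
  void *pool_get(size_t bytes)
  {
      if (bytes <= SLOT_BYTES && free_top > 0)
          return free_list[--free_top];   /* no malloc(), no system call */
      return malloc(bytes > SLOT_BYTES ? bytes : SLOT_BYTES); /* may sbrk() */
  }

  /* Give a buffer back; keep it for reuse if there is room in the pool. */
  void pool_put(void *buf)
  {
      if (free_top < POOL_SLOTS)
          free_list[free_top++] = buf;
      else
          free(buf);
  }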

The paper also shows that the benefit of Mprobe (using std malloc) can
be measured, even on yesterday's systems (eight cores, if I remember
correctly).

> But I confess to being somewhat ignorant of the performance of slow,
> general purpose operating systems like Linux.
You forgot "widely available" and "portable" in this list ;-).

Torsten

-- 
### qreharg rug ebs fv crryF ------------- http://www.unixer.de/ -----
Torsten Hoefler         | Performance Modeling and Simulation Lead
Blue Waters Directorate | University of Illinois (UIUC)
1205 W Clark Street     | Urbana, IL, 61801
NCSA Building           | +01 (217) 244-7736


