[Mpi-comments] non-blocking communication deadlock possible?

Maciej Szpindler m.szpindler at icm.edu.pl
Fri Dec 19 03:27:06 CST 2014


Dear Mustafa Mahamed,

It sounds like your program has a bug, but this list is not the
proper place for such issues. Try posting your question on
stackoverflow.com instead.

Regards,
Maciej

W dniu 18.12.2014 o 14:05, mustafa mohamed pisze:
> Hello there. I have already set up an MPICH2 lab, connected all the computers I have, and run the cpi example program, which works correctly. But when I execute my own program, it gives the message: the process stopped working because one process exited without calling MPI_Init. What can I do to make it work?
>
>
>       On Wednesday, December 10, 2014 4:33 PM, Rolf Rabenseifner <rabenseifner at hlrs.de> wrote:
>
>
>   Dear Mr. Hehn,
>
> as long as you start your example with exactly two MPI processes,
> your example is guaranteed to run without deadlock
> based on the wording on nonblocking progress MPI-3.0 page 56, lines 30-35.
>
> Therefore, it is still deadlock-free if you substitute your
>
>>    MPI_Waitall(2,r,s);
>
> by
>
>      MPI_Wait(&r[0],&s[0]);
>      MPI_Wait(&r[1],&s[1]);
>
> Thank you for your question on whether the MPI standard
> completely describes the nonblocking functionality.
>
> Best regards
> Rolf
>
>
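For context, Rolf's two-MPI_Wait variant can be put into a complete program. This is a sketch under the same assumption he states, namely that the job is started with exactly two processes; the `other = 1 - rank` peer computation is an illustrative shorthand, not part of the original exchange:

```c
#include <mpi.h>
#include <stdio.h>

/* Sketch of the deadlock-free two-process exchange discussed above,
 * with MPI_Waitall replaced by two separate MPI_Wait calls.
 * Assumes the job is launched with exactly two processes. */
int main(int argc, char** argv) {
    MPI_Status s[2];
    MPI_Request r[2];
    int rank, other;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;           /* peer rank: 0 <-> 1 */

    double ds = 5.4;            /* value to send */
    double dr;                  /* buffer to receive into */
    int tag = 99;

    /* Both calls return immediately; neither blocks waiting for the peer. */
    MPI_Isend(&ds, 1, MPI_DOUBLE, other, tag, MPI_COMM_WORLD, &r[0]);
    MPI_Irecv(&dr, 1, MPI_DOUBLE, other, tag, MPI_COMM_WORLD, &r[1]);

    /* Completing the requests one at a time is still deadlock-free:
     * both ranks have already posted their receives, so the progress
     * rule (MPI-3.0 p. 56, lines 30-35) guarantees the sends complete. */
    MPI_Wait(&r[0], &s[0]);
    MPI_Wait(&r[1], &s[1]);

    printf("rank %d received %f\n", rank, dr);
    MPI_Finalize();
    return 0;
}
```

Build and run with, e.g., `mpicc sketch.c && mpiexec -n 2 ./a.out`; with any other process count the guarantee discussed in the thread does not apply.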
> ----- Original Message -----
>> From: "Andreas Hehn" <hehn at phys.ethz.ch>
>> To: mpi-comments at lists.mpi-forum.org
>> Sent: Wednesday, December 10, 2014 12:15:28 PM
>> Subject: [Mpi-comments] non-blocking communication deadlock possible?
>>
>> Dear MPI comments readers,
>>
>> I am not 100% sure if the following piece of code could cause a deadlock
>> in a standard-compliant MPI implementation or not.
>>
>> #include <mpi.h>
>>
>> int main(int argc, char** argv) {
>>    MPI_Status s[2];
>>    int num;
>>
>>    MPI_Init(&argc, &argv);
>>    MPI_Comm_rank(MPI_COMM_WORLD,&num);
>>
>>    double ds=5.4; // to send
>>    double dr;    // to receive
>>    int tag=99;
>>
>>    MPI_Request r[2];
>>    if(num==0) {
>>      MPI_Isend(&ds,1,MPI_DOUBLE,1,tag,MPI_COMM_WORLD,&r[0]);
>>      MPI_Irecv(&dr,1,MPI_DOUBLE,1,tag,MPI_COMM_WORLD,&r[1]);
>>    }
>>    else {
>>      MPI_Isend(&ds,1,MPI_DOUBLE,0,tag,MPI_COMM_WORLD,&r[0]);
>>      MPI_Irecv(&dr,1,MPI_DOUBLE,0,tag,MPI_COMM_WORLD,&r[1]);
>>    }
>>
>>    MPI_Waitall(2,r,s);
>>
>>    MPI_Finalize();
>>    return 0;
>> }
>>
>> It is not entirely clear to me if the order of MPI_Isend and MPI_Irecv
>> matters (as it does for blocking communication). Both function calls
>> should return immediately according to section 3.7 of the standard -
>> even if MPI_Isend uses synchronous send.
>> So at this point there shouldn't be any problem.
>> However, will MPI_Waitall() return for any order of the supplied requests?
>> The standard says it is equivalent to MPI_Wait() for the number of
>> requests in "some arbitrary order" (line 28 of page 59).
>> I suppose this means MPI_Waitall() is not supposed to mind the order of
>> the requests in any way. Therefore it could not deadlock.
>> Is this correct and is the line of argument sufficient to exclude the
>> possibility of a deadlock?
>>
>> Best regards,
>>
>> Andreas Hehn
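The blocking case Andreas alludes to, where call order does matter, can be illustrated with a sketch: if both ranks call MPI_Send first, and the implementation does not buffer the message, both sends block until a matching receive is posted and neither rank ever reaches its MPI_Recv. This is an assumed minimal counterexample, not code from the thread:

```c
#include <mpi.h>

/* Sketch of the blocking counterpart: this ordering MAY deadlock.
 * MPI_Send is permitted to block until the matching receive is posted,
 * so if both ranks sit in MPI_Send, neither reaches MPI_Recv.
 * (For small messages many implementations buffer the send and the
 * program happens to run, but the standard gives no such guarantee.) */
int main(int argc, char** argv) {
    int rank, other, tag = 99;
    double ds = 5.4, dr;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;   /* assumes exactly two processes */

    MPI_Send(&ds, 1, MPI_DOUBLE, other, tag, MPI_COMM_WORLD); /* may block forever */
    MPI_Recv(&dr, 1, MPI_DOUBLE, other, tag, MPI_COMM_WORLD, &st);

    MPI_Finalize();
    return 0;
}
```

The nonblocking version in Andreas's question avoids this hazard precisely because MPI_Isend and MPI_Irecv return immediately, so both receives are posted before any completion call blocks.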
>> _______________________________________________
>> mpi-comments mailing list
>> mpi-comments at lists.mpi-forum.org
>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-comments
>>
>
>
>


