Hello there,

I have already set up the lab for MPICH2 and connected all the computers I have, and the cpi example program runs correctly. But when I execute my own program I get the message that the job stopped working because one process exited without calling MPI_Init. What can I do to make it work?
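For reference, a minimal sketch of the structure the MPI runtime expects (an illustrative example, not the poster's actual program): every process must call MPI_Init before any other MPI call and reach MPI_Finalize before it exits; if any process returns from main or aborts before MPI_Init, the job is typically terminated with an error like the one above.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char** argv) {
        int rank;

        /* MPI_Init must be the first MPI call in every process. */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        printf("Hello from rank %d\n", rank);

        /* Every process must reach MPI_Finalize before exiting. */
        MPI_Finalize();
        return 0;
    }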
On Wednesday, December 10, 2014 4:33 PM, Rolf Rabenseifner <rabenseifner@hlrs.de> wrote:

Dear Mr. Hehn,

as long as you start your example with exactly two MPI processes,
your example is guaranteed to run without deadlock,
based on the wording on nonblocking progress in MPI-3.0, page 56, lines 30-35.

Therefore, it is still deadlock-free if you substitute your

> MPI_Waitall(2, r, s);

by

  MPI_Wait(&r[0], &s[0]);
  MPI_Wait(&r[1], &s[1]);

Thank you for your question on whether the MPI standard
completely describes the nonblocking functionality.

Best regards
Rolf

----- Original Message -----
> From: "Andreas Hehn" <hehn@phys.ethz.ch>
> To: mpi-comments@lists.mpi-forum.org
> Sent: Wednesday, December 10, 2014 12:15:28 PM
> Subject: [Mpi-comments] non-blocking communication deadlock possible?
>
> Dear MPI comments readers,
>
> I am not 100% sure whether the following piece of code could cause a deadlock
> in a standard-compliant MPI implementation or not.
>
> #include <mpi.h>
>
> int main(int argc, char** argv) {
>     MPI_Status s[2];
>     int num;
>
>     MPI_Init(&argc, &argv);
>     MPI_Comm_rank(MPI_COMM_WORLD, &num);
>
>     double ds = 5.4; // to send
>     double dr;       // to receive
>     int tag = 99;
>
>     MPI_Request r[2];
>     if (num == 0) {
>         MPI_Isend(&ds, 1, MPI_DOUBLE, 1, tag, MPI_COMM_WORLD, &r[0]);
>         MPI_Irecv(&dr, 1, MPI_DOUBLE, 1, tag, MPI_COMM_WORLD, &r[1]);
>     } else {
>         MPI_Isend(&ds, 1, MPI_DOUBLE, 0, tag, MPI_COMM_WORLD, &r[0]);
>         MPI_Irecv(&dr, 1, MPI_DOUBLE, 0, tag, MPI_COMM_WORLD, &r[1]);
>     }
>
>     MPI_Waitall(2, r, s);
>
>     MPI_Finalize();
>     return 0;
> }
>
> It is not entirely clear to me whether the order of MPI_Isend and MPI_Irecv
> matters (as it does for blocking communication). Both calls should return
> immediately according to section 3.7 of the standard, even if MPI_Isend uses
> a synchronous send, so at this point there should not be any problem.
> However, will MPI_Waitall() return regardless of the order of the requests
> passed to it? The standard says it is equivalent to calling MPI_Wait() for
> each of the requests in "some arbitrary order" (page 59, line 28).
> I suppose this means MPI_Waitall() is not supposed to mind the order of
> the requests in any way, and therefore it cannot deadlock.
> Is this correct, and is this line of argument sufficient to exclude the
> possibility of a deadlock?
>
> Best regards,
>
> Andreas Hehn

--
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseifner@hlrs.de
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)