[Mpi-forum] Problem with large number of processes

Bland, Wesley wesley.bland at intel.com
Thu Jan 5 10:28:46 CST 2017


Hi Ichrak,


This mailing list is for those involved in creating the MPI Standard to discuss things directly related to standardizing MPI. It's not for answering questions about problems with an MPI program or implementation. The good news is that there are great alternative places to ask those questions.


For questions or problems with MPI programs, it's best to use something like stackoverflow.com, where there's a pretty good community of people who can help answer all kinds of programming questions, including MPI (http://stackoverflow.com/questions/tagged/mpi).


If you think you've found a bug with a particular implementation, you can use the support structure put in place for that implementation:


MPICH - discuss at mpich.org

Open MPI - users at lists.open-mpi.org


Commercial implementations have various websites, email addresses, etc. that you can find as well.


Please redirect your question to the more appropriate venue.


Thanks,

Wesley


On January 5, 2017 at 10:21:25 AM, Ichrak Mehrez (ichrak1412 at gmail.com) wrote:

Hello,

I have a simple MPI program. Everything is fine for small numbers of processes (10, 100, and 1000), but with 10,000 processes I get a runtime error. The attached file contains the error messages generated for n = 10,000.

  *   I tested my code on a 4-node cluster within the Grid'5000 platform (each node: 2 Intel Xeon E5-2630 v3 CPUs, 8 cores/CPU, 126 GB RAM, 5x558 GB HDD, 186 GB SSD, 10 Gbps Ethernet)
  *   The mpi.c file contains the following code:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int myRank, numProcs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myRank);
    MPI_Comm_size(MPI_COMM_WORLD, &numProcs);

    /* Only rank 0 prints the total process count and its own rank. */
    if (myRank == 0)
        printf("%d -%d\n", numProcs, myRank);

    MPI_Finalize();
    return 0;
}
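
For reference, a minimal way to compile and launch a program like this (a sketch assuming Open MPI's mpicc/mpirun wrappers; the executable name mpi_test is illustrative):

mpicc mpi.c -o mpi_test
mpirun -np 10000 --oversubscribe ./mpi_test

Note that 10,000 ranks on this cluster (4 nodes x 16 cores = 64 cores) oversubscribe the hardware by a wide margin, so failures at that scale often come from OS resource limits (process counts, open file descriptors, memory) rather than from the MPI calls themselves.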

Thank you.
_______________________________________________
mpi-forum mailing list
mpi-forum at lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum

