QuantumATK Forum

QuantumATK => Installation and License Questions => Topic started by: Anirban Basak on May 4, 2012, 14:53

Title: Asymmetrical core distribution (in Parallel)
Post by: Anirban Basak on May 4, 2012, 14:53
Hi,

If I have two 12-core machines and one 80-core Sun machine, can I somehow define how many cores each node has, and make the program run so that all 12+12+80 cores are utilized? I tried simply including the IP of the Sun machine and increasing the process count to 12+12+80, but mpich2 just roughly divided it by 3 and allocated the processes symmetrically.

Thanks
Title: Re: Asymmetrical core distribution (in Parallel)
Post by: Nordland on May 4, 2012, 18:44
You have a machines file. In this machines file you can write the pattern in which you want to distribute the jobs, so for example if you write something like this:

Code
machine1
machine2
machine3
machine3
machine3
machine1
machine2
machine3
....

Then when running mpich it will assign processes from the top down: the first two go to the first two machines, the next three go to machine3, then another one to machine1, one to machine2, and then again to machine3, and so on.

You must then select a good pattern for your needs.
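For example, a minimal launch sketch assuming MPICH2 and ATK's atkpython executable (the file name machines.txt and the script job.py are placeholders, and depending on the MPICH2 process manager the option may be -machinefile or -f):

Code
# start 8 MPI processes, placed host-by-host in the order listed in machines.txt
mpiexec -machinefile machines.txt -n 8 atkpython job.py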
Title: Re: Asymmetrical core distribution (in Parallel)
Post by: Anders Blom on May 5, 2012, 00:34
A more compact way is suggested here: http://www.hpccommunity.org/content/behind-scenes-mpi-process-placement-162/
I think you can write

Quote
machine1:12
machine2:80

HOWEVER - I would strongly advise against starting that many MPI processes on a single node!!! In my experience the best performance comes from running one MPI process per socket. Now, the Sun machine with 80 cores is perhaps a slightly different story; I'm not sure how many sockets that is (20?), but you should experiment a bit and not immediately try to run 80 MPI processes: it will most likely be slower than using a lower number.
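For the machines in this thread, a sketch following the one-process-per-socket advice could look like this (the host names are placeholders, and the socket counts are assumptions: two dual-socket 12-core nodes plus the Sun machine counted as 20 sockets; benchmark before settling on these numbers):

Code
# machines.txt, using the host:count syntax (MPICH2 Hydra); placeholder host names
node1:2
node2:2
sunhost:20

Code
# launch 2 + 2 + 20 = 24 MPI processes in total
mpiexec -machinefile machines.txt -n 24 atkpython job.py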
Title: Re: Asymmetrical core distribution (in Parallel)
Post by: Anirban Basak on May 5, 2012, 10:28
Thank you!!! This works very well. :D