Author Topic: Testing MPI  (Read 5876 times)


Offline Mohammed

Testing MPI
« on: June 2, 2016, 14:22 »
I followed the steps for installing MPI as pointed out in the tutorial and ran a test script, but I am not sure if it is working properly. I have attached a snapshot of the log file. Can someone please confirm?

I have a single machine with two processors (10 cores each).

Thank you.

Offline Jess Wellendorff

Re: Testing MPI
« Reply #1 on: June 3, 2016, 10:35 »
From the limited info given by the PNG, it looks like you ran a job with 4 MPI processes. Was this the expected result? If not, please attach the full log file and the script, and give details about how the parallel job was executed.

Offline Mohammed

Re: Testing MPI
« Reply #2 on: June 3, 2016, 12:20 »
Dear Jess,

The log file I attached had nothing more written in it. I used the following test script from one of your tutorials and chose to run a multiprocess parallel simulation with 4 processes.

Code: python
import NLEngine

# Total number of MPI processes, and the rank of this process.
n = NLEngine.numberOfProcesses()
i = NLEngine.myRank()

# Each rank reports in turn; the barrier keeps the output ordered.
for j in range(n):
    if i == j:
        print "MPI process %i of %i reporting." % (i+1, n)
    NLEngine.barrier()

When I first installed MPI and ran the commands mpiexec -validate and smpd -status, everything was working fine.

Offline Anders Blom

Re: Testing MPI
« Reply #3 on: June 3, 2016, 13:20 »
If you really want a good test that MPI works properly, also try the following:
Code: python
import platform
import time
import random
import os

from NanoLanguage import *

# Identify the host, and sleep for a short random time so that the output
# from different MPI processes is less likely to interleave.
node = platform.node()
rand_sleep = random.random()
time.sleep(rand_sleep)

# Each MPI process reports whether it is the master or a slave.
if processIsMaster():
    print "Master : %s" % node
else:
    print "Slave  : %s" % node

# Report the threading-related environment variables seen by each process.
env_variables = ['OMP_NUM_THREADS', 'OMP_DYNAMIC', 'MKL_NUM_THREADS', 'MKL_DYNAMIC']
for variable in env_variables:
    print "%s %s=%s" % (node, variable, os.environ[variable])

# Run the built-in MPI diagnostic, which reports the P-score and MPI bandwidth.
from NL.ComputerScienceUtilities.ParallelTools.ParallelTools import mpiDiagnostic
mpiDiagnostic()
Although the output in the log WINDOW may appear to be missing something, perhaps the actual file that was created has more? It just looked a bit truncated...

Offline Mohammed

Re: Testing MPI
« Reply #4 on: June 3, 2016, 13:34 »
Dear Anders,

I ran the script you provided, and this is what I got (the actual log file):

Slave  : Quantissimo-PC
Quantissimo-PC OMP_NUM_THREADS=1
Quantissimo-PC OMP_DYNAMIC=FALSE
Quantissimo-PC MKL_NUM_THREADS=1
Quantissimo-PC MKL_DYNAMIC=FALSE
| Quantissimo-PC [ Slave ]          P-score: 1.68976   Network :  410.3 MB/s   |
|
|                                                                              |
+------------------------------------------------------------------------------+

Timing:                          Total     Per Step       

--------------------------------------------------------------------------------

Loading Modules + MPI   :       2.32 s       2.32 s      20.30|=============|
--------------------------------------------------------------------------------
Total                   :      11.45 s
Master : Quantissimo-PC
Quantissimo-PC OMP_NUM_THREADS=1
Quantissimo-PC OMP_DYNAMIC=FALSE
Quantissimo-PC MKL_NUM_THREADS=1
Quantissimo-PC MKL_DYNAMIC=FALSE
+------------------------------------------------------------------------------+
| MPI Diagnostic                                                               |
+------------------------------------------------------------------------------+
| Quantissimo-PC [ Master]          P-score: 1.20824                           |
+------------------------------------------------------------------------------+
+------------------------------------------------------------------------------+
| Total MPI performance : 228.164 MB/s                                         |
+------------------------------------------------------------------------------+


I am still new to parallelization in ATK, but the output is not what one would expect from the script (i.e. printing the master and slave nodes, which in my case should be 4 in total). Or maybe this is because I am using a single machine, so sockets/cores rather than nodes.
« Last Edit: June 3, 2016, 13:38 by Mohammed »

Offline Anders Blom

Re: Testing MPI
« Reply #5 on: June 3, 2016, 13:45 »
Why 4? It looks like you told it to use 2 MPI processes, in which case the output is correct.

Offline Mohammed

Re: Testing MPI
« Reply #6 on: June 3, 2016, 13:53 »
Actually, I chose 4.

I ran it twice, once with 2 MPI processes and once with 4, and in both cases it gave me one master and one slave, as appears in the previous log file.
« Last Edit: June 3, 2016, 13:58 by Mohammed »

Offline Jess Wellendorff

Re: Testing MPI
« Reply #7 on: June 6, 2016, 08:33 »
That sounds strange. I have attached an ATK Python script, which I ran as "mpiexec -n 4 atkpython mpi_diagnostic.py > mpi_diagnostic.log", and the resulting log file. The log file indicates 1 master and 3 slaves. Are you not able to get something similar on your machine?
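
The attachment is not reproduced in the thread, but a minimal sketch of an equivalent check, reusing only the NLEngine and processIsMaster() calls already quoted above, could look like this (the actual attached mpi_diagnostic.py may differ):
Code: python
# Sketch of a minimal MPI check, equivalent in spirit to the attached
# mpi_diagnostic.py (the actual attachment is not shown in the thread).
import platform

import NLEngine
from NanoLanguage import *

n = NLEngine.numberOfProcesses()
i = NLEngine.myRank()

# Each rank reports in turn whether it is the master or a slave.
for j in range(n):
    if i == j:
        role = "Master" if processIsMaster() else "Slave "
        print "%s : rank %i of %i on %s" % (role, i + 1, n, platform.node())
    NLEngine.barrier()  # keep the output from different ranks ordered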

Offline Anders Blom

Re: Testing MPI
« Reply #8 on: June 6, 2016, 09:01 »
Sure, if you run from the command line - on Linux :)
The issue here is the Job Manager on Windows.
I am not 100% sure it works perfectly; we should do some more testing. It may just be a simple issue with the log file, though...

Offline Mohammed

Re: Testing MPI
« Reply #9 on: June 6, 2016, 10:49 »
Jess,

I ran the same file you attached, and I am still getting one master and one slave as before.

Anders

I tried to run a simple geometry optimization on silicon using 4 MPI processes, and I attached a snapshot of a section of the log file; it seems to detect 4 process IDs. This got me confused. If it shows the process IDs, does that mean the requested parallelization is working properly? But if that is the case, shouldn't the MPI test have worked as well?
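
As a cross-check (just a sketch, not from the tutorials), any production script can be made to record its own parallelization in the log by printing the process count at the top, reusing the same NLEngine call as the test script:
Code: python
# Sketch: place at the top of a production script (e.g. the geometry
# optimization) so the log itself states how many MPI processes are running.
import NLEngine
from NanoLanguage import *

if processIsMaster():
    print "Running with %i MPI process(es)." % NLEngine.numberOfProcesses()

# ... the actual calculation (configuration, calculator, optimization) follows here ...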

Offline Anders Blom

Re: Testing MPI
« Reply #10 on: June 6, 2016, 11:10 »
Yes, it may just be that the MPI test itself is not working properly... The good news is that your geometry optimization really does seem to use 4 processes :)

Offline Mohammed

Re: Testing MPI
« Reply #11 on: June 6, 2016, 11:18 »
Ok, good to know. Thank you.  :)

I will move on to the more complicated simulations and see how the parallelization fares.

Offline Anders Blom

Re: Testing MPI
« Reply #12 on: June 6, 2016, 11:22 »
I did some testing and I clearly see 4 MPI processes running on the machine during the test. However, the log file appears to show only 2!  :o ???
This may be some limitation of Windows, or related to how we harvest the log output in VNL...

But to conclude this thread: if you see 4 processes when running a "real" script, then the parallelization is definitely working. The test scripts are just supposed to be a quicker way to determine this.
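
As an additional check from the operating-system side (a sketch, not one of the official test scripts), each rank can print its OS process ID, which can then be matched against the atkpython processes visible in the Windows Task Manager:
Code: python
# Sketch: print the OS process ID of every MPI rank so the log entries can be
# matched against the processes seen in Task Manager. Uses only the standard
# library plus the NLEngine calls quoted earlier in the thread.
import os
import platform

import NLEngine

n = NLEngine.numberOfProcesses()
i = NLEngine.myRank()

for j in range(n):
    if i == j:
        print "Rank %i of %i on %s has PID %i" % (i + 1, n, platform.node(), os.getpid())
    NLEngine.barrier()  # serialize output across ranks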
« Last Edit: June 6, 2016, 12:22 by Anders Blom »

Offline Mohammed

Re: Testing MPI
« Reply #13 on: June 6, 2016, 11:31 »
Thanks for the confirmation. I will then go by what the VNL scripts are showing  :)