Show Posts

Messages - Derek Stewart

1
Hi Anders,

Thanks for your quick reply.  After going back through my input file, I realized that I had not chosen the spin-polarized exchange-correlation functional, so it is not surprising that the transmission was not split into spin components.

Thanks also for the tips on the Python scripting.

Regards,

Derek

2
Hi everyone,

I would like to print out the spin-up and spin-down transmission spectra as a function of energy.  I have found the following approach:

# -------------------------------------------------------------
# Transmission spectrum
# -------------------------------------------------------------
transmission_spectrum = TransmissionSpectrum(
    configuration=device_configuration,
    energies=numpy.linspace(-5,5,100)*eV,
    kpoints=MonkhorstPackGrid(1,1),
    energy_zero_parameter=AverageFermiLevel,
    infinitesimal=1e-06*eV,
    self_energy_calculator=KrylovSelfEnergy(),
    )

if processIsMaster(): nlprint(transmission_spectrum)


which prints out the total transmission spectrum. 

Is it possible to print out the spin-separated spectrum in the output file?

Also, is there a quick command to print out the conductance at the Fermi level for the spin-up and spin-down components?
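
For context, what I am after is just the Landauer conductance per spin channel, G = (e^2/h)*T(E_F).  Below is a minimal plain-Python sketch with hypothetical transmission values, independent of the ATK API:

# Plain-Python sketch (not the ATK API): Landauer conductance per spin
# channel from a transmission value taken at the Fermi energy.
from scipy.constants import e, h

G0_per_spin = e**2 / h        # conductance quantum per spin channel, in siemens

T_up, T_down = 0.42, 0.17     # hypothetical spin-resolved transmissions at E_F

print("G_up   = %.4e S" % (T_up * G0_per_spin))
print("G_down = %.4e S" % (T_down * G0_per_spin))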

Thanks,

Derek

3
Hi everyone,

I just noticed that the final command line I listed had a typo.  It should have included the redirect to the output file as follows:


mpiexec -n 2 -hosts d1,d2 /opt/QuantumWise/atk-11.2.b2/atkpython/bin/atkpython /home/derek/atk_mpi_test/test_mpi.py > out.run < /dev/null &

4
Hi everyone,

I have been testing the new version of ATK with some parallel runs, and I have run into a problem when I try to run even the simple mpi_test script in the background with mpiexec.  Everything works properly if I let the output print to the screen, or if I redirect it to a file and let the job run in the foreground.  I am using MPICH2 version 1.3.2, and the calculations are done on a Red Hat Enterprise Linux 5 machine with Xeon processors.

For example, these commands work fine:

 mpiexec  -n 2 -hosts d1,d2 /opt/QuantumWise/atk-11.2.b2/atkpython/bin/atkpython /home/derek/atk_mpi_test/test_mpi.py

 mpiexec  -n 2 -hosts d1,d2 /opt/QuantumWise/atk-11.2.b2/atkpython/bin/atkpython /home/derek/atk_mpi_test/atk_mpi_test > out.run

However, when I try to run it in the background by adding & at the end:

 mpiexec -n 2 -hosts d1,d2 /opt/QuantumWise/atk-11.2.b2/atkpython/bin/atkpython /home/derek/atk_mpi_test/test_mpi.py &

I get the following error:
[mpiexec@d1.cnf.cornell.edu] HYDU_sock_read (./utils/sock/sock.c:222): read errno (Input/output error)
[mpiexec@d1.cnf.cornell.edu] control_cb (./pm/pmiserv/pmiserv_cb.c:249): assert (!closed) failed
[mpiexec@d1.cnf.cornell.edu] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[mpiexec@d1.cnf.cornell.edu] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:206): error waiting for event
[mpiexec@d1.cnf.cornell.edu] main (./ui/mpich/mpiexec.c:404): process manager error waiting for completion


After searching through some discussion groups on mpiexec with the Hydra process manager, I found the following work-around for running jobs in the background.

 mpiexec -n 2 -hosts d1,d2 /opt/QuantumWise/atk-11.2.b2/atkpython/bin/atkpython /home/derek/atk_mpi_test/test_mpi.py < /dev/null &
 
With this redirection in place, you can also run the calculation with nohup at the beginning.
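
For example, a nohup variant of the same command might look like this (same paths as above; the output redirection is optional):

 nohup mpiexec -n 2 -hosts d1,d2 /opt/QuantumWise/atk-11.2.b2/atkpython/bin/atkpython /home/derek/atk_mpi_test/test_mpi.py < /dev/null > out.run 2>&1 &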
   
The following link discusses this issue in more detail:
http://lists.mcs.anl.gov/pipermail/mpich-discuss/2010-October/008239.html

Best Regards,

Derek





5
Hi Anders,

Thank you for your quick response.  I will try switching the way the transmission is printed out and see how it goes.

Thanks again,

Derek
   

6
General Questions and Answers / jumbled transmission output
« on: March 6, 2009, 17:37 »
Hi everyone,

We are trying to do some parallel transmission calculations with ATK.  We are running into a problem where the transmission output lines appear to be jumbled.  It looks like the program is printing out results from different nodes and the lines are being merged with no carriage returns between them.  Our print statement is:

print trans_energy.pop(),'\t',trans_coeff.pop()


For example we get:

0.923 eV        0.209212442784
0.924 eV        0.296368037242
0.925 eV        0.50.009 eV     3.10781739019e-11
-0.008 eV       3.05943705848e-11
-0.007 eV       3.01449322864e-11
-0.006 eV       2.97280958329e-11
-0.005 eV       2.93422624775e-11


or

0.842 eV        0.100164929123
0.843 eV        0.100308432721
0.844 eV        0.100430468154
0.845 eV        0.10053021030.00575102241749
-0.08 eV        0.00551587865742
-0.079 eV       0.0052757592935
-0.078 eV       0.00503039033118


Have other people experienced this problem?
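
One work-around I may try, assuming the processIsMaster() helper used in the newer snippet above also exists in this version, is to print only from the master process:

# sketch: let only the master MPI process write the output line
if processIsMaster():
    print trans_energy.pop(), '\t', trans_coeff.pop()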

Thanks,

Derek
 

7
Hi all,

I was wondering whether it is possible for ATK to calculate the complex band structure of a bulk material along a given direction.

Thanks,

Derek

8
General Questions and Answers / Re: Check parallel performance
« on: February 11, 2009, 05:18 »
Hi Anders,

Thanks for the zip file and the info on the parallel scaling.  They should give me a good place to start.

Best regards,

Derek
 

9
General Questions and Answers / Check parallel performance
« on: February 10, 2009, 20:11 »
Hi everyone,

I would like to check the parallel performance of ATK on my cluster.  I was wondering which specific bulk and molecule systems were used to generate the benchmarks in the "Parallel calculations using ATK" tutorial.  Are they included with the examples?

Also, the tutorial mentions that you always need to type the full path to ATK to run it in parallel.  There is a way to avoid typing the entire path for parallel jobs: in bash, you can add a PATH statement to your .bashrc file, and this will set the proper PATH on the other nodes in your cluster.
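
For example, adding a line like the following to ~/.bashrc on each node makes atkpython available without the full path (the install directory shown is only an illustration; use the actual location of your ATK installation):

 export PATH=/opt/QuantumWise/atk-11.2.b2/atkpython/bin:$PATH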

Best regards,

Derek
