Show Posts



Messages - frsy

16
Future Releases / Would NanoLanguage support python 3.x?
« on: April 20, 2009, 11:19 »
We know the VNL language is based on Python 2.x. Will it be upgraded to Python 3.0?

17
Thank you, Dr. Blom! I have tried initial_calculation in a bulk calculation and it succeeded. But this two-probe calculation:

# Assuming the usual two-probe imports and the definitions of
# ini_two_probe_conf and two_probe_method earlier in the script.

# Restore the converged data saved in the checkpoint file
scf = restoreSelfConsistentCalculation(
    filename = 'dump.nc'
)

import ATK
ATK.setCheckpointFilename('check.nc')
ATK.setVerbosityLevel(6)

# Using initial density from self consistent calculation
scf = executeSelfConsistentCalculation(
    ini_two_probe_conf,
    two_probe_method,
    initial_calculation = scf,
)

It failed with "NLPolicyError: A restart for a two-probe requires an initial calculation from a two-probe." The file dump.nc should already contain the required data, since "self_consistent_calculation=scf" works normally.

18
Tried. You are right! Thank you! But I have another related question.
The electrode calculation is converged and saved to the checkpoint file, but the two-probe calculation is not converged. If I kill the job and change the central-region parameters in the input script, will the restarted job read these changed parameters or just ignore them? In other words, are the parameters read from the checkpoint file or from the input script when restarting the calculation?
My test showed the parameters are read from the checkpoint file. Am I correct? If that is true, one has to recalculate the converged electrodes whenever the central-region parameters are changed.

19
General Questions and Answers / Can atk calculate frequency?
« on: April 14, 2009, 13:24 »
I think ATK can't do this directly. Can anyone give some hints, such as how to build the force-constant matrix?
Thanks!
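
(For illustration only, here is a minimal sketch of what "building the force-constant matrix" could look like, using central finite differences of the forces. It assumes NumPy, and forces_of() is a hypothetical callable wrapping whatever force calculation you use; it is not an ATK function.)

import numpy

def force_constant_matrix(positions, forces_of, delta=0.01):
    # positions: (N, 3) array of Cartesian coordinates
    # forces_of: callable returning the (N, 3) forces for given positions
    # delta:     finite displacement, in the same length unit as positions
    n = positions.size
    K = numpy.zeros((n, n))
    flat = positions.reshape(-1)
    for i in range(n):
        plus = flat.copy();  plus[i]  += delta
        minus = flat.copy(); minus[i] -= delta
        f_plus  = forces_of(plus.reshape(positions.shape)).reshape(-1)
        f_minus = forces_of(minus.reshape(positions.shape)).reshape(-1)
        K[i, :] = -(f_plus - f_minus) / (2.0 * delta)   # K_ij = -dF_j/du_i
    return 0.5 * (K + K.T)   # symmetrize to reduce numerical noise

Vibrational frequencies then follow from diagonalizing the mass-weighted matrix K_ij / sqrt(m_i * m_j).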

20
The electrode calculation finishes within 4 hours when started from scratch in my case, so this is really strange.
Could it be related to the parallel run? Or did something go wrong when the electrodes were recalculated?

21
Dear all,
    I ran a two-probe job and suffered a power failure. But I have the NetCDF file, and the electrode calculation had been completed. To restart my job I modified the script:

# Restore everything that was saved to the checkpoint file before the crash
scf = restoreSelfConsistentCalculation(
    filename = 'crash.nc'
)

# Continue the interrupted SCF loop from the restored state
scf = executeSelfConsistentCalculation(
    self_consistent_calculation=scf,
)

   Then I submitted the job, and the "top" command showed it was running. But for a long time (24+ hours) there was no further output after:
# -----------------------------------------------------------------------------
# TwoProbe Algorithm Parameters
# -----------------------------------------------------------------------------
Electrode Constraint = ElectrodeConstraints.Off
Initial Density Type = InitialDensityType.EquivalentBulk

   The "top" command showed it was still running. Did I make something wrong?

Regards,


22
Thank you. I think you are right. For electrode calculations the parallelization is very good, but only a few CPUs are used when the two-probe part is calculated.

23
First a check: which version of ATK are you using?
ATK 2008.10.0 Linux-x86_64

But, then you write "Intel MPI", which could be the simple explanation: ATK only works with MPICH2 (ver 1.0.8 or similar).
Now I have tried the MPICH2 included in Fedora 10; mpich2version outputs:
MPICH2 Version:         1.0.8
MPICH2 Release date:    Unknown, built on Tue Mar 10 00:21:11 EDT 2009
MPICH2 Device:          ch3:nemesis

This time I used a machine file, but the same thing happened in my SCF calculation with TwoProbeMethod: only 3 CPUs (sometimes 2) were working once atk_exec entered the SCF loop.
So this should not be an error caused by MPICH2 or Intel MPI; I think it is related to TwoProbeMethod. If I run a bulk job (KSMethod), all 8 CPUs work the whole time.

Can you give me more hints? Are there other parameters in the input file that can control the parallelization?

BTW: the manual section "Launching a parallel job using MPICH2" says:
If you want to run on a specific set of machines you can construct a machine file. To run 2 jobs on the specified machines:
mpiexec -n 2 -machinefile mymachinefile $ATK_BIN_DIR/atk [args...]

I believe this is not correct (at least on my machine). Since -machinefile is a global argument, it must appear before the local argument -n:
mpiexec -machinefile mymachinefile -n 2 $ATK_BIN_DIR/atk [args...]
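
(For reference, an MPICH2 machine file is just a plain-text list of host names, one per line, optionally followed by ":" and the number of processes to start on that host. The host names below are only placeholders:)

node01:4
node02:4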

24
1) Do you have enough licenses for running 8 jobs?
Yes.

2) Do you use a machine file, or do you use the system default machine file? Are there more than 3 names written in this file?
I do not use any machine file since I use "mpirun" instead of "mpiexec". Intel MPI outputs
"WARNING: Can't read mpd.hosts for list of hosts, start only on current"
at the beginning of the outfile. This message is not an error. I also run VASP in this way and it always works. I will try "mpiexec" and check whether this still happens.

25
General Questions and Answers / Re: mesh cut-off
« on: April 6, 2009, 07:40 »
I think the best way is to try.
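
(In other words, run a convergence test: increase the mesh cut-off until the quantity you care about, e.g. the total energy, stops changing. A rough sketch; total_energy_at() is a hypothetical wrapper around your own ATK calculation, not an ATK function:)

previous = None
for cutoff in [75, 100, 125, 150, 200]:        # mesh cut-offs to test, e.g. in Rydberg
    energy = total_energy_at(cutoff)           # run the SCF calculation at this cut-off
    if previous is not None and abs(energy - previous) < 1.0e-3:
        print('Converged at a mesh cut-off of %s Ry' % cutoff)
        break
    previous = energy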

26
Dear All,
    I ran ATK with the command:
      mpirun -np 8 atk job.py &> outfile&
    Then I ran "top" to watch the system, and eight atk_exec processes were found to be running. But after ATK printed "# sc  0 : Fermi Energy =    0.00000 Ry" in the outfile, only three atk_exec processes kept running until the end of the job. I changed the number of parallel processes from 8 to 4, but nothing changed: only three atk_exec processes were running in the SCF loop.
    Is this normal? My job is an SCF calculation with TwoProbeMethod.
    Regards,

Frsy

27
Dear all,
   Figure 66 in the ATK.TwoProbe manual shows a periodically repeated heterogeneous two-probe system.
(http://quantumwise.com/documents/manuals/ATK-2008.10/ref.twoprobeconfiguration.html#ref.twoprobeconfiguration.notes.repetitions)
   But the lattice lengths in the transverse (x/y) directions are the same in both electrodes. Is this necessary for calculating a heterogeneous two-probe system? If not, how are left/right electrodes with different lattices periodically repeated in the x and y directions? Is there any limitation?
   Regards,

Frsy

28
Dear Nordland,
    I suggest you add a constraints parameter to calculateOptimizedBulkConfiguration() so that one can fix atoms in the bulk.
Another suggestion is to add "if processIsMaster():" before the print() calls; the current script messes up the screen in a parallel calculation. A minimal sketch follows.
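
(A sketch of the second suggestion, assuming processIsMaster() is available from the usual ATK import; optimized_configuration is just a placeholder for whatever is being printed:)

if processIsMaster():
    # Only the master MPI process writes to the screen, so the output
    # from the other nodes does not get interleaved.
    print(optimized_configuration)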

    Regards,

Frsy

29
Yes, Blom.
The stress is normal now. Thank you.

30
After debugging "pseudoStress" and "findFractionalCoordinates" I found that incorrect fractional_coordinates were used to build the strained_configuration. So the energy difference is large, and thus the absolute stress is also large.
Maybe the routine "findFractionalCoordinates" doesn't work in my case (BCT rutile).

My input fractional coords:
coordinates = [[ 0.        ,  0.        ,  0.        ],
               [ 0.69520000,  0.30480000,  0.        ]]

The initial coords in output:
# Index  Element  x (Ang)  y (Ang)  z (Ang)
      0       Ti     0.00     0.00     0.00
      1       O     -0.91     0.91     1.48

Coordinates recognized by pseudoStress the first time:
# Index  Element  x (Ang)  y (Ang)  z (Ang)
      0       Ti     0.00     0.00     0.00
      1       O     -0.91     0.91     1.48

Strained coords after using "findFractionalCoordinates":
# Index  Element  x (Ang)  y (Ang)  z (Ang)
      0       Ti     0.00     0.00     0.00
      1       O      2.07     0.25    -0.74

These are far from the original coordinates, so the calculated fractional coordinates may be questionable.
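
(For cross-checking, the conversion that findFractionalCoordinates should perform is just a change of basis. A minimal NumPy sketch, independent of ATK, assuming the lattice vectors are the rows of a 3x3 array in the same length unit as the Cartesian coordinates:)

import numpy

def to_fractional(cartesian, lattice_vectors):
    # Solve r_cart = f . L (one lattice vector per row of L) for the fractional coordinates f
    return numpy.dot(cartesian, numpy.linalg.inv(lattice_vectors))

def to_cartesian(fractional, lattice_vectors):
    return numpy.dot(fractional, lattice_vectors)

Round-tripping the input coordinates through these two functions should reproduce them; if it does not, the row/column convention of the lattice vectors is the likely culprit.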
