Author Topic: Appropriate runtimes and convergence  (Read 4332 times)


Offline BandTheory

  • Heavy QuantumATK user
  • ***
  • Posts: 26
  • Reputation: 0
Appropriate runtimes and convergence
« on: December 8, 2009, 18:59 »
Hello,

I currently have a script running that has been going for a little over 7 days.  It is running in parallel on 8 64-bit processors.

This seems a bit much to me, though it is the biggest two-probe system I have ever tried to simulate: there are 171 atoms in the central region.  Are these calculation times reasonable, or should I assume that something is amiss?

Could there be some convergence issue?  If so how could I fix it?

Thanks much for your help,

BT

Offline Nordland

  • QuantumATK Staff
  • Supreme QuantumATK Wizard
  • *****
  • Posts: 812
  • Reputation: 18
Re: Appropriate runtimes and convergence
« Reply #1 on: December 8, 2009, 20:54 »
I have performed a calculation with 2160 carbon atoms in the central region, so the program is able to handle systems of this kind.

However, for systems of this size the parameters should not be set too conservatively.
If you tell us how many iterations the calculation has run, it may be possible to give an idea of what the cause might be, since
171 atoms should run in much, much less than a week.

Offline BandTheory

  • Heavy QuantumATK user
  • ***
  • Posts: 26
  • Reputation: 0
Re: Appropriate runtimes and convergence
« Reply #2 on: December 15, 2009, 12:35 »
Well, the iteration control parameters and the iteration mixing parameters are both at their defaults.
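For intuition about why the mixing parameters matter, here is a minimal, generic sketch in plain Python (not the QuantumATK API; the function and numbers are purely illustrative): linear mixing replaces the full self-consistent update with a damped one, and a smaller mixing weight can turn an oscillating or diverging iteration into a converging one, at the cost of more steps.

```python
def scf_like_iteration(f, x0, alpha, tol=1e-8, max_iter=200):
    """Fixed-point iteration with linear mixing:
    x_new = (1 - alpha) * x_old + alpha * f(x_old).
    Returns (final value, number of iterations used)."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = (1 - alpha) * x + alpha * f(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

# Toy "density update" whose full step overshoots the fixed point
# (slope -1.5, fixed point at x = 0.4) -- illustrative only.
f = lambda x: 1.0 - 1.5 * x

x_full, n_full = scf_like_iteration(f, 0.0, alpha=1.0)   # undamped: oscillates and diverges
x_damp, n_damp = scf_like_iteration(f, 0.0, alpha=0.3)   # damped: converges to 0.4
```

The undamped run exhausts the iteration limit while the damped one converges in a handful of steps; in a real SCF loop the trade-off is the same, which is why a more aggressive (less conservative) mixing can either speed things up or wreck convergence.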

Could it be that one of the electrodes (the CNT one, in my case) uses a double-zeta polarized basis set?

Maybe I should be clearer.  I have a CNT electrode and a copper one.  All copper atoms are set to use a single-zeta basis set, and all carbon atoms a double-zeta polarized one.

I have, I think, set up the scattering layers correctly.

Offline zh

  • Supreme QuantumATK Wizard
  • *****
  • Posts: 1141
  • Reputation: 24
Re: Appropriate runtimes and convergence
« Reply #3 on: December 16, 2009, 02:34 »
The vacuum thickness is also a crucial parameter affecting the computational cost. Evidently, if the unit-cell vectors along x and y are quite long, each step of the self-consistent calculation will take more time.

To diagnose the problem:

i) track each step of the self-consistent calculation, and find the average time per step as well as the change of charge at each step;
ii) check the memory consumption on each node during the self-consistent calculation, and see whether the memory was exhausted.

If the change of charge fluctuates dramatically, the slow convergence may be caused by the input-file parameters related to charge mixing and to the determination of the occupations.
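As a sketch of point (i), one could tabulate the per-step time and charge change from the calculation's log output. The log format, the `dRho` field name, and the sample numbers below are all hypothetical, so the regular expression would have to be adapted to the actual output:

```python
import re

# Hypothetical SCF log excerpt -- replace with the real log file contents.
LOG = """\
step   1  dRho = 4.1e-01  time = 310.2 s
step   2  dRho = 9.5e-02  time = 305.8 s
step   3  dRho = 2.2e-01  time = 308.4 s
step   4  dRho = 8.7e-02  time = 306.1 s
"""

# Pull out the charge change and wall time from each SCF step line.
PATTERN = re.compile(r"dRho\s*=\s*([0-9.eE+-]+)\s+time\s*=\s*([0-9.]+)")

drhos, times = [], []
for line in LOG.splitlines():
    m = PATTERN.search(line)
    if m:
        drhos.append(float(m.group(1)))
        times.append(float(m.group(2)))

avg_time = sum(times) / len(times)
# Count the steps where the charge change went *up* -- a rough measure
# of the fluctuation zh describes; ideally this stays near zero.
n_increases = sum(1 for a, b in zip(drhos, drhos[1:]) if b > a)
```

Multiplying the average step time by the expected number of iterations gives a quick sanity check on whether a week-long run is plausible, and a large `n_increases` points toward the mixing/occupation parameters mentioned above.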