Show Posts



Messages - Anders Blom

Pages: 1 ... 331 332 [333] 334 335 ... 362
4981
There is no need to use more than 1 k-point in the Y direction; it just makes the calculation run slower, without changing or improving the results in any way!

So, try something like 10x1x200 instead.

4982
Although it's not a really critical point, I would consider shifting the coordinates in the X direction. If you visualize the system in the Nanoscope in VNL, you will find that half of the atoms are outside the unit cell. ATK will shift them inside, but that can cause some problems, and at any rate it will be a bit tricky to visualize the results later on.

Otherwise the key to this system is most likely indeed the k-point sampling, as zh pointed out. In the X direction, that is. The system is periodic in this direction, and graphene typically requires a relatively large k-point sampling, if not for convergence then at any rate to obtain accurate results.

There is no need to increase the integration circle to 100 Ry; you will just lose accuracy, even with 100 points. If you fail to converge with a circle of, say, 10 Ry, then the problem lies elsewhere. Since there appears to be enough electrode layers in your setup, I think the k-point sampling will be very helpful. In addition, you can experiment with increasing the temperature.

Increasing the k-point sampling increases the computation time quite a bit if you run in serial. If you have the possibility, consider running the calculation in parallel.

4983
It should be

Code
T = trans.average_density_of_states()

The reason is that the DOS, like the transmission, is an average over the k-points in the Brillouin zone sampling.
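Conceptually, that k-point average is just a weighted mean over the Brillouin-zone samples. A minimal sketch in plain Python (the names `dos_per_k` and `weights` are illustrative, not the ATK API):

```python
# Illustrative only: average a k-resolved quantity over the Brillouin zone.
# `dos_per_k` holds the DOS evaluated at each sampled k-point, and
# `weights` the corresponding normalized k-point weights.
dos_per_k = [1.2, 0.8, 1.0, 1.0]
weights = [0.25, 0.25, 0.25, 0.25]

average_dos = sum(d * w for d, w in zip(dos_per_k, weights))
print(average_dos)  # → 1.0
```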

Are you familiar with the very useful command "dir"? It can really help in situations like this. You can insert "print dir(trans)" and then you will see all properties of the object "trans" :)
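To illustrate, `dir` works on any Python object; here the class is just a stand-in for the real `trans` result object:

```python
# `dir` lists the attributes and methods of any object, which is handy
# when you don't remember the exact name of a method.
class Transmission:
    """Stand-in for the real calculation result object."""
    def average_density_of_states(self):
        return 0.0

trans = Transmission()
print(dir(trans))  # includes 'average_density_of_states' among the entries
```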

4984
Just a typo ;)

The key should be 'Density Of States Density of states', but you put 'Desnsity Of States Desnity of states' in your script.

4985
Alternatively, you can modify the script to not ask for the file name interactively, but just enter the VNL file name manually into the script before you run it. In this case you also need to hardcode the sample name into the script. See http://quantumwise.com/forum/index.php?topic=300.0 for a discussion on precisely this.
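As a sketch of what that change looks like (the file and sample names below are placeholders, not taken from the thread):

```python
# Instead of asking interactively, e.g.
#   filename = raw_input('VNL file: ')   # old-style interactive prompt
# hardcode the names at the top of the script before running it:
filename = "my_results.vnl"    # placeholder VNL file name
sample_name = "my_device"      # placeholder sample name

print(filename, sample_name)
```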

4986
That looks ok, although note the following:

  • It will be hard to converge the calculation above 1 V bias. I'd limit the voltage range to 0-1 V for now (this is just a test to learn, anyway, right?)
  • The variable "scf" is not initially defined in this code segment. Make sure to have a statement "scf = None" before these lines!
  • You will probably find this post on I-V curves interesting: http://quantumwise.com/forum/index.php?topic=302.0
  • Good luck and have fun! :)
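The scf-reuse pattern from the second bullet looks roughly like this; `run_bias_step` is a hypothetical stand-in for the actual calculation call:

```python
# Sketch of an I-V loop where each bias step restarts from the
# previous converged state. `run_bias_step` is a hypothetical stand-in
# for the real self-consistent calculation.
def run_bias_step(bias, initial_state=None):
    # A real script would run the SCF cycle here; this stub just
    # returns a token representing the converged state at this bias.
    return ("converged", bias)

scf = None  # must be defined before the first loop iteration
for bias in [0.0, 0.25, 0.5, 0.75, 1.0]:
    scf = run_bias_step(bias, initial_state=scf)

print(scf)  # → ('converged', 1.0)
```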

4987
That's odd; if you followed it exactly, it should converge in 70-80 steps, which is below the default maximum of 100.

Perhaps you can attach the script you ran, and we can check whether some parameter is set wrong.

4988
General Questions and Answers / Re: ATKError: bad allocation
« on: July 31, 2009, 22:29 »
Normally one would suspect that the larger unit cell would be the problem for the supersized system, in terms of memory. But in this particular case I think the issue lies with the enormous k-point sampling. I doubt you need 40x40; 20x20 is probably just as good, uses much less memory, and runs much faster.
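The saving is easy to estimate, since the k-dependent quantities are stored per sampled k-point:

```python
# Rough scaling estimate: memory and time for the k-summed quantities
# grow with the number of k-points sampled in the periodic plane.
dense = 40 * 40    # 1600 k-points
coarse = 20 * 20   #  400 k-points
print(dense // coarse)  # → 4, i.e. roughly a factor of 4 saved
```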

Note that for the supercell there is no real periodicity in the c-direction, and thus no need for k-point sampling there (1 point is enough). Therefore you have an easier time getting that system to fit in memory, but of course the results will not be the same due to the vacuum gap in the c-direction.

So, in summary, the problem is that you run out of memory due to the large k-point sampling.

4989
General Questions and Answers / Re: ATKError: bad allocation
« on: July 31, 2009, 12:04 »
I'm a bit confused... The structure you attached works, but the smaller structure does not work? If so, please attach the structure that doesn't work, it is more relevant to inspect that one...

4990
General Questions and Answers / Re: From crystalmaker to VNL
« on: July 31, 2009, 11:30 »
The trouble with this approach is that it only works when you know the answer. Nothing says that another problem will be "almost converged" after 50 iterations just because the standard problem is, even using the same iteration mixing parameters etc. Each problem is unique, and the non-linear SCF loop can take you on a very wild and wobbly ride in configuration space.

What you could do is this: if you see from dRho and dEtot that, while you haven't reached precisely the accuracy you have set (say, 1e-5), these values stabilize at 1e-4 and just have trouble getting down that last decimal, then you are indeed "close to convergence" anyway.
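A crude way to express that "stalled just above tolerance" check in a script (the thresholds and function name are illustrative, not part of ATK):

```python
def close_to_convergence(d_rho_history, tolerance=1e-5, slack=10.0):
    """Crude check: the last few dRho values have stabilized just above
    the requested tolerance (within a factor `slack` of it), without
    actually reaching it."""
    tail = d_rho_history[-3:]
    return all(tolerance <= d < slack * tolerance for d in tail)

# Stalled around 1e-4 with tolerance 1e-5: effectively "close to convergence"
print(close_to_convergence([1e-1, 1e-2, 9e-5, 8e-5, 8e-5]))  # → True
# Still far from the tolerance: not close yet
print(close_to_convergence([1e+1, 5e+0, 2e+0]))  # → False
```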

4991
General Questions and Answers / Re: From crystalmaker to VNL
« on: July 31, 2009, 10:12 »
I still don't see what the criterion for judging whether the results are "good" or not would be... And I don't see how you could use the results obtained halfway through the SCF loop to judge whether it will converge or not.

4992
Right, 4 iterations per hour, that's quite normal for a Windows laptop. It will be MUCH faster on the cluster.

It might very well be converging. This is one of the most difficult systems around, and 70-80 iterations are typically needed (and that's the best we ever got this system to do; sometimes it needs 200...!).

Sure, the scripts are fully cross-platform compatible, so you can prepare the script in VNL on Windows, save it, run it on the cluster, and then reimport the VNL file back to Windows VNL and view the results. You can even bring over the NC file and do the second step analysis (transmission etc) on Windows, although you will probably want to run that on the cluster too, as it scales even better with the number of nodes.

4993
The parameters seem reasonable, but you are running a geometry optimization; is that what you really wanted?

Always try to converge the single point calculation (no optimization) first, to ensure that you have the right parameters for the quantum-chemical model. Then you can start relaxations, if necessary.

Note that each step in the optimization may easily take 10-12 hours, especially if you run on a laptop or so. Multiply this by the 10-20 steps the geometry optimization needs to converge, and you get quite a long time. That's why a system like this, especially with such a high k-point sampling (10x10), should be run on a cluster; precisely thanks to the k-point sampling, it will scale very nicely and perhaps run 10-20 times faster on, say, 16-32 nodes.
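The back-of-the-envelope numbers for the serial run:

```python
# Rough best/worst-case estimate for the full geometry optimization
# on a single machine, using the figures quoted above.
hours_per_step_min, hours_per_step_max = 10, 12
steps_min, steps_max = 10, 20

total_min = hours_per_step_min * steps_min   # 100 hours
total_max = hours_per_step_max * steps_max   # 240 hours
print(total_min, total_max)  # → 100 240
```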

4994
Please post the output too. 40 hours - how many iterations are done within this time? Expect it to take about 70-80 iterations; this is a difficult system!

4995
General Questions and Answers / Re: From crystalmaker to VNL
« on: July 31, 2009, 09:07 »
Good that you solved the first part. Perhaps you could share the solution, others might benefit from it too.

About the second part, I interpret your question as:

"Will this procedure change the point to which the calculation converges? I.e., will it now converge to a different result than if it was left to run flat out all the way?"

In principle it will converge to a different point, since when you restore you lose the mixing history. However, if the convergence is stable, the difference should be negligible and just a matter of numerical accuracy. To improve the quality, you may in this case want to converge to a lower tolerance than the default (1e-5), just to be sure.

And, at the end of the day, to be really sure, you had better run a test to verify this: simply let it run to completion, then start over, interrupt and restart it, and compare the physical quantities (transmission etc.).

I must however say that I don't really see how your approach would be useful... The calculation will never converge to anything wrong, unless it's the particular case of converging to zero charge (a common headache), but that usually happens within the first 5-6 iterations anyway. Otherwise, it's a matter of either converging or not converging, i.e., running endless iterations. The latter is a problem, but the only way to discover it is to ... run the calculation until it converges.

Or rather, inspect the convergence patterns (the dRho and dEtot values): if the SCF loop is on a path to convergence, they will decrease steadily (usually after an initial period of wobbling and slow stabilization); otherwise they will just stay at values like 1e+01 for a long time, in which case you should just abort and retune the parameters.

Thus, I don't see any way by which you can inspect the results after, say, the 17th iteration, and judge their quality. I mean, what is the criterion by which you will say the results are "good"?
