Show Posts



Messages - anyuezhiji

1
I guess that if there are atoms on, or too close to, the faces of the parallelepiped formed by the three basis vectors, this bug may be encountered, though not every time.
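A minimal numpy sketch of one way to test that guess: wrap each atom's fractional coordinates into the cell and flag atoms that sit within a small tolerance of a cell face. The cell and coordinates below are toy placeholders, not taken from the actual model.

Quote
import numpy as np

def atoms_near_cell_faces(cell, cart_coords, tol=1e-3):
    """Indices of atoms whose wrapped fractional coordinates lie within
    tol of 0 or 1, i.e. on or very close to a face of the cell."""
    frac = cart_coords @ np.linalg.inv(cell)   # cartesian -> fractional
    frac -= np.floor(frac)                     # wrap into [0, 1)
    near_face = np.any((frac < tol) | (frac > 1.0 - tol), axis=1)
    return np.where(near_face)[0]

# toy example: a cubic 10 Angstrom cell with one atom exactly on the x = 0 face
cell = np.eye(3) * 10.0
coords = np.array([[0.0, 5.0, 5.0],    # on a face
                   [5.0, 5.0, 5.0]])   # well inside the cell
print(atoms_near_cell_faces(cell, coords))   # -> [0]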

2
Interesting issue. Some notes, if I may:
- Everything looks fine if you run the two calculations (Graphene1.py and Graphene2.py) including the BS analysis. The two BS objects are thus exactly the same. No errors here.
- Instead, something goes wrong when you read one of the two BulkConfigurations (the nlread part). The possible bug seems to be somewhere there; we will investigate.
- Since the problem does not seem to be related to k-points, using 3x3 k-points and an SZP basis set lets you run on a laptop in less than one minute and makes everything faster.
- I noticed you use UnitCell for your bulk configuration. Why not hexagonal?

Dear Dr. Martinez:
    Thank you for your attention.
    - Yes.
    - I think the problem is not in the nlread part: 1. it goes wrong even without nlread in some cases; 2. after repeatedly calling nlread and nlsave, the same structure can still be visualized in VNL (a roundtrip sketch follows this list).
    - Yes, it makes everything faster. However, Dr. Blom said "I re-ran your Graphene1.py script with 27x27 k-points and I then get a perfect match to the expected result.", so 27x27 k-points are used for the test.
    - After atomic reconstruction, when the cell is no longer hexagonal, this bug still exists. The carbon system with UnitCell is only used for testing. In fact, this bug has also been encountered in more complex systems containing other elements such as B, N, Mo, S and Si. A coordinate translation can make the bug appear or disappear, and I do not know what kind of initial structure avoids it. All I can do is call on the QuantumWise staff for help :D, since I do not have the source code to debug with.
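For reference, a minimal version of that roundtrip test, written for the atkpython environment ('roundtrip.nc' is a placeholder file name):

Quote
# read the converged configuration, save it again, and read it back
configuration = nlread('scf.nc', BulkConfiguration)[0]
nlsave('roundtrip.nc', configuration)
configuration_again = nlread('roundtrip.nc', BulkConfiguration)[0]
# opening roundtrip.nc in VNL shows the same, unchanged structure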

3
I really just changed 1 line - the k-points, compared to yours.
Attached is also the corresponding band structure plot.
I didn't evaluate the total energy.


Dear Dr. Blom,
I sincerely apologize for the delay in my response, but I have been extremely busy programming a new TB approximation and doing some other things, and the compute servers are also busy.
I just downloaded the script you provided yesterday and re-ran it without any editing. Everything seems to go well, but when Band1.py is tested, I get some different results.


Band1.py
Quote
bulk_configuration = nlread('scf.nc',BulkConfiguration)[0]
bandstructure = Bandstructure(
    configuration=bulk_configuration,
    kpoints=[[0.,0.,0.],[1./3,2./3,0.],[2./3,1./3,0.]],
    bands_above_fermi_level=All
    )
nlsave('band1.nc', bandstructure)


4
I re-ran your Graphene1.py script with 27x27 k-points and I then get a perfect match to the expected result.

Dear Dr. Blom,

Could you provide this script as an attachment?
Thanks a lot.

5
Here is the DFT result with 27x27 k-points, which is consistent with the 9x9 result.
The total energy is -1.487783852e+04 eV (-1.487783794e+04 eV for 9x9; -1.511937382e+04 eV is the correct value).

6

Use GrapheneDFTB.py, but set vector_b = [8.544, 14.799, 0.0]*Angstrom.
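A sketch of where that change sits in the lattice definition of such a script; vector_a and vector_c are placeholders (take them from GrapheneDFTB.py itself), and only vector_b is the value quoted above:

Quote
vector_a = [17.088, 0.0, 0.0]*Angstrom      # placeholder
vector_b = [8.544, 14.799, 0.0]*Angstrom    # the modified vector from this post
vector_c = [0.0, 0.0, 20.0]*Angstrom        # placeholder
lattice = UnitCell(vector_a, vector_b, vector_c)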

7
First of all, thanks a lot for providing simple scripts for debugging. It makes a world of difference for us.

Now, you wouldn't save any memory by storing this information, because you would still have to read it back into memory from the NC file. And doing so takes about as much time as recomputing it. So if anything we do save space doing it the way we do - the NC file becomes smaller.

So, graphene is tricky. I re-ran your Graphene1.py script with 27x27 k-points and I then get a perfect match to the expected result. Probably you can get away with fewer but I wanted to be sure.


Dear Dr. Blom,

I am trying 27x27 k-points for the test, but it does not seem to work.
The DFT tests are running (they will be relatively time-consuming); here are the DFTB results:

8
If you also only use 3 k-points for the self-consistent loop, this may be the reason - try 9x9 for the main calculation instead. I don't have time to look at the details but I agree something needs to be checked.

Right, those parts are not stored in the NC file because they are fast to recalculate, even for a large system.


Dear Dr. Blom,

The 3 k-points are only for the Bandstructure() test; the self-consistent loop uses 9x9x1.

Quote
numerical_accuracy_parameters = NumericalAccuracyParameters(
    k_point_sampling=(9, 9, 1),
    )

Quote
bandstructure = Bandstructure(
    configuration=bulk_configuration,
    kpoints=[[0.,0.,0.],[1./3,2./3,0.],[2./3,1./3,0.]],
    bands_above_fermi_level=All
    )

Quote
grep "E =" Graphene1.log
|   0 E = -209.947 dE =  1.000000e+00 dH =  1.262869e-01                       |
|   1 E = -184.423 dE =  2.552412e+01 dH =  1.085114e-01                       |
|   2 E = -191.404 dE =  6.980547e+00 dH =  2.408157e-03                       |
|   3 E = -185.399 dE =  6.004406e+00 dH =  2.498466e-02                       |
|   4 E = -185.413 dE =  1.391652e-02 dH =  8.107498e-05                       |
|   5 E = -185.383 dE =  2.998789e-02 dH =  1.288588e-04                       |
|   6 E = -185.385 dE =  2.034424e-03 dH =  1.304958e-05                       |
|   7 E = -185.385 dE =  2.128451e-05 dH =  1.905362e-06                       |

Quote
grep "E =" Graphene2.log
|   0 E = -209.947 dE =  1.000000e+00 dH =  1.262857e-01                       |
|   1 E = -184.423 dE =  2.552401e+01 dH =  1.085104e-01                       |
|   2 E = -191.403 dE =  6.980123e+00 dH =  2.408149e-03                       |
|   3 E = -185.399 dE =  6.004017e+00 dH =  2.498361e-02                       |
|   4 E = -185.413 dE =  1.390191e-02 dH =  8.113542e-05                       |
|   5 E = -185.383 dE =  3.011765e-02 dH =  1.296259e-04                       |
|   6 E = -185.385 dE =  2.236028e-03 dH =  1.403214e-05                       |
|   7 E = -185.385 dE =  3.237228e-05 dH =  1.753595e-06                       |


In the self-consistent loop, both models end with E = -185.385 and a Fermi level of -4.011141 eV (-4.011147 eV), but the total energies given by TotalEnergy(bulk_configuration).evaluate().inUnitsOf(eV) are -1.487783794e+04 eV and -1.511937382e+04 eV, respectively.

Maybe there is nothing wrong in the self-consistent loop, but something is wrong in the further calculations (e.g. Bandstructure(), TotalEnergy()); a comparison sketch follows.
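A minimal sketch of that comparison; the .nc file names are placeholders for wherever the two converged calculations were saved:

Quote
conf1 = nlread('Graphene1.nc', BulkConfiguration)[0]
conf2 = nlread('Graphene2.nc', BulkConfiguration)[0]
e1 = TotalEnergy(conf1).evaluate().inUnitsOf(eV)
e2 = TotalEnergy(conf2).evaluate().inUnitsOf(eV)
# the geometries differ only by a translation, so this difference should be ~0
print(e1 - e2)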


Maybe I am being obsessive, but I still think those parts should also be stored in the NC file. I dislike the repeated calculations; they seem like a waste of time and memory.


Thanks and regards for your help and hard work during the summer holidays!
 

9
I apologize, I was not looking carefully enough. You are probably right, and the easiest way to verify it is to use the "Close neighbors" tool in the VNL Builder!

So in that case we are back to your first question, which I don't quite understand. Your scripts are very advanced, which makes them difficult for anyone else to understand. For a bug report, you will need to prepare two simple scripts that show the issue. The scripts need to be without "code" - just a simple version without options or loops or variables, and without custom plotting either - because after all there is a small possibility the error is in your code and not in ATK, and it would take us too long to troubleshoot your code.

Maybe, simply, your translation is not correct...


Here are simple scripts.

The “Select By Bond Length” tool shows that there are definitely no atoms overlapping.
Graphene1.py → Graphene2.py
The only change is a coordinate translation: x → x - 0.64, y → y + 0.38 (a minimal sketch of this translation follows).
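The translation, written out as a small numpy sketch on an N x 3 array of Cartesian coordinates (independent of ATK; the two-atom array is only a toy stand-in for the coordinates defined in Graphene1.py):

Quote
import numpy as np
# toy stand-in for the N x 3 Cartesian coordinate array (in Angstrom) from Graphene1.py
coords = np.array([[0.00, 0.00, 0.00],
                   [1.23, 0.71, 0.00]])
shift = np.array([-0.64, 0.38, 0.0])   # the translation described above
coords_graphene2 = coords + shift      # every atom moved by the same vector
print(coords_graphene2)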

And for Graphene1.py, if vector_b is set to [8.544, 14.7986420999, 0.0]*Angstrom, Band1.png is obtained.
Only 3 k-points are used in Graphene1.py and Graphene2.py for testing, but they show results consistent with the more detailed results from Band.py and plotBand.py in “script for bug test.7z”.

By the way, every time the model is read from a finished self-consistent .nc file, "Calculating Nonlocal Part and Kinetic Matrix" is shown. Is that information not stored in the .nc files? If not, recalculating it every time seems like a waste of time.

10
1. There are definitely atoms overlapping.
2. No. If two atoms are in the same position (modulus a lattice translation vector) you will get wrong results (in any code).
3. The design of MPI - the way used in ATK - is to duplicate the memory per MPI process. If you instead do a multithreaded calculation, this is not the case. MPI was originally designed as a way to distribute calculation across separate nodes in a cluster environment. Using MPI on a multicore machine can provide some performance improvement, but it's in the nature of the problem that you need enough memory to use it in this way. Note however that you will most likely not get a good speed-up by putting say 4 MPIs on a single quadcore machine - there will be too much competition between the processes for RAM and cache access.



Well, but which atoms are overlapping?

Using the lattice vectors in Graphene1.uc, I repeated Graphene1.xyz 2×2×1 along the A, B and C axes,
then exported the 2×2×1 model as Graphene1(2.2.1).xyz,
imported it into Graphene1(2.2.1).xlsx and sorted the coordinates (x as the primary sort field, y as the secondary sort field),
then calculated the distances between coordinates in adjacent rows,
but none of the distances is less than 2.46 (if atoms overlapped, there would have to be a distance close to zero). A minimal version of such a distance check is sketched below.
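A small numpy sketch of the distance check, using the minimum-image convention instead of an explicit 2×2×1 repetition. The cell and coordinates below are a toy graphene primitive cell (a = 2.46 Angstrom); for the bug report they would be replaced by the data from Graphene1.uc and Graphene1.xyz:

Quote
import numpy as np

def minimum_pair_distance(cell, cart_coords):
    """Smallest distance between any two distinct atoms, taking the
    periodic images (-1, 0, +1 along each lattice vector) into account."""
    frac = cart_coords @ np.linalg.inv(cell)    # cartesian -> fractional
    frac -= np.floor(frac)                      # wrap into the cell
    wrapped = frac @ cell
    shifts = np.array([[i, j, k] for i in (-1, 0, 1)
                                 for j in (-1, 0, 1)
                                 for k in (-1, 0, 1)]) @ cell
    dmin = np.inf
    for a in range(len(wrapped)):
        for b in range(a + 1, len(wrapped)):
            d = np.linalg.norm(wrapped[b] - wrapped[a] + shifts, axis=1).min()
            dmin = min(dmin, d)
    return dmin

a = 2.46   # graphene lattice constant in Angstrom
cell = np.array([[a, 0.0, 0.0],
                 [-0.5*a, 0.5*np.sqrt(3.0)*a, 0.0],
                 [0.0, 0.0, 20.0]])
coords = np.array([[0.0, 0.0, 0.0],
                   [0.0, a/np.sqrt(3.0), 0.0]])
# ~1.42 (the C-C bond length); a value near zero would mean overlapping atoms
print(minimum_pair_distance(cell, coords))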


11
For your 1st question:
The lattice vectors defined in your script correspond to those of a hexagonal supercell of graphene. But the atomic coordinates defined in your script are wrong, because they include some atoms belonging to a neighboring image of that supercell. This causes some atoms to overlap in the supercell calculation, even though no error message is reported while the job is running. So both of your calculations are wrong.

For your 2nd question:
Please provide more detailed information instead of a rough claim. For a typical case of your calculation, run it serially and then in parallel with multiple processes, and use the "top" command to check the memory usage. Also, it would be much better if you could provide the details of your parallel jobs (that is, how did you run the job in parallel?).

Please check the manual for running ATK in:
http://www.quantumwise.com/documents/tutorials/latest/ParallelGuide/ParallelGuide.pdf




For the 1st question:
1. I believe that no atoms are overlapping (see attachments).
2. Even if there were overlapping atoms, a coordinate translation should lead to the same result in a correct program.
3. The difference caused by the coordinate translation appears in the DFT and Slater-Koster modules, but disappears in the Extended Hückel module.

For the 2nd question:
I found it claimed here: http://quantumwise.com/support/faq/91-how-many-atoms-can-be-computed-using-atk?catid=21%3Atechnical
Quote
An important thing to note is that each MPI node uses the same amount of RAM. So if one assigns more than one MPI processes to the same node, the memory requirement goes up quickly on that node.
I hope shared memory can be used in ATK's Python multiprocessing to save RAM.

12


Possible Bug 1:
A coordinate translation alone actually leads to different results (see attachments for details).
This bug seems to remain in the latest version of Atomistix ToolKit 2014.b1 [Build 6d212c8].
Is there anything wrong with how the neighbor list is obtained?


Possible Bug 2:
Does multiprocessing eat too much memory?
Are there n copies of the data across processes when a job runs on n cores (all cores in one computer)?
If so, it would use n times the memory.
However, with VASP 5.3.5, an n-core job uses only a little more memory than serial processing.
Is an unsuitable Python module being used for multiprocessing, or is something improper in how the program handles parallel processes?

13
It seems that just [for voltage in numpy.linspace(0.0,2.0,11)*Volt] contains 1.8 V.
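A quick check of that bias list in plain numpy (the *Volt factor in the original line only attaches units to the same values):

Quote
import numpy
print(numpy.linspace(0.0, 2.0, 11))
# 0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0  -> 1.8 V is one of the points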

14
This is possible for 1, but not for 2, today. We will consider 2 as a feature idea, although I think a more generic approach might be to take the real-space density from the previous calculation and use that to initiate the SCF loop instead.


Thanks very much for your reply.

If some atoms are moved together as a block, and the scattering region is resized, could any information be read from the previous calculation and used to initiate the SCF loop? If so, could you provide an example script?



Going further, could some information be read from old calculations by specifying the corresponding atom index numbers when those atoms are in a similar environment, with unspecified atoms treated as neutral atoms?
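For the case the reply above calls "possible for 1", a restart sketch along the lines of the ATK manual might look roughly like this. The file names and calculator settings are placeholders, the initial_state argument of setCalculator is assumed to behave as documented there, and whether this still helps once the scattering region has been resized is exactly the open question:

Quote
# placeholders: old_scf.nc holds the converged previous device calculation,
# new_geometry.nc holds the slightly modified configuration built in VNL
old_state = nlread('old_scf.nc', DeviceConfiguration)[0]
new_configuration = nlread('new_geometry.nc', DeviceConfiguration)[0]
calculator = DeviceLCAOCalculator()                # placeholder settings
new_configuration.setCalculator(calculator, initial_state=old_state)
new_configuration.update()                         # SCF starts from the old state
nlsave('new_scf.nc', new_configuration)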

15
For instance:
1. An atom is slightly moved.
2. A new atom is added.
For an existing atom, read the value of each basis-set coefficient from the old calculation and use it to generate the electron density.
For a newly added atom, treat it as a neutral atom to generate the electron density.

