Show Posts



Messages - filipr

1
I don't think this is currently possible for a user to do.

The reason is that both during the SCF loop and when using HartreePotential to calculate the Hartree potential, only the full electron density is used. This is both to avoid unnecessary computations (solving the Poisson equation is not always a cheap operation) and to ensure that the correct boundary conditions are satisfied. In fact, the Hartree potential is by definition a scalar quantity - it is an electrostatic potential.

You could in principle calculate the electrostatic potential from the spin-up and spin-down parts of the electron density separately, but that would require you to extract those two densities and solve the Poisson equation for each one, and as far as I know that is not something exposed to the user.
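
If you do manage to export the spin-resolved density on a regular grid (that extraction step is the part that is not readily available), the Poisson solve itself is straightforward for a periodic cell. Below is a minimal NumPy sketch, assuming an orthorhombic cell and a density array you have obtained by other means; it is not a QuantumATK API call.

Code
import numpy as np

def hartree_potential_from_density(n_up, cell_lengths):
    """Solve the periodic Poisson equation lap(V) = -4*pi*n (atomic units)
    for a density given on a regular 3D grid, using FFTs.

    n_up: 3D numpy array with the (spin-up) density in e/bohr^3.
    cell_lengths: (Lx, Ly, Lz) of the orthorhombic cell in bohr.
    """
    nx, ny, nz = n_up.shape
    Lx, Ly, Lz = cell_lengths
    # Reciprocal-space grid vectors.
    gx = 2.0 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
    gy = 2.0 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
    gz = 2.0 * np.pi * np.fft.fftfreq(nz, d=Lz / nz)
    g2 = gx[:, None, None]**2 + gy[None, :, None]**2 + gz[None, None, :]**2
    n_g = np.fft.fftn(n_up)
    v_g = np.zeros_like(n_g)
    mask = g2 > 0.0
    v_g[mask] = 4.0 * np.pi * n_g[mask] / g2[mask]  # V(G) = 4*pi*n(G)/G^2
    # The G=0 component is undefined for a charged subsystem; setting it to
    # zero simply fixes the average of the potential.
    return np.real(np.fft.ifftn(v_g))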

2
Hi Kevin,

In general you can't know how long a calculation will take. First of all, it depends on what kind of calculation you are doing: Molecular Dynamics with Force Fields? A transmission spectrum calculation using a tight-binding calculator? Or a band structure calculation using DFT? Many calculations involve multiple steps, e.g. a DFT band structure first requires you to determine the self-consistent ground state density, after which you can calculate the band structure.

In some cases you can estimate the order of magnitude, but let's consider an example to give you an idea of the complexity involved:

Consider a DFT calculation. It is an iterative approach: given an effective potential you calculate the density, then you update the potential, calculate a new density, and so on until the change in the density between subsequent steps is smaller than some threshold. How many steps will this take? There is no way of knowing, as it depends on the system, pseudopotentials, numerical settings etc. - but it is typically between 10 and 100 steps.

Now, in each step you calculate the Hamiltonian and find its eigenvalues. The Hamiltonian has several contributions: for instance, calculating the XC potential scales with the number of grid points, i.e. with the volume, while solving for the electrostatic potential in general scales as the number of grid points squared. Finding the eigenvalues and eigenvectors of the Hamiltonian scales cubically with the basis set size, which itself is proportional to the volume, or equivalently the number of atoms.

Due to the prefactors of the different terms, the calculation will be limited by different contributions for different systems. A small to medium system with a high grid point density may be limited by grid terms like the XC and electrostatic potentials, whereas for large systems the cubic scaling with the basis set size will surely dominate. Estimating the time for each contribution to the total calculation is extremely hard and depends on settings, parallelization and the computer specs.

The best you can do is to make differently sized versions of the system you want to study, e.g. a smallish version, a medium version and a large version, run them, and then extrapolate the timings assuming the N³ scaling behavior for large systems (see the sketch below). Note that this assumes each version uses the same number of SCF steps... You may want to take only the time per SCF step into account.
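
As an illustration of that extrapolation, here is a small sketch that fits the measured time per SCF step for three system sizes and extrapolates to a larger one. The timings and atom counts are made-up numbers, and the assumed model (a cubic term plus a linear term) is just one reasonable choice.

Code
import numpy as np

# Measured wall time per SCF step (seconds) for three system sizes
# (number of atoms). These numbers are purely illustrative.
n_atoms = np.array([64, 128, 256], dtype=float)
t_per_step = np.array([12.0, 70.0, 480.0])

# Fit t = a*N^3 + b*N (cubic diagonalization term plus a linear "grid" term)
# by linear least squares in the coefficients a and b.
A = np.column_stack([n_atoms**3, n_atoms])
(a, b), *_ = np.linalg.lstsq(A, t_per_step, rcond=None)

# Extrapolate to the target system and multiply by a guessed number of SCF steps.
n_target = 1000
estimated_scf_steps = 40  # typically somewhere between 10 and 100
t_target = a * n_target**3 + b * n_target
print("Estimated time per SCF step: %.0f s" % t_target)
print("Estimated total SCF time:    %.1f h" % (t_target * estimated_scf_steps / 3600.0))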

Now, this was a single DFT calculation. What if you also want to do geometry optimization/relaxation? Such a calculation is also an iterative algorithm, which may take anywhere from one to arbitrarily many steps, each of which involves a full DFT calculation.

All of this timing analysis has to be repeated for every type of calculation, parameters, parallelization etc etc etc.

So in practice you don't estimate the time in advance. You can make a couple of "sounding" calculations, i.e. smaller and faster calculations that consider a smaller part of the full system you want to describe, to get an idea of convergence, precision and timing. From those you can guesstimate or extrapolate the approximate time scale needed for the full-scale model. Once you have done a couple (or many!) of these calculations you start to get a rough idea of the time it takes to do similar calculations.

3
Exit Code 9 from an MPI program means that the program was killed by the host system.

It could be that the job scheduling system on your cluster killed the job because it used too much memory or exceeded the requested wall time allocation. Check whether you got an email from the queuing system, or ask your system administrator.

4
Actually, the TotalEnergy analysis object in QuantumATK by default gives the total free energy at the electronic temperature/broadening specified when doing the ground state calculation. The extrapolated total energy at T = 0 K has to be obtained either from the text output in the log or by using:
Code
# total_energy is a TotalEnergy analysis object, e.g. loaded from a result file
energy_at_zero_kelvin = total_energy.alternativeEnergies()['Zero-Broadening-Energy']
If you want the free energy at a different broadening/temperature you have to repeat the calculation, changing the broadening under the "Numerical Accuracy" settings.

If you for some reason want to extrapolate to a broadening different from the one you did the calculation for, you can simply use the fact that F(σ) = E(0) + 1/2 γσ² + O(σ⁴), i.e. it is approximately a parabola. You have two points on this parabola: E(0) and F(σ) at the σ used in the calculation - from those you can extrapolate to any other σ in a close neighborhood.
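
For concreteness, a minimal sketch of that extrapolation in Python, with placeholder numbers standing in for the values from your own calculation:

Code
# Extrapolate the free energy to another broadening using
# F(sigma) ~ E(0) + 0.5*gamma*sigma^2 (valid for small sigma).
# The numbers below are placeholders for values taken from your calculation.
E0 = -1234.5678          # extrapolated T = 0 energy, eV ('Zero-Broadening-Energy')
sigma = 0.1              # broadening used in the calculation, eV
F_sigma = -1234.6021     # free energy at that broadening, eV

gamma = 2.0 * (F_sigma - E0) / sigma**2   # curvature of the parabola

sigma_new = 0.05         # broadening you would like to estimate F for, eV
F_new = E0 + 0.5 * gamma * sigma_new**2
print("Estimated F(%.3f eV) = %.4f eV" % (sigma_new, F_new))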

See also:
https://docs.quantumatk.com/manual/Types/TotalEnergy/TotalEnergy.html
and
https://docs.quantumatk.com/manual/technicalnotes/occupation_methods/occupation_methods.html

5
The output is bloated with SLURM messages; however, the important part is the error message:

Code
Traceback (most recent call last):
  File "potential_plot.py", line 70, in <module>
    axarr[1].plot(z.inUnitsOf(Ang), v_z.inUnitsOf(eV))
  File "zipdir/NL/CommonConcepts/PhysicalQuantity.py", line 2173, in inUnitsOf
NL.ComputerScienceUtilities.Exceptions.NLValueError: Unable to convert unit value V to an incompatible unit eV.

So I guess you still have to use volts for the potential v_z, i.e. change line 70 in your script to:

Code
axarr[1].plot(z.inUnitsOf(Ang), v_z.inUnitsOf(Volt))

6
The total energy is basically the expectation value of the many-body DFT Hamiltonian. As such the sign matters: the lower the energy, the more stable the configuration. Remember that adding an arbitrary constant scalar potential to the Hamiltonian does not change the physics (the wave functions and density will be the same) but shifts the total energy. This means that the actual value of the total energy is not of much use, only energy differences are. In your example the important property is the difference in energy between the two configurations, ΔE = EB - EA. If ΔE is negative it means that configuration B is more stable than configuration A.

Note that the total energy depends on the pseudopotentials. So when you calculate energy differences between two configurations, make sure they use the same pseudopotentials (and, in general, the same computational settings for both calculations).
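
As a small illustration, assuming you have saved the two calculations to files and use the usual nlread/TotalEnergy pattern (the file names here are placeholders):

Code
# Sketch: compare the total energies of two configurations, A and B,
# computed with the same settings and pseudopotentials.
energy_A = nlread('configuration_A.hdf5', TotalEnergy)[-1].evaluate()
energy_B = nlread('configuration_B.hdf5', TotalEnergy)[-1].evaluate()

delta_E = energy_B - energy_A
print("dE = E_B - E_A =", delta_E.inUnitsOf(eV), "eV")
# delta_E < 0  =>  configuration B is more stable than configuration A.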

7
Your structure is a slab, not an isolated molecule, and it has periodic boundary conditions. The energy spectrum will therefore show dispersion, and you need to calculate the k-resolved band structure instead of the molecular energy spectrum (which is just the eigenenergies at the Gamma point, k = (0, 0, 0)).
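
A minimal sketch of how such a band structure analysis is typically set up in a script; the configuration variable, the k-point route and the number of points are only examples, so check the Bandstructure entry in the reference manual for the exact arguments in your version:

Code
# Sketch: instead of MolecularEnergySpectrum, compute a k-resolved band
# structure for the periodic slab. Adapt the route to the symmetry of your cell.
bandstructure = Bandstructure(
    configuration=slab_configuration,
    route=['G', 'M', 'K', 'G'],
    points_per_segment=50,
)
nlsave('slab_bandstructure.hdf5', bandstructure)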

8
The 3D plots in the viewer often involve isosurfaces, colormaps, contour plots, etc., which are not very suitable for vector graphics.

If, however, you only want to present the atomic structure (atoms and bonds), you can use the Python API to extract this data and plot it with an external Python 3D plotting framework that supports vector output.
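
A rough sketch of that idea, assuming the usual configuration query methods (cartesianCoordinates() and elements()) and using matplotlib for the plotting; check the method names against the reference manual for your version, and add bonds yourself if you need them:

Code
import matplotlib.pyplot as plt

# Sketch: pull atomic positions and symbols out of a configuration and plot
# them with matplotlib, which can export true vector graphics (PDF/SVG).
coordinates = configuration.cartesianCoordinates().inUnitsOf(Ang)
symbols = [element.symbol() for element in configuration.elements()]

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(coordinates[:, 0], coordinates[:, 1], coordinates[:, 2], s=80)
for (x, y, z), symbol in zip(coordinates, symbols):
    ax.text(x, y, z, symbol)
ax.set_xlabel('x (Å)')
ax.set_ylabel('y (Å)')
ax.set_zlabel('z (Å)')
plt.savefig('structure.pdf')  # vector output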

9
In order to understand why the direct band gap at K in the primitive cell is not at the K point for the supercell you have to understand Bloch's theorem (https://en.wikipedia.org/wiki/Bloch%27s_theorem) and what the Brillouin zone (https://en.wikipedia.org/wiki/Brillouin_zone) actually is. The Brillouin zone of a supercell is not the same as that of the primitive cell.

Let's consider a simple example in 1D. The primitive 1D unit cell has length L, so its k-points are given by k1 = t1 2π/L, where t1 is the "fractional k-point", i.e. the coordinate inside the first Brillouin zone, which is defined by -0.5 < t1 ≤ 0.5. If a fractional k-point falls outside this region it is wrapped back inside.

Now consider an equivalent system described by a supercell of length 2L instead of the primitive cell. Its k-points are given by k2 = t2 2π/(2L), and its Brillouin zone is likewise defined by -0.5 < t2 ≤ 0.5. In terms of k, the Brillouin zone of the supercell is therefore half the size of that of the primitive cell, but the k-points of the physical states are still the same. Take the state at the K point of the primitive cell, t1 = 0.5. Requiring k1 = k2 gives 0.5 * 2π/L = t2 2π/(2L), so t2 = 0.5 * 2 = 1.0. The K point of the primitive cell thus corresponds to the fractional k-point t2 = 1.0 of the supercell, which gets wrapped back to the Gamma point (t2 = 0) in the first Brillouin zone of the supercell.
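
The folding is easy to reproduce numerically. Here is a tiny sketch for the 1D case above (the numbers are just the example from the text):

Code
# Fold a fractional k-point of the primitive cell into the first Brillouin
# zone of an N-times-larger supercell (1D example, matching the text above).
def fold_to_supercell(t_primitive, n_repeat):
    t_super = t_primitive * n_repeat      # same absolute k, new units of 2*pi/(N*L)
    # Wrap back into the first Brillouin zone, -0.5 < t <= 0.5.
    t_wrapped = t_super - round(t_super)
    if t_wrapped == -0.5:
        t_wrapped = 0.5
    return t_wrapped

print(fold_to_supercell(0.5, 2))        # K point of the primitive cell -> 0.0 (Gamma)
print(fold_to_supercell(1.0 / 3.0, 3))  # also folds to 0.0 for a 3x supercell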

If you want to consider a band structure of a supercell as projected on the primitive cell you can use the effective band structure analysis tool (https://docs.quantumatk.com/tutorials/effective_band_structure/effective_band_structure.html).

Also, if any of this sounds confusing I suggest you revisit your basic solid state physics book and do some example calculations by hand.

10
Yes, all tasks in QuantumATK are evaluated through special Python scripts. You can save the script from the Script Generator (File > Save) and open it in any editor (or the built-in editor in QuantumATK). To learn how to use Python scripts instead of the GUI, see: https://docs.quantumatk.com/manual/Python.html

For information on how to configure the calculation through Python see https://docs.quantumatk.com/manual/NLRefMan.html, but the easiest approach is probably to make some changes in the GUI, save them as a Python script, and see what changed.
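
For orientation, here is a minimal script of the kind the Script Generator produces, in this case for bulk silicon with default DFT-LCAO settings. Class names and defaults can vary between versions, so compare against a script saved from your own GUI session.

Code
# A minimal self-consistent calculation, roughly what the Script Generator
# writes out (class names/defaults may differ between QuantumATK versions).
bulk_configuration = BulkConfiguration(
    bravais_lattice=FaceCenteredCubic(5.4306 * Angstrom),
    elements=[Silicon, Silicon],
    fractional_coordinates=[[0.0, 0.0, 0.0], [0.25, 0.25, 0.25]],
)
calculator = LCAOCalculator()          # default DFT-LCAO settings
bulk_configuration.setCalculator(calculator)
bulk_configuration.update()            # run the SCF loop
nlsave('silicon_bulk.hdf5', bulk_configuration)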

11
Yes, there will be a difference in the adsorption energy. Whether the difference is significant is impossible to say in general. As I said, it depends on the actual system and on how big a difference you consider to be "significant". The only real way to find out is to do both calculations.

12
When an atom/molecule is adsorbed on a surface it alters the electronic environment in the vicinity of the adsorption site. The effect of this is a change in the electron density and thus a change in the potential from the charge distribution. Far enough away from the adsorption site one expects the local density and potential to look like the pristine interface - but how far you have to go depends on how the material responds to the adsorbate. Multiple effects determine this: how much the geometry of the interface atoms actually changes (nearby atoms get pushed/pulled) and how strongly the electrons screen the adsorbate (dielectric properties). Depending on the material, these effects can be short ranged or long ranged.

When you do a supercell calculation it is still a periodic crystal, i.e. you repeat the adsorbate every 3 or every 6 or so unit cells. You thereby create an artificial system of repeated adsorbates, with a density/concentration of adsorbates that is typically higher than in the system you are trying to model. If you want to model a single isolated adsorbate you have to make sure that the adsorbates can safely be regarded as isolated, i.e. that the distance between them is larger than the range of the effects described above. The only way to ensure that is to converge the adsorption energy with respect to the supercell size. So you have to do adsorption energy calculations for increasing supercell sizes, e.g. 3x3, 4x4, 5x5, 6x6, 7x7, 8x8, ..., until the energy changes by less than some threshold that you consider negligible (see the sketch below).
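
The convergence loop itself is simple; the work is in the three total-energy calculations behind each point. In the sketch below, adsorption_energy(n) is a hypothetical helper that builds an n x n surface supercell with one adsorbate, runs the calculations and returns E(slab + adsorbate) - E(slab) - E(adsorbate).

Code
# Sketch of a supercell-size convergence check for the adsorption energy.
# adsorption_energy(n) is a hypothetical helper, see the note above.
threshold = 0.01  # eV, what you consider a negligible change
previous = None
for n in [3, 4, 5, 6, 7, 8]:
    current = adsorption_energy(n)
    print("%dx%d supercell: E_ads = %.4f eV" % (n, n, current))
    if previous is not None and abs(current - previous) < threshold:
        print("Converged at the %dx%d supercell." % (n, n))
        break
    previous = current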

13
In QuantumATK (and I presume the same is true for VASP) the Fermi level is calculated as the root of the equation sum_{k,n} f(ε_{k,n} - E_F) - N = 0, i.e. the energy E_F for which the sum of the state occupations equals the number of electrons. The value of the Fermi level will thus depend on the state eigenvalues, the occupation method, the broadening and the precision of the root-finding algorithm.
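
To make the procedure concrete, here is a small self-contained sketch of that root finding with Fermi-Dirac occupations. This is an illustration, not QuantumATK's internal routine, which uses whichever occupation method you selected.

Code
import numpy as np
from scipy.optimize import brentq

def fermi_level(eigenvalues, weights, n_electrons, kT):
    """Find E_F such that the summed Fermi-Dirac occupations equal the number
    of electrons. eigenvalues: (n_kpoints, n_bands) array in eV, weights:
    k-point weights summing to 1, kT: broadening in eV, spin degeneracy 2."""
    eigenvalues = np.asarray(eigenvalues, dtype=float)
    weights = np.asarray(weights, dtype=float)[:, None]

    def occupation_error(ef):
        # 1/(1 + exp(x)) written with tanh to avoid overflow for large x.
        f = 0.5 * (1.0 - np.tanh((eigenvalues - ef) / (2.0 * kT)))
        return 2.0 * np.sum(weights * f) - n_electrons

    # The root is bracketed by energies safely below/above all eigenvalues.
    return brentq(occupation_error, eigenvalues.min() - 10.0, eigenvalues.max() + 10.0)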

For large band gaps and small occupation broadenings the exact values of the eigenvalues should not matter much, and in the limit of the broadening going to zero (but not being exactly zero) one can show that with this algorithm the Fermi level ends up exactly in the middle of the band gap. In your case I guess the reason for the discrepancy is that your VASP and QuantumATK calculations use different occupation methods or broadenings. You can read more about occupation methods here: https://docs.quantumatk.com/manual/technicalnotes/occupation_methods/occupation_methods.html

In any case, for a system with a band gap the exact value of the Fermi level is irrelevant as long as it lies inside the band gap. In both the VASP and the QuantumATK case the density of states will integrate to the total number of electrons, since the density of states is zero inside the band gap (integrating zero gives zero). As such, the Fermi level of a semiconductor/insulator calculated with DFT is somewhat meaningless and is only used to define the zero of energy when plotting band structures and DOS. Meaningful quantities should instead be referenced to the valence band maximum or the conduction band minimum.

14
Also, there should be no problem creating a .py file on Windows and running it on Linux - it is just a text file containing a Python script. If it reports that some Python commands are unknown, the reason may be that the version of QuantumATK installed on your cluster is different from the version you have on your Windows machine.

Please tell us which versions you are using on your Windows machine and on your cluster and show us the error messages you get when you try to run your job.

15
Hi Hadi,

The icon on your desktop is a shortcut, i.e. a .desktop file. It should work on most common Linux distributions, but we have not tested it on Mint Linux.

Can you try to open a terminal and launch QuantumATK from there? You would usually do:

Code
$> /home/USERNAME/QuantumATK/QuantumATK-VERSION/bin/quantumatk

with USERNAME and VERSION matching your setup.

If this works, the program runs fine but something is wrong with the shortcut file. You can create your own shortcut file (see e.g. https://forums.linuxmint.com/viewtopic.php?t=269281), which can also be installed system-wide using the xdg-desktop-icon (https://linux.die.net/man/1/xdg-desktop-icon) and xdg-desktop-menu (https://linux.die.net/man/1/xdg-desktop-menu) tools - these are the ones the installer uses to install the shortcuts.
