Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - filipr

Pages: [1]
The total energy is basically the expectation value of the many-body DFT Hamiltonian. As such the sign matters: the lower the energy, the more stable the configuration. Remember that adding an arbitrary constant scalar potential to the Hamiltonian does not change the physics (wave functions and density will be the same) but shifts the total energy. This means that the actual value of the total energy is not of much use, only energy differences are. In your example the important property is the difference in energy between the two configurations, ΔE = EB - EA. If ΔE is negative it means that configuration B is more stable than configuration A.

Note that the total energy depends on the pseudopotentials, so when you calculate energy differences between two configurations, make sure they use the same pseudopotentials (and in general the same computational settings for both calculations).
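As a trivial sketch of the comparison (the numbers are made up, not from any real calculation):

```python
# Made-up total energies (eV) for two configurations, computed with the SAME
# pseudopotentials and computational settings:
E_A = -1052.31   # configuration A
E_B = -1053.07   # configuration B

dE = E_B - E_A   # only this difference is physically meaningful
print(f"dE = {dE:.2f} eV")   # -> dE = -0.76 eV
if dE < 0:
    print("Configuration B is more stable than configuration A")
```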

Your structure is a slab, not an isolated molecule, and it has periodic boundary conditions. The energy spectrum will therefore have a dispersion and you need to calculate the k-resolved band structure instead of the molecular energy spectrum (which is just the eigenenergies at the Gamma point, k = (0, 0, 0)).

Questions and Answers / Re: Export images in vector format
« on: January 29, 2021, 13:58 »
The 3D plots in the viewer often involve isosurfaces, colormaps, contour plots, etc., which are not very suitable for vector graphics.

If, however, you only want to present the atomic structure (atoms and bonds), you can use the Python API to extract this data and plot it using some external Python 3D vector-graphics framework.

In order to understand why the direct band gap at K in the primitive cell is not at the K point for the supercell you have to understand Bloch's theorem and what the Brillouin zone actually is. The Brillouin zone for a supercell is not the same as for the primitive cell.

Let's consider a simple example in 1D. The primitive cell has length L, so its k-points are given by k1 = t1 2π/L, where t1 is the "fractional k-point", i.e. the coordinate inside the first Brillouin zone. The first Brillouin zone is defined by -0.5 < t1 ≤ 0.5; a fractional k-point outside this region is wrapped back inside.

Now consider the equivalent system described by a supercell of length 2L. Its k-points are given by k2 = t2 2π/(2L), and its Brillouin zone is likewise defined by -0.5 < t2 ≤ 0.5. In terms of k this means the Brillouin zone of the supercell is half the size of that of the primitive cell, but the k-points of the physical states are still the same. Let's take the state at the K point of the primitive cell's Brillouin zone, t1 = 0.5. Requiring k1 = k2 gives 0.5 * 2π/L = t2 2π/(2L), so t2 = 0.5 * 2 = 1.0. The K point of the primitive cell therefore corresponds to the fractional k-point t2 = 1.0 of the supercell, which gets wrapped back to the Gamma point (t2 = 0) in the first Brillouin zone of the supercell.
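You can check this wrapping with a couple of lines of plain Python (a generic sketch, not QuantumATK API):

```python
def fold(t):
    """Wrap a fractional k-point back into the first Brillouin zone (-0.5, 0.5]."""
    r = t % 1.0              # now in [0, 1)
    return r - 1.0 if r > 0.5 else r

# K point of the primitive 1D cell (length L): t1 = 0.5
t1 = 0.5
# The same physical k expressed in a supercell of length 2L: t2 = 2 * t1
t2 = 2 * t1                  # = 1.0, outside the supercell Brillouin zone
print(fold(t2))              # -> 0.0, i.e. the Gamma point of the supercell
```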

If you want to consider a band structure of a supercell as projected onto the primitive cell you can use the effective band structure analysis tool.

Also, if any of this sounds confusing I suggest you revisit your basic solid state physics book and do some example calculations by hand.

Yes, all tasks in QuantumATK are evaluated through special Python scripts. You may save the script from the Script Generator (File > Save) and open it in any editor (or the built-in editor in QuantumATK). To learn how to use Python scripts instead of the GUI, see the QuantumATK documentation.

For information on how to configure the calculation through Python, see the reference manual - but the easiest is probably to make some changes in the GUI, save them as a Python script, and see what changed.

Yes, there will be a difference in the adsorption energy. Whether the difference is significant is impossible to say. As I said, it depends on the actual system and on how big a difference you consider "significant". The only real way to find out is to do both calculations.

When an atom/molecule is adsorbed on a surface it alters the electronic environment in the vicinity of the adsorption site. The effect of this is a change in the electronic density and thus a change in the potential from the charge distribution. Far enough away from the adsorption site one expects the local density and potential to look like those of the pristine interface - but how far you have to go depends on how the material responds to the adsorbate. Multiple effects determine this: how much the geometry of the interface atoms actually changes (nearby atoms get pushed/pulled) and how much the electrons screen the adsorbate (dielectric properties). Depending on the material, these effects can be short range or long range.

When you do a supercell calculation, it is still a periodic crystal, i.e. you repeat the adsorbate every 3 or every 6 or so unit cells. So you create an artificial system of repeated adsorbates with a density/concentration of adsorbates that is typically higher than in the system you are trying to model. If you want to model a single isolated adsorbate you have to make sure that the adsorbates can safely be regarded as isolated, i.e. that the distance between them is longer than the range of the effects described above. The only way to ensure that is to converge the adsorption energy with respect to the supercell size. So you have to do adsorption energy calculations for increasing supercell sizes, e.g. 3x3, 4x4, 5x5, 6x6, 7x7, 8x8, ..., until the energy changes by less than some threshold that you consider negligible.
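Just to illustrate the convergence procedure, here is a toy Python model (NOT a DFT calculation - it simply assumes the spurious interaction between periodic images decays exponentially, and all parameters are made up):

```python
import math

# Toy model: the interaction between periodic images of the adsorbate is
# assumed to decay exponentially with their separation. Illustrative numbers.
def adsorption_energy(n, E_isolated=-0.85, A=0.30, screening=4.0, a=2.5):
    """Model adsorption energy (eV) in an n x n supercell, lattice constant a (Å)."""
    d = n * a                                # distance between periodic images
    return E_isolated + A * math.exp(-d / screening)

threshold = 0.001  # eV, the change we consider negligible
E_prev = adsorption_energy(3)
for n in range(4, 12):
    E = adsorption_energy(n)
    if abs(E - E_prev) < threshold:
        print(f"Converged at {n}x{n}: E_ads = {E:.4f} eV")
        break
    E_prev = E
```

In a real study each call to `adsorption_energy` would of course be a full DFT calculation of the supercell.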

In QuantumATK (and I presume the same is true for VASP) the Fermi level is calculated as the root of the equation sum_{k,n} f(e_{k,n} - Ef) - N = 0, i.e. the energy Ef for which the sum of state occupations equals the number of electrons. The value of the Fermi level will thus depend on the state eigenvalues, the occupation method, the broadening and the precision of the root finding algorithm.

For large band gaps and small occupation broadenings the actual values of the eigenvalues should not matter much, and in the limit of the broadening going to zero (but not exactly zero) one can show that with this algorithm the Fermi level will always end up exactly in the middle of the band gap. In your examples I guess the reason for the discrepancy is that your VASP and QuantumATK calculations use different occupation methods or broadenings. You can read more about occupation methods in the QuantumATK manual.

In any case, for a system with a band gap the exact value of the Fermi level is irrelevant as long as it lies inside the band gap. In both the VASP and the QuantumATK case the density of states will integrate to the total number of electrons, as the density of states is zero inside the band gap (integrating zero gives zero). As such, the Fermi level for a semiconductor/insulator calculated with DFT is somewhat meaningless and only serves to define the zero when plotting band structures and DOS. Meaningful quantities should instead be referenced to the valence band maximum or conduction band minimum.
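To illustrate the root-finding procedure described above, here is a small self-contained Python sketch (a toy spectrum and plain bisection - not how QuantumATK or VASP is implemented internally, just the idea):

```python
import math

def occupation(e, Ef, kT):
    """Fermi-Dirac occupation of a state (factor 2 for spin degeneracy)."""
    x = (e - Ef) / kT
    if x > 500:   # avoid overflow in exp for states far above Ef
        return 0.0
    if x < -500:  # states far below Ef are fully occupied
        return 2.0
    return 2.0 / (1.0 + math.exp(x))

def fermi_level(eigenvalues, n_electrons, kT=0.05, tol=1e-10):
    """Find Ef such that the summed occupations equal n_electrons (bisection)."""
    lo = min(eigenvalues) - 1.0
    hi = max(eigenvalues) + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        n = sum(occupation(e, mid, kT) for e in eigenvalues)
        if n < n_electrons:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy spectrum (eV): valence band maximum at 0, conduction band minimum at 1.
eigenvalues = [-2.0, -1.0, 0.0, 1.0, 2.0]
Ef = fermi_level(eigenvalues, n_electrons=6)  # 6 electrons fill 3 bands
print(f"Ef = {Ef:.3f} eV")  # -> Ef = 0.500 eV, the middle of the gap
```

Changing the broadening kT or the smearing function shifts where exactly Ef lands inside the gap, which is the discrepancy discussed above.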

Also, there should be no problem creating a .py file on Windows and running it on Linux - it is just a text file containing a Python script. If it reports that some Python commands are unknown, the likely reason is that the QuantumATK version installed on your cluster differs from the one on your Windows machine.

Please tell us which versions you are using on your Windows machine and on your cluster and show us the error messages you get when you try to run your job.

Hi Hadi,

The icon on your desktop is a shortcut, i.e. a .desktop file. It should work on most common Linux distributions, but we have not tested it on Linux Mint.

Can you try to open a terminal and launch QuantumATK from there? You would usually do:

Code:
$> /home/USERNAME/QuantumATK/QuantumATK-VERSION/bin/quantumatk

with USERNAME and VERSION matching your setup.

If this works, the program runs fine but something is wrong with the shortcut file. You can create your own shortcut file, which can also be installed system-wide using the xdg-desktop-icon and xdg-desktop-menu tools - these are the ones the installer uses to install shortcuts.

It is in fact possible to use your own pseudopotentials. QuantumATK supports three pseudopotential formats:

For using a pseudopotential you also need to provide a compatible LCAO basis set. The LCAO basis set consists of a set of confined orbitals that are generated on the fly from the pseudopotential: you have to specify which orbitals to include and tweak things like the confinement potential. For LCAO-DFT, finding a basis set that is both good and fast is by itself not an easy task. For PW-DFT, only the occupied orbitals of the basis set are used to initialize the density before the self-consistent iterations, so there the quality of the basis set is unimportant for the results.
OpenMX pseudopotentials are special and need a matching LCAO basis set pregenerated and numerically tabulated in special .pao files.

To use a custom pseudopotential I suggest you create a structure in the Builder with the elements you are missing, send it to the Script Generator, and select either a PseudoDojo (ONCVPSP) or OMX (OpenMX) pseudopotential (with spin-orbit, I presume) in the calculator settings. Then in the Script Generator select "Show Defaults" in the "Script Details" drop-down and send it to the Editor. This generates a script from which you can see how the basis set orbitals and pseudopotentials are specified. You can then modify it to suit your custom pseudopotential.

Feel free to ask any questions if you get stuck.

Questions and Answers / Re: GGA-PAW Memory error
« on: October 19, 2020, 09:29 »
Dear Sadegh,

I've looked through your input script and your calculation output, and as you have also observed, there should be enough memory to run a DFT calculation. It is also able to do a few optimization steps before it crashes. This does sound a bit odd, as if something is leaking memory or using more memory than we would expect. We will try to run your script and see if we can pinpoint the issue. Unfortunately, this may take a while as we have a lot of other things to do, so please be patient.

Until then I have some suggestions for reducing the memory footprint:

I looked up the CPU you are running on, and it has 20 physical cores, and you have two of those, so a total of 40 available cores. Let's put them all to use :)

I see you are using the newest version (2020.09), which is quite well parallelized using threads. So I suggest you run with 4 threads; this means you should run with 40/4 = 10 processes (you can select 4 threads and 10 processes in the Job Manager). You have 13 k-points and ~70,000 plane waves to parallelize over. To reduce the number of wave functions stored in memory, you can distribute them over processes by choosing 5 processes per k-point. This should give a reasonable load balance with minimal memory consumption. So to summarize:

number of threads: 4
number of processes: 10
processes per k-point: 5 (this one you set in the calculator settings in the script generator)
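Just to spell out the arithmetic (the grouping of processes into k-point batches is only meant to illustrate the load balance, not QuantumATK internals):

```python
import math

total_cores = 2 * 20                       # two CPUs x 20 physical cores
threads = 4                                # threads per process
processes = total_cores // threads         # MPI processes to launch
procs_per_kpoint = 5                       # set in the calculator settings
kpoint_groups = processes // procs_per_kpoint  # groups working on k-points in parallel
n_kpoints = 13
kpoints_per_group = math.ceil(n_kpoints / kpoint_groups)
print(processes, kpoint_groups, kpoints_per_group)  # -> 10 2 7
```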

I hope this makes the calculation run through.
