Recent Posts

Pages: 1 [2] 3 4 ... 10
11
Please try the highlighted script from the manual page here https://docs.quantumatk.com/manual/Types/MDTrajectory/MDTrajectory.html#:~:text=import%20pylab%0A%0A%23%20Read,legend()%0A%0Apylab.show() using atkpython in a terminal. Use nlread on the .hdf5 file containing the MDTrajectory object; this will give you access to the quantities you are looking for.
12
General Questions and Answers / Silicene Nanoribbon
« Last post by Akash Ramasamy on December 21, 2021, 11:46 »
I have a few questions about silicene nanoribbons.
1. I built a silicene nanoribbon with a plugin. The unit cell is the default option; is that okay, or do I have to change it to hexagonal?
2. What k-point sampling should I use for the silicene nanoribbon, and what are the criteria for choosing the value?
3. What is the total energy of the system?
4. What is the band-gap value for both the armchair and the zigzag nanoribbon?
13
General Questions and Answers / Combine two ReaxFF or Machine-learned FF
« Last post by Alireza on December 15, 2021, 16:54 »
Dear Experts,

My system under study consists of a twisted bilayer MoS2 (2700 atoms) sandwiched between two hBN layers (722 atoms each). I would like to do a geometry optimization of this system at the DFTB level of theory. Since there are no suitable parameters for the optimization, I decided to run a reactive force-field calculation instead. I tagged the first hBN layer as tags='layer1', the twisted bilayer MoS2 as tags='layer2', and the last hBN layer as tags='layer3'.
The calculator and optimization parts of the script are as follows:
Code
# -------------------------------------------------------------
# Calculator
# -------------------------------------------------------------

sw_layer1 = ReaxFF_CHOSMoNiLiBFPN_2021(tags='layer1')
sw_layer2 = ReaxFF_HSMo_2017(tags='layer2')
sw_layer3 = ReaxFF_CHOSMoNiLiBFPN_2021(tags='layer3')

# Combine all 3 potential sets in a single calculator.
calculator = TremoloXCalculator(parameters=[sw_layer1, sw_layer2, sw_layer3])

bulk_configuration.setCalculator(calculator)
bulk_configuration.update()
nlsave('hBN-MoS2-hBN.hdf5', bulk_configuration)

# -------------------------------------------------------------
# Optimize Geometry
# -------------------------------------------------------------
bulk_configuration = OptimizeGeometry(
    bulk_configuration,
    max_forces=0.01*eV/Ang,
    max_stress=0.1*GPa,
    max_steps=400,
    max_step_length=0.2*Ang,
    trajectory_filename='hBN-MoS2-hBN_trajectory.hdf5',
    trajectory_interval=1.0*Minute,
    restart_strategy=RestartFromTrajectory(),
    optimizer_method=LBFGS(),
    enable_optimization_stop_file=True,
)
nlsave('hBN-MoS2-hBN.hdf5', bulk_configuration)
nlprint(bulk_configuration)

But the job ran into this error:
Code
Traceback (most recent call last):
  File "hBN-MoS2-hBN.tags.py", line 3495, in <module>
    sw_layer1 = ReaxFF_CHOSMoNiLiBFPN_2021(tags='layer1')
  File "build/lib/python3.8/site-packages/tremolox/TremoloXReaxFF.py", line 2740, in __init__
  File "build/lib/python3.8/site-packages/tremolox/TremoloXPotentialSet.py", line 24513, in setTags
  File "build/lib/python3.8/site-packages/tremolox/TremoloXPotentialSet.py", line 24430, in actOnlyOnTaggedRegion
  File "build/lib/python3.8/site-packages/tremolox/TremoloXPotentialSet.py", line 9759, in _limitToOneTag
RuntimeError: The ReaxFF potential does not support the usage of tags!
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 14
Traceback (most recent call last):
  File "hBN-MoS2-hBN.tags.py", line 3495, in <module>
    sw_layer1 = ReaxFF_CHOSMoNiLiBFPN_2021(tags='layer1')
  File "build/lib/python3.8/site-packages/tremolox/TremoloXReaxFF.py", line 2740, in __init__
  File "build/lib/python3.8/site-packages/tremolox/TremoloXPotentialSet.py", line 24513, in setTags
  File "build/lib/python3.8/site-packages/tremolox/TremoloXPotentialSet.py", line 24430, in actOnlyOnTaggedRegion
  File "build/lib/python3.8/site-packages/tremolox/TremoloXPotentialSet.py", line 9759, in _limitToOneTag
RuntimeError: The ReaxFF potential does not support the usage of tags!
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 15
slurmstepd: error: task_p_post_term: rmdir(/dev/cpuset/slurm21924163/slurm21924163.4294967294_0) failed Device or resource busy
where the most important part says:
Code
The ReaxFF potential does not support the usage of tags!

I also plan to use the recent and beautiful Machine-Learned FF feature in QATK, but as a reference I need DFTB parameters for Mo and S. For the electronic calculation I am using the parameters from DOI: 10.1021/ct4004959, but these parameters do not contain the repulsive potential, so optimization is not possible. I need to find a way forward, either by combining two ReaxFF potentials or by using a machine-learned FF.

Any suggestions are appreciated,
Cheers, A
14
General Questions and Answers / What does error exit code 11 mean?
« Last post by Alireza on December 15, 2021, 14:23 »
Dear Experts,

I got this error message during my electronic transport calculation.
Code
===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 1017659 RUNNING AT taurussmp8
=   EXIT CODE: 11
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
   Intel(R) MPI Library troubleshooting guide:
      https://software.intel.com/node/561764
===================================================================================

The man page for signal states

Code
SIGSEGV      11       Core    Invalid memory reference

which means, I guess, that the program tried to access memory that does not belong to its address space.
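As a quick cross-check (plain Python, not QuantumATK-specific), the standard signal module confirms that signal number 11 is indeed SIGSEGV on Linux:

```python
import signal

# MPI reports the terminating signal number as the exit code here;
# on Linux, signal 11 is SIGSEGV (invalid memory reference).
print(int(signal.SIGSEGV))       # 11
print(signal.Signals(11).name)   # SIGSEGV
```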

Please kindly find my input and output files attached.

The slurm setting is as follows
Code
#!/bin/bash
#SBATCH -J "04.3821"                   
#SBATCH -A 'nano-10'              
#SBATCH --time=48:58:00                     
#SBATCH --nodes=1
#SBATCH --ntasks=150
#SBATCH --cpus-per-task=1
#SBATCH --mem=20000000
#SBATCH --output=%x.log                       
#SBATCH --error=%x.err                         
#SBATCH --partition=julia                 
#SBATCH --mail-type=end
#SBATCH --mail-user=alireza.ghasemifard@tu-dresden.de

### prepare calculation
# set the name of your Python file
PYTHON_NAME="Device3821.py"
# specify licence file
export SNPSLMD_LICENSE_FILE="2722@141.30.9.17"
# create a temporary log file to view job in real time
export TEMP_LOG_PATH="temp.log"


### submit calculation
# set number of CPUs per process of QuantumATK to number of CPUs per task
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export MKL_NUM_THREADS=$SLURM_CPUS_PER_TASK
# run the calculation
/projects/m_chemie/Quantumatk/2021.06/libexec/mpiexec.hydra -n $SLURM_NTASKS /projects/m_chemie/Quantumatk/2021.06/bin/atkpython $PYTHON_NAME
15
Hi Kevin,

In general you can't know how long a calculation will take. First of all, it depends on what kind of calculation you are doing: molecular dynamics with force fields? A transmission spectrum calculation using a tight-binding calculator? Or a band structure calculation using DFT? Many calculations involve multiple steps; e.g., a DFT band structure first requires you to determine the self-consistent ground-state density, after which you can calculate the band structure.

In some cases you can estimate the order of magnitude, but let's consider an example to give you an idea of the complexity involved:

Consider a DFT calculation. It is an iterative approach: given an effective potential you calculate the density, then update the potential, calculate a new density, and so on, until the change in the density between subsequent steps is smaller than some threshold. How many steps will this take? There is no way of knowing, as it depends on the system, pseudopotentials, numerical settings, etc., but it is typically between 10 and 100 steps. Now, in each step you calculate the Hamiltonian and find its eigenvalues. The Hamiltonian has several contributions: for instance, calculating the XC potential scales as the number of grid points, i.e. with the volume, while solving for the electrostatic potential in general scales as the number of grid points squared. Finding the eigenvalues and eigenvectors of the Hamiltonian scales cubically in the basis-set size, which itself is proportional to the volume, or equivalently the number of atoms. Due to the prefactors of the different terms, the calculation will be limited by different contributions for different systems: a small to medium system with a high grid-point density may be limited by grid terms like the XC and electrostatic potentials, whereas for large systems the cubic scaling in the basis-set size will surely dominate. Estimating the time for each contribution to the total calculation is extremely hard and depends on settings, parallelization, and the computer specs.

The best you can do is to make different-sized versions of the system you want to study, e.g. a smallish version, a medium version, and a large version, run them, and then extrapolate the timings assuming the N³ scaling behavior for large systems. Note that this assumes each version uses the same number of SCF steps, so you may want to take only the time per SCF step into account.
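To make the extrapolation step concrete, here is a minimal plain-Python sketch (the system sizes, timings, and SCF-step count are made-up illustrative numbers, not measurements):

```python
# Fit t = c * N**3 to measured per-SCF-step timings from small test runs,
# then extrapolate to the full system. All numbers are illustrative.
sizes = [100, 200, 400]      # atoms in the small/medium/large test systems
times = [2.0, 16.0, 128.0]   # seconds per SCF step (made-up, exactly cubic)

# Least-squares estimate of the prefactor c in t = c * N**3.
c = sum(t * n**3 for t, n in zip(times, sizes)) / sum(n**6 for n in sizes)

n_target = 1000              # atoms in the full model
t_step = c * n_target**3     # predicted seconds per SCF step
t_total = t_step * 50        # assume ~50 SCF steps
print(f"{t_step:.0f} s/step, {t_total / 3600:.1f} h total")
```

In practice you would measure `times` from the small runs and check that the fit residuals are small before trusting the extrapolation.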

Now, this was a single DFT calculation. What if you also want to do a geometry optimization/relaxation? Well, such a calculation is also an iterative algorithm that may take between 1 and infinitely many steps, with each step performing a DFT calculation.

All of this timing analysis has to be repeated for every type of calculation, set of parameters, parallelization scheme, etc.

So in practice you don't estimate the time. You can make a couple of "sounding" calculations, i.e. smaller and faster calculations that consider a smaller part of the full system you want to describe, to get an idea of convergence, precision and timing. Then from those you can guesstimate or extrapolate the approximate time scale needed for the full scale model. When you have done a couple (or many!) calculations you start to get a vague idea of the time it takes to do similar calculations.
16
Dear community,
I am new to QuantumATK and am running some particular applications. Some of them take a lot of time. I would like to know how much time my simulation will take before running it, or whether there is some method to estimate that time (based on some parameters).
Thanks in advance!
17
I have prepared an input for an AIMD run of 20 steps. In the output file, I want to get the following information for each step:
lattice vectors, coordinates, energy, and stress tensor.
However, from the input that I am running, I am not getting this information, and it seems a tedious task to read off the coordinates and corresponding values from the Movie tool at each snapshot. So kindly tell me how to obtain these values from the AIMD run. Attached to this message is the input file that I am running.
18
General Questions and Answers / "processes_per_contour_point=2" not working!
« Last post by Alireza on December 10, 2021, 12:04 »
Dear Experts,

I am running an electronic transport calculation for twisted bilayer MoS2 at the DFTB level of theory. The unit cell contains approx. 2700 atoms; building the DeviceConfiguration leads to approx. 7800 atoms.

I am using QATK v2021.06 on Debian GNU/Linux 9 / kernel 4.9.0-14-amd64 / Core(TM) i7-6700 CPU / 16 GB memory / 16 GB swap

Here is part of my script:

Code
# Set up configuration
central_region = BulkConfiguration(
    bravais_lattice=central_region_lattice,
    elements=central_region_elements,
    cartesian_coordinates=central_region_coordinates
    )

device_configuration = DeviceConfiguration(
    central_region,
    [left_electrode, right_electrode],
    equivalent_electrode_lengths=[46.9136, 46.9136]*Angstrom,
    transverse_electrode_repetitions=[[1, 1], [1, 1]],
    )

# -------------------------------------------------------------
# Calculator
# -------------------------------------------------------------
#----------------------------------------
# Hamiltonian Parametrization
#----------------------------------------
hamiltonian_parametrization = SlaterKosterHamiltonianParametrization(
    basis_set=DFTBDirectory(r"/home/h0/algh988c/QN13"))

#----------------------------------------
# Pair Potentials
#----------------------------------------
pair_potentials = DFTBDirectory(r"/home/h0/algh988c/QN13")

#----------------------------------------
# Numerical Accuracy Settings
#----------------------------------------
device_k_point_sampling = MonkhorstPackGrid(
    nc=98,
    )
device_numerical_accuracy_parameters = NumericalAccuracyParameters(
    k_point_sampling=device_k_point_sampling,
    density_mesh_cutoff=10.0*Hartree,
    )

#----------------------------------------
# Device Algorithm Settings
#----------------------------------------
self_energy_calculator_real = KrylovSelfEnergy()
non_equilibrium_method = GreensFunction(
    processes_per_contour_point=2,
    )
equilibrium_method = GreensFunction(
    processes_per_contour_point=2,
    )
device_algorithm_parameters = DeviceAlgorithmParameters(
    self_energy_calculator_real=self_energy_calculator_real,
    non_equilibrium_method=non_equilibrium_method,
    equilibrium_method=equilibrium_method,
    store_basis_on_grid=True,
    )

#----------------------------------------
# Device Calculator
#----------------------------------------
calculator = DeviceSemiEmpiricalCalculator(
    hamiltonian_parametrization=hamiltonian_parametrization,
    pair_potentials=pair_potentials,
    numerical_accuracy_parameters=device_numerical_accuracy_parameters,
    device_algorithm_parameters=device_algorithm_parameters,
    )

device_configuration.setCalculator(calculator)
nlprint(device_configuration)
device_configuration.update()
nlsave('389.rigid.hdf5', device_configuration)

# -------------------------------------------------------------
# Transmission Spectrum
# -------------------------------------------------------------
kpoint_grid = MonkhorstPackGrid()

transmission_spectrum = TransmissionSpectrum(
    configuration=device_configuration,
    energies=numpy.linspace(1.5, 2.5, 150)*eV,
    kpoints=kpoint_grid,
    energy_zero_parameter=AverageFermiLevel,
    infinitesimal=1e-06*eV,
    self_energy_calculator=KrylovSelfEnergy(),
    enforce_zero_in_band_gap=True,
    )
nlsave('389.rigid.hdf5', transmission_spectrum)
nlprint(transmission_spectrum)

The k-point grid is 1 x 1 x 98, and the number of irreducible k-points is 50. Also, the total number of contour points is 150.

I specified the following Slurm setting

Code
#SBATCH --nodes=1
#SBATCH --ntasks=300
#SBATCH --cpus-per-task=1
#SBATCH --mem=32000000

Despite setting processes_per_contour_point=2 in the script, the calculation went idle.
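For reference, a back-of-envelope check of the process bookkeeping (the grouping below is my assumption of how the 300 MPI tasks would be divided; the actual distribution over contour points and k-points is decided by QuantumATK):

```python
# Illustrative arithmetic only: with processes_per_contour_point=2,
# the 300 Slurm tasks would form 150 groups of 2 processes, which
# matches the 150 contour points one-to-one.
ntasks = 300
processes_per_contour_point = 2
contour_points = 150

groups = ntasks // processes_per_contour_point
print(groups, groups == contour_points)   # 150 True
```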

Please take a look at the log file


The reason I am trying to use processes_per_contour_point=2 is to speed up my calculation!
For the past month, all of my attempts have run into either an out-of-time-limit error (after 7 days) or an out-of-memory error. That's why I am using KrylovSelfEnergy.

Suggestions are appreciated,

Cheers, A
19
General Questions and Answers / Re: Time-dependent DFT and photoluminescence
« Last post by hadhemat on December 10, 2021, 11:15 »
Thanks. Does anyone know how it is possible to calculate PL spectra in ATK, then?
20
General Questions and Answers / Re: Charge transfer
« Last post by khariyahA on December 8, 2021, 10:14 »
Thank you, Sir, for your reply :)

I have another question. I have calculated the adsorption energy and the charge transfer. From the results, I obtained small values for the adsorption energy but large values for the charge transfer. I wonder, is there any relationship/connection between adsorption energy and charge transfer?