

Messages - Alireza

1
Hi Tue,

Thank you for the reply.
Is there any chance of using the preliminary version of the iterative solver in band structure calculations now? We need it urgently.

Alireza

2
Hello qatk,

I am using qatk/2022.12 and performing band structure calculations for twisted bilayer MoS2, with at least 6160 atoms in total (I also have much bigger systems). Initially I ran into an out-of-memory error, which I was able to solve by setting
Quote
optimize_for_speed_over_memory=True,
in DiagonalizationSolver. I also set
Quote
store_basis_on_grid=True,
and
Quote
store_energy_density_matrix=True,
in AlgorithmParameters. With these settings, the calculation runs using almost 20 TB of memory on 124 cores. However, the band structure analysis failed with the error
Quote
** On entry to DSTEDC, parameter number  8 had an illegal value

Since this is an expensive calculation with a large number of atoms, I thought I should restrict the number of conduction bands. Hence, I set
Quote
bands_above_fermi_level=5,
in both DiagonalizationSolver and the Bandstructure analysis, but this did not help and I got the same error. I learned that if
Quote
processes_per_kpoint=1
the diagonalization uses the LAPACK algorithm, and otherwise ELPA, so I suspect this error comes from ELPA.

The calculation is at the DFTB level using Slater-Koster (SLAKO) parameters.
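
For reference, here is how these settings fit together in one place. This is only a sketch following the parameter names quoted above; the exact signatures may differ between QuantumATK versions.
Code
# Sketch combining the settings discussed above; check the names
# against your QuantumATK version.
diagonalization_solver = DiagonalizationSolver(
    processes_per_kpoint=1,                # 1 selects LAPACK; >1 selects ELPA
    optimize_for_speed_over_memory=True,
    bands_above_fermi_level=5,
)
algorithm_parameters = AlgorithmParameters(
    density_matrix_method=diagonalization_solver,
    store_basis_on_grid=True,
    store_energy_density_matrix=True,
)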

In the attachment you can find the input, log, and error files.

3
Dear Experts,

My system under study consists of a twisted bilayer MoS2 (2700 atoms) sandwiched between two hBN layers (722 atoms each). I would like to optimize the geometry of this system at the DFTB level of theory. Since there are no suitable DFTB parameters for the optimization, I decided to run a reactive force field calculation instead. I tagged the first hBN layer as tags='layer1', the twisted bilayer MoS2 as tags='layer2', and the second hBN layer as tags='layer3'.
The calculator and optimization sections of the script are as follows:
Code
# -------------------------------------------------------------
# Calculator
# -------------------------------------------------------------

sw_layer1 = ReaxFF_CHOSMoNiLiBFPN_2021(tags='layer1')
sw_layer2 = ReaxFF_HSMo_2017(tags='layer2')
sw_layer3 = ReaxFF_CHOSMoNiLiBFPN_2021(tags='layer3')

# Combine all 3 potential sets in a single calculator.
calculator = TremoloXCalculator(parameters=[sw_layer1, sw_layer2, sw_layer3])

bulk_configuration.setCalculator(calculator)
bulk_configuration.update()
nlsave('hBN-MoS2-hBN.hdf5', bulk_configuration)

# -------------------------------------------------------------
# Optimize Geometry
# -------------------------------------------------------------
bulk_configuration = OptimizeGeometry(
    bulk_configuration,
    max_forces=0.01*eV/Ang,
    max_stress=0.1*GPa,
    max_steps=400,
    max_step_length=0.2*Ang,
    trajectory_filename='hBN-MoS2-hBN_trajectory.hdf5',
    trajectory_interval=1.0*Minute,
    restart_strategy=RestartFromTrajectory(),
    optimizer_method=LBFGS(),
    enable_optimization_stop_file=True,
)
nlsave('hBN-MoS2-hBN.hdf5', bulk_configuration)
nlprint(bulk_configuration)

But the job ran into this error:
Code
Traceback (most recent call last):
  File "hBN-MoS2-hBN.tags.py", line 3495, in <module>
    sw_layer1 = ReaxFF_CHOSMoNiLiBFPN_2021(tags='layer1')
  File "build/lib/python3.8/site-packages/tremolox/TremoloXReaxFF.py", line 2740, in __init__
  File "build/lib/python3.8/site-packages/tremolox/TremoloXPotentialSet.py", line 24513, in setTags
  File "build/lib/python3.8/site-packages/tremolox/TremoloXPotentialSet.py", line 24430, in actOnlyOnTaggedRegion
  File "build/lib/python3.8/site-packages/tremolox/TremoloXPotentialSet.py", line 9759, in _limitToOneTag
RuntimeError: The ReaxFF potential does not support the usage of tags!
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 14
[identical traceback and MPI_Abort repeated for process 15]
slurmstepd: error: task_p_post_term: rmdir(/dev/cpuset/slurm21924163/slurm21924163.4294967294_0) failed Device or resource busy
where the most important line reads
Code
The ReaxFF potential does not support the usage of tags!

I also plan to use the recent machine-learned force field (MLFF) feature in QuantumATK, but as a reference I need DFTB parameters for Mo and S. For the electronic structure I use the parameters from DOI: 10.1021/ct4004959, but they do not include the repulsive potential, so geometry optimization is not possible with them. I need to find a way forward, either by combining two ReaxFF parameter sets or by using a machine-learned FF.
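
One workaround I am considering is to drop the tags entirely and describe the whole stack with the single ReaxFF_CHOSMoNiLiBFPN_2021 set, which covers B, N, Mo, and S, at the cost of losing the MoS2-specific ReaxFF_HSMo_2017 parameters. A minimal sketch:
Code
# Sketch of an untagged setup: one ReaxFF parameter set for the full
# hBN/MoS2/hBN stack, avoiding the unsupported tags keyword.
potential_set = ReaxFF_CHOSMoNiLiBFPN_2021()
calculator = TremoloXCalculator(parameters=[potential_set])
bulk_configuration.setCalculator(calculator)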

Any suggestions are appreciated,
Cheers, A

4
General Questions and Answers / What does error exit code 11 mean?
« on: December 15, 2021, 14:23 »
Dear Experts,

I got this error message during my electronic transport calculation.
Code
===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 1017659 RUNNING AT taurussmp8
=   EXIT CODE: 11
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
   Intel(R) MPI Library troubleshooting guide:
      https://software.intel.com/node/561764
===================================================================================

The signal man page (man 7 signal) states

Code
SIGSEGV      11       Core    Invalid memory reference

which means, I guess, that the program tried to access memory outside its own address space.
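
As a quick sanity check, and assuming the exit code reported by MPI is just the raw signal number, the code can be mapped to a signal name in plain Python:
Code
import signal

# Map the exit code reported by MPI to a POSIX signal name.
print(signal.Signals(11).name)  # prints: SIGSEGV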

Please kindly find my input and output files attached.

The Slurm settings are as follows:
Code
#!/bin/bash
#SBATCH -J "04.3821"                   
#SBATCH -A 'nano-10'              
#SBATCH --time=48:58:00                     
#SBATCH --nodes=1
#SBATCH --ntasks=150
#SBATCH --cpus-per-task=1
#SBATCH --mem=20000000
#SBATCH --output=%x.log                       
#SBATCH --error=%x.err                         
#SBATCH --partition=julia                 
#SBATCH --mail-type=end
#SBATCH --mail-user=alireza.ghasemifard@tu-dresden.de

### prepare calculation
# set the name of your Python file
PYTHON_NAME="Device3821.py"
# specify licence file
export SNPSLMD_LICENSE_FILE="2722@141.30.9.17"
# create a temporary log file to view job in real time
export TEMP_LOG_PATH="temp.log"


### submit calculation
# set number of CPUs per process of QuantumATK to number of CPUs per task
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export MKL_NUM_THREADS=$SLURM_CPUS_PER_TASK
# run the calculation
/projects/m_chemie/Quantumatk/2021.06/libexec/mpiexec.hydra -n $SLURM_NTASKS /projects/m_chemie/Quantumatk/2021.06/bin/atkpython $PYTHON_NAME

5
Dear Experts,

I am running an electronic transport calculation for twisted bilayer MoS2 at the DFTB level of theory. The unit cell contains approximately 2700 atoms; building the DeviceConfiguration leads to approximately 7800 atoms.

I am using QATK v2021.06 on Debian GNU/Linux 9 / kernel 4.9.0-14-amd64 / Core(TM) i7-6700 CPU / 16 GB memory / 16 GB swap.

Here is part of my script:

Code
# Set up configuration
central_region = BulkConfiguration(
    bravais_lattice=central_region_lattice,
    elements=central_region_elements,
    cartesian_coordinates=central_region_coordinates
    )

device_configuration = DeviceConfiguration(
    central_region,
    [left_electrode, right_electrode],
    equivalent_electrode_lengths=[46.9136, 46.9136]*Angstrom,
    transverse_electrode_repetitions=[[1, 1], [1, 1]],
    )

# -------------------------------------------------------------
# Calculator
# -------------------------------------------------------------
#----------------------------------------
# Hamiltonian Parametrization
#----------------------------------------
hamiltonian_parametrization = SlaterKosterHamiltonianParametrization(
    basis_set=DFTBDirectory(r"/home/h0/algh988c/QN13"))

#----------------------------------------
# Pair Potentials
#----------------------------------------
pair_potentials = DFTBDirectory(r"/home/h0/algh988c/QN13")

#----------------------------------------
# Numerical Accuracy Settings
#----------------------------------------
device_k_point_sampling = MonkhorstPackGrid(
    nc=98,
    )
device_numerical_accuracy_parameters = NumericalAccuracyParameters(
    k_point_sampling=device_k_point_sampling,
    density_mesh_cutoff=10.0*Hartree,
    )

#----------------------------------------
# Device Algorithm Settings
#----------------------------------------
self_energy_calculator_real = KrylovSelfEnergy()
non_equilibrium_method = GreensFunction(
    processes_per_contour_point=2,
    )
equilibrium_method = GreensFunction(
    processes_per_contour_point=2,
    )
device_algorithm_parameters = DeviceAlgorithmParameters(
    self_energy_calculator_real=self_energy_calculator_real,
    non_equilibrium_method=non_equilibrium_method,
    equilibrium_method=equilibrium_method,
    store_basis_on_grid=True,
    )

#----------------------------------------
# Device Calculator
#----------------------------------------
calculator = DeviceSemiEmpiricalCalculator(
    hamiltonian_parametrization=hamiltonian_parametrization,
    pair_potentials=pair_potentials,
    numerical_accuracy_parameters=device_numerical_accuracy_parameters,
    device_algorithm_parameters=device_algorithm_parameters,
    )

device_configuration.setCalculator(calculator)
nlprint(device_configuration)
device_configuration.update()
nlsave('389.rigid.hdf5', device_configuration)

# -------------------------------------------------------------
# Transmission Spectrum
# -------------------------------------------------------------
kpoint_grid = MonkhorstPackGrid()

transmission_spectrum = TransmissionSpectrum(
    configuration=device_configuration,
    energies=numpy.linspace(1.5, 2.5, 150)*eV,
    kpoints=kpoint_grid,
    energy_zero_parameter=AverageFermiLevel,
    infinitesimal=1e-06*eV,
    self_energy_calculator=KrylovSelfEnergy(),
    enforce_zero_in_band_gap=True,
    )
nlsave('389.rigid.hdf5', transmission_spectrum)
nlprint(transmission_spectrum)

The k-point grid is 1 × 1 × 98, giving 50 irreducible k-points, and the total number of contour points is 150.
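
My rough accounting of the process layout, a sketch assuming contour points are parallelized within each k-point, was:
Code
# Back-of-the-envelope sketch of how the 300 MPI tasks get consumed,
# assuming each contour point is assigned its own process group.
n_tasks = 300
n_contour_points = 150
processes_per_contour_point = 2
tasks_per_kpoint = n_contour_points * processes_per_contour_point  # = 300
kpoints_in_parallel = max(1, n_tasks // tasks_per_kpoint)          # = 1
print(tasks_per_kpoint, kpoints_in_parallel)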

I specified the following Slurm settings:

Code
#SBATCH --nodes=1
#SBATCH --ntasks=300
#SBATCH --cpus-per-task=1
#SBATCH --mem=32000000

Despite setting processes_per_contour_point=2 in the script, the calculation sat idle.

Please take a look at the attached log file.


The reason I am using processes_per_contour_point=2 is to speed up the calculation: for the past month, every attempt has ended in either a time-limit error (after 7 days) or an out-of-memory error. That is also why I am using KrylovSelfEnergy.

Suggestions are appreciated,

Cheers, A

6
Thank you, Petr, for your response, but I did not really get it.
Do you mean using threading for plotting the results?
My problem occurs when I click the FatBand Analyzer and try to merge data with the PDOS, not when running the calculation.

Cheers, A

7
Dear experts,

I am trying to compute the electronic transport of a large system containing 2604 atoms in the unit cell, using DFTB calculations.
First I tried these settings:
Code
--nodes=1
--ntasks-per-node=24
--cpus-per-task=1
--mem-per-cpu=2540
and the job ran into an out-of-memory error.
Then I increased the number of nodes to 20, giving 480 cores and 1,219,200 MB of total memory. Unfortunately, the calculation ran into the same error. The attached figure shows the memory utilization of this job: at around 37 minutes the semi-empirical calculation finished and the transmission calculation started, which is where the job failed.

UPDATE on 01.11.2021:
I also tried a different partition with more memory, allocating 8 TB, and the same error occurred. Can someone tell me how transport calculations scale in ATK, i.e., memory versus the number of atoms? Is this a bug or a technical limitation?
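
For context, here is my own rough estimate of the dense-matrix footprint, a sketch assuming roughly 9 orbitals per atom in the SLAKO basis (the true count depends on the parameter set):
Code
# Hypothetical estimate: one dense complex matrix over the central-region
# basis scales quadratically with the number of orbitals.
n_atoms = 2604
orbitals_per_atom = 9                     # assumption; depends on the basis
n_orbitals = n_atoms * orbitals_per_atom  # ~23,000
bytes_per_matrix = n_orbitals**2 * 16     # complex128
print(f"{bytes_per_matrix / 1024**3:.1f} GiB per dense matrix")  # ~8.2 GiB

Several such matrices per contour point or k-point handled in parallel could plausibly exhaust even a large node, which would match what I observe.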

Appreciate any comments/suggestions

Cheers, A

8
Dear experts,

I have calculated the fat band structure (FatBS) and PDOS for twisted bilayer MoS2; the unit cell contains 1302 atoms. I used a DFTB calculation and computed the PDOS in the energy window [-2.5, +2.5] eV with 601 points (energy resolution 8.3 meV), using the tetrahedron method. As you can see in the attachment, there is a Dirac node in the valence bands. I expected a small but finite channel in the PDOS there, but instead I observe a gap around the Dirac energy window. The data are also shown in the attached figure, where the left highlighted column is the energy and the right one is the total PDOS. To my understanding, the two zero values in the PDOS at -0.858 and -0.85 eV are not consistent with the Dirac band in the left plot. I would appreciate any comments or suggestions.
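
For reference, the stated energy resolution follows directly from the grid; a minimal check:
Code
import numpy

# 601 points over [-2.5, +2.5] eV gives 600 intervals of ~8.3 meV each.
energies = numpy.linspace(-2.5, 2.5, 601)
print(energies[1] - energies[0])  # ~0.00833 eV = 8.3 meV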

Cheers, A

10
Dear all,

I am using QATK v2020.09 on Debian GNU/Linux 9 / kernel 4.9.0-14-amd64 / Core(TM) i7-6700 CPU / 16 GB memory / 16 GB swap.

I have computed the FatBS and PDOS of a twisted bilayer MoS2 that contains 1300 atoms in the unit cell.

The results of the calculation can be found here (the link will expire in a month). From the computed data I tried to plot the fat bands using the following procedure:

In the GUI, in the electronic properties section, when I choose the FatBand Analyzer, the proper window opens. But when I either try to save the .hdf5 file after editing or try to merge with the PDOS plots, ATK crashes.

The plotting process consumes more than 16 GB of memory and finally crashes with the following log:
Code
Traceback (most recent call last):
  File "zipdir/NL/GUI/Graphics/Plotter/PlotView/PlotView.py", line 384, in save
  File "zipdir/NL/IO/NLSaveUtilities.py", line 474, in nlsave
  File "zipdir/NL/IO/HDF5.py", line 840, in writeHDF5Group
  File "zipdir/NL/IO/Serializable.py", line 265, in _getVersionedData
  File "zipdir/NL/IO/Containers.py", line 179, in getValue
  File "zipdir/NL/GUI/Graphics/Plotter/IO/Plot.py", line 16, in pack
  File "zipdir/NL/IO/Containers.py", line 675, in pack
MemoryError


Cheers, A

11
Dear Dr. Blom,

https://quantumwise.com/forum/index.php?topic=4821.msg20910#msg20910

This link is no longer valid. Could you please post an updated one?

12
A new error appeared after cleaning /home/username/.vnl/job_manager_2020.09/.

13
Dear Mlee,

I have an issue starting jobs, both locally and on the cluster, and I always get the attached error.
