Show Posts



Messages - krabidix

1
General Questions and Answers / Re: License limit
« on: August 13, 2025, 21:12 »
Thanks for the reply.
Is there any way to get at least a rough estimate of the license usage for a given script? Sometimes even for a simple band-structure calculation of a large system (let's say ~100-200 atoms) with DFT-PW, PAW, and HSE06, 100 licenses are insufficient. Please consider documenting this issue; even toy examples would help a lot. Currently, the QuantumATK documentation says nothing about the license limit.

2
General Questions and Answers / License limit
« on: August 12, 2025, 19:55 »
Hello,
How exactly is license consumption counted for QuantumATK jobs? In my experience, license usage does not scale linearly with -n in the MPI command.
For instance, I’ve submitted jobs using --nodes=1 to --nodes=4 (sometimes 6 too) and corresponding values like -n 40, -n 80, and even -n 160, and they’ve often run without any license-related errors.
This makes it difficult to determine in advance how many licenses a job will actually consume. Sometimes, however, the IT admin tells me that I have used more than 100 licenses (with the same submission-script settings). Below is the submission script we use for jobs on our local cluster.
#!/bin/bash
# Job name
#SBATCH --job-name hse_sc

#SBATCH --nodes=1                       # Number of nodes
#SBATCH --ntasks-per-node=40              # How many tasks on each node
##SBATCH --cpus-per-task=20
#SBATCH --time=72:00:00
#SBATCH --exclusive
# Export all environment variables
#SBATCH --export=ALL


# Load the QuantumATK module
module use /n/work00/software/modulefiles
module load puck_quantumatk/2023.12-sp1


# Set the executables
export ATK_EXE=/n/work00/software/quantumatk/V-2023.12-SP1/bin/atkpython
export MPI_EXE=/n/work00/software/quantumatk/V-2023.12-SP1/mpi/bin/mpiexec.hydra
export SNPSLMD_LICENSE_FILE=27027@acme

# Threading
#export OMP_NUM_THREADS=2
export MKL_DYNAMIC=TRUE

${MPI_EXE} -n 12 ${ATK_EXE} optical_spectrum.py > optical_doped.log
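
For what it's worth, here is the back-of-the-envelope check I would like to be able to trust. It assumes (and this is only an assumption, since the counting rule is not documented) that license usage tracks the total number of MPI ranks passed via -n:

Code
# Rough sanity check (assumption: license usage tracks the total number of
# MPI ranks given to mpiexec via -n; the real counting rule is undocumented).
mpi_ranks = 160        # the -n value passed to ${MPI_EXE}
license_budget = 100   # limit reported by the IT admin

if mpi_ranks > license_budget:
    print('Warning: %d ranks may exceed the %d-license budget.'
          % (mpi_ranks, license_budget))

In practice this naive count has not matched observed behaviour (e.g. -n 160 often ran without license errors), which is exactly why a documented rule would help.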

3
With 24 and 4 cores, the results match.

4
Here are the bulk_configuration and other script files if the QuantumATK team wants to investigate the issue.

5
Hi,
The latest version does not make any difference.
The question remains: why is the processes_per_displacement parameter so sensitive in phonon calculations?

6
These are the dynamical matrix files for both cases.
I think this is the complete information!

Now the question can be addressed: why is the phonon dispersion sensitive to the processes_per_displacement parameter?

7
Hi,
These are the relevant data for both cases (with processes_per_displacement = 28 and 4).

8
Here are the log files for forces and SCF.

9
Here are the output files (zipped) and the script for the case in which I used the large value (28) for processes_per_displacement.
The attached image is for the case in which I used processes_per_displacement = 4.

The only question is why the energies are so far off!

10
Thank you for your input!

I understand that I may have overgeneralized regarding geometry optimization and script parameters.

My primary focus is on the unexpected behaviour of phonon energies when changing the "processes_per_displacement" parameter within my script.

Using a large value for "processes_per_displacement" (e.g., 26) results in unusually high phonon energies (~12000 meV), while a smaller value (e.g., 4) produces the expected range for graphene (~200 meV).

This discrepancy is puzzling and leads me to question the impact of this parameter.

I am using QuantumATK Version U-2022.12. 
I used the Wigner-Seitz scheme in both cases. I changed only "processes_per_displacement".

11
Hi,
I calculated the graphene phonon dispersion spectrum using the following script:
Code
# -------------------------------------------------------------
# Bulk Configuration
# -------------------------------------------------------------

# Set up lattice
lattice = Hexagonal(2.4612*Angstrom, 30.0*Angstrom)

# Define elements
elements = [Carbon, Carbon]

# Define coordinates
fractional_coordinates = [[ 0.333333333333,  0.666666666667,  0.5],
                          [ 0.666666666667,  0.333333333333,  0.5]]

# Set up configuration
bulk_configuration = BulkConfiguration(
    bravais_lattice=lattice,
    elements=elements,
    fractional_coordinates=fractional_coordinates
    )


# -------------------------------------------------------------
# Calculator
# -------------------------------------------------------------
#----------------------------------------
# Basis Set
#----------------------------------------
basis_set = [
    GGABasis.Carbon_DoubleZetaPolarized,
    ]
#----------------------------------------
# Exchange-Correlation
#----------------------------------------
exchange_correlation = SGGA.PBE

k_point_sampling = MonkhorstPackGrid(
    na=51,
    nb=51,
    )
numerical_accuracy_parameters = NumericalAccuracyParameters(
    density_mesh_cutoff=110.0*Hartree,
    k_point_sampling=k_point_sampling,
    occupation_method=FermiDirac(0.05*eV),
    )
poisson_solver = FastFourier2DSolver(
    boundary_conditions=[[PeriodicBoundaryCondition(),PeriodicBoundaryCondition()],
                         [PeriodicBoundaryCondition(),PeriodicBoundaryCondition()],
                         [DirichletBoundaryCondition(),DirichletBoundaryCondition()]]
    )
iteration_control_parameters = IterationControlParameters(
    tolerance=1e-08,
    max_steps=10000,
    )

calculator = LCAOCalculator(
    exchange_correlation=exchange_correlation,
    numerical_accuracy_parameters=numerical_accuracy_parameters,
    poisson_solver=poisson_solver,
    iteration_control_parameters=iteration_control_parameters,
    )

bulk_configuration.setCalculator(calculator)
nlprint(bulk_configuration)
bulk_configuration.update()
nlsave('gr_mp_fhi_dzp.hdf5', bulk_configuration)


# -------------------------------------------------------------
# Optimize Geometry
# -------------------------------------------------------------

bulk_configuration = OptimizeGeometry(
    bulk_configuration,
    max_forces=0.0001*eV/Ang,
    max_stress=1.0e-04*eV/Angstrom**3,
    max_steps=20000,
    max_step_length=0.4*Ang,
    optimize_cell=True,
    trajectory_filename=None,
    optimizer_method=LBFGS(),
    enable_optimization_stop_file=False,
    )
bulk_configuration.update()
nlsave('gr_mp_fhi_dzp.hdf5', bulk_configuration)
nlprint(bulk_configuration)



bulk_configuration = nlread('gr_mp_fhi_dzp.hdf5', BulkConfiguration)[1]
# -------------------------------------------------------------
# Dynamical Matrix
# -------------------------------------------------------------
dynamical_matrix = DynamicalMatrix(
    bulk_configuration,
    filename='gr_mp_fhi.hdf5',
    object_id='dynamical_matrix',
    repetitions=(11, 11, 1),
    atomic_displacement=0.01*Angstrom,
    acoustic_sum_rule=True,
    finite_difference_method=Central,
    #max_interaction_range=3.5*Angstrom,
    force_tolerance=1e-08*Hartree/Bohr**2,
    processes_per_displacement=28,
    log_filename_prefix='forces_fhi_mp_',
    use_wigner_seitz_scheme=True,
    )
dynamical_matrix.update()
 
# -------------------------------------------------------------
# Phonon Bandstructure
# -------------------------------------------------------------
phonon_bandstructure = PhononBandstructure(
    configuration=bulk_configuration,
    dynamical_matrix=dynamical_matrix,
    route=['G', 'K', 'M', 'G'],
    points_per_segment=100,
    number_of_bands=All
    )

nlsave('gr_mp_fhi_dzp.hdf5', phonon_bandstructure)

filename = 'gr_mp_fhi_ph_band.dat'
with open(filename, 'w') as f:
    nlprint(phonon_bandstructure, f)

When the processes_per_displacement parameter of the dynamical matrix is set to just 4 on a single node, the results match the literature.
But with the larger supercell, repetitions=(11, 11, 1), I need more processes per displacement, so I set it to 28. Then the phonon energies are ~12000 meV (with negative energies, too), quite far off from the well-known graphene phonon dispersion. processes_per_displacement is a sensitive parameter.
Why is that? What am I missing here?
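
To quantify the mismatch between the two runs, here is a minimal comparison sketch. It assumes both phonon band structures were saved to separate files (the filenames below are placeholders) and that PhononBandstructure exposes evaluate() like the electronic Bandstructure object:

Code
import numpy

# Placeholders: one saved band structure per processes_per_displacement value.
bs_28 = nlread('gr_ppd28.hdf5', PhononBandstructure)[-1]
bs_4 = nlread('gr_ppd4.hdf5', PhononBandstructure)[-1]

# Largest deviation in phonon energies along the route.
deviation = numpy.abs(bs_28.evaluate() - bs_4.evaluate())
print('Max deviation:', deviation.max())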

12
Hi,
I am using my pre-calculated data (BulkConfigurations) as the TrainingSet, following https://docs.quantumatk.com/manual/Types/TrainingSet/TrainingSet.html#trainingset-c
The pre-calculated data were obtained with LCAO and saved in .hdf5 files; there are a number of BulkConfigurations.
I am using the following script:
 
Code
import glob
import os

directory = ''
filenames = glob.glob(os.path.join(directory, 'data_*.hdf5'))

bulk_configurations = []
for filename in filenames:
    bulk_configurations.append(nlread(filename, BulkConfiguration)[0])

calculator = bulk_configurations[0].calculator()
training_set = TrainingSet(
    bulk_configurations,
    recalculate_training_data=False,
    calculator=calculator,
)
scan_over_non_linear_coefficients = scanOverNonLinearCoefficients(
    perform_optimization=False,
)

# Moment Tensor Potential Training
moment_tensor_potential_training = MomentTensorPotentialTraining(
    filename='MTP.hdf5',
    object_id='mtp',
    training_sets=training_set,
    calculator=calculator,
    fitting_parameters_list=scan_over_non_linear_coefficients,
)
moment_tensor_potential_training.update()

It gives the error: "training_sets miss data. Check that all required energy, forces, or stress data is provided."
The BulkConfigurations are converged. What could be the solution to this error?
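
One way to narrow this down might be to count what each file actually contains, since with recalculate_training_data=False the training set can only use data that is already stored. A minimal sketch, assuming the energies and forces, if present, were saved as the standard TotalEnergy and Forces analysis objects:

Code
import glob

# Count the stored analysis objects per training file (assumption: energy
# and force data, if present, were saved as TotalEnergy and Forces objects).
for filename in glob.glob('data_*.hdf5'):
    n_energies = len(nlread(filename, TotalEnergy))
    n_forces = len(nlread(filename, Forces))
    print(filename, 'energies:', n_energies, 'forces:', n_forces)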

Best,
krabidix

13
General Questions and Answers / NanoTube Wrapper
« on: July 31, 2023, 10:37 »
Hi,
Can the Builder plugin tools, e.g. TubeWrapper, be driven from a script?
I didn't find any attributes in the BulkConfiguration class for most of the plugins.


Best regards,
Krabidix

14
Hi,
What general guidelines should be followed when setting max_interaction_range in phonon dispersion calculations to achieve reliable results, taking into account its effect on the supercell size and on the occurrence of imaginary frequencies? Specifically, with the default interaction range, larger supercells are suggested:
Code
repetitions = checkNumberOfRepetitions(bulk_configuration)
while specifying a smaller max_interaction_range leads to smaller supercells:
Code
repetitions = checkNumberOfRepetitions(bulk_configuration, max_interaction_range=5.0*Angstrom)
Furthermore, the choice of interaction range can influence the presence or absence of imaginary frequencies. For some systems a lower interaction range eliminates the imaginary frequencies while the default value shows them, and in other cases the situation is reversed: the default interaction range eliminates the imaginary frequencies and the lower one does not.
So, is there a general rule to follow for calculating reliable phonon dispersions?
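
For concreteness, this is how the two settings would enter a calculation together; a sketch based on the DynamicalMatrix arguments from my earlier script, with a purely illustrative 5 Å cutoff:

Code
# Illustrative sketch: pass a consistent cutoff to both the repetition
# check and the dynamical matrix itself.
cutoff = 5.0*Angstrom
dynamical_matrix = DynamicalMatrix(
    bulk_configuration,
    repetitions=checkNumberOfRepetitions(
        bulk_configuration, max_interaction_range=cutoff),
    max_interaction_range=cutoff,
    acoustic_sum_rule=True,
    )
dynamical_matrix.update()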

Best regards,
krabidix

15
Hi,
Yes, it worked with the capital "S".

Thanks for the help.
