Messages - krabidix

1
With '24' and '4' cores, the results match.

2
Here are the bulk_configuration and other script files if the QuantumATK team wants to investigate the issue.

3
Hi,
The latest version does not make any difference.
The question remains: why is the "processes_per_displacement" parameter so sensitive in phonon calculations?

4
These are the dynamical matrix files for both cases.
I think this is complete information!

Now the question can be addressed: why is the phonon dispersion sensitive to the "processes_per_displacement" parameter?

5
Hi,
Here are the relevant data for both cases (with processes_per_displacement = 28 and 4).

6
Here are the log files for the forces and the SCF.

7
Here are the output files (zipped) and the script for the case in which I used a large value (28) for processes_per_displacement.
The attached image is for the case in which I used processes_per_displacement = 4.

The only question is why the energies are so far off!

8
Thank you for your input!

I understand that I may have overgeneralized regarding geometry optimization and script parameters.

My primary focus is on the unexpected behaviour of phonon energies when changing the "processes_per_displacement" parameter within my script.

Using a large value for "processes_per_displacement" (e.g., 26) results in unusually high phonon energies (~12000 meV), while a smaller value (e.g., 4) produces the expected range for graphene (~200 meV).

This discrepancy is puzzling and leads me to question the impact of this parameter.

I am using QuantumATK Version U-2022.12. 
I used the Wigner-Seitz scheme in both cases. I changed only "processes_per_displacement".

9
Hi,
I calculated the graphene phonon dispersion spectrum using the following script:
Code
# -------------------------------------------------------------
# Bulk Configuration
# -------------------------------------------------------------

# Set up lattice
lattice = Hexagonal(2.4612*Angstrom, 30.0*Angstrom)

# Define elements
elements = [Carbon, Carbon]

# Define coordinates
fractional_coordinates = [[ 0.333333333333,  0.6666666666     ,  0.5           ],
                          [ 0.666666666666,  0.3333333333  ,  0.5           ]]

# Set up configuration
bulk_configuration = BulkConfiguration(
    bravais_lattice=lattice,
    elements=elements,
    fractional_coordinates=fractional_coordinates
    )


# -------------------------------------------------------------
# Calculator
# -------------------------------------------------------------
#----------------------------------------
# Basis Set
#----------------------------------------
basis_set = [
    GGABasis.Carbon_DoubleZetaPolarized,
    ]
#----------------------------------------
# Exchange-Correlation
#----------------------------------------
exchange_correlation = SGGA.PBE

k_point_sampling = MonkhorstPackGrid(
    na=51,
    nb=51,
    )
numerical_accuracy_parameters = NumericalAccuracyParameters(
    density_mesh_cutoff=110.0*Hartree,
    k_point_sampling=k_point_sampling,
    occupation_method=FermiDirac(0.05*eV),
    )
poisson_solver = FastFourier2DSolver(
    boundary_conditions=[[PeriodicBoundaryCondition(),PeriodicBoundaryCondition()],
                         [PeriodicBoundaryCondition(),PeriodicBoundaryCondition()],
                         [DirichletBoundaryCondition(),DirichletBoundaryCondition()]]
    )
iteration_control_parameters = IterationControlParameters(
    tolerance=1e-08,
    max_steps=10000,
    )

calculator = LCAOCalculator(
    basis_set=basis_set,
    exchange_correlation=exchange_correlation,
    numerical_accuracy_parameters=numerical_accuracy_parameters,
    poisson_solver=poisson_solver,
    iteration_control_parameters=iteration_control_parameters,
    )

bulk_configuration.setCalculator(calculator)
nlprint(bulk_configuration)
bulk_configuration.update()
nlsave('gr_mp_fhi_dzp.hdf5', bulk_configuration)


# -------------------------------------------------------------
# Optimize Geometry
# -------------------------------------------------------------

bulk_configuration = OptimizeGeometry(
    bulk_configuration,
    max_forces=0.0001*eV/Ang,
    max_stress=1.0e-04*eV/Angstrom**3,
    max_steps=20000,
    max_step_length=0.4*Ang,
    optimize_cell=True,
    trajectory_filename=None,
    optimizer_method=LBFGS(),
    enable_optimization_stop_file=False,
    )
bulk_configuration.update()
nlsave('gr_mp_fhi_dzp.hdf5', bulk_configuration)
nlprint(bulk_configuration)



bulk_configuration = nlread('gr_mp_fhi_dzp.hdf5', BulkConfiguration)[1]
# -------------------------------------------------------------
# Dynamical Matrix
# -------------------------------------------------------------
dynamical_matrix = DynamicalMatrix(
    bulk_configuration,
    filename='gr_mp_fhi.hdf5',
    object_id='dynamical_matrix',
    repetitions=(11, 11, 1),
    atomic_displacement=0.01*Angstrom,
    acoustic_sum_rule=True,
    finite_difference_method=Central,
    #max_interaction_range=3.5*Angstrom,
    force_tolerance=1e-08*Hartree/Bohr**2,
    processes_per_displacement=28,
    log_filename_prefix='forces_fhi_mp_',
    use_wigner_seitz_scheme=True,
    )
dynamical_matrix.update()
 
# -------------------------------------------------------------
# Phonon Bandstructure
# -------------------------------------------------------------
phonon_bandstructure = PhononBandstructure(
    configuration=bulk_configuration,
    dynamical_matrix=dynamical_matrix,
    route=['G', 'K', 'M', 'G'],
    points_per_segment=100,
    number_of_bands=All
    )

nlsave('gr_mp_fhi_dzp.hdf5', phonon_bandstructure)

filename = 'gr_mp_fhi_ph_band.dat'
with open(filename, 'w') as f:
    phonon_bandstructure.nlprint(f)

When the processes_per_displacement parameter of the DynamicalMatrix is set to just '4' on a single node, the results match the literature.
But when I use the larger supercell, repetitions=(11, 11, 1), I need more processes per displacement, so I set it to '28'. Then the phonon energies are ~12000 meV (with negative energies, too), quite far off from the simple graphene phonon dispersion. This makes processes_per_displacement a very sensitive parameter.
Why is that? What am I missing here?
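For clarity, here is a minimal sketch of the comparison being described, assuming everything other than processes_per_displacement is kept exactly as in the script above; the filenames, and the use of the same (11, 11, 1) repetitions for the 4-process run, are placeholders rather than values taken from the original runs.
Code
# Minimal sketch: the two runs are meant to differ only in
# processes_per_displacement (filenames are placeholders).
common_settings = dict(
    repetitions=(11, 11, 1),
    atomic_displacement=0.01*Angstrom,
    acoustic_sum_rule=True,
    finite_difference_method=Central,
    force_tolerance=1e-08*Hartree/Bohr**2,
    use_wigner_seitz_scheme=True,
)

# Run whose phonon energies match the literature (~200 meV for graphene).
dynamical_matrix_p4 = DynamicalMatrix(
    bulk_configuration,
    filename='gr_mp_fhi_p4.hdf5',
    object_id='dynamical_matrix',
    processes_per_displacement=4,
    **common_settings,
)

# Run that gives ~12000 meV phonon energies (including negative ones).
dynamical_matrix_p28 = DynamicalMatrix(
    bulk_configuration,
    filename='gr_mp_fhi_p28.hdf5',
    object_id='dynamical_matrix',
    processes_per_displacement=28,
    **common_settings,
)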

10
Hi,
I am using my pre-calculated data (BulkConfigurations) as the TrainingSet, following https://docs.quantumatk.com/manual/Types/TrainingSet/TrainingSet.html#trainingset-c.
The pre-calculated data were obtained with LCAO and saved in '.hdf5' files; there are a number of BulkConfigurations.
When I use the following script:
 
Code
import glob
import os
directory = ''

filenames= glob.glob(os.path.join(directory, 'data_*.hdf5'))


bulk_configurations = []


for filename in filenames:
    bulk_configurations.append(nlread(filename, BulkConfiguration)[0])
   
calculator = bulk_configurations[0].calculator()
training_set = TrainingSet(bulk_configurations, recalculate_training_data=False, calculator=calculator)
scan_over_non_linear_coefficients = scanOverNonLinearCoefficients(
    perform_optimization=False
)

# Moment Tensor Potential Training
moment_tensor_potential_training = MomentTensorPotentialTraining(
    filename='MTP.hdf5',
    object_id='mtp',
    training_sets=training_set,
    calculator=calculator,
    fitting_parameters_list=scan_over_non_linear_coefficients
)
moment_tensor_potential_training.update()

It gives the error: "training_sets miss data. Check that all required energy, forces, or stress data is provided."
The BulkConfigurations are converged. What could be a possible solution to this error?
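This is not a confirmed fix, but one possible workaround to sketch, assuming the error means the stored configurations do not carry the energy, forces, and stress data needed for the fit: let the training recompute that data with the attached LCAO calculator (at the cost of extra DFT calculations).
Code
# Sketch only: assumes recalculate_training_data=True makes the training
# recompute energy/forces/stress with the given calculator.
training_set = TrainingSet(
    bulk_configurations,
    recalculate_training_data=True,
    calculator=calculator,
)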

Best,
krabidix

11
General Questions and Answers / NanoTube Wrapper
« on: July 31, 2023, 10:37 »
Hi,
Can we use scripting for the Builder plugin tools, e.g. for TubeWrapper?
I didn't find any attributes in the BulkConfiguration class for most of the plugins.


Best regards,
Krabidix

12
Hi,
What general guidelines should be followed when setting "max_interaction_range" for phonon dispersion calculations to achieve reliable results, taking into account its effect on the supercell size and the occurrence of imaginary frequencies? Specifically, when using the default interaction range, larger supercells are suggested:
Code
repetitions = checkNumberOfRepetitions(bulk_configuration)
while specifying a smaller max_interaction_range leads to smaller supercells:
Code
repetitions = checkNumberOfRepetitions(bulk_configuration, max_interaction_range=5.0*Angstrom)
Furthermore, the choice of interaction range can influence the presence or absence of imaginary frequencies, and the effect differs from case to case.
For some systems a lower interaction range eliminates the imaginary frequencies while the default value produces them, and in other cases the situation is reversed: the default interaction range eliminates the imaginary frequencies and the lower one does not.
So, is there a general rule to follow for obtaining a reliable phonon dispersion?
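Not an official recipe, but here is a rough sketch of one way to test how sensitive a given system is to this choice, built only from the calls quoted above; the interaction-range values and the filenames are placeholders.
Code
# Rough sensitivity check over max_interaction_range (placeholder values).
for value in [4.0, 5.0, 6.0]:
    max_range = value*Angstrom
    repetitions = checkNumberOfRepetitions(
        bulk_configuration,
        max_interaction_range=max_range,
    )
    dynamical_matrix = DynamicalMatrix(
        bulk_configuration,
        filename='dynmat_range_check.hdf5',           # placeholder filename
        object_id='dynamical_matrix_%.1f' % value,
        repetitions=repetitions,
        max_interaction_range=max_range,
        acoustic_sum_rule=True,
    )
    dynamical_matrix.update()
    # Compare the resulting phonon dispersions for imaginary modes.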

Best regards,
krabidix

                                 
 

13
Hi,
Yes, it worked with the capital "S".

Thanks for the help.

14
Hi,
I was trying to run the MTP tutorial: https://docs.quantumatk.com/tutorials/mtp_basic/mtp_basic.html.
The tutorial works fine; however, the commands below for plotting the results:
Code
moment_tensor_potential_training=nlread('MTP_basics_results.hdf5',MomentTensorPotentialTraining)[0]
moment_tensor_potential_training._nlplotscatter(fit_index=1)
are not working.
When using version 2022.12, the following error showed up:
Traceback (most recent call last):
  File "/n/work00/software/quantumatk/2022.12/bin/../atkpython/bin/atkpython", line 8, in <module>
    sys.exit(__run_atkpython())
  File "zipdir/ATKExecutables/atkwrappers/__init__.py", line 879, in __run_atkpython
  File "plot.py", line 2, in <module>
    moment_tensor_potential_training._nlplotscatter(fit_index=1)
AttributeError: 'MomentTensorPotentialTraining' object has no attribute '_nlplotscatter'

When version 2022.03-sp1 was used, the following error occurred:
  File "zipdir/NL/IO/NLSaveUtilities.py", line 669, in nlread
  File "zipdir/NL/IO/HDF5.py", line 568, in readHDF5
  File "zipdir/NL/IO/HDF5.py", line 699, in readHDF5Group
  File "zipdir/NL/IO/HDF5.py", line 638, in readHDF5GroupToSerializable
  File "zipdir/NL/IO/HDF5.py", line 614, in readHDF5Dict
  File "zipdir/NL/IO/HDF5.py", line 699, in readHDF5Group
  File "zipdir/NL/IO/HDF5.py", line 659, in readHDF5GroupToSerializable
  File "zipdir/NL/IO/Serializable.py", line 331, in _fromVersionedData
  File "zipdir/NL/CommonConcepts/Calculator.py", line 67, in _createObject
  File "zipdir/NL/QuantumATK/ScopeExecuter.py", line 244, in scope_execute
NL.ComputerScienceUtilities.Exceptions.NLScopeExecutionError: __init__() got an unexpected keyword argument 'paw_grid_tolerance'

Kindly help me resolve the problem.

15
Hi,
I also want to add one more thing.
Using 'use_wigner_seitz_scheme=True', I tested convergence with supercell sizes of 3 and 5.
What I got is that the supercell size of 3 gives positive energies, while with the larger supercell size imaginary modes appear. Isn't this unusual?
I have shared the script.
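For reference, a minimal sketch of the two setups being compared, assuming the supercell sizes refer to (N, N, 1) repetitions as in the 2D script shared earlier; the filenames are placeholders.
Code
# Sketch of the convergence test: identical settings, only the supercell
# repetitions change (filenames are placeholders).
for n in (3, 5):
    dynamical_matrix = DynamicalMatrix(
        bulk_configuration,
        filename='dynmat_%dx%dx1.hdf5' % (n, n),
        object_id='dynamical_matrix',
        repetitions=(n, n, 1),
        use_wigner_seitz_scheme=True,
        acoustic_sum_rule=True,
    )
    dynamical_matrix.update()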
