Author Topic: Phonon dispersion is sensitive to "processes_per_displacement" parameter of DynamicalMatrix?


Offline krabidix

  • Regular QuantumATK user
Hi, I calculated the graphene phonon dispersion spectrum using the following script:
Code
# -------------------------------------------------------------
# Bulk Configuration
# -------------------------------------------------------------

# Set up lattice
lattice = Hexagonal(2.4612*Angstrom, 30.0*Angstrom)

# Define elements
elements = [Carbon, Carbon]

# Define coordinates
fractional_coordinates = [[ 0.333333333333,  0.666666666666,  0.5],
                          [ 0.666666666666,  0.333333333333,  0.5]]

# Set up configuration
bulk_configuration = BulkConfiguration(
    bravais_lattice=lattice,
    elements=elements,
    fractional_coordinates=fractional_coordinates
    )


# -------------------------------------------------------------
# Calculator
# -------------------------------------------------------------
#----------------------------------------
# Basis Set
#----------------------------------------
basis_set = [
    GGABasis.Carbon_DoubleZetaPolarized,
    ]
#----------------------------------------
# Exchange-Correlation
#----------------------------------------
exchange_correlation = SGGA.PBE

k_point_sampling = MonkhorstPackGrid(
    na=51,
    nb=51,
    )
numerical_accuracy_parameters = NumericalAccuracyParameters(
    density_mesh_cutoff=110.0*Hartree,
    k_point_sampling=k_point_sampling,
    occupation_method=FermiDirac(0.05*eV),
    )
poisson_solver = FastFourier2DSolver(
    boundary_conditions=[[PeriodicBoundaryCondition(),PeriodicBoundaryCondition()],
                         [PeriodicBoundaryCondition(),PeriodicBoundaryCondition()],
                         [DirichletBoundaryCondition(),DirichletBoundaryCondition()]]
    )
iteration_control_parameters = IterationControlParameters(
    tolerance=1e-08,
    max_steps=10000,
    )

calculator = LCAOCalculator(
    basis_set=basis_set,
    exchange_correlation=exchange_correlation,
    numerical_accuracy_parameters=numerical_accuracy_parameters,
    poisson_solver=poisson_solver,
    iteration_control_parameters=iteration_control_parameters,
    )

bulk_configuration.setCalculator(calculator)
nlprint(bulk_configuration)
bulk_configuration.update()
nlsave('gr_mp_fhi_dzp.hdf5', bulk_configuration)


# -------------------------------------------------------------
# Optimize Geometry
# -------------------------------------------------------------
bulk_configuration = OptimizeGeometry(
    bulk_configuration,
    max_forces=0.0001*eV/Ang,
    max_stress=1.0e-04*eV/Angstrom**3,
    max_steps=20000,
    max_step_length=0.4*Ang,
    optimize_cell=True,
    trajectory_filename=None,
    optimizer_method=LBFGS(),
    enable_optimization_stop_file=False,
    )
bulk_configuration.update()
nlsave('gr_mp_fhi_dzp.hdf5', bulk_configuration)
nlprint(bulk_configuration)



bulk_configuration = nlread('gr_mp_fhi_dzp.hdf5', BulkConfiguration)[1]
# -------------------------------------------------------------
# Dynamical Matrix
# -------------------------------------------------------------
dynamical_matrix = DynamicalMatrix(
    bulk_configuration,
    filename= 'gr_mp_fhi.hdf5',
    object_id='dynamical_matrix',
    repetitions=(11, 11, 1),
    atomic_displacement=0.01*Angstrom,
    acoustic_sum_rule=True,
    finite_difference_method=Central,
    #max_interaction_range=3.5*Angstrom,
    force_tolerance=1e-08*Hartree/Bohr**2,
    processes_per_displacement=28,
    log_filename_prefix='forces_fhi_mp_',
    use_wigner_seitz_scheme=True,
    )
dynamical_matrix.update()
  
# -------------------------------------------------------------
# Phonon Bandstructure
# -------------------------------------------------------------
phonon_bandstructure = PhononBandstructure(
    configuration=bulk_configuration,
    dynamical_matrix=dynamical_matrix,
    route=['G', 'K', 'M', 'G'],
    points_per_segment=100,
    number_of_bands=All
    )

nlsave('gr_mp_fhi_dzp.hdf5', phonon_bandstructure)

filename = 'gr_mp_fhi_ph_band.dat'
with open(filename, 'w') as f:
    nlprint(phonon_bandstructure, f)
When the processes_per_displacement parameter of the dynamical matrix is set to just 4 on a single node, the results match the literature. But with the larger supercell, repetitions=(11, 11, 1), I need more processes per displacement, so I set it to 28. The phonon energies then come out at ~12000 meV (with negative energies as well), far off from the well-known graphene phonon dispersion. So processes_per_displacement appears to be a sensitive parameter. Why is that? What am I missing here?
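
For context on what the parameter controls, here is a small bookkeeping sketch (illustrative numbers only, not results from this thread): as I understand it, the total pool of MPI processes is split into groups of processes_per_displacement, and each group computes the forces for one displaced supercell. With 2 atoms in the primitive cell and central differences, there are 2 × 3 × 2 = 12 displaced configurations before any symmetry reduction.
Code
# Illustrative bookkeeping only (not QuantumATK API, not output from this thread).
n_displacements = 2 * 3 * 2   # 2 atoms x 3 directions x 2 signs (central differences)

for total_processes, per_displacement in [(28, 4), (28, 28), (168, 24), (168, 28)]:
    concurrent = max(1, total_processes // per_displacement)
    rounds = -(-n_displacements // concurrent)   # ceiling division
    print('%3d processes, %2d per displacement -> %2d displacement(s) in parallel, %2d round(s)'
          % (total_processes, per_displacement, concurrent, rounds))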

Offline Anders Blom

  • QuantumATK Staff
It shouldn't matter, but there is a lot going on here. What repetition did you use in the calculation that gives good results? Do you use the Wigner-Seitz scheme in both cases? Are there convergence issues indicated in any log file? And which code version are you on? (We recently fixed some symmetry-related problems that gave negative energies.)

There is no real need to optimize the geometry since the structure is symmetric (fractional coordinates constrained to 1/3 and 2/3). Yes, you can optimize the lattice constant, but that is better done as a simple equation-of-state fit (vary the lattice parameter a over 5-10 runs and fit a parabola), which is much more efficient.
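
A minimal sketch of such an equation-of-state scan (not from the thread; it assumes the same `calculator`, lattice, and units as the script above, and the scan range is just an example):
Code
import numpy

# Vary the lattice constant, compute the total energy, fit a parabola.
a_values = numpy.linspace(2.42, 2.50, 7)   # trial lattice constants in Angstrom
energies = []
for a in a_values:
    configuration = BulkConfiguration(
        bravais_lattice=Hexagonal(a*Angstrom, 30.0*Angstrom),
        elements=[Carbon, Carbon],
        fractional_coordinates=[[1./3., 2./3., 0.5],
                                [2./3., 1./3., 0.5]],
        )
    configuration.setCalculator(calculator)
    configuration.update()
    energies.append(TotalEnergy(configuration).evaluate().inUnitsOf(eV))

# Fit E(a) = c2*a**2 + c1*a + c0; the minimum of the parabola sits at a = -c1/(2*c2).
c2, c1, c0 = numpy.polyfit(a_values, energies, 2)
print('Estimated equilibrium lattice constant: %.4f Angstrom' % (-c1/(2.0*c2)))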

Do you need spin-polarization? Not sure it makes a difference, it just takes longer... I would try without it first, at least while testing the other parameters.

Offline krabidix

  • Regular QuantumATK user
Thank you for your input!

I understand that I may have overgeneralized regarding geometry optimization and script parameters.

My primary focus is on the unexpected behaviour of phonon energies when changing the "processes_per_displacement" parameter within my script.

Using a large value for "processes_per_displacement" (e.g., 28) results in unusually high phonon energies (~12000 meV), while a smaller value (e.g., 4) produces the expected range for graphene (~200 meV).

This discrepancy is puzzling and leads me to question the impact of this parameter.

I am using QuantumATK Version U-2022.12. 
I used the Wigner-Seitz scheme in both cases. I changed only "processes_per_displacement".
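
For concreteness, a minimal sketch of the single difference between the two runs (not a new script; everything else is kept as in the script above, and the remaining arguments such as filename, object_id and log prefix are omitted or given hypothetical names for brevity):
Code
# Shared settings, identical to the DynamicalMatrix call in the original script.
common = dict(
    filename='gr_mp_fhi.hdf5',
    repetitions=(11, 11, 1),
    atomic_displacement=0.01*Angstrom,
    acoustic_sum_rule=True,
    finite_difference_method=Central,
    force_tolerance=1e-08*Hartree/Bohr**2,
    use_wigner_seitz_scheme=True,
    )

# Run that matches the literature (~200 meV optical phonons).
dm_4 = DynamicalMatrix(bulk_configuration, object_id='dm_ppd4',
                       processes_per_displacement=4, **common)

# Run that produces ~12000 meV and negative energies.
dm_28 = DynamicalMatrix(bulk_configuration, object_id='dm_ppd28',
                        processes_per_displacement=28, **common)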

Offline Anders Blom

  • QuantumATK Staff
I think we need to look at the exact scripts and output. If you don't want to share them publicly, you can email them.

Offline krabidix

  • Regular QuantumATK user
Here are the output files (zipped) and the script for the case where I used the large value (28) for processes_per_displacement.
The attached image is from the case where I used processes_per_displacement = 4.

The only question is why the energies are so far off!

Offline Anders Blom

  • QuantumATK Staff
Could you zip the log files too? It might help to look at them before rerunning; the HDF5 files don't tell us much about how the problem arises.

Offline krabidix

  • Regular QuantumATK user
Here are the log files for the forces and the SCF calculations.

Offline Anders Blom

  • QuantumATK Staff
These are for 28 processes, right? The point will be to compare with the run using 4, so please upload those too.

Offline krabidix

  • Regular QuantumATK user
Hi,
Here are the relevant data for both cases (processes_per_displacement = 28 and 4).

Offline krabidix

  • Regular QuantumATK user
These are the dynamical matrix files for both cases.
I think this is complete information!

Now the question can be addressed: why is the phonon dispersion sensitive to the "processes_per_displacement" parameter?

Offline Anders Blom

  • QuantumATK Staff
From these files it is at least clear that the self-consistent loop proceeds very differently in the two cases for some displacements (but not all), and in the end this of course makes a big difference for the dynamical matrix.
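
A rough way to quantify that difference, should anyone want to compare the runs themselves (a sketch only: the SCF-line pattern is a placeholder that must be adapted to the actual log format, and the directory names and .log extension are hypothetical; only the forces_fhi_mp_ prefix comes from the script above):
Code
import glob
import re

# Placeholder pattern for an SCF-iteration line; adapt to the real log format.
SCF_LINE = re.compile(r'ADAPT-THIS-PATTERN')

def scf_steps(log_pattern):
    """Count matching SCF lines in each per-displacement log file."""
    counts = {}
    for name in sorted(glob.glob(log_pattern)):
        with open(name) as log:
            counts[name] = sum(1 for line in log if SCF_LINE.search(line))
    return counts

# Hypothetical directory names for the two runs.
for label, pattern in [('4 per displacement',  'run_ppd4/forces_fhi_mp_*.log'),
                       ('28 per displacement', 'run_ppd28/forces_fhi_mp_*.log')]:
    print(label, scf_steps(pattern))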

Now, when I submit this to our bug-fixing team, the first thing they will ask is whether it was tested in the latest version. I personally don't have a machine with 168 cores at hand to test on, but you do, so it would be really helpful if you could rerun this with V-2023.12-SP1 (which has many new features compared to U anyway); in theory the problem could have been fixed after the version you used. This should only take you 7-8 hours :-)

Second, I find your choice of processes_per_displacement=28 odd when you run on 24-core machines. At the very least this gives odd load balancing and unnecessary network communication.

So, please check using processes_per_displacement=24 instead. One can argue that any value should work numerically, so we might still look into it, but only if the problem also appears in the V-version.

The good news, once we resolve this, is that the speedup from using a large processes_per_displacement is substantial, basically linear!
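
To illustrate the load-balancing point with a tiny sketch (illustrative numbers only): a 28-process displacement group cannot fit on a single 24-core node, so every displacement spans two nodes, whereas 24 processes per displacement map exactly onto one node each.
Code
cores_per_node = 24
for per_displacement in (4, 24, 28):
    nodes_spanned = -(-per_displacement // cores_per_node)  # ceiling division
    print('%2d processes per displacement -> spans %d node(s)'
          % (per_displacement, nodes_spanned))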

Offline krabidix

  • Regular QuantumATK user
Hi,
The latest version does not make any difference.
The question remains: why is the phonon calculation so sensitive to the "processes_per_displacement" parameter?

Offline krabidix

  • Regular QuantumATK user
Here are the bulk_configuration and the other script files, in case the QuantumATK team wants to investigate the issue.

Offline Anders Blom

  • QuantumATK Staff
Thanks for trying 2023; it is certainly concerning that you still see the problem. However, we have also run this calculation multiple times with the current version and cannot reproduce it. In theory there could be an unusual interaction with your hardware or MPI libraries etc., but would you also mind running it with 24 rather than 28 processes?

Offline krabidix

  • Regular QuantumATK user
With 24 and 4 processes per displacement, the results match.