Author Topic: Simulating protein

Offline basantsaini

  • New QuantumATK user
  • *
  • Posts: 1
  • Country: in
  • Reputation: 0
Simulating protein
« on: October 29, 2022, 08:18 »
I am trying to simulate a sensor based on a silicon nanowire and an olfactory receptor protein. I am getting this error:
=================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 72618 RUNNING AT servertech
=   EXIT CODE: 9
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Killed (signal 9)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions

Any help from someone who has used QuantumATK for protein simulation would be really appreciated.
Thank you.

Offline AsifShah

  • QuantumATK Guru
  • ****
  • Posts: 173
  • Country: in
  • Reputation: 2
Re: Simulating protein
« Reply #1 on: October 31, 2022, 08:29 »
Probably a memory issue. I faced the same in other simulations, especially DOS calculations.

My guess would be that you are using overly stringent settings, such as a high k-point density, a dense real-space mesh, large energy intervals, a high density of energy sampling points, or other tight convergence parameters.
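
For example, something along the lines of the sketch below in the ATK-Python input script relaxes two of the usual memory-hungry settings. This is only a sketch, assuming the standard ATK-Python names (LCAOCalculator, NumericalAccuracyParameters, MonkhorstPackGrid); the values are placeholders, not recommendations, and have to be tuned for your own system:

# Sketch only: coarser k-point grid and lower density mesh cutoff to reduce memory use.
# Assumes the LCAO calculator; the values below are purely illustrative.
from QuantumATK import *   # older ATK versions used: from NanoLanguage import *

numerical_accuracy_parameters = NumericalAccuracyParameters(
    k_point_sampling=MonkhorstPackGrid(1, 1, 3),  # coarser grid than a dense sampling
    density_mesh_cutoff=55.0*Hartree,             # lower real-space mesh cutoff
)

calculator = LCAOCalculator(
    numerical_accuracy_parameters=numerical_accuracy_parameters,
)
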
« Last Edit: October 31, 2022, 08:31 by AsifShah »

Offline filipr

  • QuantumATK Staff
  • Heavy QuantumATK user
  • *****
  • Posts: 81
  • Country: dk
  • Reputation: 6
  • QuantumATK developer
Re: Simulating protein
« Reply #2 on: October 31, 2022, 14:28 »
Quote
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Killed (signal 9)

This means that your calculation was killed, i.e. stopped by an external process or user. In most cases this is the job scheduling system (Torque, Slurm, SGE, ...) on the cluster, which kills a job if it runs longer than the allowed time, uses more memory than allowed or requested, or uses more processes than there are cores on the node. If the job scheduler is configured correctly and your submission script is set up for it, it should send you an email explaining why the job was killed.

If the job used too much memory, you will have to run on machines with more memory, get the calculation to consume less, or distribute the memory use over more nodes (physical machines). This can be done in a number of ways:

  • Use more OpenMP threads (see the launch sketch after this list)
  • Use more nodes and thus more MPI processes
  • Increase use of multilevel parallelism where possible (processes_per_<...>)
  • Reduce the computational load by decreasing system size or computational parameters
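
As a rough illustration of the OpenMP/MPI trade-off (first two bullets), the sketch below launches atkpython with fewer MPI processes and more OpenMP threads per process, so that fewer copies of the large data structures live on each node. The script name, launcher and counts are placeholder assumptions; on a cluster the equivalent settings normally go into the Slurm/Torque submission script:

# Sketch only: trade MPI processes for OpenMP threads to lower per-node memory use.
# 'protein_device.py' is a hypothetical ATK-Python input script; adapt the counts to your node.
import os
import subprocess

mpi_processes = 4          # fewer MPI ranks -> fewer replicated data structures
threads_per_process = 8    # use the remaining cores through OpenMP threading

env = dict(os.environ)
env["OMP_NUM_THREADS"] = str(threads_per_process)

# Assumes 'mpiexec' and 'atkpython' are available on PATH.
subprocess.run(
    ["mpiexec", "-n", str(mpi_processes), "atkpython", "protein_device.py"],
    env=env,
    check=True,
)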

See also: