Author Topic: std::bad_alloc  (Read 2126 times)


Offline ams_nanolab

  • Supreme QuantumATK Wizard
  • Posts: 389
  • Country: in
  • Reputation: 11
std::bad_alloc
« on: April 10, 2014, 14:19 »
I am getting the following error after the DFT calculation finishes:

Calculating Complex Bands  : terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
rank 0 in job 2  localhost.localdomain_44046   caused collective abort of all ranks
  exit status of rank 0: killed by signal 9

A case of insufficient memory, I think. However, Estimate Memory Usage shows only 1016 MByte required, while I have 8 GB of memory.

How much memory should I typically have to run ATK for medium supercells (in this case a 5 nm diameter CNT, 1x1x16, DFT-LDA, calculating the complex band structure at 601 points from -2 to +2 eV)?

I need these specs as I am looking to procure a new server for my research group, and I don't want to end up with insufficient hardware.  ;D

Offline Anders Blom

  • QuantumATK Staff
  • Supreme QuantumATK Wizard
  • Posts: 5411
  • Country: dk
  • Reputation: 89
Re: std::bad_alloc
« Reply #1 on: April 11, 2014, 08:54 »
The memory estimate is crude; in particular, it does not take the Analysis parts into account.

One should know that the algorithm for the complex band structure requires the structure to be repeated, so even if the SCF loop takes 1 GB, the CBS analysis can take quite a lot more (a rough back-of-envelope illustration of the scaling follows below). Therefore, if you are running several MPI processes on the same machine, this may exhaust the available RAM. You can try two things:

1) If you are indeed running the 4 MPI processes on the same machine, don't: just run the calculation in serial (only the CBS part) and measure the memory usage (a minimal measurement sketch is included below).
2) Otherwise, if you already have only 1 MPI process per node, you could send us the script and we can try to run it on a large-memory machine to check the size.
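
To get a feeling for why the repetition matters, here is a rough back-of-envelope sketch in plain Python. It is only an illustration of how one dense complex matrix grows with a repeated cell; the numbers N and k are hypothetical, not taken from your calculation, and this is not the actual ATK algorithm.

# Back-of-envelope sketch, for illustration only -- this is NOT the actual
# ATK algorithm, just the memory for one dense complex double-precision
# matrix of a repeated cell. N and k are hypothetical numbers.
N = 5000                 # basis orbitals in the original cell (hypothetical)
k = 3                    # repetitions along the transport direction (hypothetical)
bytes_per_element = 16   # one complex double-precision number
mem_gb = bytes_per_element * (k * N)**2 / 1024.0**3
print("One dense matrix of the repeated cell: %.1f GB" % mem_gb)

The point is only that the footprint grows with the square of the repeated dimension, so the CBS step can easily need several times the memory of the SCF loop.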
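
For the measurement in point 1, a minimal sketch using only the Python standard library (not part of the ATK API), which you could append at the very end of the serial CBS script:

# Minimal sketch for measuring peak memory of a serial run; plain Python
# standard library, not part of the ATK API. Put it at the very end of the
# script; on Linux, ru_maxrss is reported in kilobytes.
import resource

peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print("Peak resident memory: %.1f MB" % (peak_kb / 1024.0))

That gives the peak resident memory of the single process, which you can then multiply by the number of MPI processes you intend to run per node.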