When I submit a job by simply dragging a .py file into the Job Manager, I get an error like this:
"Traceback (most recent call last):
File "zipdir/NL/GUI/Tools/Tool.py", line 915, in dropEvent
File "zipdir/NL/GUI/Tools/JobManager/JobManager.py", line 772, in addObjects
File "zipdir/NL/GUI/Tools/JobManager/JobManager.py", line 886, in addScripts
File "build/lib/python3.8/contextlib.py", line 120, in __exit__
File "zipdir/NL/GUI/Tools/JobManager/JobModel.py", line 103, in batching
File "zipdir/NL/GUI/Tools/JobManager/JobManager.py", line 562, in saveSettings
File "zipdir/NL/IO/NLSaveUtilities.py", line 453, in nlsave
File "zipdir/NL/IO/HDF5.py", line 409, in __init__
File "zipdir/NL/IO/HDF5.py", line 186, in __init__
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "build/lib/python3.8/site-packages/h5py/_hl/group.py", line 167, in __getitem__
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5o.pyx", line 190, in h5py.h5o.open
KeyError: 'Unable to open object (addr overflow, addr = 31749486, size=168, eoa=30745876)'
".
Even though the job can still be run on the cluster, the QuantumATK Job Manager does not save the .py file in its history, so the resulting .hdf5 file is not loaded back into the software after the calculation finishes. How can I solve this issue?
I tried deleting the .hdf5 file in the .vnl folder, as suggested in
https://forum.quantumatk.com/index.php?topic=6437.msg27030#msg27030, but unfortunately that did not work. Note that I am using version 2021.06.
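In case it helps with diagnosing the problem: the "addr overflow" KeyError usually points to a truncated or corrupted HDF5 file. Below is a minimal sketch I would use to check which .hdf5 file in the project folder is actually unreadable (assuming h5py is available in the QuantumATK Python environment; the folder path is only a placeholder and should be adjusted to the actual .vnl project folder):

import glob
import os
import h5py

# Placeholder path to the project's .vnl folder -- adjust to your setup.
VNL_DIR = os.path.expanduser("~/my_project.vnl")

for path in glob.glob(os.path.join(VNL_DIR, "*.hdf5")):
    try:
        with h5py.File(path, "r") as f:
            # Visiting every object forces h5py to resolve all internal
            # addresses, so a truncated/corrupted file raises an error here.
            f.visit(lambda name: None)
        print("OK:        " + path)
    except (OSError, KeyError) as exc:
        print("CORRUPTED: " + path + " (" + str(exc) + ")")

If one of the files is reported as corrupted, is it safe to remove or regenerate only that file, or does the Job Manager require additional cleanup?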