Memory allocation and optimal input settings when running calculations for large systems
- Date: 2024/10/29 08:14
- Name: Beniam
- Dear OpenMX Developers,
I am trying to run an NVT-NH MD simulation of an MoS2/H2O system consisting of 900 atoms. Currently, I can complete 34 MD steps in 04:28:56 (hr:min:sec), but then the calculation fails with the following error:
[c314-144:3523221:0:3523221] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x800000034)
I suspect that this is due to a lack of memory, but I am already requesting a large number of nodes and cores. How can I choose the optimal number of nodes for this calculation? It is hard to do scaling tests because many nodes/cores are needed just to get the calculation started, since the system is large.

Secondly, are there any input settings I could adjust to make the calculation run faster? A rough sketch of the relevant part of my current input file is given below; in particular, I am using DC-LNO diagonalization and a 1x1x1 k-grid. Based on your experience, is the rate of MD steps completed per unit of wall time slow for a system of this size on an HPC cluster?

Please let me know if I can provide any more information (such as cluster-specific data, or input and output files; I don't know how to attach them to this message) to help solve this problem.
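For reference, here is a rough sketch of the relevant part of my input file. Only the eigenvalue solver, the k-grid, and the MD type are my actual settings mentioned above; the remaining values are illustrative placeholders and should be checked against the OpenMX manual for the installed version:

    # eigenvalue solver and k-point sampling (my actual settings)
    scf.EigenvalueSolver     DC-LNO
    scf.Kgrid                1 1 1

    # molecular dynamics settings (placeholder values for illustration)
    MD.Type                  NVT_NH
    MD.maxIter               1000
    MD.TimeStep              0.5       # fs

I launch the job with a hybrid MPI/OpenMP command along these lines (the process and thread counts are placeholders, not my actual job configuration):

    # -nt sets the number of OpenMP threads per MPI process
    mpirun -np 256 openmx input.dat -nt 4 > system.std

My understanding is that increasing the number of OpenMP threads per MPI process via -nt can reduce the per-process memory footprint, but I am not sure what balance of MPI processes and threads works best for a system of this size.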
Thank you for your help!