Re: How to reduce memory usage for very large systems? ( No.1 ) |
- Date: 2022/01/24 18:15
- Name: Naoya Yamaguchi
- Hi,
To decrease the RAM usage, you can use a poorer (smaller) PAO basis by decreasing the number of orbitals.
You can also roughly estimate the usage in advance from a calculation on a smaller system. For example, if a calculation with 10,000 orbitals needs 1 GB, a calculation with 100,000 orbitals may need (100000/10000)^3 = 1000 GB.
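The rough estimate above can be sketched as a one-line scaling rule. This is just an illustration of the arithmetic in the reply (the cubic exponent is the one quoted there); the function name is made up for this example:

```python
def estimate_memory_gb(n_orbitals, n_ref_orbitals, mem_ref_gb):
    """Scale a measured memory footprint by the cubic ratio of
    orbital counts, as suggested in the reply above."""
    return mem_ref_gb * (n_orbitals / n_ref_orbitals) ** 3

# The example from the reply: 10,000 orbitals -> 1 GB implies
# 100,000 orbitals -> ~1000 GB.
print(estimate_memory_gb(100_000, 10_000, 1.0))  # 1000.0
```

In practice you would measure `mem_ref_gb` on the smaller system (e.g. from the scheduler's reported peak memory) and extrapolate from that.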
Regards, Naoya Yamaguchi
|
Re: How to reduce memory usage for very large systems? ( No.2 ) |
- Date: 2022/01/25 05:04
- Name: Chong Wang <ch-wang@outlook.com>
- Hi Naoya,
Thanks for the tips.
If a job needs 1000 GB of memory and I use two nodes to carry out the calculation, does each node then need only 500 GB? My understanding is that this is unlikely, since each node keeps its own copy of many data structures, but do you have a rough idea of how the memory usage scales with the number of compute nodes?
Best, Chong
|
Re: How to reduce memory usage for very large systems? ( No.3 ) |
- Date: 2022/01/25 12:27
- Name: Naoya Yamaguchi
- Dear Chong,
As far as I know, the current version of OpenMX uses ScaLAPACK, which distributes the matrices over several MPI processes and operates on them in parallel, so I expect that the upper limit of the problem size increases as the number of nodes increases.
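A back-of-the-envelope way to see why distribution helps: the part of the memory that ScaLAPACK distributes (the dense matrices) shrinks with the number of MPI processes, while per-process replicated data does not. The sketch below is a hypothetical illustration, not OpenMX's actual accounting; the assumption of three dense double-precision N-by-N matrices (`n_matrices=3`) is made up for this example:

```python
def per_process_matrix_gib(n, n_procs, n_matrices=3, bytes_per_elem=8):
    """Approximate per-process share of n_matrices dense n-by-n
    double-precision matrices spread evenly over n_procs MPI
    processes (ScaLAPACK's block-cyclic layout is roughly even).
    Replicated per-process arrays are NOT included, so real usage
    per process is higher and does not drop all the way to zero."""
    total_bytes = n_matrices * n * n * bytes_per_elem
    return total_bytes / n_procs / 1024**3
```

Under this model, doubling the process count halves the distributed share, which matches the intuition in the question: two nodes need less than the full 1000 GB each, but more than 500 GB each once replicated data is counted.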
Regards, Naoya Yamaguchi
|
Re: How to reduce memory usage for very large systems? ( No.4 ) |
- Date: 2022/01/25 12:29
- Name: Naoya Yamaguchi
- Dear Chong,
If you want to calculate the band structure, you can use the combination of the O(N) and conventional schemes: http://www.openmx-square.org/openmx_man3.9/node88.html
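For orientation, the combination amounts to converging the SCF part with an O(N) solver and then switching to the conventional diagonalization for the band dispersion. A hedged sketch of the relevant input keywords follows; the spellings are from memory and not checked against the linked manual page, so please verify them there before use:

```
# Step 1: converge the SCF with an O(N) solver, e.g. divide-conquer
scf.EigenvalueSolver    dc

# Step 2: restart from the converged result and compute the bands
# with the conventional scheme
scf.restart             on
scf.EigenvalueSolver    band
Band.dispersion         on
```

The k-path for the dispersion is specified as usual (Band.Nkpath and the Band.kpath block).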
Regards, Naoya Yamaguchi
|