Re: The mysterious performance deterioration ( No.1 )
- Date: 2017/03/21 01:53
- Name: Kylin
- I even tested my system on my MBP (MacBook Pro) with MPICH + gcc 6 + OpenBLAS.
Despite the long diagonalization time, the performance on the MBP with one core is better than that on the cluster with 24 cores in one node. So I think the cluster may have run into some problem, but could someone give us some guidance on how to identify it?
MBP calculation with 5 SCF loops:
***********************************************************
Elapsed.Time. 913.708
                               Min_ID   Min_Time   Max_ID   Max_Time
   Total Computational Time =     0      913.708       0     913.708
   readfile                 =     0        8.961       0       8.961
   truncation               =     0        0.000       0       0.000
   MD_pac                   =     0        0.002       0       0.002
   OutData                  =     0        0.641       0       0.641
   DFT                      =     0      899.731       0     899.731
*** In DFT ***
   Set_OLP_Kin              =     0       75.318       0      75.318
   Set_Nonlocal             =     0       70.700       0      70.700
   Set_ProExpn_VNA          =     0      124.070       0     124.070
   Set_Hamiltonian          =     0       26.882       0      26.882
   Poisson                  =     0        0.281       0       0.281
   Diagonalization          =     0      437.818       0     437.818
   Mixing_DM                =     0        1.037       0       1.037
   Force                    =     0      123.659       0     123.659
   Total_Energy             =     0       19.421       0      19.421
   Set_Aden_Grid            =     0        0.296       0       0.296
   Set_Orbitals_Grid        =     0        0.518       0       0.518
   Set_Density_Grid         =     0       19.241       0      19.241
   RestartFileDFT           =     0        0.027       0       0.027
   Mulliken_Charge          =     0        0.040       0       0.040
   FFT(2D)_Density          =     0        0.314       0       0.314
   Others                   =     0        0.110       0       0.110
Re: The mysterious performance deterioration ( No.2 )
- Date: 2017/03/21 09:27
- Name: T. Ozaki
- Hi,
The most likely cause is the disk I/O of your system. This can be checked by controlling the amount of output files with the following keyword:
level.of.fileout 0
It would also be helpful to check whether the binary output mode alleviates the degradation. See below:
http://www.openmx-square.org/openmx_man3.8/node172.html
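For reference, the two settings together might look like the following fragment of the input file (the binary-mode keyword name is taken from the manual page above; please double-check it there before using it):

   # keep the number of output files to a minimum, to test whether disk I/O is the bottleneck
   level.of.fileout     0
   # write the output data files in binary form (see the linked manual page)
   OutData.bin.flag     on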
In your case the second trial became quite slow, so overwriting files with the same file names might be causing the problem. To check this, delete the files stored in the directory '*_rst' and the other output files generated by the first trial, and start again fully from scratch.
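For example, assuming the System.Name keyword in your input is 'mysystem' (a hypothetical name), the cleanup before the fresh run might look like:

   # remove the restart data and the output files of the first trial
   # (double-check the file names against your own System.Name before deleting)
   rm -rf mysystem_rst
   rm -f  mysystem.out mysystem.xyz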
Regards,
TO