Re: Unreasonable exchange coupling parameters (Jij) for bcc Fe with increasing Kgrids obtained by jx ( No.1 )
- Date: 2025/01/21 16:42
- Name: Guangzong Xing <xingguangzong@nimte.ac.cn>
I apologize for submitting the thread by mistake.
Recently, I have been trying to calculate the exchange coupling parameters (Jij) for bcc Fe using OpenMX and jx. However, I observe that the Jij results diverge completely when the k-grid is made denser. The procedure I followed is described below:
1. I used the same k-grid (27x27x27) as in the example calculation provided in the work folder for both the SCF (OpenMX) and Jij (jx) calculations, and I was able to reproduce the Jij results shown in the manual. Portions of the input and output files for the Jij calculation are attached below:
##############input for Jij##################
Flag.PeriodicSum off # default - off
Num.Poles       60
Num.Kgrid       27 27 27
Num.ij.pairs    20    # NOTE: Number of ij pairs.
Bunch.ij.pairs  20    # default - 1
                      # NOTE: an optional keyword to use when the memory consumption is
                      # too large at the default setting. Should be equal to or smaller
                      # than Num.ij.pairs. A smaller Bunch.ij.pairs results in smaller
                      # memory consumption at the cost of a larger calculation time.
<ijpairs.cellid
  1 2 -2 -2 -2
  1 2 -2 -2  1
  1 2 -2  1 -2
  1 2 -2  1  1
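(Incidentally, a pair list like the one above does not have to be typed by hand. The short Python script below is purely illustrative and not part of jx; the atom indices, the cell-index range, and the closing tag follow the usual OpenMX <Key ... Key> block convention, which I assume jx shares.)

############ ij-pair generator sketch (illustrative, not part of jx) ############
from itertools import product

def ijpair_lines(i=1, j=2, cmax=2):
    """Pair atom i in the home cell with atom j in every cell whose
    indices (c1, c2, c3) lie within -cmax..cmax (assumed convention)."""
    return [f"  {i} {j} {c1:3d} {c2:3d} {c3:3d}"
            for c1, c2, c3 in product(range(-cmax, cmax + 1), repeat=3)]

pairs = ijpair_lines()
print(f"Num.ij.pairs    {len(pairs)}")
print("<ijpairs.cellid")
print("\n".join(pairs))
print("ijpairs.cellid>")   # closing tag assumed from the OpenMX block syntax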
#####################output for Jij with 27x27x27 Kgrids##################
Jij calculation for a periodic structure
Number of k-grids: 27 27 27
flag_periodic_sum = 0: coupling between site i at cell 0 and site j at cell R
Number of poles of Fermi-Dirac continued fraction (PRB.75.035123): 60
   i    j   c1   c2   c3          J [meV]          J [mRy]   time_eig [s]   time_Jij [s]
-------------------------------------------------------------------------------------------------
   1    2   -2   -2   -2   2.212623885957   0.162624857547        0.15452        0.53706
   1    2   -2   -2    1   2.212623952941   0.162624862471        0.00000        0.53305
   1    2   -2    1   -2   2.212623808411   0.162624851848        0.00000        0.55870
   1    2   -2    1    1   2.212623812456   0.162624852145        0.00000        0.53318
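(A remark on the Num.Poles setting, in case it is relevant: it controls how many poles are used in the contour representation of the Fermi-Dirac function when the Green's functions are integrated over energy. The toy script below only illustrates the general pole-summation idea; it uses the plain Matsubara poles, for which the closed form is standard, rather than the continued-fraction poles of PRB 75, 035123 that jx actually uses. The continued fraction converges far faster, which is why Num.Poles 60 is normally sufficient.)

############ pole-summation sketch (illustrative, not jx's implementation) ############
import numpy as np

beta = 1.0 / 0.025   # inverse temperature in 1/eV (kT = 25 meV; an assumed value)
mu = 0.0             # chemical potential in eV

def fermi_exact(e):
    """Exact Fermi-Dirac occupation."""
    return 1.0 / (1.0 + np.exp(beta * (e - mu)))

def fermi_pole_sum(e, n_poles):
    """Truncated Matsubara-pole representation:
    f(e) = 1/2 - (2/beta) * sum_n (e - mu) / ((e - mu)^2 + w_n^2),
    with w_n = (2n + 1) * pi / beta, n = 0 .. n_poles - 1."""
    n = np.arange(n_poles)
    w = (2.0 * n + 1.0) * np.pi / beta
    x = e - mu
    return 0.5 - (2.0 / beta) * np.sum(x / (x * x + w * w))

e = 0.1  # test energy 100 meV above mu
for n_poles in (10, 60, 1000, 100000):
    # Matsubara poles converge only as 1/n_poles; the continued-fraction
    # poles used by jx reach the same accuracy with far fewer poles.
    print(f"{n_poles:7d}  approx={fermi_pole_sum(e, n_poles):.6f}  exact={fermi_exact(e):.6f}")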
2. I then increased the k-grid to 50x50x50 for both the SCF (OpenMX) and Jij (jx) calculations. This time, however, unphysically large Jij values were obtained.
#####################output for Jij with 50x50x50 Kgrids##################
Jij calculation for a periodic structure
Number of k-grids: 50 50 50
flag_periodic_sum = 0: coupling between site i at cell 0 and site j at cell R
Number of poles of Fermi-Dirac continued fraction (PRB.75.035123): 60
   i    j   c1   c2   c3           J [meV]           J [mRy]   time_eig [s]   time_Jij [s]
-------------------------------------------------------------------------------------------------
   1    2   -2   -2   -2   -22.189157035205   -1.630872976129        2.22917        5.50995
   1    2   -2   -2    1   -22.189158004066   -1.630873047339        0.00000        5.88828
   1    2   -2    1   -2   -22.189156551309   -1.630872940563        0.00000        5.99443
   1    2   -2    1    1   -22.189157517999   -1.630873011614        0.00000        6.25619
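To quantify how far apart the two runs are, it is convenient to compare the tables pair by pair with a short script. This is only a sketch: the file names are placeholders, and the regular expression assumes the table layout shown above.

############ convergence-check sketch (file names are placeholders) ############
import re

def read_jij(path):
    """Parse a jx output table into {(i, j, c1, c2, c3): J in meV}."""
    row = re.compile(r"^\s*(\d+)\s+(\d+)\s+(-?\d+)\s+(-?\d+)\s+(-?\d+)\s+(-?[\d.]+)")
    pairs = {}
    with open(path) as f:
        for line in f:
            m = row.match(line)
            if m:
                key = tuple(int(g) for g in m.groups()[:5])
                pairs[key] = float(m.group(6))
    return pairs

j_coarse = read_jij("jx_27x27x27.out")  # placeholder file name
j_fine = read_jij("jx_50x50x50.out")    # placeholder file name
for key in sorted(j_coarse.keys() & j_fine.keys()):
    # For a converged k-grid the difference should be small, not tens of meV.
    print(key, j_coarse[key], j_fine[key], j_fine[key] - j_coarse[key])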
Could you please help me resolve this issue? In case you need it, I have also attached below the list of libraries that jx is linked against (ldd output). By the way, I also tried compiling jx with basic optimization (-O1) and with no optimization (-O0); however, that did not help either.
linux-vdso.so.1 (0x00007ffec2be3000)
libmkl_scalapack_lp64.so.2 => /public/software/compiler/intel/2022.1.0/mkl/lib/intel64/libmkl_scalapack_lp64.so.2 (0x000014a90faec000)
libmkl_intel_lp64.so.2 => /public/software/compiler/intel/2022.1.0/mkl/lib/intel64/libmkl_intel_lp64.so.2 (0x000014a90ec37000)
libmkl_intel_thread.so.2 => /public/software/compiler/intel/2022.1.0/mkl/lib/intel64/libmkl_intel_thread.so.2 (0x000014a90b4c3000)
libmkl_core.so.2 => /public/software/compiler/intel/2022.1.0/mkl/lib/intel64/libmkl_core.so.2 (0x000014a9070ed000)
libifcore.so.5 => /public/software/compiler/intel/2022.1.0/compiler/lib/intel64_lin/libifcore.so.5 (0x000014a9102db000)
libmkl_blacs_intelmpi_lp64.so.2 => /public/software/compiler/intel/2022.1.0/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so.2 (0x000014a910293000)
libiomp5.so => /public/software/compiler/intel/2022.1.0/compiler/lib/intel64_lin/libiomp5.so (0x000014a906cb4000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x000014a906a94000)
libm.so.6 => /lib64/libm.so.6 (0x000014a906712000)
libdl.so.2 => /lib64/libdl.so.2 (0x000014a90650e000)
libmpifort.so.12 => /public/software/oneapi/2024/mpi/2021.11/lib/release/libmpifort.so.12 (0x000014a906157000)
libmpi.so.12 => /public/software/oneapi/2024/mpi/2021.11/lib/release/libmpi.so.12 (0x000014a90461f000)
librt.so.1 => /lib64/librt.so.1 (0x000014a904417000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x000014a9041ff000)
libc.so.6 => /lib64/libc.so.6 (0x000014a903e3a000)
libimf.so => /public/software/compiler/intel/2022.1.0/compiler/lib/intel64_lin/libimf.so (0x000014a9037ac000)
libsvml.so => /public/software/compiler/intel/2022.1.0/compiler/lib/intel64_lin/libsvml.so (0x000014a90174a000)
libintlc.so.5 => /public/software/compiler/intel/2022.1.0/compiler/lib/intel64_lin/libintlc.so.5 (0x000014a9014d2000)
/lib64/ld-linux-x86-64.so.2 (0x000014a910219000)
Thank you for your help.
Best,
Guangzong Xing