Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.1 ) |
- Date: 2020/08/22 18:20
- Name: Naoya Yamaguchi
- Dear Eike,
I think I have found the cause, and you may be able to solve it by modifying "Unfolding_Bands.c". Please try replacing lines 2873 and 2961 with:

for (i=-2*atomnum; i<=2*atomnum; i++)
for (j=-2*atomnum; j<=2*atomnum; j++)
for (k=-2*atomnum; k<=2*atomnum; k++){

(This changes the stride of the search over lattice translations from 5 to 1, so that no candidate is skipped.) Because I haven't checked the whole result, please check it yourself.
Regards, Naoya Yamaguchi
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.2 ) |
- Date: 2020/08/22 22:05
- Name: Chi-Cheng <cclee.physics@gmail.com>
- Hi Eike,
You can try skipping the loop between lines 2965 and 2973 and replacing it with "esti=0; estj=0; estk=0;".
The estimated i, j, and k failed because the slab is non-orthogonal and too long along the c axis. I hope this helps.
Regards, Chi-Cheng
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.3 ) |
- Date: 2020/08/22 22:40
- Name: Chi-Cheng <cclee.physics@gmail.com>
- Hi Eike,
Sorry, lines 2965 and 2973 I mentioned previously should be 2961 and 2969, respectively.
------------------------- replace the following

for (i=-2*atomnum; i<=2*atomnum; i+=5)
for (j=-2*atomnum; j<=2*atomnum; j+=5)
for (k=-2*atomnum; k<=2*atomnum; k+=5){
  double x=(double)i*a[0]+(double)j*b[0]+(double)k*c[0]+origin[0];
  double y=(double)i*a[1]+(double)j*b[1]+(double)k*c[1]+origin[1];
  double z=(double)i*a[2]+(double)j*b[2]+(double)k*c[2]+origin[2];
  tmp[0]=X-x; tmp[1]=Y-y; tmp[2]=Z-z;
  if (dis(tmp)<estl){
    estl=dis(tmp);
    esti=i; estj=j; estk=k;
  }
}

by

esti=0; estj=0; estk=0;
----------------------------
If the loop does not take a long time, you can also follow Naoya's suggestion.
Regards, Chi-Cheng
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.4 ) |
- Date: 2020/08/26 23:47
- Name: Eike F. Schwier <schwier@physik.uni-wuerzburg.de>
- Dear Yamaguchi-san & Chi-Cheng,
thank you very much for your advice. Using Yamaguchi-san's code lines, I was able to successfully obtain an unfolded band structure. I still need to check whether any inconsistency exists in the structure, or whether the unfolding worked as originally implemented.
best regards, Eike
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.5 ) |
- Date: 2020/09/17 22:05
- Name: Eike F. Schwier <schwier@physik.uni-wuerzburg.de>
- Dear Yamaguchi-san & Chi-Cheng,
in the meantime I was able to test the unfolding. It works if I do not include SO coupling and seems to produce reasonable results. However, if SO coupling is included, the unfolding produces no error but writes empty unfold_orb files. I tested with 3.8.5 (without ScaLAPACK) and with 3.9.2 on a smaller system (orthorhombic bulk with 17 atoms) and on a different PC. 3.8.5 works. 3.9.2 works up to a certain number of atoms (10) in an otherwise identical unit cell (I simply removed the atoms from the unit cell one by one); above that it produces zero-size output.
Do you have any advice as to why this may happen?
best regards, Eike
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.6 ) |
- Date: 2020/09/17 23:44
- Name: Naoya Yamaguchi
- Dear Eike-san,
Can you show the actual input for each of these cases?
Regards, Naoya Yamaguchi
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.7 ) |
- Date: 2020/09/18 00:17
- Name: Chi-Cheng <cclee.physics@gmail.com>
- Hi Eike,
Could you also try reducing the number of k-points? Maybe the file size is too large to handle. If reducing the number of k-points works, you can calculate the unfolding piece by piece by dividing the entire k-path smartly, for example as sketched below.
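For example, for a band path through points X, G, Z, and U (names, coordinates, and the split point here are only illustrative), the unfolding could be computed in two separate runs:

# Run 1: X-G-Z segment
Unfolding.Nkpoint 3
<Unfolding.kpoint
 X 0.5 0.0 0.0
 G 0.0 0.0 0.0
 Z 0.0 0.0 0.5
Unfolding.kpoint>

# Run 2: Z-U segment
Unfolding.Nkpoint 2
<Unfolding.kpoint
 Z 0.0 0.0 0.5
 U 0.5 0.0 0.5
Unfolding.kpoint>

Unfolding.desired_totalnkpt would be reduced accordingly for each run, and the outputs combined when plotting.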
Regards, Chi-Cheng
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.8 ) |
- Date: 2020/09/21 15:19
- Name: Eike F. Schwier <schwier@physik.uni-wuerzburg.de>
- Dear Chi-Cheng,
I tried the same input with version 3.8.5 and it works. Here is the test input; just removing the last two atoms produces output. I do not think it is related to memory, as the calculation is fairly simple in this example and the two machines I ran it on had 300+ GB and 64 GB of memory at 4 k-points for the unfolding.
best regards, Eike
DATA.PATH /home/calc/src/openmx3.9/DFT_DATA19
System.CurrrentDirectory ./ # default=./
System.Name test
level.of.stdout 1 # default=1 (1-3)
level.of.fileout 0 # default=1 (0-2)

#
# Definition of Atomic Species
#

Species.Number 3
<Definition.of.Atomic.Species
 Te Te7.0-s2p2d1 Te_CA19
 Ta Ta7.0-s2p2d1 Ta_CA19
 Ni Ni6.0S-s2p2d1 Ni_CA19S
Definition.of.Atomic.Species>

#
# Atoms
#

Atoms.Number 12 # 12 # 14 # 9
Atoms.SpeciesAndCoordinates.Unit FRAC # Ang|AU
<Atoms.SpeciesAndCoordinates
 1 Ta 0.70460999 0 0 6.5 6.5
 2 Ta 0.29539001 0 0 6.5 6.5
 3 Te -0 0.1202 0.146 8 8
 4 Te 0.5 0.77525002 0.24975 8 8
 5 Te 0.25 0.32056001 0.25 8 8
 6 Te 0.75 0.32056001 0.25 8 8
 7 Te -0 0.77525002 0.25025001 8 8
 8 Ni 0.5 0.1202 0.354 8 8
 9 Ta 0.20461001 -0 0.5 6.5 6.5
 10 Ta 0.79539001 -0 0.5 6.5 6.5
 11 Ni 0.5 0.87980002 0.64600003 8 8
 12 Te -0 0.22475 0.74975002 8 8
Atoms.SpeciesAndCoordinates>
#13 Te 0.25 0.67944002 0.75 8 8
#14 Te 0.75 0.67944002 0.75 8 8

Atoms.UnitVectors.Unit Ang
<Atoms.UnitVectors
 7.9029 0.0000000000 0.0000000000
 0.0000000000 7.230800 0.0000000000
 0.0000000000 0.0000000000 6.223100
Atoms.UnitVectors>

Unfolding.Electronic.Band on # on|off, default=off
<Unfolding.ReferenceVectors
 7.9029 0.0000000000 0.0000000000
 0.0000000000 7.230800 0.0000000000
 0.0000000000 0.0000000000 6.223100
Unfolding.ReferenceVectors>
Unfolding.LowerBound -3.5
Unfolding.UpperBound 1.5
<Unfolding.Map
 1 1
 2 2
 3 3
 4 4
 5 5
 6 6
 7 7
 8 8
 9 9
 10 10
 11 11
 12 12
Unfolding.Map>
Unfolding.desired_totalnkpt 4
Unfolding.Nkpoint 4
<Unfolding.kpoint
 X 0.5 0.0 0.0
 G 0.0 0.0 0.0
 Z 0.0 0.0 0.5
 U 0.5 0.0 0.5
Unfolding.kpoint>

#
# SCF or Electronic System
#

scf.XcType LSDA-CA # LDA|LSDA-CA|LSDA-PW|GGA-PBE
scf.SpinPolarization NC # On|Off|NC
scf.SpinOrbit.Coupling on
scf.ElectronicTemperature 300.0 # default=300 (K)
scf.energycutoff 75.0 # default=150 (Ry)
#scf.Ngrid 24 24 24
scf.maxIter 2 # default=40
scf.EigenvalueSolver band # DC|GDC|Cluster|Band
scf.Kgrid 4 4 4 # means n1 x n2 x n3
scf.ProExpn.VNA on # default=on
scf.Generation.Kpoint regular
scf.Mixing.Type rmm-diisk # Simple|Rmm-Diis|Gr-Pulay|Kerker|Rmm-Diisk
scf.Init.Mixing.Weight 0.25 # default=0.30
scf.Min.Mixing.Weight 0.001 # default=0.001
scf.Max.Mixing.Weight 0.350 # default=0.40
scf.Mixing.History 10 # default=5
scf.Mixing.StartPulay 7 # default=6
scf.Mixing.EveryPulay 1 # default=5
scf.criterion 1.0e-7 # default=1.0e-6 (Hartree)
scf.lapack.dste dstevx # dstegr|dstedc|dstevx, default=dstevx
scf.restart off
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.9 ) |
- Date: 2020/09/22 17:19
- Name: Naoya Yamaguchi
- Dear Eike-san,
It is not a problem of the unfolding code. I have reproduced it, and the problem was caused by poor PAOs. SCF convergence was reached, but the results were not meaningful; in fact, the number of occupied states for your input calculated by OpenMX 3.9 was 152.
Here is an example choice of PAOs that meets the condition:

<Definition.of.Atomic.Species
 Te Te7.0-s3p2d2 Te_CA19
 Ta Ta7.0-s3p2d2 Ta_CA19
 Ni Ni6.0S-s2p2d2 Ni_CA19S
Definition.of.Atomic.Species>
Regards, Naoya Yamaguchi
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.10 ) |
- Date: 2020/09/22 21:16
- Name: Eike F. Schwier <schwier@physik.uni-wuerzburg.de>
- Dear Yamaguchi-san,
thank you very much. I usually start with a lower-quality basis to check the overall behaviour before moving to production-quality calculations. What I do not understand is why 3.8 produces results. But for now I can simply increase the basis and use 3.9.2, so thank you for the advice.
best wishes, Eike
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.11 ) |
- Date: 2020/09/23 01:19
- Name: Naoya Yamaguchi
- Dear Eike-san,
>What I do not understand is why 3.8 produces results.
3.9 couldn't find the 180 occupied states, and the chemical potential (Fermi level) was unphysical. You can see that the calculation has collapsed from the standard output or the .out file: wrong values of the Mulliken population appear even from the first SCF step. The unfolding code requires a physically meaningful relation between the states and the chemical potential obtained from the SCF calculation. In your case, the estimated chemical potential was quite high.
Regards, Naoya Yamaguchi
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.12 ) |
- Date: 2020/09/23 19:02
- Name: Eike F. Schwier <schwier@physik.uni-wuerzburg.de>
- Dear Yamaguchi-san,
thank you for the clarification. I naively assumed that if convergence with SO coupling was achieved, the band structure should also be comparable to the same calculation without SO coupling. I will keep an eye on the output values you mentioned in the future.
best regards, Eike
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.13 ) |
- Date: 2020/10/02 21:08
- Name: Eike F. Schwier <schwier@physik.uni-wuerzburg.de>
- Dear Yamaguchi-san,
I increased the basis set as you suggested for the original slab calculation (Pd:Bi2Te3). However, I now run into a new problem: I get the following seemingly memory-related error, which does not stop the Slurm job but crashes the calculation during the following step:
******************* MD= 1 SCF= 1 *******************
<Band> Solving the eigenvalue problem...
 KGrids1: -0.41667 -0.25000 -0.08333 0.08333 0.25000 0.41667
 KGrids2: -0.41667 -0.25000 -0.08333 0.08333 0.25000 0.41667
 KGrids3: 0.00000
slurmstepd-fat2: error: Detected 1 oom-kill event(s) in step 49704304.0 cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.
srun: error: fat2: task 7: Out Of Memory
slurmstepd-fat2: error: *** JOB 49704304 ON fat2 CANCELLED AT 2020-10-02T13:40:29 DUE TO TIME LIMIT ***
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
slurmstepd-fat2: error: *** STEP 49704304.0 ON fat2 CANCELLED AT 2020-10-02T13:40:29 DUE TO TIME LIMIT ***
slurmstepd-fat2: error: Detected 3 oom-kill event(s) in step 49704304.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.
I considered the possibility of insufficient RAM and looked at the memory usage (memory.usage.fileout on), but the calculation uses at most 5 GB per node (using 18 nodes), and I am running it on a machine with 2 TB of memory.
I ran into a similar error in another calculation (input attached at the end of this post); when switching to 3.8.5, the error did not occur.

slurmstepd-cn10g-09: error: Detected 1 oom-kill event(s) in step 49701005.0 cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.
srun: error: cn10g-09: task 14: Out Of Memory
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
slurmstepd-cn10g-09: error: *** STEP 49701005.0 ON cn10g-09 CANCELLED AT 2020-09-25T13:01:14 ***
slurmstepd-cn10g-09: error: *** JOB 49701005 ON cn10g-09 CANCELLED AT 2020-09-25T13:01:14 ***
slurmstepd-cn10g-09: error: Detected 1 oom-kill event(s) in step 49701005.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.
Both versions were compiled with the same compiler (Intel 19.0.5 20190815) and libraries (see below).
3.8.5:
CC = mpiicc -O3 -xHOST -ip -no-prec-div -qopenmp -I/usr/lib/intel/compiler/include -I/usr/include/mkl/fftw
FC = mpiifort -O3 -xHOST -ip -no-prec-div -qopenmp
LIB= -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lgfortran -liomp5 -lpthread -lifcore

3.9.2:
CC = mpiicc -O3 -xHOST -ip -no-prec-div -qopenmp -I/usr/lib/intel/compiler/include -I/usr/include/mkl/fftw
FC = mpiifort -O3 -xHOST -ip -no-prec-div -qopenmp
LIB= -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lgfortran -liomp5 -lpthread -lifcore
best regards, Eike
Input that works with 3.8.5 but gives a memory error in 3.9.2:
DATA.PATH /home/calc/src/openmx3.9/DFT_DATA19
System.CurrrentDirectory ./ # default=./
System.Name TaNiTe2_6L_SO
level.of.stdout 1 # default=1 (1-3)
level.of.fileout 1 # default=1 (0-2)

#
# Definition of Atomic Species
#

Species.Number 3
<Definition.of.Atomic.Species
 Te Te7.0-s3p3d2 Te_CA19
 Ta Ta7.0-s3p3d2 Ta_CA19
 Ni Ni6.0S-s3p3d2 Ni_CA19S
Definition.of.Atomic.Species>

#
# Atoms
#

Atoms.Number 80
Atoms.SpeciesAndCoordinates.Unit FRAC # Ang|AU
<Atoms.SpeciesAndCoordinates
 1 Ta 0.70460999 0.25 0 6.5 6.5
 2 Ta 0.29539001 0.5 0 6.5 6.5
 3 Ta 0.70460999 0.625 0 6.5 6.5
 4 Ta 0.29539001 0.125 0 6.5 6.5
 5 Ta 0.70460999 0.375 0 6.5 6.5
 6 Ta 0.29539001 0.375 0 6.5 6.5
 7 Ta 0.70460999 0.5 0 6.5 6.5
 8 Ta 0.29539001 0.625 0 6.5 6.5
 9 Ta 0.29539001 0.25 0 6.5 6.5
 10 Ta 0.70460999 0.125 0 6.5 6.5
 11 Ni 0 0.64002502 0.146 8 8
 12 Ni -0 0.51502502 0.146 8 8
 13 Ni -0 0.26502499 0.146 8 8
 14 Ni 0 0.39002499 0.146 8 8
 15 Ni -0 0.140025 0.146 8 8
 16 Te 0.5 0.59690624 0.24975 8 8
 17 Te 0.5 0.47190624 0.24975 8 8
 18 Te 0.5 0.34690624 0.24975 8 8
 19 Te 0.5 0.22190624 0.24975 8 8
 20 Te 0.5 0.096906252 0.24975 8 8
 21 Te 0.75 0.66507 0.25 8 8
 22 Te 0.75 0.54007 0.25 8 8
 23 Te 0.75 0.41507 0.25 8 8
 24 Te 0.75 0.29007 0.25 8 8
 25 Te 0.75 0.16507 0.25 8 8
 26 Te 0.25 0.66507 0.25 8 8
 27 Te 0.25 0.54007 0.25 8 8
 28 Te 0.25 0.41507 0.25 8 8
 29 Te 0.25 0.29007 0.25 8 8
 30 Te 0.25 0.16507 0.25 8 8
 31 Te -0 0.096906252 0.25025001 8 8
 32 Te 0 0.47190624 0.25025001 8 8
 33 Te 0 0.34690624 0.25025001 8 8
 34 Te -0 0.22190624 0.25025001 8 8
 35 Te 0 0.59690624 0.25025001 8 8
 36 Ni 0.5 0.64002502 0.354 8 8
 37 Ni 0.5 0.51502502 0.354 8 8
 38 Ni 0.5 0.39002499 0.354 8 8
 39 Ni 0.5 0.26502499 0.354 8 8
 40 Ni 0.5 0.140025 0.354 8 8
 41 Ta 0.79539001 0.625 0.5 6.5 6.5
 42 Ta 0.79539001 0.5 0.5 6.5 6.5
 43 Ta 0.79539001 0.375 0.5 6.5 6.5
 44 Ta 0.79539001 0.25 0.5 6.5 6.5
 45 Ta 0.79539001 0.125 0.5 6.5 6.5
 46 Ta 0.20461001 0.625 0.5 6.5 6.5
 47 Ta 0.20461001 0.5 0.5 6.5 6.5
 48 Ta 0.20461001 0.375 0.5 6.5 6.5
 49 Ta 0.20461001 0.25 0.5 6.5 6.5
 50 Ta 0.20461001 0.125 0.5 6.5 6.5
 51 Ni 0.5 0.60997498 0.64600003 8 8
 52 Ni 0.5 0.48497501 0.64600003 8 8
 53 Ni 0.5 0.35997501 0.64600003 8 8
 54 Ni 0.5 0.234975 0.64600003 8 8
 55 Ni 0.5 0.109975 0.64600003 8 8
 56 Te 0 0.65309376 0.74975002 8 8
 57 Te 0 0.52809376 0.74975002 8 8
 58 Te 0 0.40309376 0.74975002 8 8
 59 Te -0 0.27809376 0.74975002 8 8
 60 Te 0 0.15309376 0.74975002 8 8
 61 Te 0.25 0.33493 0.75 8 8
 62 Te 0.25 0.20993 0.75 8 8
 63 Te 0.25 0.084930003 0.75 8 8
 64 Te 0.75 0.58493 0.75 8 8
 65 Te 0.75 0.45993 0.75 8 8
 66 Te 0.75 0.33493 0.75 8 8
 67 Te 0.75 0.20993 0.75 8 8
 68 Te 0.75 0.084930003 0.75 8 8
 69 Te 0.25 0.58493 0.75 8 8
 70 Te 0.25 0.45993 0.75 8 8
 71 Te 0.5 0.65309376 0.75024998 8 8
 72 Te 0.5 0.52809376 0.75024998 8 8
 73 Te 0.5 0.40309376 0.75024998 8 8
 74 Te 0.5 0.27809376 0.75024998 8 8
 75 Te 0.5 0.15309376 0.75024998 8 8
 76 Ni -0 0.60997498 0.85399997 8 8
 77 Ni 0 0.48497501 0.85399997 8 8
 78 Ni 0 0.35997501 0.85399997 8 8
 79 Ni -0 0.234975 0.85399997 8 8
 80 Ni -0 0.109975 0.85399997 8 8
Atoms.SpeciesAndCoordinates>

Atoms.UnitVectors.Unit Ang
<Atoms.UnitVectors
 7.9029 0.0000000000 0.0000000000
 0.0000000000 57.8464 0.0000000000
 0.0000000000 0.0000000000 6.223100
Atoms.UnitVectors>

#
# SCF or Electronic System
#

scf.XcType LSDA-CA # LDA|LSDA-CA|LSDA-PW|GGA-PBE
scf.SpinPolarization NC # On|Off|NC
scf.SpinOrbit.Coupling on
scf.ElectronicTemperature 300.0 # default=300 (K)
scf.energycutoff 100.0 # default=150 (Ry)
#scf.Ngrid 24 24 24
scf.maxIter 400 # default=40
scf.EigenvalueSolver band # DC|GDC|Cluster|Band
scf.Kgrid 6 1 6 # means n1 x n2 x n3
scf.ProExpn.VNA on # default=on
scf.Generation.Kpoint regular
scf.Mixing.Type rmm-diisk # Simple|Rmm-Diis|Gr-Pulay|Kerker|Rmm-Diisk
scf.Init.Mixing.Weight 0.1 # default=0.30
scf.Min.Mixing.Weight 0.001 # default=0.001
scf.Max.Mixing.Weight 0.350 # default=0.40
scf.Mixing.History 22 # default=5
scf.Mixing.StartPulay 37 # default=6
scf.Mixing.EveryPulay 1 # default=5
scf.criterion 1.0e-7 # default=1.0e-6 (Hartree)
scf.lapack.dste dstevx # dstegr|dstedc|dstevx, default=dstevx
scf.restart on
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.14 ) |
- Date: 2020/10/02 21:36
- Name: Naoya Yamaguchi
- Dear Eike-san,
I tried the input, and the calculation continues past "SCF=1" as follows.

******************* MD= 1 SCF= 1 *******************
<Band> Solving the eigenvalue problem...
 KGrids1: -0.41667 -0.25000 -0.08333 0.08333 0.25000 0.41667
 KGrids2: 0.00000
 KGrids3: -0.41666 -0.25000 -0.08333 0.08334 0.25000 0.41667
<Band_DFT> Eigen, time=153.232229
<Band_DFT> DM, time=0.000000
 1 Ta MulP 6.02 6.02 sum 12.04 diff 0.00 ( 90.00 269.88) Ml 0.00 ( 25.31 65.88) Ml+s 0.00 ( 90.00 269.88)
 2 Ta MulP 6.02 6.02 sum 12.04 diff 0.00 ( 89.99 269.88) Ml 0.00 (150.59 241.95) Ml+s 0.00 ( 90.00 269.88)
 3 Ta MulP 6.02 6.02 sum 12.04 diff 0.01 ( 90.00 268.40) Ml 0.00 ( 22.30 94.30) Ml+s 0.01 ( 90.00 268.40)
 4 Ta MulP 6.02 6.02 sum 12.04 diff 0.01 ( 90.00 268.50) Ml 0.00 (155.42 -83.48) Ml+s 0.01 ( 90.00 268.50)
 5 Ta MulP 6.02 6.02 sum 12.04 diff 0.00 ( 90.00 -89.93) Ml 0.00 ( 38.02 73.76) Ml+s 0.00 ( 90.00 -89.93)
 6 Ta MulP 6.02 6.02 sum 12.04 diff 0.00 ( 90.00 269.93) Ml 0.00 (144.73 245.80) Ml+s 0.00 ( 90.00 269.93)
 7 Ta MulP 6.02 6.02 sum 12.04 diff 0.00 ( 90.00 -89.88) Ml 0.00 ( 29.73 99.99) Ml+s 0.00 ( 90.00 -89.88)
 8 Ta MulP 6.02 6.02 sum 12.04 diff 0.01 ( 89.99 -88.40) Ml 0.00 (151.51 235.96) Ml+s 0.01 ( 90.00 -88.40)
 9 Ta MulP 6.02 6.02 sum 12.04 diff 0.00 ( 90.00 -89.88) Ml 0.00 (151.90 -84.37) Ml+s 0.00 ( 90.00 -89.88)
 10 Ta MulP 6.02 6.02 sum 12.04 diff 0.01 ( 90.01 -88.50) Ml 0.00 ( 27.17 54.75) Ml+s 0.01 ( 90.00 -88.50)
 11 Ni MulP 8.47 8.46 sum 16.93 diff 0.00 ( 90.00 90.00) Ml 0.00 ( 83.32 -8.88) Ml+s 0.00 ( 90.00 90.00)
 12 Ni MulP 8.46 8.46 sum 16.92 diff 0.00 ( 90.00 90.00) Ml 0.00 ( 82.72 1.62) Ml+s 0.00 ( 90.00 90.00)
 13 Ni MulP 8.46 8.46 sum 16.92 diff 0.00 ( 90.00 90.00) Ml 0.00 ( 88.92 0.57) Ml+s 0.00 ( 90.00 90.00)
 14 Ni MulP 8.46 8.46 sum 16.93 diff 0.00 ( 90.00 90.00) Ml 0.00 ( 84.36 -3.69) Ml+s 0.00 ( 90.00 90.00)
 15 Ni MulP 8.47 8.47 sum 16.93 diff 0.00 ( 90.00 90.00) Ml 0.00 ( 83.68 -11.64) Ml+s 0.00 ( 90.00 90.00)
 16 Te MulP 8.01 8.00 sum 16.01 diff 0.00 ( 90.00 -90.00) Ml 0.00 ( 77.73 25.58) Ml+s 0.00 ( 90.00 -90.00)
 17 Te MulP 8.01 8.01 sum 16.03 diff 0.00 ( 90.00 270.00) Ml 0.00 ( 69.53 1.82) Ml+s 0.00 ( 90.00 -90.00)
 18 Te MulP 8.01 8.01 sum 16.03 diff 0.00 ( 90.00 270.00) Ml 0.00 ( 64.05 -0.82) Ml+s 0.00 ( 90.00 -90.00)
 19 Te MulP 8.01 8.01 sum 16.03 diff 0.00 ( 90.00 270.00) Ml 0.00 ( 72.62 -7.03) Ml+s 0.00 ( 90.00 -90.00)
 20 Te MulP 8.00 8.00 sum 16.00 diff 0.00 ( 90.00 270.00) Ml 0.00 ( 82.50 0.18) Ml+s 0.00 ( 90.00 -90.00)
 ..........
 ......
 Sum of MulP: up = 610.11312 down = 609.88688
 total= 1220.00000 ideal(neutral)= 1220.00000
<DFT> Total Spin Moment (muB) 0.000000223 Angles 142.106183362 63.302650206
<DFT> Total Orbital Moment (muB) 0.000000062 Angles 42.979485824 171.611816823
<DFT> Total Moment (muB) 0.000000184 Angles 135.124613938 81.285233031
<DFT> Mixing_weight= 0.100000000000
<DFT> Uele =-1395.812445249730 dUele = 1.000000000000
<DFT> NormRD = 1.000000000000 Criterion = 0.000000100000

******************* MD= 1 SCF= 2 *******************
<Poisson> Poisson's equation using FFT...
<Set_Hamiltonian> Hamiltonian matrix for VNA+dVH+Vxc...
<Band> Solving the eigenvalue problem...
 KGrids1: -0.41667 -0.25000 -0.08333 0.08333 0.25000 0.41667
 KGrids2: 0.00000
 KGrids3: -0.41666 -0.25000 -0.08333 0.08334 0.25000 0.41667
So the problem may depend on the computational environment. Please check it in different environments. I used gcc 9.3.0 with AMD EPYC CPUs.
Regards, Naoya Yamaguchi
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.15 ) |
- Date: 2020/10/05 20:38
- Name: Eike F. Schwier <schwier@physik.uni-wuerzburg.de>
- Dear Yamaguchi-san,
I tried to compile 3.9.2 with gcc (gcc (Debian 8.3.0-6) 8.3.0), but the compilation fails with the error below. May I ask for your advice on how to resolve this issue?
In file included from init.c:16:
/usr/lib/intel/compiler/include/complex.h:30:3: error: #error "This Intel <complex.h> is for use with only the Intel compilers!"
 # error "This Intel <complex.h> is for use with only the Intel compilers!"
   ^~~~~
In file included from init.c:17:
openmx_common.h:3536:13: warning: inline function ‘Spherical_Bessel’ declared but never defined
 inline void Spherical_Bessel( double x, int lmax, double *sb, double *dsb ) ;
             ^~~~~~~~~~~~~~~~
make: *** [makefile:221: init.o] Error 1
best regards, Eike
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.16 ) |
- Date: 2020/10/06 10:22
- Name: Naoya Yamaguchi
- Dear Eike-san,
>/usr/lib/intel/compiler/include/complex.h:30:3: error: #error "This Intel <complex.h> is for use with only the Intel compilers!"

According to https://community.intel.com/t5/Intel-C-Compiler/how-to-solve-the-compiling-error-of-misusing-wrong-header/td-p/1075688, this error occurs because "/usr/lib/intel/compiler/include/complex.h" is part of the Intel compilers, but you compiled OpenMX with gcc. So the solution is to use "/usr/include/math.h", not "/usr/lib/intel/compiler/include/complex.h".
The environment I used is also Debian 10.3, so I believe that if you modify the makefile or fix the environment variables, you can solve the problem, e.g. as sketched below.
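For instance (a sketch on my part, based on the CC line quoted earlier in this thread; the MPI wrapper names depend on your installation): the Intel header is pulled in by the -I/usr/lib/intel/compiler/include flag, so dropping that include path from the gcc build should make the compiler fall back to the standard headers under /usr/include:

# gcc build: no -I/usr/lib/intel/compiler/include, so the system
# headers are used instead of the Intel compiler's own copies
CC = mpicc -O3 -fopenmp -I/usr/include/mkl/fftw
FC = mpif90 -O3 -fopenmp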
Regards, Naoya Yamaguchi
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.17 ) |
- Date: 2020/10/11 20:10
- Name: Eike F. Schwier <schwier@physik.uni-wuerzburg.de>
- Dear Yamaguchi-san,
I was able to compile 3.9.2 with the following makefile
CC = mpicc.openmpi -O3 -fopenmp -ffast-math
FC = mpif90.openmpi -O3 -fopenmp -ffast-math
LIB= -lscalapack-openmpi -llapack -lblas -lfftw3 -L/usr/lib/x86_64-linux-gnu/openmpi/lib/ -lmpi -lmpi_mpifh -lgfortran
Unfortunately, the error persists as shown below. Do you have any advice on how to change the compilation further?
best regards, Eike
slurmstepd-fat2: error: Detected 1 oom-kill event(s) in step 49740263.0 cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.
srun: error: fat2: task 16: Out Of Memory
[fat2:142127] *** Process received signal ***
[fat2:142127] Signal: Segmentation fault (11)
[fat2:142127] Signal code: Address not mapped (1)
[fat2:142127] Failing at address: 0x30
[fat2:142127] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x12730)[0x1554fa421730]
[fat2:142127] [ 1] /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_mtl_ofi.so(ompi_mtl_ofi_progress_no_inline+0x152)[0x1554ed9b7e52]
[fat2:142127] [ 2] /lib/x86_64-linux-gnu/libopen-pal.so.40(opal_progress+0x2c)[0x1554f7f0cdec]
[fat2:142127] [ 3] /lib/x86_64-linux-gnu/libmpi.so.40(ompi_request_default_wait+0x11d)[0x1554fa92893d]
[fat2:142127] [ 4] /lib/x86_64-linux-gnu/libmpi.so.40(ompi_coll_base_barrier_intra_bruck+0xa4)[0x1554fa985af4]
[fat2:142127] [ 5] /lib/x86_64-linux-gnu/libmpi.so.40(MPI_Barrier+0xa8)[0x1554fa9419d8]
[fat2:142127] [ 6] /home/calc/src/openmx3.9-test/source/openmx(+0x125bb9)[0x557d11a7fbb9]
[fat2:142127] [ 7] /home/calc/src/openmx3.9-test/source/openmx(+0x128fa1)[0x557d11a82fa1]
[fat2:142127] [ 8] /home/calc/src/openmx3.9-test/source/openmx(+0x66a24)[0x557d119c0a24]
[fat2:142127] [ 9] /home/calc/src/openmx3.9-test/source/openmx(+0x78f0)[0x557d119618f0]
[fat2:142127] [10] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb)[0x1554fa27209b]
[fat2:142127] [11] /home/calc/src/openmx3.9-test/source/openmx(+0x826a)[0x557d1196226a]
[fat2:142127] *** End of error message ***
[fat2:142131] *** Process received signal ***
[fat2:142131] Signal: Segmentation fault (11)
[fat2:142131] Signal code: Address not mapped (1)
[fat2:142131] Failing at address: 0x30
[fat2:142131] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x12730)[0x14f024d5f730]
[fat2:142131] [ 1] /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_mtl_ofi.so(ompi_mtl_ofi_progress_no_inline+0x152)[0x14f0182f8e52]
[fat2:142131] [ 2] /lib/x86_64-linux-gnu/libopen-pal.so.40(opal_progress+0x2c)[0x14f02284adec]
[fat2:142131] [ 3] /lib/x86_64-linux-gnu/libmpi.so.40(ompi_request_default_wait+0x11d)[0x14f02526693d]
[fat2:142131] [ 4] /lib/x86_64-linux-gnu/libmpi.so.40(ompi_coll_base_barrier_intra_bruck+0xa4)[0x14f0252c3af4]
[fat2:142131] [ 5] /lib/x86_64-linux-gnu/libmpi.so.40(MPI_Barrier+0xa8)[0x14f02527f9d8]
[fat2:142131] [ 6] /home/calc/src/openmx3.9-test/source/openmx(+0x125bb9)[0x55aadc19cbb9]
[fat2:142131] [ 7] /home/calc/src/openmx3.9-test/source/openmx(+0x128fa1)[0x55aadc19ffa1]
[fat2:142131] [ 8] /home/calc/src/openmx3.9-test/source/openmx(+0x66a24)[0x55aadc0dda24]
[fat2:142131] [ 9] /home/calc/src/openmx3.9-test/source/openmx(+0x78f0)[0x55aadc07e8f0]
[fat2:142131] [10] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb)[0x14f024bb009b]
[fat2:142131] [11] /home/calc/src/openmx3.9-test/source/openmx(+0x826a)[0x55aadc07f26a]
[fat2:142131] *** End of error message ***
slurmstepd-fat2: error: *** STEP 49740263.0 ON fat2 CANCELLED AT 2020-10-08T16:02:35 ***
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
slurmstepd-fat2: error: *** JOB 49740263 ON fat2 CANCELLED AT 2020-10-08T16:02:35 ***
slurmstepd-fat2: error: Detected 1 oom-kill event(s) in step 49740263.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.18 ) |
- Date: 2020/10/12 01:03
- Name: Naoya Yamaguchi
- Dear Eike-san,
>srun: error: fat2: task 16: Out Of Memory

You might solve it by allocating a sufficiently large amount of memory to the job, e.g. as sketched below.
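For instance (a sketch only; these are standard Slurm batch options, but the appropriate values and whether they are honoured depend on your cluster's configuration):

#SBATCH --mem=0            # request all available memory on each node
# or set an explicit per-task limit for the cgroup instead, e.g.:
##SBATCH --mem-per-cpu=8G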
Regards, Naoya Yamaguchi
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.19 ) |
- Date: 2020/10/12 16:54
- Name: Eike F. Schwier <schwier@physik.uni-wuerzburg.de>
- Dear Yamaguchi-san,
thank you for your advice. Unfortunately, I do not have access to more than 2 TB of memory on the cluster. I also thought that memory was not the issue, as the analysis of memory usage (via memory.usage.fileout on) showed only a few GB per node. Would you mind sharing the compiler options and linked libraries you used for your calculation, which did not crash?
best regards, Eike
|
Re: Unfolding cannot assign atoms in supercell-slab calculations ( No.20 ) |
- Date: 2020/10/12 17:27
- Name: Naoya Yamaguchi
- Dear Eike-san,
>Unfortunately, I do not have access to more than 2 TB of memory on the cluster. I also thought that memory was not the issue, as the analysis of memory usage (via memory.usage.fileout on) showed only a few GB per node.
If your calculation is not large (I don't know exactly what you calculated), there is some problem with the OpenMX build. I suspect that the dependency relations between some important files, such as libraries, are broken, as shown in http://www.openmx-square.org/forum/patio.cgi?mode=view&no=2560 . If so, the simplest solution is a clean install of Debian. A quick consistency check is sketched below.
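As a quick check (my suggestion, not from the linked thread), you can inspect which shared libraries the openmx binary actually resolves:

ldd ./openmx | grep -E 'mpi|lapack|scalapack|fftw'

If any entry reports "not found" or points to an unexpected location, the build environment is inconsistent.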
>Would you mind sharing the compiler options and linked libraries you used for your calculation, which did not crash?
For AMD EPYC CPUs, with gcc 9.3.0 and Intel MPI and MKL:

CC = /opt/intel/impi/2019.7.217/intel64/bin/mpicc -Dkcomp -Ofast -ffast-math -march=znver2 -mfma -fomit-frame-pointer -fopenmp -I${MKLROOT}/include/fftw
FC = /opt/intel/impi/2019.7.217/intel64/bin/mpif90 -Dkcomp -Ofast -ffast-math -march=znver2 -mfma -fomit-frame-pointer -fopenmp -I${MKLROOT}/include/fftw
LIB= -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -lmkl_gf_lp64 -lmkl_gnu_thread -lmkl_core -lgfortran -lpthread
Regards, Naoya Yamaguchi
|