Patch 3.7.7 to OpenMX Ver. 3.7
 Date: 2014/01/31 17:40
 Name: T. Ozaki
 
Dear all, 
 A patch 3.7.7 to OpenMX Ver. 3.7 is released at
 http://www.openmx-square.org/download.html
 The purpose of the patch and how to apply it can be found below.
 Thank you very much for your cooperation in advance.
 
 Best regards,
 
 Taisuke Ozaki
 
 
 ------------------------------------------------------
 Content of README.txt
 
 
 ***** How to apply the patch3.7.7: *****
 
 cp ./patch3.7.7.tar.gz openmx3.7/source
 cd openmx3.7/source
 tar zxvf patch3.7.7.tar.gz
 make install
 
 ***** patch3.7.7.tar.gz *****
 contains
 
 Band_DFT_kpath.c
 Band_DFT_MO.c
 EigenBand_lapack.c
 Eigen_lapack2.c
 Eigen_lapack.c
 Eigen_PHH.c
 Eigen_PReHH.c
 Force.c
 lapack_dstedc1.c
 lapack_dstedc2.c
 lapack_dstedc3.c
 lapack_dstegr1.c
 lapack_dstegr2.c
 lapack_dstevx1.c
 lapack_dstevx2.c
 lapack_dstevx3.c
 lapack_dstevx4.c
 lapack_dstevx5.c
 neb.c
 RestartFileDFT.c
 Set_Allocate_Atom2CPU.c
 Set_Orbitals_Grid.c
 openmx_common.h
 
 ***** purpose of patch3.7.7.tar.gz *****
 
 Related to neb.c:
 If the OpenMP parallelism is disabled by adding -Dnoomp as a compiler
 option for CC and FC in the makefile, the patch should be applied.
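 For reference, the -Dnoomp option would appear in the makefile roughly as
 follows. This is an illustrative sketch only; the actual compilers, paths,
 and optimization flags depend on your environment and on the makefile
 shipped with the OpenMX source.

 # Illustrative sketch, not the shipped OpenMX makefile.
 # -Dnoomp disables the OpenMP parallelism in the compiled code.
 CC  = mpicc  -O3 -Dnoomp -I/usr/local/include
 FC  = mpif90 -O3 -Dnoomp
 LIB = -L/usr/local/lib -llapack -lblas -lgfortran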
 
 Related to Band_DFT_kpath.c, Band_DFT_MO.c, Force.c, and Set_Orbitals_Grid.c:
 Variables were not properly assigned for the OpenMP parallelization
 in those routines, which may cause serious problems.
 The problem is resolved by applying the patch.
 Also, there was a serious bug in Band_DFT_MO.c that caused incorrect output
 of the LCAO coefficients in *.out. The problem is resolved by applying the patch.
 
 Related to RestartFileDFT.c:
 A deadlock sometimes seems to occur in MPI calculations when BullxMPI
 is used as the MPI library. The problem is resolved by applying the patch.
 
 Related to EigenBand_lapack.c:
 In some cases, erratic noise may appear in the band dispersion.
 The problem is resolved by applying the patch.
 
 Related to Set_Allocate_Atom2CPU.c:
 When two or more MPI processes are used for a system containing only
 a single atom, the calculation may stop due to a segmentation fault.
 The problem is resolved by applying the patch.
 
 Related to Eigen_lapack2.c, Eigen_lapack.c, Eigen_PHH.c, Eigen_PReHH.c, lapack_dstedc1.c,
 lapack_dstedc2.c, lapack_dstedc3.c, lapack_dstegr1.c, lapack_dstegr2.c,
 lapack_dstevx1.c, lapack_dstevx2.c, lapack_dstevx3.c, lapack_dstevx4.c,
 lapack_dstevx5.c, and openmx_common.h:
 An absolute error tolerance for the LAPACK routines is set.
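 As background (a hedged sketch, not code from the patch itself): LAPACK
 expert drivers such as dstevx and dstegr accept an absolute tolerance
 argument ABSTOL, and the LAPACK documentation recommends setting it to
 twice the safe minimum, 2*DLAMCH('S'), for the most accurate eigenvalues.
 On IEEE-754 systems, DBL_MIN from <float.h> approximates DLAMCH('S'):

 ```c
 #include <stdio.h>
 #include <float.h>

 int main(void)
 {
     /* LAPACK's dstevx/dstegr take an absolute tolerance ABSTOL.
        The documented recommendation for maximum accuracy is
        ABSTOL = 2 * DLAMCH('S'); DBL_MIN approximates DLAMCH('S')
        (the safe minimum) for IEEE-754 doubles. */
     double abstol = 2.0 * DBL_MIN;

     printf("abstol is %s\n", (abstol > 0.0) ? "positive" : "not positive");
     return 0;
 }
 ```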