I found that in the Hg folder, typing vasp in the terminal runs it and output appears on screen, but after executing mpirun -np 4 vasp there is no...
or directory /var/spool/slurmd/job00713/slurm_script: line 10: mpirun: command not found ...
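An "mpirun: command not found" inside a Slurm batch script usually means the MPI environment was never loaded in the batch job (login-shell modules are not inherited). A minimal sketch of a job script that loads it first; the module name and env-script path are assumptions about your cluster, check "module avail" locally:

```shell
#!/bin/bash
#SBATCH --job-name=vasp
#SBATCH --nodes=1
#SBATCH --ntasks=4

# Load the MPI (and compiler) environment inside the batch job itself;
# "intel-mpi" is an assumed module name for illustration.
module load intel-mpi
# Alternatively, source the vendor environment script directly, e.g.:
# source /opt/intel/oneapi/setvars.sh

mpirun -np "$SLURM_NTASKS" vasp_std
```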
If the output contains no "not found", the linking succeeded. 2. Testing VASP. First, the input files INCAR, KPOINTS, POSCAR, POTCAR: https://pan.baidu.com/s/1EkjuPM3c1Pj0IoqNIJ8vOA?pwd=1234 Put these files into a folder (any location is fine), enter that folder, and run the command below, where the number 4 is your machine's CPU core count. $ mpirun -np 4 vasp_std If there are no errors...
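The "no 'not found' in the output" check above is presumably the standard ldd shared-library check. A sketch of that check, run here against /bin/ls as a stand-in, since the path to your vasp_std binary is an assumption:

```shell
#!/bin/sh
# Verify that every shared library a binary needs can be resolved.
# /bin/ls stands in for your vasp_std binary (whose install path we don't know).
BIN=/bin/ls
if ldd "$BIN" | grep -q "not found"; then
    echo "missing libraries"
else
    echo "all libraries resolved"
fi
```

Run the same check on the real binary (e.g. `ldd ~/vasp/bin/vasp_std`) after building or relinking.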
Change the three occurrences of "mpirun -np 4" in the testsuite/runtest file to "mpirun -np 1 --allow-run-as-root"...
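Rather than editing the three occurrences by hand, a single sed substitution covers them all. A sketch using a throwaway dummy file in place of the real testsuite/runtest (which this snippet would otherwise overwrite):

```shell
#!/bin/sh
# Create a dummy runtest with three "mpirun -np 4" occurrences to demonstrate;
# on a real tree you would skip these two lines and run sed on the real file.
mkdir -p testsuite
printf 'mpirun -np 4 a\nmpirun -np 4 b\nmpirun -np 4 c\n' > testsuite/runtest

# Replace every occurrence in place.
sed -i 's/mpirun -np 4/mpirun -np 1 --allow-run-as-root/g' testsuite/runtest
grep -c 'allow-run-as-root' testsuite/runtest
```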
mpirun has exited due to process rank 1 with PID 22309 on node gg1423 exiting without calling ...
k-point 32 : -0.1250 -0.1250 0.3750 plane waves: 5188 maximum and minimum number of plane-...
log.debug('Found {0} PROCS'.format(NPROCS))
if NPROCS == 1:
    # no question. running in serial.
    vaspcmd = VASPRC['vasp.executable.serial']
    log.debug('NPROCS = 1. running in serial')
    exitcode = os.system(vaspcmd)
    return exitcode
else:
    # vanilla MPI run. multiprocessing does not work ...
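The branch in the snippet above (one core: run the serial binary directly; more cores: wrap it in mpirun) can be sketched in shell; the executable name vasp_std is an assumption:

```shell
#!/bin/sh
# One core -> launch the serial executable directly;
# several cores -> prefix it with mpirun. (vasp_std is an assumed binary name.)
NPROCS=$(nproc)
if [ "$NPROCS" -eq 1 ]; then
    VASPCMD="vasp_std"
else
    VASPCMD="mpirun -np $NPROCS vasp_std"
fi
echo "$VASPCMD"
# a real launcher script would now execute: $VASPCMD
```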
but when I use the mpirun command, vasp returns the following message:
running on 2 nodes
distr: one band on 2 nodes, 1 groups
vasp.4.6.27 26Jun05 complex
POSCAR found : 1 types and 5 ions
LDA part: xc-table for Ceperly-Alder, standard interpolation
/wi/vasp/run/vasp:...
/hpc/home/zqs/.lsbatch/1582014240.2580.shell: line 14: $'\r': command not found
/hpc/home/zqs/.lsbatch/1582014240.2580.shell: line 15: /opt/intel/compilers_and_libraries_2017.4.196/linux/mpi/intel64/bin/mpirun: No such file or directory
+ rm -f /hpc/home/zqs/ltf/Ni/Ni-N4/.hostfile....
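The $'\r': command not found error means the job script was saved with Windows (CRLF) line endings; stripping the carriage returns fixes it. A sketch using sed on a throwaway demo file (use dos2unix instead if it is installed):

```shell
#!/bin/sh
# Reproduce the symptom: a script saved with Windows line endings.
printf 'echo hello\r\n' > job.sh        # throwaway demo file

# Strip the trailing carriage return from every line, in place.
sed -i 's/\r$//' job.sh

sh job.sh                               # now runs cleanly
```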