My job runs fine when "export I_MPI_ASYNC_PROGRESS=0" is set. I am attaching the error and output files of the job. Following is my job script:

#!/bin/bash
#SBATCH -N 6
#SBATCH --ntasks-per-node=48
#SBATCH --exclusive
#SBATCH --time=00:30:00
#SBATCH --job-name=ex2
#SBATCH --error=ex2.e%J...
On a node with a single E5-2667v3 (8 physical cores) and a single ConnectX-4 board (81:00.0 Infiniband controller: Mellanox Technologies MT27700 Family [ConnectX-4]), running CentOS 7 with UCX version 1.6.0, I'm experiencing unexpected program behavior with I_MPI_ASYNC_PROGRESS=1 with ...
    MPI_Wait(request, &status);
  }
  //---
  MPI_Finalize();
  return 0;
}
---
Running it like:

# cat hosts
n54229
n54230

# mpiexec.hydra -f hosts -np 4 -ppn 2 --errfile-pattern=err.%r --outfile-pattern=out.%r ./simple-iallreduce
=> works ok

# I_MPI_ASYNC_PROGRESS=1 mpiexec.hydra -...
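For reference, a minimal self-contained version of such a nonblocking allreduce test might look like the sketch below. Only the tail of the original program is quoted above, so the buffer contents, iteration count, and variable names here are assumptions.

/* simple-iallreduce.c -- hedged sketch of a minimal MPI_Iallreduce test;
 * only the last few lines of the original program are quoted above, so the
 * surrounding structure (data, iteration count) is assumed here. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value, sum;
    MPI_Request request;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < 10; i++) {          /* assumed iteration count */
        value = rank + i;
        /* start a nonblocking allreduce, then wait for it to complete */
        MPI_Iallreduce(&value, &sum, 1, MPI_INT, MPI_SUM,
                       MPI_COMM_WORLD, &request);
        MPI_Wait(&request, &status);
    }

    if (rank == 0)
        printf("final sum = %d\n", sum);

    MPI_Finalize();
    return 0;
}

Building it with e.g. "mpiicc simple-iallreduce.c -o simple-iallreduce" and launching it with the two mpiexec.hydra lines above should exercise the same code path with and without the async progress threads.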
I am trying to run the Sandia MPI overlap benchmark (https://github.com/sandialabs/SMB/tree/599675fe131baca55329a530b1d001add15bdbdb/src/mpi_overhead) with Intel MPI 2021.6.0.
It works with the default settings, but any MPI launch with I_MPI_ASYNC_PROGRESS enabled fails. The error message is:

pthread_setaffinity_np failed
Abort(566543) on node 2 (rank 2 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(239)....
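For context, the failing call reported in the stack is pthread_setaffinity_np, made during MPI_Init when I_MPI_ASYNC_PROGRESS is enabled, presumably to pin the async progress threads the library spawns. One common reason for that call to fail is that the requested core is not available to the process (for example, it lies outside the cpuset or core range the job was granted). The standalone sketch below is not part of the benchmark; the file name and CPU choices are mine, and it simply shows the two outcomes: pinning to an allowed CPU versus pinning to one outside the online range.

/* affinity-check.c -- standalone sketch (not from the benchmark) showing how
 * pthread_setaffinity_np can fail when the requested CPU is not available,
 * which is one plausible cause of the error above. Build with: cc -pthread affinity-check.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void try_pin(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    /* Pin the calling thread to a single CPU and report the result;
     * pthread_setaffinity_np returns an error number on failure. */
    int rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    printf("pin to CPU %d: %s\n", cpu, rc == 0 ? "ok" : strerror(rc));
}

int main(void)
{
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    printf("online CPUs: %ld\n", ncpus);

    try_pin(0);             /* normally succeeds on an unrestricted node      */
    try_pin((int)ncpus);    /* out of the online range -> typically EINVAL    */
    return 0;
}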