I have a 100 Gb/s EDR two-port card (ConnectX-5) in a CentOS system. I have installed the “InfiniBand Support” group package and have opensm running in the fabric. The driver creates four interfaces, ib[0-3], where ib0 and ib2 are i…
Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5]

Thanks!
Eric
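For reference, this is roughly what I run to inspect the card and the interfaces the driver creates (lspci, infiniband-diags and iproute2 commands; ibdev2netdev ships with Mellanox OFED and may not exist with the inbox CentOS driver, so treat that last one as optional):

# PCI view of the adapter - one MT27800 entry per PCI function (typically one per port)
lspci | grep -i infiniband

# Per-port state, link layer (InfiniBand vs Ethernet) and rate
ibstat

# Kernel netdev view of the ib interfaces the driver created
ip -brief link show | grep '^ib'

# Map RDMA devices (mlx5_0, mlx5_1, ...) to ib0..ib3 netdevs
# (Mellanox OFED tool; may be missing with the inbox driver)
ibdev2netdev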