Python platform: Linux-5.15.136-tegra-aarch64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Orin (nvgpu)
Nvidia driver version: N/A
cuDNN version: Probably one of the following: /usr/lib/aar...
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
...
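The reports above are the kind of output printed by PyTorch's environment-collection script (`python -m torch.utils.collect_env`). A minimal sketch of gathering the same fields by hand, assuming only that PyTorch is installed (note that `torch.version.cuda` reports the CUDA version PyTorch was built against, which may differ from the installed CUDA toolkit):

```python
import os
import platform

import torch

# Print the same fields that appear in the reports above.
print(f"Python platform: {platform.platform()}")
print(f"Is CUDA available: {torch.cuda.is_available()}")
print(f"CUDA version (PyTorch build): {torch.version.cuda}")
print(f"CUDA_MODULE_LOADING set to: {os.environ.get('CUDA_MODULE_LOADING', 'N/A')}")

if torch.cuda.is_available():
    print("GPU models and configuration:")
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
    print(f"cuDNN version: {torch.backends.cudnn.version()}")
```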
Setting ds_accelerator to cuda (auto detect)
Generate Samples
WARNING: No training data specified
using world size: 1 and model-parallel size: 1
> using dynamic loss scaling
> initializing model parallel with size 1
> initializing model parallel cuda seeds on global rank 0, model parallel rank...
I tried `#define CUDA_MODULE_LOADING LAZY`, but that doesn't work; setting it as an environment variable in the system properties does.
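That is because CUDA_MODULE_LOADING is read by the CUDA driver from the process environment when the CUDA context is created, not from a compile-time macro. A minimal sketch of setting it from Python before CUDA is initialized (equivalent to exporting it in the shell or defining it in the system's environment-variable settings):

```python
import os

# Must be set before the first CUDA call creates a context;
# a #define in source code is never seen by the driver.
os.environ["CUDA_MODULE_LOADING"] = "LAZY"

import torch  # imported after the variable is set, to be safe

x = torch.zeros(1, device="cuda")  # first CUDA call picks up the lazy-loading setting
print("CUDA_MODULE_LOADING =", os.environ.get("CUDA_MODULE_LOADING"))
```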
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
GPU 4: NVIDIA GeForce RTX 3090
GPU...
(64-bit runtime)
Python platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
Nvidia driver version: 510.47.03
cuDNN...
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-16GB
GPU 1: Tesla V100-SXM2-16GB
GPU 2: Tesla V100-SXM2-16GB
GPU 3: Tesla V100-SXM2-16GB
GPU 4: Tesla V100-SXM2-16GB
GPU 5: Tesla V100...
(64-bit runtime)
Python platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Laptop GPU
Nvidia driver version: 535.183.01
cuDNN version...
(64-bit runtime)
Python platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP run...
My CUDA is 10.2 and YOLOv5 is v3.0. When I install torch 1.5.1 I get the error: AttributeError: module 'torch.nn' has no attribute 'Hardswish'. When I install torch 1.6.0 it instead reports: torch.nn.modules.module.ModuleAttributeError: 'BatchNorm2d' object has no attribute '_non_persisten...
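Both errors point to a torch/YOLOv5 version mismatch: nn.Hardswish and the module attribute '_non_persistent_buffers_set' were only added in torch 1.6, so an older torch (or a checkpoint pickled with an older torch) triggers one of them. A minimal sketch for checking the installed environment before running YOLOv5 v3.0 (the torch >= 1.6.0 requirement is taken from that release; verify against your copy of requirements.txt):

```python
import torch
import torch.nn as nn

print(f"torch {torch.__version__}, CUDA build {torch.version.cuda}")

# nn.Hardswish only exists in torch >= 1.6, which YOLOv5 v3.0 expects.
if not hasattr(nn, "Hardswish"):
    raise SystemExit("torch is too old for YOLOv5 v3.0 -- install torch >= 1.6.0")

# The _non_persistent_buffers_set attribute was also added in torch 1.6; the
# BatchNorm2d error usually means the .pt checkpoint was pickled with an older
# torch, so re-download or re-export the weights with a matching version.
bn = nn.BatchNorm2d(8)
print("has _non_persistent_buffers_set:", hasattr(bn, "_non_persistent_buffers_set"))
```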