In general, it looks like setting HIP_VISIBLE_DEVICES=x leads to GPU (x+2)%4 being used:
HIP_VISIBLE_DEVICES=0 -> GPU2
HIP_VISIBLE_DEVICES=1 -> GPU3
HIP_VISIBLE_DEVICES=2 -> GPU0
HIP_VISIBLE_DEVICES=3 -> GPU1
Versions
Used the rocm/pytorch:latest docker image (image id: b80124b96134) from ...
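The observed (not documented) off-by-two mapping above can be written as a tiny helper; `visible_to_physical` is a hypothetical name for illustration only, and the 4-GPU count is an assumption from this report:

```python
# Hypothetical helper illustrating the mapping observed in this report:
# HIP_VISIBLE_DEVICES=x appears to select physical GPU (x + 2) % 4 on this machine.
def visible_to_physical(x: int, num_gpus: int = 4) -> int:
    """Map a HIP_VISIBLE_DEVICES index to the physical GPU it lands on."""
    return (x + 2) % num_gpus

# Reproduces the table above: 0 -> 2, 1 -> 3, 2 -> 0, 3 -> 1
print([visible_to_physical(x) for x in range(4)])
```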
N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz
Versions of relevant libraries: ...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: ...
I have seen make_fx (https://github.com/pytorch/pytorch/blob/main/torch/fx/experimental/proxy_tensor.py#L1395), and in some tests it wasn't as robust as torch.compile, which is why I went that route. Looking into the code, my understanding is that torch.export still uses this experimental make_fx ...
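For context, a minimal make_fx call looks roughly like this; a sketch assuming a recent PyTorch, with the toy function `f` made up purely for illustration:

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx

# Toy function (made up for illustration) to be traced into an FX graph.
def f(x):
    return x.sin() + x.cos()

# make_fx traces f with a sample input and returns a GraphModule
# whose graph records the ATen ops that were executed.
gm = make_fx(f)(torch.randn(3))
print(gm.graph)  # the captured ops (sin, cos, add)
```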
PyTorch - ✅ Yes, Initial Native Apple Silicon Support for CPU only - GPU Acceleration Status Report Update
Qt Creator - ✅ Yes, Native Apple Silicon Support as of 6.0.0 - Official Article Release Notes Verification
Qt Framework - ✅ Yes, Native Apple Silicon Support as of 6.2.0 - Of...
Docs: https://lightning.ai/docs/pytorch/stable/accelerators/tpu.html
We won't be able to support XLA+DDP like you requested.
awaelchli changed the title on Jun 22, 2024: DDPStrategy fails when using accelera...
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python...
🐛 Describe the bug
I'm trying to build PyTorch on an OrangePi PC (H3 Quad-core Cortex-A7), but for some reason I get Error: unknown architecture `armv7-a;'. Is that semicolon in the wrong place? The actual -march is: $:> gcc -c -Q -march=nati...
System Info
Collecting environment information...
PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Fedora Linux 40 (Workstation Edition) (x86_64)
GCC version: (GCC) 14...
It's not a big help, but after compiling PyTorch 1.6 from source, this function works for me in libtorch on device. So it looks like the pip or conda installed versions don't have multi-threading support, and we have to compile from source? Contributor ilia-cher commented Aug 28, 2020...
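One quick way to check whether a given PyTorch build has intra-op parallelism is to inspect its threading configuration; a hedged sketch, since the exact output depends on how your build was compiled:

```python
import torch

# Number of threads used for intra-op parallelism; a value of 1 can
# indicate a build without multi-threading support (or a 1-core machine).
print(torch.get_num_threads())

# Reports which parallel backend (e.g. OpenMP) the build was compiled with.
print(torch.__config__.parallel_info())
```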