 #include <c10/cuda/CUDACachingAllocator.h>
+#include <c10/cuda/CUDAFunctions.h>
 #include <c10/util/irange.h>

 #if AT_CUDNN_ENABLED()

@@ -223,7 +224,7 @@ const at::cuda::NVRTC& CUDAHooks::nvrtc() const {

 int64_t cur...
Assign User on Comment: [logging] Set compile_id in the CachingAutotuner during compilation so we have it for dynamo_timed ...
make `_inductor.config.rocm.supported_arch` set order deterministic for caching · pytorch/pytorch@42f93e2
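The commit title above points at a common caching pitfall: a Python set has no stable iteration order across processes, so serializing a set-valued config field directly can yield different cache keys for identical configurations. Below is a minimal, self-contained sketch of the idea; the names (`supported_arch`, the key functions) are illustrative placeholders, not Inductor's actual internals.

import hashlib

# Hypothetical set-valued config field; stands in for something like a
# supported-architecture set. Not a reproduction of Inductor's config.
supported_arch = {"gfx90a", "gfx942", "gfx908"}

def cache_key_nondeterministic(cfg):
    # Set iteration order depends on insertion history and per-process
    # hash randomization, so this key is not stable across runs.
    return hashlib.sha256(repr(list(cfg)).encode()).hexdigest()

def cache_key_deterministic(cfg):
    # Sorting first gives every process the same serialization, so
    # identical configs always map to the same cache entry.
    return hashlib.sha256(repr(sorted(cfg)).encode()).hexdigest()

print(cache_key_deterministic(supported_arch))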
1 addition, 2 deletions

@@ -15,7 +15,6 @@
 #include <ATen/native/cuda/CuFFTPlanCache.h>
 #include <c10/util/Exception.h>
 #include <c10/cuda/CUDACachingAllocator.h>
-#include...
Update on "[logging] Set compile_id in the CachingAutotuner during co… · pytorch/pytorch@b473991
Failed to find test times file `/home/runner/work/pytorch/pytorch/.additional_ci_files/test-class-times.json`. Using round robin sharding.
Lint: The ubuntu-20.04 runner image will be end of life on 2025-04-01. Jobs using the ubuntu-20.04 image should be updated to ubuntu-22.04 or ubuntu...
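The log line above describes the fallback the CI takes when no per-test timing data is available: rather than balancing shards by recorded durations, tests are dealt out round-robin. A rough sketch of that fallback under assumed names (this is not the actual CI sharding script):

# Hypothetical sketch: distribute tests across shards.
# With timing data, a greedy "longest test to the least-loaded shard" split
# balances wall time; without it, round-robin is the simple fallback.
def shard_tests(tests, num_shards, times=None):
    shards = [[] for _ in range(num_shards)]
    if times:
        loads = [0.0] * num_shards
        for test in sorted(tests, key=lambda t: times.get(t, 0.0), reverse=True):
            i = loads.index(min(loads))
            shards[i].append(test)
            loads[i] += times.get(test, 0.0)
    else:
        # Round-robin sharding: test k goes to shard k % num_shards.
        for k, test in enumerate(tests):
            shards[k % num_shards].append(test)
    return shards

print(shard_tests(["test_a", "test_b", "test_c", "test_d"], 2))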
[logging] Set compile_id in the CachingAutotuner during compilation so we have it for dynamo_timed logging, run #10422 (job: triage). Triggered via pull request on March 11, 2025 at 20:50: pytorchmergebot reopened #148693 (gh/masnesral/179/...).
[logging] Set compile_id in the CachingAutotuner during compilation so we have it for dynamo_timed logging, run #213444 (job: bc_linter). Triggered via pull request on March 11, 2025 at 20:50: pytorchmergebot reopened #148693 (gh/masnesral/...).
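As a rough illustration of what the PR title above describes, the compile id can be recorded on the autotuner object at compilation time so it is still available when timing events are logged later. The class and function names below are placeholders, not PyTorch's actual CachingAutotuner or dynamo_timed implementation.

import contextlib
import time

# Placeholder autotuner: the only point is that the id assigned during
# compilation rides along on the object for any later logging.
class Autotuner:
    def __init__(self, kernel_source):
        self.kernel_source = kernel_source
        self.compile_id = None

    def compile(self, compile_id):
        # Stash the id at compilation time so later events (autotuning,
        # cache hits) can be attributed to the originating compilation.
        self.compile_id = compile_id

@contextlib.contextmanager
def timed(event_name, compile_id=None):
    # Minimal stand-in for a timing/logging context manager.
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print(f"{event_name} compile_id={compile_id} took {elapsed:.6f}s")

tuner = Autotuner("triton_kernel_src")
tuner.compile(compile_id="0/1")
with timed("benchmark_all_configs", compile_id=tuner.compile_id):
    sum(range(1000))  # stand-in for the actual autotuning work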