Make @support_torch_compile work for XLA backend. With the custom dispatcher, the overhead of dynamo guard evaluation is eliminated. For the TPU backend, each model has 2 FX graphs/dynamo bytecodes: during profiling ...
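A minimal sketch of what compiling for an XLA device looks like from the user side (this is not the vLLM @support_torch_compile implementation; it assumes torch_xla is installed and an XLA device such as a TPU is available):

```python
# Sketch: compile a model with the openxla dynamo backend on an XLA device.
# Assumes torch_xla is installed; device availability is an assumption.
import torch
import torch_xla.core.xla_model as xm

model = torch.nn.Linear(16, 16).to(xm.xla_device())
compiled = torch.compile(model, backend="openxla")

x = torch.randn(4, 16, device=xm.xla_device())
out = compiled(x)   # first call traces and compiles; later calls reuse the graph
xm.mark_step()      # flush any pending XLA computation
```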
🚀 The feature, motivation and pitch This RFC proposes an enhancement to torch.compile to improve its backend agnosticism. The goal is to enable a more seamless experience for users working with devices that may not be well-supported by t...
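For context, a short sketch of the existing extension point the RFC builds on: any callable taking a captured FX graph and example inputs can already be passed as `backend=`, which is how out-of-tree devices can plug into torch.compile today (the backend below is illustrative and just falls back to eager execution):

```python
# Sketch of a custom dynamo backend: inspect the FX graph, then run it eagerly.
from typing import List
import torch

def my_backend(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):
    gm.graph.print_tabular()   # inspect the captured FX graph
    return gm.forward          # no codegen: execute the graph eagerly

@torch.compile(backend=my_backend)
def fn(x):
    return torch.relu(x) + 1

fn(torch.randn(8))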
torch.compile raises torch._dynamo.exc.BackendCompilerFailed import torch torch.manual_seed(420) class Model(torch.nn.Module): def __init__(self): super().__init__() self.conv = torch.nn.Conv2d(3, 1, 1) def forward(self, x): h = self.conv(x) h = torch.mul(h, 3) a = torch.clamp_min(torch.clamp_max(h, 6.0)...
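The tail of that repro is truncated, so the following is a hypothetical completion (the clamp bounds, input shape, and the rest of forward() are guesses); it only illustrates the failure mode of catching BackendCompilerFailed around a compiled call:

```python
# Hypothetical completion of the truncated repro above; not the original issue's code.
import torch
import torch._dynamo

torch.manual_seed(420)

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 1, 1)

    def forward(self, x):
        h = self.conv(x)
        h = torch.mul(h, 3)
        # relu6-style clamp; the exact bounds in the original repro are truncated
        return torch.clamp_min(torch.clamp_max(h, 6.0), 0.0)

model = Model()
x = torch.randn(1, 3, 8, 8)
try:
    torch.compile(model)(x)
except torch._dynamo.exc.BackendCompilerFailed as e:
    print("backend compilation failed:", e)
```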
Tensors and Dynamic neural networks in Python with strong GPU acceleration - [cudagraph] torch.compile(backend="cudagraphs") + StableDiffusion2.1 doesn't work · pytorch/pytorch@d3a11a0
Tensors and Dynamic neural networks in Python with strong GPU acceleration - [cudagraph] torch.compile(backend="cudagraphs") + StableDiffusion2.1 doesn't work · pytorch/pytorch@f6838d5
backend eager takes 8 minutes to compile. TORCH_LOGS="+dynamo": 61MB unzipped: simple_train_logs.zip. repro: xmfan/Jamba@be6b5c1, add backend="eager". Versions: 2.3.0 (pinned by zeta). cc @ezyang @msaroufim @bdhirsh @anijain2305 @chauhang @voznesenskym @penguinwu @EikanWang @jgong5 ...
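A sketch of how this kind of measurement is typically gathered (the Jamba model itself is not reproduced; the toy model below is illustrative). With backend="eager" there is no code generation, so the first-call latency is dominated by Dynamo tracing, and TORCH_LOGS="+dynamo" in the environment dumps the tracing logs:

```python
# Run as: TORCH_LOGS="+dynamo" python repro.py
import time
import torch

model = torch.nn.Sequential(*[torch.nn.Linear(256, 256) for _ in range(8)])
compiled = torch.compile(model, backend="eager")   # no codegen, tracing cost only

x = torch.randn(4, 256)
t0 = time.time()
compiled(x)                                        # first call triggers compilation
print(f"first-call (compile) time: {time.time() - t0:.2f}s")
```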
🐛 Describe the bug When compiling torch.fill in a CUDA environment, the compiled function raises a BackendCompilerFailed error when the input is a uint tensor. It seems that this issue is caused by an invalid argument type when using Triton's ...
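A hedged sketch of the kind of repro described (the exact dtype, fill value, and shapes are not shown in the snippet, so uint8 and the constants below are assumptions); a CUDA device is required:

```python
# Sketch: compile torch.fill on an unsigned-integer CUDA tensor and catch the failure.
import torch
import torch._dynamo

def f(x):
    return torch.fill(x, 3)

x = torch.zeros(16, dtype=torch.uint8, device="cuda")
try:
    torch.compile(f)(x)
except torch._dynamo.exc.BackendCompilerFailed as e:
    print("inductor/triton compilation failed:", e)
```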
Similar to #10747, but applied specifically to the PT HPU lazy backend. While PyTorch for Gaudi has torch.compile support, it currently needs to be enabled explicitly, and the best performance is achieved with HPUGraphs instead. This patch disables torch.compile for PT lazy mode and HPUGraphs (HPU ex...
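A hypothetical sketch of the kind of guard such a patch adds (the function name is illustrative and this is not the actual vLLM/Gaudi code; the PT_HPU_LAZY_MODE flag is an assumption about the Gaudi stack): skip torch.compile when lazy mode is active, since HPUGraphs is the preferred execution path there.

```python
# Illustrative only: leave the model uncompiled when HPU lazy mode is in use.
import os
import torch

def maybe_compile(model):
    lazy_hpu = os.environ.get("PT_HPU_LAZY_MODE", "1") == "1"  # assumed env flag
    if lazy_hpu:
        return model              # HPUGraphs handles graph capture in lazy mode
    return torch.compile(model)
```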
🐛 Describe the bug My repro; note that it works fine without @torch._dynamo.optimize("eager"): import torch._dynamo import torch import torch.nn as nn class Model(nn.Module): export = False def __init__(self, linear): super().__init__() ...
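The repro is truncated, so the following is a hypothetical completion (the body after __init__ and the placement of the decorator are guesses); it only illustrates running such a module under torch._dynamo.optimize("eager"):

```python
# Hypothetical completion of the truncated repro above; not the original issue's code.
import torch
import torch._dynamo
import torch.nn as nn

class Model(nn.Module):
    export = False

    def __init__(self, linear):
        super().__init__()
        self.linear = linear

    def forward(self, x):
        return self.linear(x)

model = Model(nn.Linear(8, 8))

@torch._dynamo.optimize("eager")
def run(x):
    return model(x)

run(torch.randn(2, 8))
```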
Tensors and Dynamic neural networks in Python with strong GPU acceleration - Support static method of torchbind attributes in torch.compile with inductor backend · pytorch/pytorch@0030a68