/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/autograd/graph.py:824: UserWarning: Error detected in ReluBackward0. Traceback of forward call that caused the error: File "/data/qshenaf/remote_pc/LLM4Converter/bugs/0309/torch.nn.functional.relu_.py", line 12, in fo...
🐛 Describe the bug Calling loss.backward() after passing a nested_tensor through nn.ReLU or nn.GELU raises a NotImplementedError.

import torch
import torch.nn as nn

L, E = (1, 2), 3
x = torch.nested.nested_tensor([
    torch.rand(L[0], E), t...
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [3, 1280, 28, 28]], which is output 0 of LeakyReluBackward1, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the oper...
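This error can be reproduced with a minimal sketch (a hypothetical example, not the original poster's code): relu's backward pass reuses the op's saved output, so mutating that output in place bumps its version counter and invalidates the saved value. Wrapping the run in torch.autograd.set_detect_anomaly(True), as the hint suggests, makes autograd also print the forward-pass traceback of the op that produced the modified tensor.

```python
import torch

# Hypothetical minimal repro of the inplace-modification RuntimeError.
# relu's backward uses the op's *output*, so editing it in place
# changes its version counter and breaks gradient computation.
x = torch.randn(4, requires_grad=True)
y = torch.relu(x)      # ReluBackward0 saves y for the backward pass
y += 1                 # in-place write: y's version no longer matches

try:
    y.sum().backward()
except RuntimeError as e:
    print(e)           # "...modified by an inplace operation..."

# As the hint suggests, anomaly detection pinpoints the forward op:
with torch.autograd.set_detect_anomaly(True):
    x = torch.randn(4, requires_grad=True)
    y = torch.relu(x)
    y += 1
    try:
        y.sum().backward()  # a warning now shows the forward traceback
    except RuntimeError:
        pass
```

The fix is usually to drop the in-place write (here, `y = y + 1`) or to clone the tensor before mutating it, so the value autograd saved stays intact.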
🐛 Describe the bug Invoking nn.Dropout(..., inplace=True) makes training crash: one of the variables needed for gradient computation has been modified by an inplace operation: Tensor [], which is output 0 of ReluBackward0, is a...
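A sketch of how Dropout with inplace=True can trigger this failure (a hypothetical minimal example, assuming the dropout is applied to the saved output of a preceding relu):

```python
import torch
import torch.nn as nn

x = torch.randn(3, requires_grad=True)
y = torch.relu(x)                       # autograd saves y for ReluBackward0
drop = nn.Dropout(p=0.5, inplace=True)  # modules default to training mode
z = drop(y)                             # overwrites y in place

try:
    z.sum().backward()
except RuntimeError as e:
    # "...output 0 of ReluBackward0 ... modified by an inplace operation"
    print(e)
```

Using the default inplace=False avoids the crash, since dropout then writes its result to a fresh tensor instead of mutating the one autograd saved.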
Grep for test_compile_backward_nn_functional_relu_cuda_float32. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. Sample error message: Traceback (most recent call last): File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch...