🐛 Describe the bug
When running the code below, I get the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/workspaces/pytorch/grad/checkpoint.py", line 45, in main
    loss.backward()
  File "/opt/...
The user sees this warning because use_reentrant was not explicitly specified when calling torch.utils.checkpoint. Judging from the PyTorch documentation and the warning text, it is meant to remind users that the default value of use_reentrant may change in a future release, so it is best to set it explicitly to avoid potential problems. 4. Provide a solution: how to use the use_reentrant parameter correctly, or take other measures to avoid this warning. To avoid this warning and...
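For reference, a minimal sketch of silencing the warning by passing use_reentrant explicitly; the two-layer model here is purely illustrative, not from the original report:

```python
import torch
from torch.utils.checkpoint import checkpoint

# Illustrative stand-in model; any module works with checkpoint().
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 16),
)

x = torch.randn(4, 16, requires_grad=True)

# Passing use_reentrant explicitly silences the FutureWarning;
# use_reentrant=False selects the newer, recommended implementation.
out = checkpoint(model, x, use_reentrant=False)
out.sum().backward()
```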
CheckpointError with checkpoint(..., use_reentrant=False) & autocast() · pytorch/pytorch@920e436
Set use_reentrant=False in checkpoint · octree-nn/ognn-pytorch@8e192c9
Referenced a prior issue where gradient_checkpointing_kwargs was initialized as an empty dict in the Trainer. This led to the gradient_checkpointing_enable function not handling it as None, causing a 'use_reentrant' warning. The issue was resolved by removing the unnecessary initialization, ensuring ...
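On the user side, the warning can also be avoided by passing the kwargs through explicitly. A hedged sketch, assuming a transformers version (>= 4.35) where gradient_checkpointing_enable accepts gradient_checkpointing_kwargs; "gpt2" is just an illustrative checkpoint:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

# Passing the kwargs explicitly (rather than leaving them unset or as an
# empty dict) forwards use_reentrant=False down to torch.utils.checkpoint
# and silences the FutureWarning.
model.gradient_checkpointing_enable(
    gradient_checkpointing_kwargs={"use_reentrant": False}
)
```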
According to recent PyTorch versions, you now need to set use_reentrant explicitly, as the default will change from use_reentrant=True to use_reentrant=False in the near future. From transformers.models.llama.modeling_llama:

def forward...
    layer_outputs = torch.utils.checkpoint.checkpoint(
        create_custom_forward(decoder_layer...
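A minimal sketch loosely following that pattern; make_custom_forward and the Linear layer below are hypothetical stand-ins for create_custom_forward and the real decoder layer in modeling_llama:

```python
import torch

def make_custom_forward(module):
    # Mirrors the create_custom_forward wrapper pattern used above.
    def custom_forward(*inputs):
        return module(*inputs)
    return custom_forward

decoder_layer = torch.nn.Linear(8, 8)  # stand-in for a real decoder layer
hidden_states = torch.randn(2, 8, requires_grad=True)

layer_outputs = torch.utils.checkpoint.checkpoint(
    make_custom_forward(decoder_layer),
    hidden_states,
    use_reentrant=False,  # explicit, as the warning asks
)
layer_outputs.sum().backward()
```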