(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, std::optional<at::Tensor> const&, double&&, bool&&, std::optional<at::Tensor> const&, std::optional<double>&&, bool&&) /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/...
optional<double>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:714
#12 0x7f6461a4cc83 in c10::TypedOperatorHandle<std::tuple<at::Tensor, at::Tensor> (at::Tensor const&, at::Tensor const&, at::Tensor const&, double, bool, std::optional<at::Tensor> const&, ...
Floating point exception (core dumped)
Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not...
Tensors and Dynamic neural networks in Python with strong GPU acceleration - Floating point exception (core dumped) in `torch.ao.nn.quantized.ConvTranspose1d\ConvTranspose2d\ConvTranspose3d` when stride=0 · pytorch/pytorch@d1bb8e8
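The issue title above names the trigger (stride=0 in the quantized transposed-convolution modules); a hypothetical repro sketch of that kind of call is below, assuming the zero stride is what the quantized backend ends up dividing by when the module is built or run:

import torch

# Hypothetical repro sketch: a quantized transposed convolution with stride=0,
# the configuration the issue title identifies as the trigger of the crash.
m = torch.ao.nn.quantized.ConvTranspose2d(
    in_channels=1, out_channels=1, kernel_size=3, stride=0)

# Quantized modules expect a quantized input tensor.
x = torch.quantize_per_tensor(
    torch.randn(1, 1, 8, 8), scale=0.1, zero_point=0, dtype=torch.quint8)

y = m(x)  # expected to abort with "Floating point exception (core dumped)"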
Describe the bug The bug occurs in this step: pred_onnx = session.run([hm, hmp, hw, reg], {"input.1": img_data}). The error is "Floating point exception (core dumped)". Urgency The deadline is the day after tomorrow. System information OS Platfo...
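For reference, the onnxruntime call in that snippet has the shape sketched below; checking the model's declared input metadata before run() is a common first step when run() dies with a process-level signal. The model path, input shape, and output names here are placeholders loosely taken from the snippet:

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")  # placeholder path

# Inspect what the model actually expects before feeding img_data.
inp = session.get_inputs()[0]
print(inp.name, inp.shape, inp.type)

img_data = np.zeros((1, 3, 512, 512), dtype=np.float32)  # placeholder shape
# Output names assumed as strings; in the report they are variables holding the names.
outputs = session.run(["hm", "hmp", "hw", "reg"], {"input.1": img_data})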
I have tested my int8 model using the Benchmark C++ Tool and still get the 'Floating point exception (core dumped)'; the full output is below:
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1 ...
import torch
x = torch.tensor([-2147483648], dtype=torch.int32)
y = torch.tensor([-1], dtype=torch.int32)
torch.div(x, y, rounding_mode='trunc')
When x=-2147483648 and y=-1, setting the rounding_mode to trunc or floor will lead to a floating point exception. Instead, when no...
🐛 Describe the bug When the divisor is -1 and the input is a large negative integer, torch.floor_divide will throw a floating point exception.
import torch
input = torch.tensor([[-9223372036854775808]])
other = torch.tensor([-1])
torch.floor...
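Both of the reports above hit the same underlying cause: dividing the most negative representable integer by -1 has no representable result, and on x86 that integer overflow is reported as SIGFPE before PyTorch can raise a Python error. A user-side guard sketch (not the upstream fix) that turns the crash into a catchable exception:

import torch

def checked_trunc_div(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # The only integer pair that overflows truncating division is
    # (minimum representable value, -1); screen for it up front.
    if not a.is_floating_point() and not b.is_floating_point():
        int_min = torch.iinfo(a.dtype).min
        if bool(((a == int_min) & (b == -1)).any()):
            raise ValueError("integer division overflow: min_value / -1")
    return torch.div(a, b, rounding_mode='trunc')

x = torch.tensor([-2147483648], dtype=torch.int32)
y = torch.tensor([-1], dtype=torch.int32)
checked_trunc_div(x, y)  # raises ValueError instead of killing the process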
🐛 Describe the bug When I try to run the following simple piece of code:
import numpy as np
import torch
np.random.seed(42)
x = torch.from_numpy(np.random.rand(100)).float()
print(x)
exp_x = torch.exp(x)
print(exp_x)
I get a floating poi...
Tiny refactoring to use is_floating_point() directly to check whether the tensor has a floating-point type. It is unclear whether there was a specific reason we did the check this way, and what the preferred idiom is. This came up on another PR #3238, so it's worth clarifying.
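For context, a small illustration of the idiom the refactoring moves to, compared with an explicit dtype check (the dtype tuple here is just an example of the pattern being replaced):

import torch

t = torch.randn(3)

# Explicit dtype comparison, the kind of check the refactoring replaces.
is_float_old = t.dtype in (torch.float16, torch.float32, torch.float64)

# Preferred: ask the tensor directly; this also covers dtypes such as
# bfloat16 that ad-hoc dtype lists tend to miss.
is_float_new = t.is_floating_point()

assert is_float_old == is_float_new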