When exporting a half-precision (fp16) model to ONNX, it creates an invalid ONNX file. This appears to be because of a node that remains in fp32 as a result of this line in `torch.nn.functional.interpolate`.

**To Reproduce (REQUIRED)**

Steps to reproduce the behavior: Open tutorial "export-onn...
```
torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::fft_fft' to ONNX opset version 12 is not supported.
```

**Versions**

The version output is as follows:

```
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_...
```
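The error above is an opset mismatch: ONNX only gained a DFT operator in opset 17, so the FFT symbolics cannot target opset 12. A hypothetical helper (not part of `torch.onnx`) illustrates the idea of picking a high enough `opset_version` up front; the table entries are assumptions based on when ONNX added the relevant operators:

```python
# Hypothetical helper: map a few ATen ops to the minimum ONNX opset
# believed able to represent them (ONNX added DFT in opset 17).
MIN_OPSET = {
    "aten::fft_fft": 17,
    "aten::fft_rfft": 17,
}


def suggested_opset(op_name: str, requested: int) -> int:
    """Return an opset at least as high as the op is assumed to require."""
    return max(requested, MIN_OPSET.get(op_name, requested))


print(suggested_opset("aten::fft_fft", 12))  # → 17
print(suggested_opset("aten::relu", 12))     # → 12
```

In practice this just means passing `opset_version=17` (or higher) to `torch.onnx.export` when the model contains FFT ops.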
```
ONNX: export success ✅ 2.3s, saved as yolov5s.onnx (28.0 MB)
Export complete (5.5s)
Results saved to /content/yolov5
Detect:      python detect.py --weights yolov5s.onnx
Validate:    python val.py --weights yolov5s.onnx
PyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'cus...
```
```
RuntimeError: Exporting the operator resolve_conj to ONNX opset version 12 is not supported.
```

**Describe the bug**

```
Traceback (most recent call last):
  File "convert_to_onnx.py", line 367, in <module>...
```
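`resolve_conj` typically enters the graph through complex tensors, e.g. the output of `torch.fft`. One common workaround sketch, assuming the complex values come from an FFT, is to switch to `torch.view_as_real` so everything downstream stays real-valued and no complex-only ops reach the exporter (the `Magnitude` module here is invented for illustration; exporting the FFT itself still needs opset 17 or later):

```python
import torch


class Magnitude(torch.nn.Module):
    """Compute |rfft(x)| without complex-only ops like resolve_conj."""

    def forward(self, x):
        spec = torch.fft.rfft(x)
        # view_as_real exposes (real, imag) as a trailing size-2 dim,
        # keeping the rest of the graph purely real-valued.
        ri = torch.view_as_real(spec)
        return ri.pow(2).sum(-1).sqrt()


x = torch.randn(4, 16)
assert torch.allclose(Magnitude()(x), torch.fft.rfft(x).abs(), atol=1e-5)
```

The assertion checks that the real-valued formulation matches `spec.abs()` numerically, so the substitution does not change model outputs.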
```
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Requirement already satisfied: onnx>=1.12.0 in /opt/conda/lib/python3.1...
```
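A minimal sketch of the virtual-environment workflow that pip's warning recommends (the `.venv` directory name is just a convention):

```shell
# Create an isolated environment instead of installing packages as root.
python3 -m venv .venv
. .venv/bin/activate
# Then install into the environment, e.g.:
#   python -m pip install "onnx>=1.12.0"
python -c "import sys; print(sys.prefix)"
```

With the environment active, `sys.prefix` points inside `.venv`, and installs no longer touch the system (or conda base) site-packages.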
**Search before asking**

- [x] I have searched the YOLOv8 issues and found no similar bug report.

**YOLOv8 Component**

Export

**Bug**

This is what happens when I export as the `onnx` format: (screenshot not included in this excerpt). Now this is what happens when I export as the `engine` format: (screenshot not included). The s...