Again, the situation is that a number of frameworks are written assuming that you can call tensor.numpy() at any time, which is true for every dtype except bfloat16. Implementing something like torch.default_bfloat16_numpy_type(torch.float32) would solve this problem in a very reasonably cl...
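A minimal sketch of the gap and the usual workaround; note that torch.default_bfloat16_numpy_type above is only the proposal being pitched, not an existing PyTorch API, so the upcast here is written out explicitly:

```python
import torch

t = torch.randn(4, dtype=torch.bfloat16)

# Direct conversion fails: NumPy has no native bfloat16 dtype.
try:
    t.numpy()
except TypeError as e:
    print(e)  # e.g. "Got unsupported ScalarType BFloat16"

# Common workaround: upcast to float32 first, then convert.
arr = t.float().numpy()
print(arr.dtype)  # float32
```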
TypeError: cannot assign 'torch.cuda.BFloat16Tensor' as parameter 'weight' (torch.nn.Parameter or None expected) and RuntimeError: trainer.py 1485 _call_strategy_hook linalg.inv: Low precision dtypes not supported. Got BFloat16. As I said, these two bugs do not appear when I set the strateg...
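A hedged sketch reproducing both errors outside of Lightning (the trainer.py frame above comes from PyTorch Lightning); the module and tensor names are illustrative, and the sketch runs on CPU so the dtype in the first message reads torch.BFloat16Tensor rather than the CUDA variant:

```python
import torch

lin = torch.nn.Linear(4, 4)

# TypeError: a module attribute that is already a Parameter cannot be
# replaced with a plain tensor.
try:
    lin.weight = lin.weight.data.to(torch.bfloat16)
except TypeError as e:
    print(e)  # cannot assign '...BFloat16Tensor' as parameter 'weight' ...

# Wrapping in nn.Parameter sidesteps the first error.
lin.weight = torch.nn.Parameter(lin.weight.data.to(torch.bfloat16))

# RuntimeError: linalg.inv has no bfloat16 kernel, so upcast first.
m = torch.eye(3, dtype=torch.bfloat16)
try:
    torch.linalg.inv(m)
except RuntimeError as e:
    print(e)  # linalg.inv: Low precision dtypes not supported. Got BFloat16
inv = torch.linalg.inv(m.float()).to(torch.bfloat16)  # common workaround
```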
Add bfloat16 support for per tensor/channel cpu/cuda fake quantize ops · pytorch/pytorch@5c6d354
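The fake-quantize ops that commit touches can be exercised directly; a minimal sketch with arbitrary scale/zero-point values, assuming a build that includes the bfloat16 support added there:

```python
import torch

x = torch.randn(8, dtype=torch.bfloat16)

# Per-tensor fake quantization: quantize to the int8 range and dequantize,
# keeping the tensor in floating point for quantization-aware training.
y = torch.fake_quantize_per_tensor_affine(
    x, scale=0.1, zero_point=0, quant_min=-128, quant_max=127
)
print(y.dtype)  # torch.bfloat16 on builds that include the commit above
```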
Notes This should be caught by the test_make_bfloat16_tensor_raw test case here. I confirmed it by running this test manually; not sure why it isn't caught by the CI pipeline. As a side note, a lot of the logic in the private _to_array function here is "zombie" code and cannot be reached...
For more info: https://github.blog/changelog/2024-03-07-github-actions-all-actions-will-run-on-node20-instead-of-node16-by-default/
🚀 The feature, motivation and pitch
Noticed this odd gap in coverage when looking at optests: https://github.com/CaoE/pytorch/blob/a1394be10096b91c0b5528fccf709e6e73078832/torch/testing/_internal/common_methods_invocations.py#L13575C1-L1...
check-labels.yml on: pull_request_target
Annotations: 1 error
Check Labels: Canceling since a higher priority waiting request for 'Check Labels-139306-false' exists ...
Description I cannot train a transformer_base model using the bfloat16 type for both activations and weights on GPU (GTX 1080Ti). From the error I got: ValueError: Tensor conversion requested dtype bfloat16 for Tensor with dtype float32: 'Ten...
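That ValueError comes from TensorFlow (the tensor2tensor stack), which refuses to implicitly convert a float32 tensor to bfloat16. A small sketch of the mismatch and the explicit cast that resolves it, with illustrative tensor values:

```python
import tensorflow as tf

w = tf.constant([1.0, 2.0], dtype=tf.bfloat16)
x = tf.constant([3.0, 4.0])  # defaults to float32

# Mixing the two dtypes triggers the implicit-conversion error, e.g.
# "Tensor conversion requested dtype bfloat16 for Tensor with dtype float32".
try:
    y = w * x
except (ValueError, tf.errors.InvalidArgumentError) as e:
    print(e)

# Explicitly casting one side makes the dtypes agree.
y = w * tf.cast(x, tf.bfloat16)
```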
buck2 test 'fbcode//mode/dev-nosan' fbcode//caffe2/test/quantization:test_quantization -- --exact 'caffe2/test/quantization:test_quantization - test_forward_per_tensor_cachemask_cpu (caffe2.test.quantization.core.test_workflow_ops.TestFakeQuantizeOps)'
buck2 test 'fbcode//mode/dev-nosan'...
🐛 Describe the bug
torch.linalg.matmul and torch.Tensor.matmul with torch.bfloat16 can still run without mkldnn and return incorrect results, even in PyTorch 1.13.1 (the latest released Docker environment). This unexpected behavior is rel...
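A hedged repro sketch for checking this: compare the bfloat16 result against a float32 reference. The bound used here is an illustrative choice, not part of the original report; a discrepancy far beyond it would match the incorrect-results behavior described above:

```python
import torch

a = torch.randn(64, 64)
b = torch.randn(64, 64)

ref = a @ b  # float32 reference
out = torch.linalg.matmul(a.bfloat16(), b.bfloat16()).float()

# bfloat16 keeps only ~8 mantissa bits, so some rounding error is expected;
# errors orders of magnitude beyond this loose bound indicate a real bug.
rel_err = (out - ref).abs().max() / ref.abs().max()
print(rel_err.item())
assert rel_err < 0.1, "suspiciously large error from the bfloat16 path"
```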