[MPS] Tracking issue for ModuleInfo failures when enabling testing for torch.float16 #73055
- `test_forward_nn_Softmax2d_mps_float16`: inf/nan error
- `test_forward_nn_LogSoftmax_mps_float16`: incorrect output dtype? values for attribute 'dtype' do not match: torch.float16 != torch.float32.
- `test_forward_nn_HuberLoss_mps_float16`
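For the dtype mismatch above, here is a minimal sketch (not the actual ModuleInfo test harness) of the kind of check that fails for LogSoftmax in float16 on MPS; the expectation is that the output dtype matches the float16 input:

```python
import torch
import torch.nn as nn

# Minimal sketch, assuming an MPS-capable machine: run LogSoftmax on a
# float16 MPS input and check that the output dtype is preserved.
# A failure like "torch.float16 != torch.float32" means the op upcast
# its output to float32 instead of keeping the input dtype.
if torch.backends.mps.is_available():
    m = nn.LogSoftmax(dim=1)
    x = torch.randn(2, 3, device="mps", dtype=torch.float16)
    out = m(x)
    assert out.dtype == torch.float16, f"expected torch.float16, got {out.dtype}"
```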
```cpp
#include <torch/csrc/jit/serialization/pickler.h>
#include <torch/csrc/lazy/python/init.h>
#include <torch/csrc/monitor/python_init.h>
#include <torch/csrc/mps/Module.h>
#include <torch/csrc/multiprocessing/init.h>
#include <torch/csrc/onnx/init.h>
#include <torch/csrc/profiler/python/init...
```
It's not clear to me from your error report where torch.mps is getting imported from. However, you should be able to avoid the error by upgrading to torch>=2.0.

awaelchli added the 3rd party (Related to a 3rd-party) and waiting on author (Waiting on user action, correction, or update) labels and removed needs triage...
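A hedged sketch of the kind of guard a caller could use while staying importable on older releases (the torch.mps submodule only exists in torch >= 2.0; upgrading, as suggested above, is the simpler fix):

```python
import torch

# Sketch only: check for the torch.mps submodule instead of importing it
# unconditionally, so the code still runs on releases that predate it.
if hasattr(torch, "mps"):
    print("torch.mps is available; MPS backend usable:", torch.backends.mps.is_available())
else:
    print(f"torch.mps is unavailable in torch {torch.__version__}; upgrade to torch>=2.0")
```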