is_torch_npu_available and is_torch_tpu_available are not equivalent, I'm afraid. I suspect that your transformers version is a bit too old, as it seems like is_torch_npu_available does not yet exist in it. Could you run pip show transformers to see what version you're on? For reference, Sentence...
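A common way to guard against an older transformers release that lacks this helper is a try/except import with a fallback. This is a minimal sketch, assuming the fallback behavior (returning False on older versions) is acceptable; the stub is not part of transformers itself:

```python
# Hedged sketch: tolerate transformers versions that predate
# is_torch_npu_available. The fallback stub is an assumption, not
# transformers' own behavior.
try:
    from transformers.utils import is_torch_npu_available
except ImportError:
    def is_torch_npu_available():
        # Older transformers (or transformers not installed):
        # conservatively report that no NPU is available.
        return False
```

With this guard in place, code can call is_torch_npu_available() unconditionally instead of crashing with an AttributeError or ImportError on older installs.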
File "/home/ems/miniconda3/lib/python3.10/site-packages/torch/utils/backend_registration.py", line 103, in _get_current_device_index
    return getattr(getattr(torch, custom_backend_name), _get_device_index)()
File "/home/ems/miniconda3/lib/python3.10/site-packages/torch_npu/npu/utils.py", line 59, i...
11.0-3.0.tr6/CODE/torch_npu/csrc/framework/OpParamMaker.cpp:136 NPU error, NPU error code is: 500002
EZ9999: Inner Error, Please contact support engineer!
EZ9999: The input dtype of x1 x2 y is equal, please check! [FUNC:IndexPutVerify][FILE:matrix_calculation_ops.cc][LINE:4676]
TraceBack...
Revert "Update flag_models.py to fix is_torch_npu_available as a func… …" (Verified) 2eb4360; "Fixes all remaining calls" 581f2ed. Contributor/Author MeTaNoV commented Feb 21, 2024: "@staoxiao now you can merge again, I fixed all the calls since it was a bit more than 1 :)" 👍...
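The PR title hints at the underlying bug class: referencing the function object instead of calling it. A bare function is always truthy, so a condition like `if is_torch_npu_available:` takes the branch even on machines with no NPU. A minimal sketch (the stand-in function below is hypothetical, not the real check):

```python
# Hedged sketch of the bug the PR title suggests. The stand-in below
# always returns False; the real helper queries torch_npu.
def is_torch_npu_available():
    return False

# Buggy: a function object is truthy, so this branch ALWAYS executes,
# regardless of what the function would return.
buggy_takes_branch = False
if is_torch_npu_available:
    buggy_takes_branch = True

# Fixed: actually call the function before branching on it.
fixed_takes_branch = bool(is_torch_npu_available())
```

The fix in the PR amounts to adding the missing parentheses at every call site, which is why "it was a bit more than 1" call needed updating.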
        self.fc3 = torch.nn.Linear(50, 2)

    def forward(self, x):
        x = self.fc1(x)
        x = torch.relu(x)
        x = self.fc2(x)
        x = torch.relu(x)
        x = self.fc3(x)
        return x

# check the available device
device = torch.device("npu" if torch.npu.is_available() else "cpu")
...
torch.cuda.is_available() returns False. Interestingly, I have dual-booted with Windows, and my GPU does work with PyTorch within Windows. Does anyone have any ideas why my 3060 is unavailable as a CUDA device? Thanks in advance. Robert_Crovella replied 2022-10-27 22:44 ...
Revert "Update flag_models.py to fix is_torch_npu_available as a function call" af538b6, 2eb4360. File tree: FlagEmbedding/flag_models.py — 1 file changed, +1 −1 lines. 0 commit comments....
Agent-Chu edited the description 1 year ago. huangyunlong changed the task status from TODO to WIP 12 months ago, then from WIP to DONE 11 months ago. cord, 7 months ago. Status: DONE. Assignee: not set. Labels: not set. Milestone: none. Pull Requests ...
torch.compile breaks when using hasattr but succeeds when using isinstance(obj, torch.Tensor). This commit short-circuits the hasattr call for torch.Tensors where possible. Note: is_npu_available is also not torch.compile compatible, due to (1) lru_cache and (2) importlib checks, so I've moved it...
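The lru_cache/importlib pattern mentioned above can be illustrated without torch at all. This is a minimal sketch, assuming the check is a find_spec lookup on the torch_npu module; the constant-hoisting alternative mirrors the direction of the fix but is not the PR's actual patch:

```python
import functools
import importlib.util

# Hedged sketch of the pattern described as torch.compile-unfriendly:
# an availability check combining lru_cache with a dynamic importlib
# lookup, both of which can cause graph breaks when traced.
@functools.lru_cache
def npu_backend_installed():
    # The importlib check runs once; the result is cached per-process.
    return importlib.util.find_spec("torch_npu") is not None

# A compile-friendlier alternative (an assumption): resolve the check
# once at import time into a plain module-level constant, so traced
# code only ever reads a bool.
NPU_BACKEND_INSTALLED = importlib.util.find_spec("torch_npu") is not None
```

Branching on a precomputed constant keeps the dynamic import machinery out of any code path that torch.compile has to trace.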
RuntimeError: InnerRun:torch_npu/csrc/framework/OpParamMaker.cpp:208 NPU error, error code is 500002
[Error]: A GE error occurs in the system. Rectify the fault based on the error information in the ascend log.
EZ3002: Optype [Im2col] of Ops kernel [AIcoreEngine] is unsupported. Reaso...