    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tab_slice = slice(0, self.tab_incoming_dim)
        text_slice = slice(
            self.tab_incoming_dim,
            self.tab_incoming_dim + self.text_incoming_dim,
        )
        image_slice = slice(
            self.tab_incoming_dim + self.text_incoming_dim,
            self.tab_incoming_dim + self.text_incoming_dim + self.image_incoming_dim,
        )
        ...
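For context, a minimal sketch of the kind of module this forward could belong to. The class name MultimodalHead, the constructor arguments, and the fusion layer are assumptions, not from the original snippet; only the slice pattern is taken from it.

    import torch
    import torch.nn as nn

    class MultimodalHead(nn.Module):
        def __init__(self, tab_incoming_dim: int, text_incoming_dim: int,
                     image_incoming_dim: int, out_dim: int):
            super().__init__()
            self.tab_incoming_dim = tab_incoming_dim
            self.text_incoming_dim = text_incoming_dim
            self.image_incoming_dim = image_incoming_dim
            total = tab_incoming_dim + text_incoming_dim + image_incoming_dim
            self.fuse = nn.Linear(total, out_dim)  # assumed fusion layer

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Split the concatenated feature vector back into per-modality parts.
            tab_slice = slice(0, self.tab_incoming_dim)
            text_slice = slice(
                self.tab_incoming_dim,
                self.tab_incoming_dim + self.text_incoming_dim,
            )
            image_slice = slice(
                self.tab_incoming_dim + self.text_incoming_dim,
                self.tab_incoming_dim + self.text_incoming_dim + self.image_incoming_dim,
            )
            tab, text, image = x[:, tab_slice], x[:, text_slice], x[:, image_slice]
            # Re-concatenate and fuse; a real model would process each modality first.
            return self.fuse(torch.cat([tab, text, image], dim=1))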
  File "C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmdeploy\apis\core\pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmdeploy\apis\pytorch2onnx.py", line 64, in torch2onnx
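The frames above come from mmdeploy's ONNX export pipeline. For reference, a typical top-level call that reaches torch2onnx; all config, checkpoint, and image paths here are placeholders, not taken from the original log.

    from mmdeploy.apis import torch2onnx

    torch2onnx(
        img='demo.jpg',                 # a sample input used to trace the model
        work_dir='work_dir',
        save_file='end2end.onnx',
        deploy_cfg='detection_onnxruntime_dynamic.py',
        model_cfg='faster_rcnn_r50_fpn_1x_coco.py',
        model_checkpoint='faster_rcnn_r50_fpn_1x_coco.pth',
        device='cpu',
    )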
RuntimeError: torch.nn.functional.binary_cross_entropy and torch.nn.BCELoss are unsafe to autocast. Many models use a sigmoid layer right before the binary cross entropy layer. In this case, combine the two layers using torch.nn.functional.binary_cross_entropy_with_logits or torch.nn.BCEWithLogitsLoss.
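A minimal sketch of the recommended fix, assuming a CUDA device: feed raw logits to the logits-based loss instead of applying sigmoid followed by binary_cross_entropy inside the autocast region.

    import torch
    import torch.nn.functional as F

    logits = torch.randn(8, 1, device='cuda')   # raw model outputs, no sigmoid
    targets = torch.rand(8, 1, device='cuda')

    with torch.autocast(device_type='cuda'):
        # Unsafe: F.binary_cross_entropy(torch.sigmoid(logits), targets)
        # raises the RuntimeError above inside an autocast region.
        loss = F.binary_cross_entropy_with_logits(logits, targets)  # safe to autocast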
(It should be just a read-only flag that propagates the need for a grad_fn to child tensors, independent of whether the gradient should actually be retained in .grad; for retaining, see (2.).) In tensor factories such as torch.tensor(), rename the argument requires_grad to retains_grad, ...
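To illustrate the distinction the proposal draws, here is current PyTorch behavior: requires_grad only requests graph tracking, while .grad is retained on leaf tensors by default and on intermediates only after an explicit retain_grad() call.

    import torch

    x = torch.tensor([1.0, 2.0], requires_grad=True)  # leaf: .grad is retained
    z = x * 3                                         # non-leaf: has grad_fn, but
    z.retain_grad()                                   # .grad kept only if retained
    z.sum().backward()
    print(x.grad)  # tensor([3., 3.])
    print(z.grad)  # tensor([1., 1.]) -- only because of retain_grad()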