```python
# Flag elementwise mismatches in NaN / +Inf / -Inf placement between the
# expected array and the actual array, and fold them into the failure mask.
nan_diff = np.not_equal(np.isnan(data_expected), np.isnan(data_me))
inf_diff = np.not_equal(np.isinf(data_expected), np.isinf(data_me))
neginf_diff = np.not_equal(np.isneginf(data_expected), np.isneginf(data_me))
greater = greater + nan_diff + inf_diff + neginf_diff
...
```
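The fragment above looks like part of an array-comparison helper; here is a self-contained sketch of such a helper, where the function name, the `rtol` parameter, and the leading tolerance check that defines `greater` are all assumptions, not code from the original:

```python
import numpy as np

def count_mismatches(data_expected, data_me, rtol=1e-3):
    # Values that differ beyond a relative tolerance (assumed definition of `greater`).
    greater = np.abs(data_expected - data_me) > rtol * np.abs(data_expected)
    # Positions where NaN / +Inf / -Inf placement disagrees between the two arrays.
    nan_diff = np.not_equal(np.isnan(data_expected), np.isnan(data_me))
    inf_diff = np.not_equal(np.isinf(data_expected), np.isinf(data_me))
    neginf_diff = np.not_equal(np.isneginf(data_expected), np.isneginf(data_me))
    # On boolean arrays, `+` acts as elementwise OR.
    greater = greater + nan_diff + inf_diff + neginf_diff
    return int(np.count_nonzero(greater))
```

The explicit `isnan`/`isinf` terms matter because NaN never compares greater-than anything, so a NaN appearing in only one of the two arrays would otherwise slip through the tolerance check unnoticed.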
### 🐛 Describe the bug

Starting with PyTorch version 1.13.0, the backward computation of `KLDivLoss` produces a `nan` gradient. The same code runs without error on PyTorch 1.12.

```python
import numpy as np
import torch
import torch.nn as nn

torch.autograd.set_detect_anomaly(True)
...
```
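The reporter's script is cut off above. As a minimal sketch of the reported setup, assuming a standard `KLDivLoss` call under anomaly detection (the shapes, the zeroed-out target row, and all variable names below are assumptions, not the reporter's actual code):

```python
import torch
import torch.nn as nn

torch.autograd.set_detect_anomaly(True)  # makes autograd raise at the op that first emits nan

# Hypothetical inputs; shapes and the zeroed target row are assumptions.
pred = torch.randn(4, 8, requires_grad=True)
log_pred = torch.log_softmax(pred, dim=1)         # KLDivLoss expects log-probabilities as input
target = torch.softmax(torch.randn(4, 8), dim=1)
target[0] = 0.0                                   # zero-probability entries are a common nan trigger

loss = nn.KLDivLoss(reduction="batchmean")(log_pred, target)
loss.backward()  # per the report, this yields a nan gradient on >=1.13.0 but not on 1.12
print(pred.grad)
```

With anomaly detection enabled, the backward pass errors out at the first operation whose output contains `nan`, which pinpoints the offending op rather than silently propagating the bad gradient.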