No official implementation seems to exist. There are a few unofficial ones on GitHub: hosseinshn/GradNorm, brianlan/pytorch-grad-norm (a simple version), and brianlan/complex-grad-norm (a more elaborate one). GradNorm balances multiple losses in multi-task learning by learning adjustable per-task weight coefficients. Below is my own re-implementation:

```python
import torch
import torch.nn as nn
import torch.optim as optim

class GradNormLoss(nn.Module):
    def __init__(self, num_of_task, al...
```
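The snippet above breaks off mid-signature. What follows is a minimal self-contained sketch of the same idea, written from the GradNorm paper (Chen et al., 2018) rather than from the author's code: the truncated second argument is assumed to be `alpha`, the paper's asymmetry hyperparameter, and the names `shared_param`, `initial_losses`, and `renormalize` are my own illustrative choices.

```python
import torch
import torch.nn as nn
import torch.optim as optim


class GradNormLoss(nn.Module):
    """Learnable task weights trained with the GradNorm objective (sketch)."""

    def __init__(self, num_of_task, alpha=1.5):  # `alpha` is an assumption
        super().__init__()
        self.num_of_task = num_of_task
        self.alpha = alpha
        self.w = nn.Parameter(torch.ones(num_of_task))  # task weights w_i
        self.initial_losses = None  # L_i(0), recorded on the first call

    def forward(self, task_losses, shared_param):
        # task_losses: 1-D tensor [L_1, ..., L_T], e.g. built with torch.stack
        # shared_param: last shared weight tensor W used for the grad norms
        if self.initial_losses is None:
            self.initial_losses = task_losses.detach()

        weighted = self.w * task_losses
        total_loss = weighted.sum()  # trains the network itself

        # G_i = || d(w_i * L_i) / dW ||_2; create_graph keeps the dependence
        # on w so that the GradNorm loss can update the task weights.
        norms = torch.stack([
            torch.autograd.grad(weighted[i], shared_param,
                                retain_graph=True, create_graph=True)[0].norm(2)
            for i in range(self.num_of_task)
        ])

        # Constant target: mean gradient norm scaled by the relative inverse
        # training rate r_i = (L_i / L_i(0)) / mean_j(L_j / L_j(0)), to alpha.
        with torch.no_grad():
            ratios = task_losses / self.initial_losses
            target = norms.mean() * (ratios / ratios.mean()) ** self.alpha

        grad_norm_loss = (norms - target).abs().sum()
        return total_loss, grad_norm_loss

    @torch.no_grad()
    def renormalize(self):
        # Keep sum_i w_i == num_of_task after every optimizer step.
        self.w.mul_(self.num_of_task / self.w.sum())
```

A toy two-task usage sketch (all model and variable names are illustrative). Following the paper, the GradNorm loss is backpropagated only into the task weights `w`, the weighted total loss trains the network, and the weights are renormalized after each step:

```python
trunk = nn.Linear(10, 16)
heads = nn.ModuleList([nn.Linear(16, 1), nn.Linear(16, 1)])
criterion = GradNormLoss(num_of_task=2, alpha=1.5)
optimizer = optim.Adam([*trunk.parameters(), *heads.parameters(),
                        criterion.w], lr=1e-3)

x = torch.randn(32, 10)
targets = [torch.randn(32, 1), torch.randn(32, 1)]

features = trunk(x)
task_losses = torch.stack([
    nn.functional.mse_loss(head(features), t)
    for head, t in zip(heads, targets)
])
total_loss, gn_loss = criterion(task_losses, trunk.weight)

optimizer.zero_grad()
# Take the GradNorm gradient w.r.t. w separately, so it updates only w.
w_grad = torch.autograd.grad(gn_loss, criterion.w, retain_graph=True)[0]
total_loss.backward()
criterion.w.grad = w_grad  # overwrite the grad that total_loss wrote into w
optimizer.step()
criterion.renormalize()
```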