# pytorch-gradual-warmup-lr

Gradually warm up (i.e., increase) the learning rate for a PyTorch optimizer, as proposed in 'Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour'.

Example: gradual warmup for 100 epochs, followed by cosine annealing (a sketch of this setup is shown below).
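A minimal sketch of that warmup-then-cosine setup, assuming the constructor takes `multiplier`, `total_epoch`, and `after_scheduler` arguments (see `run.py` in the repository for the authoritative usage):

```python
import torch
from torch.optim.sgd import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR
from warmup_scheduler import GradualWarmupScheduler

# A single learnable tensor is enough to drive the optimizer in this toy example.
params = [torch.nn.Parameter(torch.randn(2, 2, requires_grad=True))]
optim = SGD(params, lr=0.1)

# Cosine annealing takes over once the warmup phase has finished.
cosine = CosineAnnealingLR(optim, T_max=100)

# Warm up the learning rate over the first 100 epochs, then defer to cosine annealing.
scheduler = GradualWarmupScheduler(optim, multiplier=1, total_epoch=100,
                                   after_scheduler=cosine)

for epoch in range(1, 201):
    optim.step()           # parameter update (forward/backward omitted in this toy loop)
    scheduler.step(epoch)  # advance the warmup / cosine schedule
```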
## Install

```
$ pip install git+https://github.com/ildoonet/pytorch-gradual-warmup-lr.git
```

## Usage

See the `run.py` file; the example begins as follows:

```python
import torch
from torch.optim.lr_scheduler import StepLR, ExponentialLR
from torch.optim.sgd import SGD
from warmup_scheduler import GradualWarmupScheduler

if __name__ == '__main__':
    model = [torch.nn.Parameter(torch....
```
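The excerpt above is truncated. A complete, runnable sketch along the same lines is shown below; it assumes the constructor signature `GradualWarmupScheduler(optimizer, multiplier, total_epoch, after_scheduler)`, so check `run.py` for the authoritative version.

```python
import torch
from torch.optim.lr_scheduler import StepLR
from torch.optim.sgd import SGD
from warmup_scheduler import GradualWarmupScheduler

if __name__ == '__main__':
    # A single learnable tensor is enough to exercise the optimizer.
    model = [torch.nn.Parameter(torch.randn(2, 2, requires_grad=True))]
    optim = SGD(model, lr=0.1)

    # After the warmup phase, decay the learning rate with StepLR.
    scheduler_steplr = StepLR(optim, step_size=10, gamma=0.1)
    # Warm up for the first 5 epochs, then hand off to scheduler_steplr.
    scheduler_warmup = GradualWarmupScheduler(
        optim, multiplier=1, total_epoch=5, after_scheduler=scheduler_steplr)

    for epoch in range(1, 20):
        optim.step()                  # update the (toy) parameters
        scheduler_warmup.step(epoch)  # advance the schedule
        print(epoch, optim.param_groups[0]['lr'])
```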
Inside `step()`, the scheduler either keeps handling the warmup phase itself or hands control to the wrapped scheduler:

```python
if epoch > self.warmup_epoch:
    # Past the warmup range: use the wrapped scheduler, i.e. CosineAnnealingLR.
    # CosineAnnealingLR has to start counting from epoch 0, so subtract the warmup epochs.
    self.after_scheduler.step(epoch - self.warmup_epoch)
else:
    # Within the warmup range: use this class's own step()/get_lr().
    super(GradualWarmupScheduler, self).step(epoch)
```
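To see where that fragment fits, here is a minimal self-contained reimplementation sketch built around the same `step()` logic. The `warmup_epoch` argument and the linear-ramp `get_lr()` are assumptions for illustration; the actual library parameterizes warmup with `multiplier` and `total_epoch`.

```python
import torch
from torch.optim.lr_scheduler import _LRScheduler, CosineAnnealingLR
from torch.optim.sgd import SGD


class GradualWarmupScheduler(_LRScheduler):
    """Linear warmup for `warmup_epoch` epochs, then delegate to `after_scheduler`."""

    def __init__(self, optimizer, warmup_epoch, after_scheduler):
        # Set attributes before super().__init__, which triggers an initial step().
        self.warmup_epoch = warmup_epoch
        self.after_scheduler = after_scheduler
        super().__init__(optimizer)

    def get_lr(self):
        # Linear ramp from 0 up to the base LR over the warmup epochs.
        return [base_lr * min(1.0, self.last_epoch / self.warmup_epoch)
                for base_lr in self.base_lrs]

    def step(self, epoch=None):
        if epoch is None:
            epoch = self.last_epoch + 1
        if epoch > self.warmup_epoch:
            # Past the warmup range: hand off to the wrapped scheduler.
            # It counts from epoch 0, so shift by the warmup length.
            self.after_scheduler.step(epoch - self.warmup_epoch)
            self.last_epoch = epoch
        else:
            # Still warming up: the base class step() applies get_lr() above.
            super().step(epoch)


if __name__ == '__main__':
    params = [torch.nn.Parameter(torch.randn(2, 2, requires_grad=True))]
    optim = SGD(params, lr=0.1)
    cosine = CosineAnnealingLR(optim, T_max=95)  # 95 annealing epochs after 5 warmup epochs
    sched = GradualWarmupScheduler(optim, warmup_epoch=5, after_scheduler=cosine)
    for epoch in range(1, 101):
        optim.step()
        sched.step(epoch)
        print(epoch, optim.param_groups[0]['lr'])
```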