```diff
@@ -33,7 +33,7 @@ trainer:
   limit_predict_batches: null
   overfit_batches: 0.0
   val_check_interval: null
-  check_val_every_n_epoch: 250
+  check_val_every_n_epoch: 100
   num_sanity_val_steps: null
   log_every_n_steps: null
   enable_checkpointing: null
```
I run with LightningCLI. When I set `check_val_every_n_epoch` > 1 (e.g. 2) in an experiment with `max_epochs=20`, the model checkpoint is saved by `lightning.pytorch.callbacks.ModelCheckpoint`. The learning rate scheduler is `torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=self.trai...`
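For context, here is a minimal standalone sketch of the scheduler mentioned above, outside of Lightning. The `T_0=10` value is illustrative (the report's actual `T_0` argument is truncated); it shows the restart behavior that interacts with epoch-based validation checks:

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# T_0 is the number of epochs before the first restart; 10 is a
# hypothetical stand-in for the truncated value in the report.
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=10)

lrs = []
for epoch in range(20):
    optimizer.step()      # placeholder for one training epoch
    scheduler.step()      # advance the cosine schedule by one epoch
    lrs.append(optimizer.param_groups[0]["lr"])

# Halfway through a cycle the lr sits at the midpoint (0.05),
# and after T_0 steps the schedule restarts at the initial lr (0.1).
```

With `check_val_every_n_epoch=2`, validation (and hence `ModelCheckpoint`'s monitored metric) is only refreshed every other point on this curve, which is worth keeping in mind when reading checkpoint filenames.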
🐛 Bug

When the `reload_dataloaders_every_n_epochs` and `check_val_every_n_epoch` flags of the Trainer are used together, the validation dataloader may reload inconsistently or not reload at all. A few examples: When reload_dataloaders_every_n_epochs ...
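For reference, the interaction above can be expressed with a Trainer config along these lines (the values are illustrative, not the reporter's exact settings):

```yaml
trainer:
  max_epochs: 20
  # Validation runs only every 2 epochs...
  check_val_every_n_epoch: 2
  # ...while the dataloaders are asked to reload every epoch,
  # which is the combination the bug report describes.
  reload_dataloaders_every_n_epochs: 1
```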