If the error comes from the call site, make sure you pass every required argument when calling `on_train_epoch_end()`. For example, if `outputs` is required, call it like this:

```python
self.on_train_epoch_end(epoch, logs=logs, outputs=outputs)
```

If instead the function definition was updated, update every place that calls `on_train_epoch_end()` so that it passes the newly added `outputs` argument.
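One way to track down this kind of mismatch is to compare the signature of your override against what the caller actually passes. The sketch below uses only the standard library `inspect` module; the two module classes are hypothetical stand-ins for a pre-update and post-update override, not real PyTorch Lightning classes:

```python
import inspect

class OldStyleModule:
    # Hypothetical override that still expects an ``outputs`` argument
    # (the pre-update hook style).
    def on_train_epoch_end(self, outputs):
        pass

class NewStyleModule:
    # Hypothetical override matching a hook that takes no extra arguments.
    def on_train_epoch_end(self):
        pass

def required_params(hook):
    """Return the names of parameters (besides ``self``) that the hook
    requires, i.e. positional parameters without a default value."""
    sig = inspect.signature(hook)
    return [
        name
        for name, p in sig.parameters.items()
        if name != "self"
        and p.default is inspect.Parameter.empty
        and p.kind in (p.POSITIONAL_OR_KEYWORD, p.POSITIONAL_ONLY)
    ]

print(required_params(OldStyleModule.on_train_epoch_end))  # ['outputs']
print(required_params(NewStyleModule.on_train_epoch_end))  # []
```

If the list for your override is non-empty but the framework calls the hook with no arguments (or vice versa), that mismatch is what produces the `TypeError` at the end of the first epoch.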
In PyTorch Lightning, the current epoch is available through the `self.current_epoch` attribute. Inside the `on_train_epoch_end` method you can read it as follows:

```python
import pytorch_lightning a…
```
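To illustrate how an epoch counter like `self.current_epoch` becomes visible inside an epoch-end hook, here is a minimal self-contained simulation. Note that `MiniModule` and `fit` are hypothetical stand-ins for a LightningModule and Trainer, written in plain Python so the mechanics are easy to follow; they are not the real pytorch_lightning API:

```python
class MiniModule:
    """Toy module mimicking how a LightningModule sees ``current_epoch``."""

    def __init__(self):
        self.current_epoch = 0  # maintained by the training loop
        self.seen = []

    def on_train_epoch_end(self):
        # Read the epoch counter that the loop keeps up to date,
        # mirroring ``self.current_epoch`` in a real LightningModule.
        self.seen.append(self.current_epoch)

def fit(module, max_epochs):
    """Toy training loop standing in for a Trainer."""
    for epoch in range(max_epochs):
        module.current_epoch = epoch
        # ... training batches would run here ...
        module.on_train_epoch_end()

m = MiniModule()
fit(m, max_epochs=3)
print(m.seen)  # [0, 1, 2]
```

The key point is that the hook itself takes no epoch argument; the value is read from an attribute the loop maintains, which is why the newer no-argument hook signature still has access to the epoch number.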
As soon as the first epoch completes training, I get this error. I think it is due to some changes in the updated library, as I get the same error with another implementation of a VAE. Kindly help me with this. Thanks, Regards...