Added
- Added LightningModule.toggle_optimizer (#4058)
- Added LightningModule.manual_backward (#4063)

Changed
- Integrated metrics API with self.log (#3961); see the sketch below
- Decoupled Apex (#4052, #4054, #4055, #4056, #4058, #4060, #4061, #4062, #4063, #4064, #4065)
- Renamed all backends to Accelerator (#4066)
- ...
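A minimal sketch of the self.log call referenced above, assuming a post-1.0 LightningModule; the module name, layer sizes, and metric names are placeholders, not part of the release notes:

```python
import torch
import torch.nn as nn
import pytorch_lightning as pl


class LitRegressor(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(4, 1)  # placeholder model

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.net(x), y)
        # self.log takes care of reduction across steps/epochs and routes the
        # value to the attached logger and (optionally) the progress bar.
        self.log("train_loss", loss, on_step=True, on_epoch=True, prog_bar=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=1e-2)
```

The on_step/on_epoch flags control whether the value is logged per batch, aggregated per epoch, or both.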
- Removed deprecated optimizer argument in LightningModule.manual_backward(); toggling optimizers in manual optimization should be done using LightningModule.{un}toggle_optimizer() (#8287)
- Removed DeepSpeed FP16 Exception as FP32 is now supported (#8462)
- Removed environment variable PL_EXP_VERSION from ...
- LightningOptimizer manual optimizer is more flexible and exposes toggle_model (#5771); see the sketch below
- MlflowLogger now limits parameter value length to 250 characters (#5893)
- Re-introduced fix for Hydra directory sync with multiple processes (#5993)

Deprecated
- Function stat_scores_multiple_classes is deprecated in favor of stat_...
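A hedged sketch of the toggle_model context manager that #5771 exposes on LightningOptimizer, as it looks in recent versions (it also accepts a sync_grad flag for gradient accumulation under DDP); the module, layers, and loss below are made up for illustration:

```python
import torch
import torch.nn as nn
import pytorch_lightning as pl


class TwoOptimizerModule(pl.LightningModule):
    """Toy module with two optimizers, purely to illustrate toggle_model."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(4, 4)
        self.head = nn.Linear(4, 1)
        self.automatic_optimization = False  # manual optimization

    def training_step(self, batch, batch_idx):
        opt_enc, opt_head = self.optimizers()

        # toggle_model is a context manager on LightningOptimizer: inside the
        # block, only the parameters owned by this optimizer keep
        # requires_grad=True, so the encoder accumulates no gradients here.
        with opt_head.toggle_model():
            loss = self.head(self.encoder(batch)).pow(2).mean()
            opt_head.zero_grad()
            self.manual_backward(loss)
            opt_head.step()

        self.log("head_loss", loss)

    def configure_optimizers(self):
        return (
            torch.optim.SGD(self.encoder.parameters(), lr=1e-2),
            torch.optim.SGD(self.head.parameters(), lr=1e-2),
        )
```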
Relevant frames from the traceback (pytorch_lightning/core/module.py, and /usr/local/lib/python3.10/dist-packages/torch/_tensor.py:487 in backward):

```python
def toggle_optimizer(self, optimizer: Union[Optimizer, LightningOptimizer]) -> None:
    """Makes sure only the gradients of the current optimizer's parameters are calcu..."""
```
Toggle optimizer is a shortcut in PyTorch Lightning that ensures that only the gradients of a single optimizer are calculated, by setting requires_grad=False for all the parameters updated by the other optimizers. In our case, we're using this to set requires_grad=False for the discriminator parameters ...
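A minimal sketch of that pattern in a manually optimized LightningModule, assuming a recent Lightning version where toggle_optimizer/untoggle_optimizer take just the optimizer (older releases also required an optimizer index); the TinyGAN module, layer sizes, and losses are placeholders:

```python
import torch
import torch.nn as nn
import pytorch_lightning as pl


class TinyGAN(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # Toy networks, just to show the optimizer-toggling pattern.
        self.generator = nn.Linear(8, 16)
        self.discriminator = nn.Linear(16, 1)
        self.automatic_optimization = False  # required for manual optimization

    def training_step(self, batch, batch_idx):
        opt_g, opt_d = self.optimizers()

        # Generator step: toggle_optimizer freezes the discriminator's
        # parameters (requires_grad=False), so only the generator gets grads.
        self.toggle_optimizer(opt_g)
        fake = self.generator(batch)
        g_loss = -self.discriminator(fake).mean()
        opt_g.zero_grad()
        self.manual_backward(g_loss)  # no optimizer argument in recent versions
        opt_g.step()
        self.untoggle_optimizer(opt_g)

        # Discriminator step: now only the discriminator's parameters are live.
        self.toggle_optimizer(opt_d)
        d_loss = self.discriminator(fake.detach()).mean()
        opt_d.zero_grad()
        self.manual_backward(d_loss)
        opt_d.step()
        self.untoggle_optimizer(opt_d)

        self.log("g_loss", g_loss)
        self.log("d_loss", d_loss)

    def configure_optimizers(self):
        return (
            torch.optim.Adam(self.generator.parameters(), lr=1e-3),
            torch.optim.Adam(self.discriminator.parameters(), lr=1e-3),
        )
```

Without the toggling, the generator's backward pass would also accumulate gradients into the discriminator, which the discriminator's own step would then consume incorrectly.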
- Fixed missing call to LightningModule.untoggle_optimizer in the training loop when running gradient accumulation with multiple optimizers (#8284)
- Fixed hash of LightningEnum to work with value instead of name (#8421)
- Fixed a bug where an extra checkpoint was saved at the end of training if the ...
- pytorch_lightning.utilities.grads.grad_norm now raises an exception if parameter norm_type <= 0 (#9765); see the sketch below
- Updated error message for interactive incompatible plugins (#9896)
- Moved the optimizer_step and clip_gradients hook from the Accelerator and TrainingTypePlugin into the PrecisionPlugin (#10143, ...
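A small usage sketch of grad_norm under that constraint; the toy module is a placeholder, and the exact exception type raised for norm_type <= 0 is an assumption (likely ValueError):

```python
import torch
import torch.nn as nn
from pytorch_lightning.utilities.grads import grad_norm

# Toy module, purely illustrative.
model = nn.Linear(4, 2)
model(torch.randn(3, 4)).sum().backward()

# Returns a dict of gradient norms (per parameter plus a total), keyed by name.
norms = grad_norm(model, norm_type=2)
print(norms)

# Per #9765, a non-positive norm_type is rejected; the exception type here
# is an assumption.
try:
    grad_norm(model, norm_type=0)
except ValueError as err:
    print("rejected:", err)
```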