- Added LightningModule.toggle_optimizer (#4058)
- Added LightningModule.manual_backward (#4063)
- Changed: integrated the metrics API with self.log (#3961) (see the sketch below)
- Decoupled Apex (#4052, #4054, #4055, #4056, #4058, #4060, #4061, #4062, #4063, #4064, #4065)
- Renamed all backends to Accelerator (#4066)
- ...
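To ground the self.log entry above (#3961), here is a minimal sketch of logging from training_step. The LitModel class, layer sizes, and hyperparameters are illustrative, not taken from the changelog.

```python
import torch
from torch import nn
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 1)  # toy model for the sketch

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.layer(x), y)
        # self.log replaces the older structured-result metrics objects;
        # on_step/on_epoch control whether the value is recorded per step,
        # aggregated per epoch, or both.
        self.log("train_loss", loss, on_step=True, on_epoch=True, prog_bar=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```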
- Removed the deprecated optimizer argument in LightningModule.manual_backward(); toggling optimizers in manual optimization should be done using LightningModule.{un}toggle_optimizer() (#8287) (see the sketch below)
- Removed the DeepSpeed FP16 exception, as FP32 is now supported (#8462)
- Removed environment variable PL_EXP_VERSION from...
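To make the #8287 entry concrete, here is a minimal manual-optimization sketch in the post-removal style; the module body and loss computation are illustrative, not from the changelog.

```python
import torch
from torch import nn
import pytorch_lightning as pl

class ManualModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # opt into manual optimization
        self.layer = nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        x, y = batch
        loss = nn.functional.mse_loss(self.layer(x), y)
        opt.zero_grad()
        # Post-#8287: manual_backward no longer accepts an `optimizer`
        # argument; it only needs the loss (plus any kwargs forwarded
        # to backward()). With multiple optimizers, pair this with
        # toggle_optimizer/untoggle_optimizer as shown further below.
        self.manual_backward(loss)
        opt.step()
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```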
- LightningOptimizer: manual optimization is more flexible and exposes toggle_model (#5771) (sketched below)
- MlflowLogger: limit parameter value length to 250 characters (#5893)
- Re-introduced fix for Hydra directory sync with multiple processes (#5993)

Deprecated
- Function stat_scores_multiple_classes is deprecated in favor of stat_...
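Returning to the toggle_model change above (#5771): it is exposed as a context manager on LightningOptimizer. A hedged sketch of how it can be used in a manual-optimization training_step; the toy layers and loss are stand-ins, not from the changelog.

```python
import torch
from torch import nn
import pytorch_lightning as pl

class ToggleModelExample(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False
        self.gen = nn.Linear(8, 8)    # toy generator stand-in
        self.disc = nn.Linear(8, 1)   # toy discriminator stand-in

    def training_step(self, batch, batch_idx):
        opt_g, opt_d = self.optimizers()
        # toggle_model wraps toggle/untoggle as a context manager; under DDP
        # it can also skip gradient synchronization during accumulation steps
        # via sync_grad=False.
        with opt_g.toggle_model():
            g_loss = self.disc(self.gen(batch)).mean()  # toy generator loss
            opt_g.zero_grad()
            self.manual_backward(g_loss)
            opt_g.step()

    def configure_optimizers(self):
        return (torch.optim.Adam(self.gen.parameters(), lr=1e-3),
                torch.optim.Adam(self.disc.parameters(), lr=1e-3))
```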
Source: pytorch-lightning/src/lightning/pytorch/core/module.py at master · Lightning-AI/pytorch-lightning
("py:obj", "lightning.pytorch.utilities.memory.is_out_of_cpu_memory"), ("py:func", "lightning.pytorch.utilities.rank_zero.rank_zero_only"), ("py:class", "lightning.pytorch.utilities.types.LRSchedulerConfig"), ("py:class", "lightning.pytorch.utilities.types.OptimizerLRSchedulerConfig"), ...
- Fixed missing call to LightningModule.untoggle_optimizer in the training loop when running gradient accumulation with multiple optimizers (#8284)
- Fixed hash of LightningEnum to work with value instead of name (#8421)
- Fixed a bug where an extra checkpoint was saved at the end of training if the ...
Source: pytorch-lightning/pytorch_lightning/core/lightning.py at 0.7.6 · Lightning-AI/pytorch-lightning
Toggle optimizer is a shortcut in PyTorch Lightning that ensures that only the gradients of a single optimizer are calculated, by setting requires_grad=False for all the parameters updated by the other optimizers. In our case, we're using this to set requires_grad=False for the discriminator parameters while training the generator (see the sketch below).
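A hedged sketch of that pattern, assuming a manual-optimization LightningModule; the toy networks and losses are stand-ins for the real GAN components, not the tutorial's code.

```python
import torch
from torch import nn
import pytorch_lightning as pl

class GAN(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False       # toggle/untoggle requires manual optimization
        self.generator = nn.Linear(16, 16)        # toy stand-in
        self.discriminator = nn.Linear(16, 1)     # toy stand-in

    def training_step(self, batch, batch_idx):
        opt_g, opt_d = self.optimizers()

        # Generator step: toggle_optimizer sets requires_grad=False on every
        # parameter owned exclusively by the *other* optimizers, so the
        # generator loss cannot accumulate gradients into the discriminator.
        # (Signature shown is from recent releases; older 1.x versions also
        # took an optimizer_idx argument.)
        self.toggle_optimizer(opt_g)
        g_loss = self.discriminator(self.generator(batch)).mean()  # toy loss
        opt_g.zero_grad()
        self.manual_backward(g_loss)
        opt_g.step()
        self.untoggle_optimizer(opt_g)   # restore the requires_grad flags

        # Discriminator step, mirrored.
        self.toggle_optimizer(opt_d)
        d_loss = -self.discriminator(batch).mean()                 # toy loss
        opt_d.zero_grad()
        self.manual_backward(d_loss)
        opt_d.step()
        self.untoggle_optimizer(opt_d)

    def configure_optimizers(self):
        return (torch.optim.Adam(self.generator.parameters(), lr=2e-4),
                torch.optim.Adam(self.discriminator.parameters(), lr=2e-4))
```

Pairing every toggle_optimizer with an untoggle_optimizer matters: forgetting the untoggle (the bug fixed in #8284 above) leaves requires_grad=False on the other model's parameters for the next step.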