Added

- `LightningModule.toggle_optimizer` (#4058)
- `LightningModule.manual_backward` (#4063)

Changed

- Integrated metrics API with `self.log` (#3961)
- Decoupled Apex (#4052, #4054, #4055, #4056, #4058, #4060, #4061, #4062, #4063, #4064, #4065)
- Renamed all backends to `Accelerator` (#4066)
- ...
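Taken together, these APIs enable manual optimization inside a `LightningModule`. Below is a minimal sketch of how they compose in a `training_step`; it assumes a recent Lightning release where `toggle_optimizer` takes only the optimizer (older releases also took an `optimizer_idx`), and the model and loss are purely illustrative.

```python
import torch
import pytorch_lightning as pl


class ManualOptModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # opt out of automatic optimization
        self.net = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        # Restrict gradient computation to this optimizer's parameters
        self.toggle_optimizer(opt)
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.net(x), y)
        opt.zero_grad()
        # manual_backward replaces loss.backward() so Lightning can apply
        # precision/accelerator scaling for you
        self.manual_backward(loss)
        opt.step()
        self.untoggle_optimizer(opt)
        # the unified logging API introduced in #3961
        self.log("train_loss", loss)

    def configure_optimizers(self):
        return torch.optim.SGD(self.net.parameters(), lr=0.1)
```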
- `LightningOptimizer`: manual optimization is more flexible and exposes `toggle_model` (#5771)
- `MLFlowLogger`: limit parameter value length to 250 characters (#5893)
- Re-introduced fix for Hydra directory sync with multiple processes (#5993)

Deprecated

- Function `stat_scores_multiple_classes` is deprecated in favor of `stat_...`
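The `toggle_model` method noted above is a context manager on `LightningOptimizer` that combines parameter toggling with control over DDP gradient synchronization. A minimal sketch for a two-optimizer (e.g. GAN-style) step follows; `generator_step_loss` is a hypothetical helper, and the `sync_grad` flag follows the documented behavior of synchronizing gradients only when `True`.

```python
import pytorch_lightning as pl


class GANModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx):
        opt_g, opt_d = self.optimizers()  # two optimizers from configure_optimizers
        # toggle_model() toggles requires_grad for this optimizer's parameters
        # and, when sync_grad=False, skips DDP gradient synchronization
        # (useful while accumulating gradients)
        with opt_g.toggle_model(sync_grad=True):
            loss_g = self.generator_step_loss(batch)  # hypothetical helper
            opt_g.zero_grad()
            self.manual_backward(loss_g)
            opt_g.step()
```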
Optimizer Reinitialization (advanced)

Continuous Integration

Fine-Tuning Scheduler is rigorously tested across multiple CPUs and GPUs and against major Python and PyTorch versions. Each Fine-Tuning Scheduler minor release (major.minor.patch) is paired with a Lightning minor release (e.g. Fine-Tuning Schedu...
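For context, Fine-Tuning Scheduler (the `finetuning-scheduler` package) is consumed as an ordinary Lightning callback. A minimal sketch, assuming the installed FTS version is paired with the corresponding Lightning minor release and using the default implicit schedule:

```python
import pytorch_lightning as pl
from finetuning_scheduler import FinetuningScheduler

# With no explicit schedule, FinetuningScheduler generates a default implicit
# fine-tuning schedule; an explicit YAML schedule can also be supplied
# (parameter name assumed from the project docs; verify before use).
trainer = pl.Trainer(callbacks=[FinetuningScheduler()])
```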
Removed

- Removed deprecated `optimizer` argument in `LightningModule.manual_backward()`; toggling optimizers in manual optimization should be done using `LightningModule.{un}toggle_optimizer()` (#8287)
- Removed DeepSpeed FP16 exception, as FP32 is now supported (#8462)
- Removed environment variable `PL_EXP_VERSION` from...
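A minimal before/after sketch of the #8287 change, assumed to sit inside a manual-optimization `training_step` with `opt = self.optimizers()` and `loss` already computed:

```python
# Before (deprecated, now removed): the optimizer could be passed to manual_backward
self.manual_backward(loss, opt)

# After: manual_backward takes only the loss; scope the step explicitly
self.toggle_optimizer(opt)
self.manual_backward(loss)
opt.step()
opt.zero_grad()
self.untoggle_optimizer(opt)
```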
Pass in the model, loss(es), and StokeOptimizer from above as well as any flags/choices to set different backends/functionality/extensions and any necessary configurations. As an example, we set the device type to GPU, use the PyTorch DDP backend for distributed multi-GPU training, toggle ...
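A minimal sketch of that Stoke construction follows. The keyword names (`gpu`, `distributed`, `batch_size_per_device`, and the `StokeOptimizer` fields) are recalled from Stoke's README and should be treated as assumptions to verify against the library's documentation; the model and loss are stand-ins.

```python
import torch
from stoke import Stoke, StokeOptimizer, DistributedOptions

model = torch.nn.Linear(10, 2)  # stand-in for the model built above

# StokeOptimizer wraps the optimizer class plus its kwargs (per the Stoke README)
opt = StokeOptimizer(
    optimizer=torch.optim.SGD,
    optimizer_kwargs={"lr": 0.01, "momentum": 0.9},
)

# Keyword names below are assumptions from memory of Stoke's README
stoke_model = Stoke(
    model=model,
    optimizer=opt,
    loss=torch.nn.CrossEntropyLoss(),
    batch_size_per_device=32,
    gpu=True,                                  # device type: GPU
    distributed=DistributedOptions.ddp.value,  # PyTorch DDP backend
)
```

Training then proceeds through the Stoke object's own loss/backward/step calls rather than raw `loss.backward()`; that loop, too, should be checked against Stoke's documentation.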