Traceback (most recent call last):
  File "main.py", line 64, in <module>
    model = LitModel.load_from_checkpoint(hparams.checkpoint)
  File "/home/siahuat0727/.local/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 138, in load_from_checkpoint
    model = cls._load_model_sta...
.load_from_checkpoint cannot be called on an instance. Please call it on the class type and make sure the return value is used.

Bug description

I've been using pytorch_lightning for quite a while. Recently, I've started using the newly proposed imports such as:

import lightning as L

pytorch-...
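A minimal sketch of what that error message asks for. The `LitModel` constructor below and the checkpoint path are placeholders, not taken from the issue:

```python
import lightning as L


class LitModel(L.LightningModule):
    # Hypothetical minimal module; the real LitModel's constructor is not
    # shown in the issue, so lr is just an illustrative hyperparameter.
    def __init__(self, lr: float = 1e-3):
        super().__init__()
        self.save_hyperparameters()


# Wrong on recent versions: calling the classmethod on an instance raises
# ".load_from_checkpoint cannot be called on an instance."
# model = LitModel(lr=1e-3).load_from_checkpoint("checkpoint.ckpt")

# Right: call it on the class and keep the returned instance.
model = LitModel.load_from_checkpoint("checkpoint.ckpt")
```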
pytorch_lightning/core/saving.py (26 additions, 24 deletions):

@@ -52,7 +52,6 @@ class ModelIO(object):
    def load_from_checkpoint(
        cls,
        checkpoint_path: str,
        ...
The checkpoint is loaded with the given map_location, but this doesn't seem to affect the model that is created and returned later.

https://github.com/Lightning-AI/lightning/blob/fd4697c62c059fc7b9946e84d91625ecb6efdbe5/src/lightning/pytorch/core/saving.py#L51-L92
- Fixed issue where `Model.load_from_checkpoint("checkpoint.ckpt", map_location=map_location)` would always return model on CPU ([#17308](https://github.com/Lightning-AI/lightning/pull/17308))

## [2.0.1] - 2023-03-30

src/lightning/pytorch/... (9 additions, 4 deletions)
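For reference, a hedged sketch of how map_location is expected to behave after that fix; it reuses the LitModel and checkpoint path placeholders from the earlier sketch, which are not names from the PR:

```python
import torch

# LitModel as defined in the earlier sketch.
# map_location accepts anything torch.load accepts: a device, a string,
# a dict, or a callable.
model = LitModel.load_from_checkpoint(
    "checkpoint.ckpt",
    map_location=torch.device("cuda:0"),
)

# After #17308 the returned model's parameters should follow map_location
# instead of always ending up on CPU.
print(next(model.parameters()).device)  # expected: cuda:0
```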
pytorch_lightning.core.saving.ModelIO.load_from_checkpoint() has args among the arguments it forwards to initialize pl.LightningModule. If the module has multiple constructor arguments, the method doesn't work correctly. Snippets that avoid having to pass the arguments for pl.LightningModule again:
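One way to sidestep this, sketched under the assumption that the module can call save_hyperparameters() in its __init__; MultiArgModel and its arguments are made up for illustration:

```python
import torch
import pytorch_lightning as pl


class MultiArgModel(pl.LightningModule):
    # Hypothetical module with several constructor arguments.
    def __init__(self, in_dim: int, hidden_dim: int, lr: float):
        super().__init__()
        # Record in_dim, hidden_dim and lr in the checkpoint so that
        # load_from_checkpoint can rebuild the module without them.
        self.save_hyperparameters()
        self.net = torch.nn.Linear(in_dim, hidden_dim)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)


# Without save_hyperparameters() every constructor argument has to be
# repeated at load time:
#   MultiArgModel.load_from_checkpoint("model.ckpt", in_dim=32, hidden_dim=64, lr=1e-3)
# With it, the arguments are restored from the checkpoint:
#   MultiArgModel.load_from_checkpoint("model.ckpt")
```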
pip3 install --upgrade git+https://github.com/PyTorchLightning/pytorch-lightning.git

sshleifer (Contributor, Author) commented on Jun 8, 2020:

Tried that, got a better traceback but no solution:

KeyError: 'Trying to restore training state but checkpoint contains only the model. This is probably due to...
pytorch-lightning: 1.0.0
tqdm: 4.41.1

System:
OS: Linux
architecture: 64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020

Additional context

Part of the problem seems to stem from checkpoint_connector.py: ...
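The KeyError above comes from resuming off a weights-only checkpoint. A sketch against more recent ModelCheckpoint/Trainer APIs of how to make sure the full training state ends up in the .ckpt (paths and max_epochs are placeholders):

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

# save_weights_only=False (the default) keeps optimizer, LR-scheduler and
# loop state in the .ckpt; save_weights_only=True is what later produces
# "checkpoint contains only the model" when resuming.
ckpt_cb = ModelCheckpoint(dirpath="checkpoints/", save_weights_only=False)
trainer = pl.Trainer(callbacks=[ckpt_cb], max_epochs=10)

# A full checkpoint can also be written explicitly at any point:
# trainer.save_checkpoint("checkpoints/full_state.ckpt")
```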
How do I modify the code to load the checkpoint and also resume training from it?

Environment:
PyTorch Lightning Version: 1.6.5
Torch: 1.13.0
Python version: 3.8
CUDA Version: 11.4
GPUs: 4x NVIDIA A100-SXM4-40GB
Deepspeed: 0.9.1

More info: No response
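One answer that fits the versions listed above (PL 1.6.5 supports ckpt_path in Trainer.fit): resume training by passing the checkpoint to fit, and use load_from_checkpoint only when you need the weights for inference. LitModel, dm and the paths are placeholders, not from the issue:

```python
import pytorch_lightning as pl

# dm: a LightningDataModule assumed to exist elsewhere.
# model is constructed as usual; its state is overwritten from the checkpoint.
model = LitModel(lr=1e-3)
trainer = pl.Trainer(max_epochs=20)

# ckpt_path restores weights, optimizer/scheduler state and the epoch/step
# counters, then training continues from that point.
trainer.fit(model, datamodule=dm, ckpt_path="checkpoints/last.ckpt")

# For inference only, loading the weights alone is enough:
# model = LitModel.load_from_checkpoint("checkpoints/last.ckpt")
```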
Even with model_test = CoolSystem(hyperparams_test).load_from_checkpoint('checkpoints/try_ckpt_epoch_1.ckpt'), PyTorch Lightning is still complaining that 'dict' object has no attribute 'data_dir'. Am I doing something wrong here?

williamFalcon commented on Mar 7, 2020: ...
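The 'dict' object has no attribute 'data_dir' part usually means hparams came back from the checkpoint as a plain dict. A defensive sketch of the __init__; CoolSystem's real fields are unknown, data_dir is taken only from the error message:

```python
from argparse import Namespace

import pytorch_lightning as pl


class CoolSystem(pl.LightningModule):
    def __init__(self, hparams):
        super().__init__()
        # Old checkpoints can restore hparams as a plain dict, which breaks
        # attribute access such as hparams.data_dir.
        if isinstance(hparams, dict):
            hparams = Namespace(**hparams)
        self.save_hyperparameters(hparams)
        self.data_dir = hparams.data_dir


# And the classmethod is called on the class, not on an instance:
# model_test = CoolSystem.load_from_checkpoint("checkpoints/try_ckpt_epoch_1.ckpt")
```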