s/pytorch_lightning-1.6.0.dev0-py3.8.egg/pytorch_lightning/core/memory.py:16: LightningDeprecationWarning: `pytorch_lightning.core.memory.get_memory_profile` and `pytorch_lightning.core.memory.get_gpu_memory_map` ...
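If migrating off the deprecated helpers has to wait, the warning can be muted by message with the standard library alone (a sketch; LightningDeprecationWarning subclasses DeprecationWarning, so no Lightning import is needed, and the regex below is copied from the warning text above):

    import warnings

    # Mute only this specific deprecation while migrating off the old helpers.
    warnings.filterwarnings(
        "ignore",
        message=r".*pytorch_lightning\.core\.memory\.get_memory_profile.*",
        category=DeprecationWarning,
    )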
Maybe your pip is not referencing the Python installed on your system. Either use an environment like conda or virtualenv, or install Lightning with `python -m pip install pytorch-lightning` and then run `python train.py`. The pip you used for the installation must be referring to something else. Can you post the output of `which pip` and `which python`?
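A quick way to confirm which interpreter and which Lightning install are actually in play, from inside Python (a generic sketch, nothing specific to the poster's setup):

    import sys
    print(sys.executable)  # the interpreter that actually runs train.py

    import pytorch_lightning
    print(pytorch_lightning.__version__, pytorch_lightning.__file__)

If `sys.executable` does not live in the environment that `pip` installed into, the import will fail or pick up a stale copy.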
I recently started using PyTorch Lightning and want to use multiple GPUs to speed up my model training. Among others, I used this example: https://pytorch-lightning.readthedocs.io/en/stable/clouds/cluster_advanced.html#build-your-slurm-script Here is a preview of my code: class model(pl.Light...
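For reference, a minimal sketch of the multi-GPU pattern from the linked guide; the module and data here are placeholders, not the poster's actual model:

    import pytorch_lightning as pl
    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    class LitModel(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(16, 1)

        def training_step(self, batch, batch_idx):
            x, y = batch
            return nn.functional.mse_loss(self.layer(x), y)

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=1e-3)

    if __name__ == "__main__":
        data = DataLoader(
            TensorDataset(torch.randn(256, 16), torch.randn(256, 1)),
            batch_size=32,
        )
        # devices and num_nodes should mirror the SLURM script's
        # --ntasks-per-node and --nodes values.
        trainer = pl.Trainer(accelerator="gpu", devices=2, num_nodes=1,
                             strategy="ddp", max_epochs=2)
        trainer.fit(LitModel(), data)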
pytorch-lightning==1.7.7, torch==1.12.1, transformers==4.23.1. rwoodard-prog (Nov 15, 2022): I think this has to do with how Jupyter handles passing objects to workers. The bug-report script works fine with multiple workers, but the notebook fails in the way described. ...
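A sketch of the usual workaround (an assumption on my part, not confirmed in the thread): objects defined inside a notebook may fail to pickle into spawned DataLoader workers, so keep num_workers=0 under Jupyter and enable workers only in the script run:

    import sys
    import torch
    from torch.utils.data import DataLoader, TensorDataset

    dataset = TensorDataset(torch.randn(128, 8))  # stand-in dataset
    in_notebook = "ipykernel" in sys.modules  # crude Jupyter detection
    loader = DataLoader(dataset, batch_size=32,
                        num_workers=0 if in_notebook else 4)

Alternatively, moving the Dataset class into an importable .py module usually lets multi-worker loading work from the notebook as well.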
pytorch_lightning\loops\base.py", line 145, in run
    self.advance(*args, **kwargs)
File "C:\Users\user\.conda\envs\sti\lib\site-packages\pytorch_lightning\loops\dataloader\evaluation_loop.py", line 110, in advance
    dl_outputs = self.epoch_loop.run(dataloader, dataloader_idx, dl_max_...
"pytorch-lightning==1.9.4", "torch==2.0.1+cu118", "torchdiffeq==0.2.3", "torchmetrics==1.2.0", "torchsde==0.2.5", "torchvision==0.15.2+cu118" ], "conda_packages": null, "hip_compiled_version": "N/A", "hip_runtime_version": "N/A", ...
I implemented a custom DistributedSampler for my own Dataset, but global_rank and world_size are not accessible in a LightningDataModule. These two arguments need to be passed to the DistributedSampler instances used in the train_dataloader and val_dataloader methods respectively. Code: def train_dataloader(self...
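One way around this (a sketch, assuming the DataModule is already attached to a Trainer by the time the dataloader hooks run): read the distributed context from self.trainer inside train_dataloader, and tell Lightning not to inject its own sampler (replace_sampler_ddp=False on 1.x Trainers, use_distributed_sampler=False in 2.x) since a custom one is supplied:

    import pytorch_lightning as pl
    import torch
    from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

    class MyDataModule(pl.LightningDataModule):
        def setup(self, stage=None):
            self.train_set = TensorDataset(torch.randn(1024, 16))  # stand-in Dataset

        def train_dataloader(self):
            # self.trainer is populated once fit() starts, so rank and
            # world size are readable here even though they are not
            # available at __init__ time.
            sampler = DistributedSampler(
                self.train_set,
                num_replicas=self.trainer.world_size,
                rank=self.trainer.global_rank,
            )
            return DataLoader(self.train_set, batch_size=32, sampler=sampler)

The same pattern applies to val_dataloader with a non-shuffling sampler.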