```
torchrun --nnodes 1 --nproc_per_node 8 finetuning.py --enable_fsdp --low_cpu_fsdp --fsdp_config.pure_bf16 --model_name /path_of_model_folder/70B --batch_size_training 1 --dist_checkpoint_root_folder model_checkpoints --dist_checkpoint_folder fine-tuned
```
This PR changed `--pure_bf16` to `--fsdp_config.pure_bf16` in all related documents, because `--pure_bf16` produces the incorrect warning `Warning: unknown parameter pure_bf16`, as mentioned in this issue. This PR also corrected any path containing "examples/", as the examples folder no longer exists. Fixes #Issue 3...
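For context, here is a minimal sketch of why the prefixed flag is needed, assuming a dataclass-style FSDP config and an `update_config`-style helper that routes dotted CLI overrides; the names and logic are illustrative, not the repo's exact API:

```python
from dataclasses import dataclass, fields

@dataclass
class FSDPConfig:
    # Hypothetical subset of the FSDP config; pure_bf16 keeps parameters,
    # gradients, and optimizer state in bfloat16.
    pure_bf16: bool = False
    low_cpu_fsdp: bool = False

def update_config(config, **kwargs):
    # Apply dotted overrides such as fsdp_config.pure_bf16=True. A bare
    # --pure_bf16 has no "fsdp_config." prefix, so it falls through to the
    # "unknown parameter" warning described above.
    valid = {f.name for f in fields(config)}
    for key, value in kwargs.items():
        prefix, _, name = key.partition(".")
        if prefix == "fsdp_config" and name in valid:
            setattr(config, name, value)
        else:
            print(f"Warning: unknown parameter {key}")

cfg = FSDPConfig()
update_config(cfg, **{"fsdp_config.pure_bf16": True})
assert cfg.pure_bf16
```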
If someone explicitly creates an `FSDPConfig` object and passes that to the trainer, that path was never marked as deprecated, so we shouldn't error. We should only error on the deprecation path, which was passing the FSDP config as a dict.
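A minimal sketch of the distinction being described, assuming a validation step where only the dict form is the deprecated path; the class and function names here are illustrative:

```python
from dataclasses import dataclass

@dataclass
class FSDPConfig:
    pure_bf16: bool = False

def validate_fsdp_config(fsdp_config):
    # Only the dict form was ever deprecated, so only that form raises.
    # An explicitly constructed FSDPConfig passes through untouched.
    if isinstance(fsdp_config, dict):
        raise ValueError(
            "Passing fsdp_config as a dict is deprecated; "
            "construct an FSDPConfig instead."
        )
    return fsdp_config

validate_fsdp_config(FSDPConfig(pure_bf16=True))  # ok
# validate_fsdp_config({"pure_bf16": True})       # raises ValueError
```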
For more info: https://github.blog/changelog/2024-03-07-github-actions-all-actions-will-run-on-node20-instead-of-node16-by-default/
[Traceable FSDP2] Set torch._dynamo.config.skip_fsdp_hooks to True by default · pytorch/pytorch (commits 1653f77, 18705e3, 5655f4b, 9696e52, 6790eb5)
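As a sketch of what that default means in practice, assuming a PyTorch build that includes this config: with `skip_fsdp_hooks = True`, Dynamo skips FSDP hooks during tracing, and users who want Traceable FSDP2 would opt back in by flipping the flag.

```python
import torch
import torch._dynamo

# Default after this change: Dynamo skips FSDP hooks while tracing,
# so compiled regions interoperate with eager FSDP.
print(torch._dynamo.config.skip_fsdp_hooks)  # True on builds with this change

# Opting in to Traceable FSDP2: let Dynamo trace through the FSDP2 hooks.
torch._dynamo.config.skip_fsdp_hooks = False
```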