A typical PEFT fine-tuning setup that opts into non-reentrant checkpointing:

```python
from datasets import load_dataset
from peft import get_peft_model
from transformers import AutoTokenizer

gradient_checkpointing_kwargs = {"use_reentrant": False}
model.config.use_cache = False  # KV caching conflicts with gradient checkpointing
peft_model = get_peft_model(model, LORA_CONFIG)
tokenizer = AutoTokenizer.from_pretrained(
    MODEL, token=TOKEN, max_length=8192, padding_side="left"
)

# load data
data = load_dataset("json", data_...
```
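If checkpointing is enabled on the model itself rather than through the trainer, the same dict can be passed to `gradient_checkpointing_enable`; a minimal sketch, assuming a `transformers` model recent enough (>= 4.35) to accept the kwarg:

```python
# Minimal sketch: enable non-reentrant activation checkpointing on the
# base model before wrapping it with PEFT (transformers >= 4.35).
model.gradient_checkpointing_enable(
    gradient_checkpointing_kwargs={"use_reentrant": False}
)
model.config.use_cache = False  # caching must be off while checkpointing
```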
training_args["gradient_checkpointing_kwargs"] = {"use_reentrant": False} if config.mixed_precision == "fp16": training_args["fp16"] = True if config.mixed_precision == "bf16": training_args["bf16"] = True if config.trainer == "reward": training_args["max_length"] = confi...
Inside a forward pass (a diffusers-style 3D UNet block), the flag goes straight to `torch.utils.checkpoint.checkpoint`, branching on the torch version that introduced it:

```python
if is_torch_version(">=", "1.11.0"):  # use_reentrant kwarg exists from torch 1.11
    hidden_states = torch.utils.checkpoint.checkpoint(
        create_custom_forward(resnet), hidden_states, temb, use_reentrant=False
    )
    hidden_states = torch.utils.checkpoint.checkpoint(
        create_custom_forward(temp_conv), hidden_states, num_frames, use_reentrant=False
    )
else:
    hidden_states = torch.utils.checkpoint.checkpoint(
        create_custom_forward(resnet), hidden_states, temb
    )
    hidden_states = torch.utils.checkpoint.checkpoint(
        create_custom_forward(temp_conv), hidden_states, num_frames
    )
```
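The `create_custom_forward` wrapper used above is the usual closure that turns a module into a plain callable for `torch.utils.checkpoint.checkpoint`; a minimal sketch of how it is commonly defined (the name matches the snippet, the body is an assumption):

```python
def create_custom_forward(module):
    # Wrap the module so checkpoint() receives a plain function whose
    # positional arguments are exactly the tensors to recompute from.
    def custom_forward(*inputs):
        return module(*inputs)

    return custom_forward
```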
Related entries from the DeepSpeed release notes:

- Use accelerator to replace cuda in setup and runner by @Andy666G in #5769
- Link GDS blog to site by @tjruwase in #5820
- Non-reentrant checkpointing hook fix by @ic-synth in #5781
- Fix NV references by @tjruwase in #5821
- Fix docs building guide by @tjruwase in #5825
- Update clang-format...
When the flag is left unset, recent PyTorch versions emit this warning:

torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
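Silencing the warning just means passing the flag explicitly at every call site; a minimal sketch, where `block` and `x` are illustrative names:

```python
import torch
from torch.utils.checkpoint import checkpoint

block = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU())
x = torch.randn(4, 16, requires_grad=True)

# Non-reentrant variant (the recommended one): supports keyword arguments,
# nested checkpointing, and inputs that do not require grad.
y = checkpoint(block, x, use_reentrant=False)
y.sum().backward()
```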
training_args["gradient_checkpointing_kwargs"] = {"use_reentrant": False} if config.mixed_precision == "fp16": training_args["fp16"] = True if config.mixed_precision == "bf16": training_args["bf16"] = True if config.trainer == "reward": training_args["max_length"] = config...