Original (pre-fix) snippet from section 14.b, comments translated; the excerpt begins and ends mid-statement:

```python
...lora_config)
unet = get_peft_model(unet, lora_config)

# If resume is set, load the weights from the previous run; you can change
# model_path to point to a different checkpoint.
if resume:
    # Load only the last run's weights here, rather than replacing the whole
    # model object (replacing would be: model = torch.load(...))
    text_encoder = torch.load(os.path...
```
Commit 6934fde (parent a032e79, branch master), Hoper-J, Sep 30, 2024:

> …slightly adjusted the comments. Fix LoRA fine-tuning issue in section 14.b: prevent ineffective LoRA application by removing `get_peft_model()` before loading LoRA weights. Removed redundant code and made minor comment adjustments.
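A sketch of the resume logic after the fix described in the commit, assuming the variables from the excerpt above (`text_encoder`, `unet`, `lora_config`, `resume`, `model_path`); the checkpoint file names are hypothetical, since the original path is truncated:

```python
import os
import torch
from peft import get_peft_model

if resume:
    # torch.load(...) returns the full PEFT-wrapped model saved by the last
    # run, so it replaces the model object outright. Wrapping with
    # get_peft_model() beforehand would be discarded immediately, which is
    # the "ineffective LoRA application" the commit removes.
    # (File names are assumptions; the original path is truncated.)
    text_encoder = torch.load(os.path.join(model_path, "text_encoder.pt"))
    unet = torch.load(os.path.join(model_path, "unet.pt"))
else:
    # Wrap with LoRA adapters only when starting from scratch.
    text_encoder = get_peft_model(text_encoder, lora_config)
    unet = get_peft_model(unet, lora_config)
```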
Related pull request (reviewer: BenjaminBossan, 3 participants); merging it may close the issue "PEFT model doesn't update params when having changed LoRA config".
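A minimal sketch of the gotcha that issue title points at, under the assumption that its usual cause applies: `get_peft_model` mutates the base model in place (injecting adapter layers and freezing base weights), so re-wrapping the same object with a changed `LoraConfig` can leave parameters that never update. The model name and `target_modules` below are illustrative assumptions:

```python
from copy import deepcopy

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")

# Wrap a fresh copy of the base model for each new config instead of
# re-wrapping the same, already-mutated object.
for r in (8, 16):
    config = LoraConfig(r=r, target_modules=["c_attn"])
    model = get_peft_model(deepcopy(base), config)
    model.print_trainable_parameters()
```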