```yaml
lr_scheduler:
  _target_: torch.optim.lr_scheduler.LambdaLR
  _partial_: true
  lr_lambda:
    _target_: fish_speech.scheduler.get_cosine_schedule_with_warmup_lr_lambda
    _partial_: true
    num_warmup_steps: 0
    num_training_steps: ${trainer.max_steps}
    final_lr_ratio: 0.05
```
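Hydra instantiates the inner `_target_` as a `functools.partial` (because of `_partial_: true`) and hands it to `LambdaLR` as `lr_lambda`. As a rough guide, here is a minimal sketch of what such a lambda typically computes, assuming the usual linear-warmup/cosine-decay shape with a floor at `final_lr_ratio`; the actual `fish_speech.scheduler` implementation may differ in details:

```python
import math
from functools import partial

import torch


def get_cosine_schedule_with_warmup_lr_lambda(
    current_step: int,
    *,
    num_warmup_steps: int,
    num_training_steps: int,
    final_lr_ratio: float = 0.0,
) -> float:
    """Multiplier applied to the base LR at `current_step` (a sketch, not the real source)."""
    # Linear warmup: 0 -> 1 over the first num_warmup_steps steps.
    if current_step < num_warmup_steps:
        return current_step / max(1, num_warmup_steps)
    # Cosine decay: 1 -> final_lr_ratio over the remaining steps.
    progress = (current_step - num_warmup_steps) / max(1, num_training_steps - num_warmup_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * min(1.0, progress)))
    return final_lr_ratio + (1.0 - final_lr_ratio) * cosine


# Equivalent to what the config above wires up (the model and optimizer are illustrative):
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
lr_lambda = partial(
    get_cosine_schedule_with_warmup_lr_lambda,
    num_warmup_steps=0,
    num_training_steps=10_000,  # stands in for ${trainer.max_steps}
    final_lr_ratio=0.05,
)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_lambda)
```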
```python
from transformers import TrainingArguments, DataCollatorForLanguageModeling
from adapters import AdapterTrainer  # `adapters` library (formerly adapter-transformers)

training_args = TrainingArguments(
    output_dir="outputs",          # placeholder; the original value is cut off in the excerpt
    max_steps=1875,
    lr_scheduler_type="constant",  # note: warmup_ratio only takes effect with "constant_with_warmup"
    optim="paged_adamw_32bit",     # paged AdamW from bitsandbytes
    learning_rate=0.0002,
    group_by_length=True,          # batch samples of similar length to cut padding
    bf16=True,
    warmup_ratio=0.03,
    max_grad_norm=0.3,
)
trainer = AdapterTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM: no masking
    # further arguments (e.g. the training dataset) are truncated in the source
)
```
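For context, `AdapterTrainer` only updates adapter weights, so the base model needs an adapter attached and activated before this point. A hedged sketch of that setup with the `adapters` library (the checkpoint and adapter name below are placeholders, not taken from the source):

```python
import adapters
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "gpt2"  # placeholder checkpoint; the original model is not shown in the excerpt
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token; the LM collator needs one
model = AutoModelForCausalLM.from_pretrained(base_model)

adapters.init(model)                                # retrofit adapter support onto the HF model
model.add_adapter("demo_adapter", config="seq_bn")  # bottleneck adapter; "demo_adapter" is a placeholder name
model.train_adapter("demo_adapter")                 # freeze base weights, train only the adapter
```

With the trainer built as above, the run is launched with `trainer.train()`, and `model.save_adapter(...)` then stores just the adapter weights rather than the full model.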