Do I have false expectations, or is there still something to configure? What does "CUDA not available" mean in that context? Installed software in the running OS:

S | Name                   | Type    | Version | Arch | Repository
--+------------------------+---------+---------+------+-----------
i | kernel-firmware-nvidia | package | 20250206-1....
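What "CUDA not available" means depends on which application reports it. If the message comes from a PyTorch-based tool, a quick way to check whether the runtime can actually reach the GPU is a snippet like the one below; this is only a minimal diagnostic sketch, and it assumes PyTorch is involved at all, which the original post does not state.

```python
import torch

# Report whether PyTorch's CUDA runtime can see a GPU on this machine.
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("Device count:", torch.cuda.device_count())
    print("Device 0:", torch.cuda.get_device_name(0))
else:
    # torch.version.cuda is None for CPU-only builds of PyTorch; a mismatch
    # between the installed NVIDIA driver and the CUDA version the wheel was
    # built against is another common cause of "CUDA not available".
    print("Built against CUDA:", torch.version.cuda)
```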
        model_load_kwargs['low_cpu_mem_usage'] = False

    # Set RoPE scaling factor
    config = transformers.AutoConfig.from_pretrained(
        model_args.model_name_or_path,
@@ -302,6 +315,7 @@ def train():
        ) if training_args.use_lora and lora_args.q_lora else None,
        **model_load_kwargs,
    )
    toke...
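Outside the diff context, the pattern being changed here is: build a `model_load_kwargs` dict (with `low_cpu_mem_usage` disabled), load the config, optionally adjust the RoPE scaling factor, and forward the kwargs into `from_pretrained`. A self-contained sketch of that flow is below; the model path and the `rope_scaling` values are placeholder assumptions, and the `rope_scaling` attribute only applies to LLaMA/Qwen-style configs.

```python
import transformers

# Hypothetical model path; the original script takes model_args.model_name_or_path.
model_name_or_path = "Qwen/Qwen-7B"

model_load_kwargs = {}
model_load_kwargs["low_cpu_mem_usage"] = False  # materialize weights normally instead of lazily

# Set RoPE scaling factor on the config before loading the model
# (example values; the dict format follows HF LLaMA-family configs).
config = transformers.AutoConfig.from_pretrained(model_name_or_path)
config.rope_scaling = {"type": "linear", "factor": 2.0}

# Pass the prepared config and loading kwargs into from_pretrained.
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    config=config,
    **model_load_kwargs,
)
```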