/home/ultimis/LLM/llama.cpp/src/llama.cpp:9695: GGML_ASSERT(hparams.n_embd_head_k % ggml_blck_size(type_k) == 0) failed
llama_init_from_model: n_seq_max = 1
llama_init_from_model: n_ctx     = 32768
llama_init_from_
#13689 (closed May 22, 2025): GGML_ASSERT(seq_id < n_tokens && "seq_id cannot be larger than n_tokens with pooling_type == MEAN") failed
#12779 (closed May 22, 2025): Eval bug: MUSA backend cause non-sense output on unsloth/deepseek-r1 quantized model
Misc. bug: Metric names ar...
hparams.wav_n_vocab}, 0);
conv1d   = create_tensor(tn(LLM_TENSOR_CONV1D, "weight"), {7, hparams.n_embd_features, hparams.posnet.n_embd}, 0);
conv1d_b = create_tensor(tn(LLM_TENSOR_CONV1D, "bias"),   {1, hparams.posnet.n_embd}, 0);
diff --git a/src/llama.cpp b/src...
llama_model_loader: - kv   9: general.base_model.0.organization str = Google
llama_model_loader: - kv  10: general.base_model.0.repo_url     str = https://huggingface.co/google/gemma-3...
llama_model_loader: - kv  11: general.tags                      arr[str,1] = ["image-text-to-text"]
llama_model_load...