\hf\mosaicml-mpt-7b-chat-gguf\ggml-mosaicml-mpt-7b-chat-Q2_K.gguf (version GGUF V2 (latest))
llama_model_loader: - tensor 0: token_embd.weight q2_K [ 4096, 50432, 1, 1 ]
llama_model_loader: - tensor 1: output.weight q6_K [ 4096, 50432, 1, 1 ]
llama_model_loader: -...
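For reference, a loader trace like the one above is what llama.cpp prints when it opens a GGUF file and enumerates its tensors (here the MPT-7B-chat embedding and output matrices, 4096 x 50432, quantized to Q2_K and Q6_K). Below is a minimal sketch of loading the same file through the llama-cpp-python bindings with verbose output enabled, which surfaces the same llama_model_loader lines; the context size, prompt format, and local path handling are assumptions, not something the log itself confirms.

```python
# Minimal sketch, assuming `pip install llama-cpp-python` and a llama.cpp build
# with MPT/GGUF support. The model path is the one from the loader log above
# and will likely need adjusting to your local layout (e.g. a drive letter).
from llama_cpp import Llama

llm = Llama(
    model_path=r"\hf\mosaicml-mpt-7b-chat-gguf\ggml-mosaicml-mpt-7b-chat-Q2_K.gguf",
    n_ctx=2048,    # assumed context window; adjust to your needs
    verbose=True,  # prints the llama_model_loader tensor listing shown above
)

# Simple completion call; the chat template here is illustrative only.
out = llm(
    "User: Hello, who are you?\nAssistant:",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```

With `verbose=True`, the tensor enumeration and quantization types (q2_K for the token embeddings, q6_K for the output head) are printed at load time, which is a quick way to confirm that the expected GGUF file and quantization variant were actually picked up.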