Here are 115 public repositories matching this topic. The most notable include:

getumbrel/llama-gpt (11k stars): A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device. New: Code Llama support!
bdqfork/go-llama.cpp (5 stars): Go binding for llama.cpp, offering low-level and high-level APIs. Updated Jun 11, 2023.
blav/llama_cpp_openai (3 stars): Lightweight implementation of the OpenAI API on top of local models ...
llama_cpp.rb changelog:
Add example script: https://github.com/yoshoku/llama_cpp.rb/tree/main/examples

[0.2.0] - 2023-06-11
- Bump bundled llama.cpp from master-ffb06a3 to master-4de0334.
- Fix installation files for CUDA.

To install with Metal support:
$ gem install llama_cpp -- --with-metal
...
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
python3 -m pip install -r requirements.txt

Model format conversion (pth -> f16):
python3 convert.py /path_to_model/chinese-alpaca-2-7b/

Model precision conversion (f16 -> q4):
./quantize /path_to_model/chinese-alpaca-2-7b/ggml-model-f16.bi...
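The two conversion steps above can be sketched end to end. The model directory and file names below are placeholders matching the example (the truncated `.bi...` suffix in the text is assumed to be a GGML-era model file), and q4_0 is one common 4-bit quantization preset; treat this as a sketch, not a verified recipe.

```shell
#!/bin/sh
# Sketch of the pth -> f16 -> q4 pipeline described above.
# MODEL_DIR and the file names are illustrative placeholders.
MODEL_DIR="/path_to_model/chinese-alpaca-2-7b"
QUANT="q4_0"                    # one common 4-bit quantization preset

f16="$MODEL_DIR/ggml-model-f16.bin"
# Derive the quantized output name from the f16 name:
out=$(printf '%s' "$f16" | sed "s/f16/${QUANT}/")
echo "$out"   # prints /path_to_model/chinese-alpaca-2-7b/ggml-model-q4_0.bin

# The actual conversion commands, run from the llama.cpp checkout:
#   python3 convert.py "$MODEL_DIR"
#   ./quantize "$f16" "$out" "$QUANT"
```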
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

In order to build llama.cpp you have four different options. Using make:
- On Linux or macOS: make
- On Windows (x86/x64 only; arm64 requires CMake): download the latest Fortran version of w64devkit. Extract w64devkit on you...
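The "make on Linux/macOS, CMake elsewhere" choice above can be expressed compactly; the case on uname is my illustration, not part of the project's README:

```shell
#!/bin/sh
# Pick a build command based on the host OS, mirroring the options above.
os=$(uname -s)
case "$os" in
  Linux|Darwin) build_cmd="make" ;;             # plain make works here
  *)            build_cmd="cmake -B build" ;;   # Windows arm64 etc.: use CMake
esac
echo "$build_cmd"
```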
OK, as for llama.cpp: the project actually provides prebuilt executables; for details see here: github.com/ggerganov/ll.... Ordinary Windows users usually only need to pick a release like llama-b2084-bin-win-openblas-x64.zip. If you have a high-performance GPU, you can pick a release like llama-b2084-bin-win-cublas-cu12.2.0-x64.zip ...
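The choice between the two release assets can be sketched as a tiny picker. HAS_NVIDIA_GPU is a made-up flag for illustration, and b2084 is simply the build number from the example file names above:

```shell
#!/bin/sh
# Pick a llama.cpp Windows release asset: CUDA build for NVIDIA GPUs,
# OpenBLAS build otherwise. HAS_NVIDIA_GPU is a hypothetical flag.
build="b2084"
if [ "${HAS_NVIDIA_GPU:-no}" = "yes" ]; then
  asset="llama-${build}-bin-win-cublas-cu12.2.0-x64.zip"
else
  asset="llama-${build}-bin-win-openblas-x64.zip"
fi
echo "$asset"   # prints the OpenBLAS asset when HAS_NVIDIA_GPU is unset
```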
LLM inference in C/C++. Contribute to ggml-org/llama.cpp development by creating an account on GitHub.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build

Then build the application; installing Visual Studio 2022 and CMake is recommended here. Click Configure until there are no red errors. If you need GPU support, check LLAMA_CUDA, but this requires CUDA Toolkit 12.1 to be installed on your machine. Then click Generate, then Open Project to open the solution in Visual Studio and build it, as follows...
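The GUI flow above also has a command-line equivalent. This is a hedged sketch assuming the LLAMA_CUDA option named in the text (older llama.cpp releases called this option LLAMA_CUBLAS); USE_GPU is a hypothetical switch of my own:

```shell
#!/bin/sh
# Command-line equivalent of the Configure/Generate/build flow above.
# USE_GPU is a hypothetical switch; LLAMA_CUDA is the option named in the text.
use_gpu="${USE_GPU:-OFF}"
cmake_flags="-DLLAMA_CUDA=${use_gpu}"
echo "$cmake_flags"   # prints -DLLAMA_CUDA=OFF when USE_GPU is unset

# In the llama.cpp checkout:
#   cmake -B build $cmake_flags            # Configure + Generate
#   cmake --build build --config Release   # build, instead of opening Visual Studio
```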