Here are 115 public repositories matching this topic.
getumbrel/llama-gpt (11k stars): A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device. New: Code Llama support!
bdqfork/go-llama.cpp (5 stars): Go binding for llama.cpp, offering low-level and high-level APIs. Updated Jun 11, 2023.
blav/llama_cpp_openai (3 stars): Lightweight implementation of the OpenAI API on top of local models ...
xyc/llama.cpp: LLM inference in C/C++.
yoshoku/llama_cpp.rb changelog excerpt: Add example script: https://github.com/yoshoku/llama_cpp.rb/tree/main/examples
[0.2.0] - 2023-06-11: Bump bundled llama.cpp from master-ffb06a3 to master-4de0334. Fix installation files for CUDA.
$ gem install llama_cpp -- --with-metal ...
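The changelog excerpt above shows build flags being passed through gem install to the gem's native extension. A minimal sketch of the two invocations, assuming the plain install (without --with-metal) gives the default CPU build:

# default build of the bundled llama.cpp
$ gem install llama_cpp
# pass --with-metal to the extension build for Metal support on macOS
$ gem install llama_cpp -- --with-metal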
VS Code extension for FIM completions: https://github.com/ggml-org/llama.vscode
Universal tool call support in llama-server: https://github.com/ggml-org/llama.cpp/pull/9639
Vim/Neovim plugin for FIM completions: https://github.com/ggml-org/llama.vim ...
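Both the llama.vscode and llama.vim plugins are designed to connect to a locally running llama-server instance. A minimal sketch of starting one; the model path and port here are placeholders, not values prescribed by the plugins:

# start a local llama-server for the editor plugins to connect to
$ llama-server -m ./models/model.gguf --host 127.0.0.1 --port 8080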
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
In order to build llama.cpp you have four different options. Using make: on Linux or macOS, run make. On Windows (x86/x64 only; arm64 requires CMake): download the latest fortran version of w64devkit. Extract w64devkit on you...
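Collected into one sequence, the Linux/macOS path from the snippet above looks like the following (a sketch; recent llama.cpp versions treat CMake as the primary build system, so the make route may not apply to a current checkout):

# clone the repository and build with make (Linux / macOS)
$ git clone https://github.com/ggerganov/llama.cpp
$ cd llama.cpp
$ make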
psugihara/FreeChat: llama.cpp-based AI chat app for macOS.
pip install llama-cpp-python \
  --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/<cuda-version>
Where <cuda-version> is one of the following: cu121 (CUDA 12.1), cu122 (CUDA 12.2), cu123 (CUDA 12.3), cu124 (CUDA 12.4). For example, to install the CUDA 12.1 wheel: pip insta...
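Following the pattern above, the cu121 example expands to the command below (a sketch; substitute the tag that matches your installed CUDA toolkit):

# install the prebuilt llama-cpp-python wheel for CUDA 12.1
$ pip install llama-cpp-python \
    --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121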
ggml-org/llama.cpp: LLM inference in C/C++.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build
Then build the application; installing Visual Studio 2022 and CMake is recommended. In the CMake GUI, click Configure until no red errors remain. If you want to use the GPU, enable LLAMA_CUDA, but this requires CUDA Toolkit 12.1 to be installed on your machine. Then click Generate, then click Open Project to open the solution in Visual Studio and build it, as shown...
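The same configuration can also be done from the command line instead of the CMake GUI. A sketch using the LLAMA_CUDA option named above; note that newer llama.cpp releases have renamed build options (e.g. to GGML_CUDA), so check the current build documentation for the exact flag:

# command-line equivalent of the Configure / Generate / build steps, run from the repository root
$ cmake -B build -DLLAMA_CUDA=ON       # omit -DLLAMA_CUDA=ON for a CPU-only build
$ cmake --build build --config Release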