llama.cpp is an LLM inference engine written in C/C++, developed at https://github.com/ggml-org/llama.cpp.
Run the container with GPU access, mapping the Streamlit port and a local documents directory:

docker run -d -it --gpus all -p 8501:8501 -v PATH/TO/docs:/LlamaCpp_AllUNeed/docs --name alpaca-chat alpaca-chat sh

4. Open a terminal inside the Docker container:

docker exec -it alpaca-chat sh

Launch Alpaca-2: Chat:

streamlit run chat.py

Launch Alpaca-2: Retrieval QA:

streamlit run qa.py
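The container launch above is easy to get subtly wrong (the port mapping and the mount target must match the image's expectations), so here is a small sketch that assembles the same `docker run` line from its parameters. The helper name is ours; the image name `alpaca-chat`, port 8501, and mount path `/LlamaCpp_AllUNeed/docs` come from the command above.

```python
import shlex

def alpaca_chat_run_command(docs_path: str, port: int = 8501,
                            name: str = "alpaca-chat") -> str:
    """Assemble the `docker run` line shown above.

    docs_path is the host directory that will be bind-mounted at
    /LlamaCpp_AllUNeed/docs inside the container.
    """
    args = [
        "docker", "run", "-d", "-it", "--gpus", "all",
        "-p", f"{port}:8501",                           # host:container port
        "-v", f"{docs_path}:/LlamaCpp_AllUNeed/docs",   # bind-mount the docs
        "--name", name, name, "sh",
    ]
    return " ".join(shlex.quote(a) for a in args)

print(alpaca_chat_run_command("/home/me/docs"))
```

Printing the command instead of running it lets you inspect the mapping before handing it to a shell.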
pip install llama-cpp-python \
  --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/<cuda-version>

Where <cuda-version> is one of the following: cu121 (CUDA 12.1), cu122 (CUDA 12.2), cu123 (CUDA 12.3), cu124 (CUDA 12.4). For example, to install the CUDA 12.1 wheel:

pip install llama-cpp-python \
  --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
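The CUDA-version-to-index mapping above is mechanical, so it can be captured in a few lines. This is a sketch: the function name is ours; the URL pattern and the four supported versions are exactly those listed above.

```python
# Wheel index tags for the CUDA versions listed above.
SUPPORTED_CUDA = {"12.1": "cu121", "12.2": "cu122", "12.3": "cu123", "12.4": "cu124"}

def wheel_index_url(cuda_version: str) -> str:
    """Return the --extra-index-url for a prebuilt llama-cpp-python CUDA wheel."""
    tag = SUPPORTED_CUDA.get(cuda_version)
    if tag is None:
        raise ValueError(f"no prebuilt wheel index for CUDA {cuda_version}")
    return f"https://abetlen.github.io/llama-cpp-python/whl/{tag}"

print(wheel_index_url("12.1"))
# -> https://abetlen.github.io/llama-cpp-python/whl/cu121
```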
Add java-llama.cpp as a git submodule:

git submodule add https://github.com/kherud/java-llama.cpp

Declare the library as a source in your build.gradle:

android {
    val jllamaLib = file("java-llama.cpp")
    // Execute "mvn compile" if folder target/ doesn't exist at ./java-llama.cpp/
    if (!file("$jllamaLib/target").exists()) {
        exe...
Download the llama.cpp project from GitHub:

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

Build, for either CPU or GPU:

# CPU: run in the llama.cpp root directory
make

# GPU: run in the llama.cpp root directory
make LLAMA_CUDA=1

Model format conversion. Create a new conda virtual environment:

conda create -n llamacpp python==3.10
# run in the llama.cpp root directory...
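The clone-and-build steps above can be sketched as a small command planner that picks the CPU or GPU variant. This is a hedged illustration: the function name is ours, and it only assembles the commands shown above (with `LLAMA_CUDA=1` selecting the CUDA build) rather than running them.

```python
def build_commands(gpu: bool = False) -> list[list[str]]:
    """Return, in order, the shell commands for fetching and building
    llama.cpp as described above (`make` is run from the repo root)."""
    return [
        ["git", "clone", "https://github.com/ggerganov/llama.cpp"],
        ["cd", "llama.cpp"],
        # GPU build enables CUDA via the make variable shown above.
        ["make", "LLAMA_CUDA=1"] if gpu else ["make"],
    ]

for cmd in build_commands(gpu=True):
    print(" ".join(cmd))
```

Keeping the commands as data makes it easy to print them for review or feed them to a process runner of your choice.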