GitHub - Lightning-AI/lit-llama: Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed. github.com/Lightning-AI/lit-llama Background — hardware: a single RTX 4090 with 24 GB of VRAM. ...
Convert the model to GGUF format so it can be used in ollama and llama.cpp. Install llama.cpp:
sudo apt update
sudo apt install build-essential python3-pip python3-dev python3-venv gcc g++ make jq
mkdir llama
cd llama
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build
cd build
sudo apt install c...
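The remaining steps can be sketched as follows, assuming a recent llama.cpp checkout and a Hugging Face model directory at a hypothetical `$MODEL_DIR`; the converter script name and binary paths have changed across llama.cpp versions, so verify them against your checkout:

```shell
# Hypothetical model location; substitute your own checkpoint directory.
MODEL_DIR=/path/to/hf-model
OUT="${MODEL_DIR##*/}.gguf"            # derives the output name, e.g. hf-model.gguf

# Build llama.cpp (recent versions use CMake rather than plain make)
cmake -B build
cmake --build build --config Release

# Convert the Hugging Face checkpoint to GGUF at fp16
python3 convert_hf_to_gguf.py "$MODEL_DIR" --outfile "$OUT" --outtype f16

# Optionally quantize to 4-bit to cut the memory footprint
./build/bin/llama-quantize "$OUT" "${OUT%.gguf}-q4_k_m.gguf" Q4_K_M
```

The resulting `.gguf` file can be loaded directly by llama.cpp, or referenced from an ollama Modelfile.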
litgpt finetune \
  --config https://raw.githubusercontent.com/Lightning-AI/litgpt/main/config_hub/finetune/llama-2-7b/lora.yaml

✅ Use configs to customize training. Configs let you customize training for all granular parameters.
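As an illustration, a LoRA fine-tuning config of this kind exposes knobs along these lines. This is a sketch, not a verbatim copy of the config hub's lora.yaml; check that file for the authoritative keys and defaults:

```yaml
# Illustrative fragment only; keys and values may differ by LitGPT version.
checkpoint_dir: checkpoints/meta-llama/Llama-2-7b-hf
out_dir: out/finetune/lora
precision: bf16-true

# LoRA-specific hyperparameters
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05

train:
  micro_batch_size: 1
  lr_warmup_steps: 100
  epochs: 4
```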
Lit-StableLM: a hackable implementation of the StableLM/Pythia language models based on nanoGPT, supporting flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training.
I just added the new Llama 3.2 1B and 3B models to LitGPT, the open-source LLM library I help develop, which focuses on efficiency and code readability. LitGPT lets you fine-tune and use these models in the cloud or on a laptop. So, if you're looking for something to play with over the weekend:

1. Fine-tune the model:
litgpt finetune_lora meta-llama/Llama-3.2-1B \
  --data JSON \
  --data.json_path ...
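The JSON file passed via `--data.json_path` is commonly an Alpaca-style list of instruction records; the exact schema accepted by LitGPT's JSON data module should be checked against its docs, but a typical record looks like this (contents are invented for illustration):

```json
[
  {
    "instruction": "Summarize the following sentence.",
    "input": "LitGPT supports fine-tuning Llama 3.2 on a single GPU.",
    "output": "LitGPT can fine-tune Llama 3.2 on one GPU."
  }
]
```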
LlamaIndex 🦙 (@llama_index): Build a high-speed RAG chatbot with Llama 3 using @GroqInc, @chainlit_io, and @llama_index. A great resource from Jayita B. that shows not only how to build an advanced RAG indexing/query pipeline, but also how to turn it into a full-stack application with fast responses.
Deploying a vision model and an LLM (Llama 3.2) with LitServe, from The School of AI EMLO-V4 course assignment: https://theschoolof.ai/#programs
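A minimal serving sketch, assuming LitGPT's LitServe-based `litgpt serve` command and its `/predict` endpoint behave as in recent releases; verify the command, port, and payload shape against the LitGPT documentation before relying on them:

```shell
# Start a LitServe-backed server for the model (default port 8000 assumed)
litgpt serve meta-llama/Llama-3.2-1B &

# Once the server is up, send a prompt to the predict endpoint
PAYLOAD='{"prompt": "What is LitServe?"}'
curl -s -X POST http://127.0.0.1:8000/predict \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD"
```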