| Model | #Total Params | #Active Params | Context Length | Download |
| :---: | :---: | :---: | :---: | :---: |
| DeepSeek-Coder-V2-Lite-Base | 16B | 2.4B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Base) |
| DeepSeek-Coder-V2-Lite-Instruct | 16B | 2.4B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) |

...
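If you want to fetch the weights from the links above, a minimal download sketch with `huggingface_hub` might look like the following; the `local_dir` path is an illustrative assumption, not part of the original tutorial.

```python
# Minimal download sketch using huggingface_hub; the local_dir path is a
# hypothetical example, not the tutorial's prescribed location.
from huggingface_hub import snapshot_download

model_dir = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct",
    local_dir="/root/autodl-tmp/DeepSeek-Coder-V2-Lite-Instruct",  # assumed path
)
print(model_dir)
```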
In this section we briefly introduce how to LoRA-fine-tune the DeepSeek-Coder-V2-Lite-Instruct model with frameworks such as transformers and peft. LoRA is an efficient fine-tuning method; for a deeper look at its principles, see the blog post: [知乎|深入浅出Lora](https://zhuanlan.zhihu.com/p/650197598). This tutorial also provides a [notebook](./04-DeepSeek-Coder... in the same directory.
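For intuition about why LoRA is parameter-efficient: rather than updating a full weight matrix W, it trains a low-rank update BA added on top of the frozen W. The toy layer below is an illustrative sketch of this idea only (names and sizes are made up), not the peft implementation.

```python
# Toy illustration of the LoRA idea: W stays frozen, only the low-rank
# factors A and B are trained, so trainable params drop from d*d to 2*r*d.
import torch
import torch.nn as nn

class ToyLoRALinear(nn.Module):
    def __init__(self, d: int, r: int, alpha: float = 32.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d, d), requires_grad=False)  # frozen W
        self.lora_A = nn.Parameter(torch.randn(r, d) * 0.01)  # trained
        self.lora_B = nn.Parameter(torch.zeros(d, r))         # trained, zero-init
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output of the frozen layer plus the scaled low-rank update.
        return x @ self.weight.T + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = ToyLoRALinear(d=4096, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable, "trainable vs", 4096 * 4096, "in the full matrix")  # 65536 vs 16777216
```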
- [ ] DeepSeek-Coder-V2-Lite-Instruct FastApi deployment and invocation
- [ ] DeepSeek-Coder-V2-Lite-Instruct langchain integration
- [ ] DeepSeek-Coder-V2-Lite-Instruct WebDemo deployment
- [ ] DeepSeek-Coder-V2-Lite-Instruct vLLM deployment and invocation
- [ ] DeepSeek-Coder-V2-Lite-Instruct Lora fine-tuning
A Gitee mirror of https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct is available at https://gitee.com/mingkee168/DeepSeek-Coder-V2-Lite-Instruct.git. The mirror repository declares no open-source license file (LICENSE); check the project description and its upstream dependencies before use.
# DeepSeek-Coder-V2-Lite-Instruct WebDemo Deployment

## Environment Setup

Rent a GPU machine with 48 GB of VRAM (e.g. 2 * RTX 3090) on the [AutoDL](https://www.autodl.com/) platform, and select the image `PyTorch`-->`2.1.0`-->`3.10(ubuntu22.04)`-->`12.1`.
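Once the environment is ready, a WebDemo can be served with a small script. The sketch below assumes a streamlit-based chat page and a locally downloaded checkpoint path; both are assumptions for illustration, and the tutorial's actual demo script may differ.

```python
# Minimal WebDemo sketch with streamlit (an assumed demo stack; the
# tutorial's actual script may differ). Run with: streamlit run demo.py
import streamlit as st
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "/root/autodl-tmp/DeepSeek-Coder-V2-Lite-Instruct"  # assumed path

@st.cache_resource  # load the model once per server process
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_PATH, torch_dtype=torch.bfloat16,
        device_map="auto", trust_remote_code=True)
    return tokenizer, model

tokenizer, model = load_model()
st.title("DeepSeek-Coder-V2-Lite-Instruct WebDemo")

if prompt := st.chat_input("Ask a coding question"):
    st.chat_message("user").write(prompt)
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
    outputs = model.generate(inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, not the prompt.
    reply = tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
    st.chat_message("assistant").write(reply)
```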
Tutorial: https://github.com/datawhalechina/self-llm/tree/master/DeepSeek-Coder-V2
```python
# Excerpt of the DeepseekV2Config default fields (truncated in the source).
model_type: str = "deepseek_v2"
vocab_size: int = 102400
hidden_size: int = 4096
intermediate_size: int = 11008
moe_intermediate_size: int = 1407
num_hidden_layers: int = 30
num_attention_heads: int = 32
num_key_value_heads: int = 32
n_shared_experts: Optional[int] = None
n...
```
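These defaults can also be read off the checkpoint at runtime; a short sketch, assuming the repo's bundled config code is loaded via `trust_remote_code`:

```python
# Inspect the config that ships with the checkpoint; the field values above
# are class defaults and may be overridden by the repo's config.json.
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True)
print(config.hidden_size, config.num_hidden_layers, config.vocab_size)
```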
# DeepSeek-Coder-V2-Lite-Instruct Lora Fine-tuning

In this section we briefly introduce how to LoRA-fine-tune the DeepSeek-Coder-V2-Lite-Instruct model with frameworks such as transformers and peft. LoRA is an efficient fine-tuning method; for a deeper look at its principles, see the blog post: [知乎|深入浅出Lora](https://zhuanlan.zhihu.com/p/650197598).
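To make the training flow concrete before opening the notebook, here is a hedged end-to-end sketch with `transformers.Trainer`; the prompt template, LoRA target module names, dataset, and hyperparameters are all illustrative assumptions rather than the notebook's exact settings.

```python
# Hedged end-to-end LoRA fine-tuning sketch; prompt template, target
# modules, dataset, and hyperparameters are illustrative assumptions.
import torch
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Trainer, TrainingArguments)
from peft import LoraConfig, TaskType, get_peft_model

model_path = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, trust_remote_code=True)
model = get_peft_model(model, LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed names
    r=8, lora_alpha=32, lora_dropout=0.1))
model.print_trainable_parameters()

def tokenize(example):
    # Concatenate instruction and answer; mask the prompt in the labels
    # with -100 so only the answer contributes to the loss.
    prompt = tokenizer(f"User: {example['instruction']}\n\nAssistant: ",
                       add_special_tokens=False)
    answer = tokenizer(example["output"] + tokenizer.eos_token,
                       add_special_tokens=False)
    input_ids = prompt["input_ids"] + answer["input_ids"]
    labels = [-100] * len(prompt["input_ids"]) + answer["input_ids"]
    return {"input_ids": input_ids,
            "attention_mask": [1] * len(input_ids),
            "labels": labels}

# Tiny toy dataset for illustration only.
train_dataset = Dataset.from_list([
    {"instruction": "Write a Python hello world.", "output": "print('hello')"},
]).map(tokenize, remove_columns=["instruction", "output"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./lora-out",  # assumed path
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=1e-4,
        logging_steps=1),
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer=tokenizer, padding=True),
)
trainer.train()
```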