🔥 Our WizardCoder-15B-v1.0 model achieves 57.3 pass@1 on the HumanEval benchmark, which is 22.3 points higher than the SOTA open-source Code LLMs. 🔥 We released WizardCoder-15B-v1.0 trained with 78k ev
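For context, pass@1 on HumanEval is usually reported with the unbiased pass@k estimator from the HumanEval evaluation harness. A minimal sketch (the function name is ours; n is samples drawn per problem, c is how many passed the tests):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k).

    n: total completions sampled for a problem
    c: completions that passed the unit tests
    k: budget of attempts being scored
    """
    if n - c < k:
        # Not enough failures to fill a k-sample draw, so success is certain.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Average this per-problem score over the benchmark to get the reported pass@1.
```

With k=1 this reduces to the fraction of samples that pass, averaged over problems.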
I added the model record { model_url: "https://huggingface.co/mlc-ai/mlc-chat-WizardCoder-15B-V1.0-q4f32_1/resolve/main/", local_id: "WizardCoder-15B-V1.0-q4f32_1" }, then added the libmap entry "WizardCoder-15B-V1.0-q4f32_1": "https://raw.githubusercontent.com/mlc-ai/binary-mlc-llm-libs/main/WizardCod...
I started the vLLM server with the command below: CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.api_server --model WizardLM/WizardCoder-15B-V1.0 --tensor-parallel-size 2 --trust-remote-code The output is: INFO 08-14 20:...
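Once that server is up, it can be queried over HTTP. A minimal client sketch, assuming the demo api_server is listening on the default localhost:8000 and exposes its /generate endpoint (the payload fields map onto vLLM sampling parameters; the helper names are ours):

```python
import json
from urllib import request

def build_generate_payload(prompt: str, max_tokens: int = 256,
                           temperature: float = 0.0) -> dict:
    # Fields the demo /generate endpoint accepts alongside the prompt.
    return {
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def query_vllm(prompt: str, url: str = "http://localhost:8000/generate",
               **kwargs) -> dict:
    # POST the JSON payload and return the decoded response body.
    data = json.dumps(build_generate_payload(prompt, **kwargs)).encode("utf-8")
    req = request.Request(url, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With --tensor-parallel-size 2 the model weights are sharded across both visible GPUs, so a single request transparently uses both.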
If you want to run inference with WizardLM/WizardCoder-15B/3B/1B-V1.0, please change stop_tokens = [''] to stop_tokens = ['<|endoftext|>'] in the script. Citation Please cite the repo if you use the data, method, or code in this repo. @misc{luo2023wizardcoder, title={WizardCo...
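The effect of that stop_tokens setting is simply to cut the decoded completion at the first occurrence of a stop string. A minimal sketch of that post-processing step (the function is ours, not from the repo's script):

```python
def truncate_at_stop_tokens(text: str,
                            stop_tokens: tuple = ("<|endoftext|>",)) -> str:
    # Return the completion up to the earliest stop token, if any appears.
    earliest = len(text)
    for tok in stop_tokens:
        idx = text.find(tok)
        if idx != -1:
            earliest = min(earliest, idx)
    return text[:earliest]
```

With stop_tokens = [''] every position matches the empty string, which is why the script must be changed to the model's real end-of-text marker.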
Thanks for fixing #254. After updating the code to the latest version, I executed the following command: python -m vllm.entrypoints.openai.api_server --model /home/foo/workshop/text-generation-webui/models/WizardLM_WizardCoder-15B...