VE FORBRYDERNE - Contributed many features such as the Editing overhaul, Adventure Mode, expansions to the world info section, breakmodel integration, scripting support, API, softprompts and much more, as well as vastly improving the TPU compatibility and integrating external code into KoboldAI so...
ComfyUI can now be used as an image generation backend API from within KoboldAI Lite. No workflow customization is necessary. Note: ComfyUI must be launched with the flags --listen --enable-cors-header '*' to enable API access. Then you may use it normally like any other Image Gen back...
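For example, a minimal sketch of launching ComfyUI with those flags from Python (this assumes a standard ComfyUI checkout where main.py is the entry point; adjust the path for your install):

```python
import subprocess

# Minimal sketch, assuming a standard ComfyUI checkout where main.py is the
# entry point. --listen exposes the server beyond localhost and
# --enable-cors-header '*' allows cross-origin requests from the browser UI.
subprocess.run([
    "python", "main.py",
    "--listen",
    "--enable-cors-header", "*",
])
```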
It's a single self-contained distributable from Concedo that builds off llama.cpp and adds a versatile Kobold API endpoint, additional format support, Stable Diffusion image generation, backward compatibility, as well as a fancy UI with persistent stories, editing tools, save formats, memory, ...
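For illustration, a minimal sketch of calling the Kobold API endpoint from Python, assuming koboldcpp is running locally on its default port 5001 (the endpoint path and response shape follow the standard Kobold API; adjust the URL for your setup):

```python
import json
import urllib.request

# Minimal sketch: call the Kobold API endpoint of a locally running koboldcpp
# instance. The default port 5001 is assumed here.
payload = {
    "prompt": "Once upon a time,",
    "max_length": 80,       # number of tokens to generate
    "temperature": 0.7,
}
req = urllib.request.Request(
    "http://localhost:5001/api/v1/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# The Kobold API returns the generated text under results[0].text.
print(result["results"][0]["text"])
```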
FYI, the adapter can be sent over the API, but it also accepts plain JSON files that can be loaded independently of any frontend such as open-webui. Here's a sample file: https://github.com/LostRuins/koboldcpp/wiki#what-is---chatcompletionsadapter ...
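The wiki page linked above contains the authoritative sample. Purely as an illustration, writing an Alpaca-style adapter file from Python might look like the sketch below; the exact field names should be taken from the wiki sample, and the ones used here are assumptions:

```python
import json

# Illustrative only: treat these keys as assumptions and check the linked wiki
# sample for the authoritative adapter format. This writes an Alpaca-style
# adapter to adapter.json, which can then be loaded as a plain JSON file.
adapter = {
    "system_start": "### Instruction:\n",
    "system_end": "\n",
    "user_start": "### Instruction:\n",
    "user_end": "\n",
    "assistant_start": "### Response:\n",
    "assistant_end": "\n",
}

with open("adapter.json", "w", encoding="utf-8") as f:
    json.dump(adapter, f, indent=2)
```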
For Jetson users: if you have a Jetson Orin, you can try this: Official Support. If you are using an older model (Nano/TX2), some additional steps are needed before compiling.

Using make:
make LLAMA_CUDA=1

Using CMake:
cmake -B build -DLLAMA_CUDA=ON
cmake --build build --config Release
...
(or any other mention of future information)
- The user is asking a question that can be answered by searching the internet and is not part of your general knowledge.

APIRequest

You can send API requests with different methods such as GET, POST, PUT, PATCH, and DELETE to ...
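Since the APIRequest description above is cut off, here is a minimal, generic sketch of issuing requests with those HTTP methods using Python's requests library; the URL and payloads are placeholders, not part of the original text:

```python
import requests

BASE = "https://api.example.com/items"   # placeholder URL for illustration

# Each HTTP method maps to a requests helper of the same name.
created = requests.post(BASE, json={"name": "widget"})         # create a resource
fetched = requests.get(f"{BASE}/1")                            # read it back
replaced = requests.put(f"{BASE}/1", json={"name": "gadget"})  # full update
patched = requests.patch(f"{BASE}/1", json={"name": "gizmo"})  # partial update
deleted = requests.delete(f"{BASE}/1")                         # remove it

for resp in (created, fetched, replaced, patched, deleted):
    print(resp.request.method, resp.status_code)
```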
The current OpenAI-like API uses hardcoded chat templates. This PR implements a non-breaking adapter that users can supply to work with models requiring different chat templates. Testing request against Mistra...
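For context, a plain request to the OpenAI-like chat completions endpoint looks roughly like the sketch below (koboldcpp's default port 5001 is assumed); how the per-request adapter from this PR is attached is defined by the PR itself and is not shown here:

```python
import json
import urllib.request

# Rough sketch of a request to the OpenAI-compatible chat completions endpoint
# exposed by koboldcpp on its default port 5001.
payload = {
    "model": "koboldcpp",          # largely informational for a local server
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello."},
    ],
    "max_tokens": 64,
}
req = urllib.request.Request(
    "http://localhost:5001/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```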