Closes #15200

Hard-code context windows for the most popular models. In the future we should probably allow overriding this via settings, or determine it dynamically via the API. Depending on hardware, users may wish to increase their timeout when working with these models:

```json
"language_models": {
  "ollama": {
    "low_...
```
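A minimal sketch of what "hard-coding context windows" could look like: a lookup from model name to context length, with the model family taken from the part before the `:` tag and a conservative fallback for unknown models. The function name, the specific models listed, and their window sizes are illustrative assumptions, not the actual implementation in this PR.

```rust
// Illustrative sketch: map a model name to a hard-coded context window.
// Model names and window sizes are assumptions for demonstration only.
fn context_window(model: &str) -> usize {
    // Ollama model names look like "llama3.1:8b"; the family is the part
    // before the ':' tag.
    match model.split(':').next().unwrap_or(model) {
        "llama3.1" => 128_000,
        "llama3" => 8_192,
        "llama2" => 4_096,
        "mistral" => 32_768,
        // Fall back to Ollama's historical default num_ctx for unknown models.
        _ => 2_048,
    }
}

fn main() {
    // The ":8b" tag is ignored; only the family selects the window.
    println!("{}", context_window("llama3.1:8b"));
    println!("{}", context_window("some-unknown-model"));
}
```

A settings override, as suggested above, would simply consult a user-provided map before falling through to this table.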
**What is the issue?**

It seems like Ollama (non-Docker) models crash and restart while any output is being processed. With a 70k context:
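For reference, the context length Ollama allocates is controlled by the `num_ctx` option in a generate request; a request body like the following (model name and prompt are placeholders) is the kind of call that asks for a ~70k-token window and triggers the crash described here:

```json
{
  "model": "llama3.1",
  "prompt": "...",
  "options": { "num_ctx": 70000 }
}
```

Larger `num_ctx` values increase KV-cache memory use, so on constrained hardware the server can be killed and restarted mid-generation.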