INFO: 172.17.0.1:47950 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/pydantic/type_adapter.py", line 270, in _init_core_attrs
    self._core_schema = _g...
in generate
| async for output in await self.add_request(
| File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 113, in generator
| raise result
| File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 55, in _log_ta...
An example project folder like this one, where `test_completions_service.py` uses a fixture defined in `conftest.py`, will fail to run in PyCharm. Trying to run any individual test (or test folder) generates a run configuration like this, where the working directory points...
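For reference, a minimal layout of that kind of project might look like the sketch below; the fixture name and its return value are made up for illustration.

```python
# conftest.py -- sits next to the tests; pytest discovers it automatically
import pytest

@pytest.fixture
def completions_client():
    # Hypothetical fixture: whatever client/config the service tests need.
    return {"base_url": "http://localhost:8000"}


# test_completions_service.py
def test_completions_endpoint(completions_client):
    # pytest injects the fixture by argument name; conftest.py is never imported.
    assert completions_client["base_url"].startswith("http")
```

Run from the project root, plain `pytest` resolves the fixture with no imports needed; the PyCharm behaviour described above comes down to where the generated run configuration points its working directory.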
{"timestamp":"2024-07-11T17:32:24.361684Z","level":"ERROR","message":"`top_p` must be > 0.0 and < 1.0","target":"text_generation_router::infer","filename":"router/src/infer.rs","line_number":137,"span":{"name":"generate_stream"},"spans":[ {"name":"chat_completions"},{...
engine.generate(prompt, sampling_params, request_id)
if stream:
    background_tasks = BackgroundTasks()
    # Using background_tasks to abort the request
    # if the client disconnects.
    background_tasks.add_task(self.may_abort_request, request_id)
    return StreamingResponse(
        self.stream_results(results_...
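For context, the abort-on-disconnect pattern in that snippet might be wired up roughly as follows. This is a sketch, not the original handler: `may_abort_request` and `stream_results` are reconstructed as plausible helpers, assuming an AsyncLLMEngine-style `engine` whose `generate(...)` returns an async generator and which exposes an async `abort(request_id)`.

```python
import json

from fastapi import BackgroundTasks
from fastapi.responses import StreamingResponse


class GenerationService:
    def __init__(self, engine):
        self.engine = engine  # assumed: AsyncLLMEngine-like interface

    async def stream_results(self, results_generator):
        # Forward each partial output to the client as it is produced.
        async for request_output in results_generator:
            text = [output.text for output in request_output.outputs]
            yield (json.dumps({"text": text}) + "\n").encode("utf-8")

    async def may_abort_request(self, request_id):
        # Runs once the response is finished or cancelled; tells the engine
        # to stop generating for this request if the client went away.
        await self.engine.abort(request_id)

    def generate(self, prompt, sampling_params, request_id, stream=True):
        results_generator = self.engine.generate(prompt, sampling_params, request_id)
        if stream:
            background_tasks = BackgroundTasks()
            # Using background_tasks to abort the request
            # if the client disconnects.
            background_tasks.add_task(self.may_abort_request, request_id)
            return StreamingResponse(
                self.stream_results(results_generator),
                background=background_tasks,
            )
```

Attaching the abort as a background task on the `StreamingResponse` means it runs whenever the response ends, including when the client drops the connection mid-stream.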
'generate_name': None,
'generation': None,
'initializers': None,
'labels': None,
'name': 'test',
'namespace': None,
'owner_references': None,
'resource_version': None,
'self_link': None,
'uid': None},
'spec': {'active_deadline_seconds': None, ...
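That dump reads like the `to_dict()`/repr output of an object from the Kubernetes Python client, where every unset field prints as `None`. A minimal sketch of producing a comparable dump, assuming a pod named `test` and a made-up container:

```python
from kubernetes import client

# Only `name` is set on the metadata, so every other metadata field comes
# back as None, matching the dump above.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="test"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="main", image="busybox")]  # illustrative
    ),
)

print(pod.to_dict())
```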
I do get VimTex completions via Omni when writing \usepackage{}, so cmp-omni does seem to be working. The VimTex menu also works great, so I can open PDFs while hovering over citations, etc. Upon triggering completion when my cursor is here, \citet{|}, I get the following error...