llama-stack-mirror/llama_stack/providers/adapters/inference/vllm
Steve Grubb b6e2526f60 Correct a traceback in vllm
File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/adapters/inference/vllm/vllm.py", line 136, in _stream_chat_completion
async for chunk in process_chat_completion_stream_response(
TypeError: process_chat_completion_stream_response() takes 2 positional arguments but 3 were given

This corrects the error by deleting the request variable from the call (a sketch of the fix pattern follows below).
2024-11-04 17:11:10 -05:00
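The traceback above comes from passing three positional arguments to a helper whose signature only accepts two; the fix drops the extra argument so the call matches the signature. The snippet below is a minimal, self-contained illustration of that pattern, not the actual llama-stack code: `process_stream`, `fake_stream`, and the `formatter` parameter are hypothetical stand-ins for `process_chat_completion_stream_response` and its real arguments.

```python
import asyncio


async def process_stream(stream, formatter):
    """Stand-in for a two-argument helper like
    process_chat_completion_stream_response(stream, formatter)."""
    async for chunk in stream:
        yield formatter(chunk)


async def fake_stream():
    # Stand-in for the OpenAI-compatible vLLM response stream.
    for chunk in ("hello", "world"):
        yield chunk


async def main():
    # Broken call shape: passing a third positional argument (the request)
    # raises "TypeError: process_stream() takes 2 positional arguments
    # but 3 were given", mirroring the traceback in the commit message.
    #
    # async for chunk in process_stream(fake_stream(), request, str.upper):
    #     ...

    # Fixed call shape: delete the extra argument so the call matches
    # the two-parameter signature.
    async for chunk in process_stream(fake_stream(), str.upper):
        print(chunk)


if __name__ == "__main__":
    asyncio.run(main())
```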
__init__.py Add vLLM inference provider for OpenAI compatible vLLM server (#178) 2024-10-20 18:43:25 -07:00
config.py Add vLLM inference provider for OpenAI compatible vLLM server (#178) 2024-10-20 18:43:25 -07:00
vllm.py Correct a traceback in vllm 2024-11-04 17:11:10 -05:00