Mirror of https://github.com/meta-llama/llama-stack.git
Synced 2025-06-28 02:53:30 +00:00
Fix: TypeError in the vLLM inference adapter's streaming chat completion.

  File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/adapters/inference/vllm/vllm.py", line 136, in _stream_chat_completion
    async for chunk in process_chat_completion_stream_response(
  TypeError: process_chat_completion_stream_response() takes 2 positional arguments but 3 were given

This corrects the error by deleting the request variable from the call.
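The underlying issue is a plain Python arity mismatch: the call site passes three positional arguments to a helper that only accepts two, so removing the extra request argument resolves it. Below is a minimal, self-contained sketch of the same failure mode and fix; the names (`fake_stream`, `process_stream`, `"fmt"`) are illustrative stand-ins, not the actual llama-stack API.

```python
import asyncio
from typing import AsyncIterator


async def fake_stream() -> AsyncIterator[str]:
    # Stand-in for the chunks yielded by the inference backend.
    for chunk in ["Hello", ", ", "world"]:
        yield chunk


async def process_stream(stream: AsyncIterator[str], formatter: str) -> AsyncIterator[str]:
    # Stand-in for a helper that accepts exactly two positional arguments,
    # mirroring process_chat_completion_stream_response() in the traceback.
    async for chunk in stream:
        yield f"{formatter}:{chunk}"


async def main() -> None:
    request = {"messages": []}  # the extra argument that triggered the TypeError

    # Buggy call shape: three positional arguments to a two-argument helper.
    # async for chunk in process_stream(request, fake_stream(), "fmt"):  # TypeError
    #     ...

    # Fixed call shape: drop the request argument, as the commit does.
    async for chunk in process_stream(fake_stream(), "fmt"):
        print(chunk)


asyncio.run(main())
```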
Repository top-level contents:
apis
cli
distribution
providers
scripts
templates
__init__.py