Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-03 18:00:36 +00:00)
```
File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/adapters/inference/vllm/vllm.py", line 136, in _stream_chat_completion
    async for chunk in process_chat_completion_stream_response(
TypeError: process_chat_completion_stream_response() takes 2 positional arguments but 3 were given
```

This corrects the error by removing the extra `request` argument from the call to `process_chat_completion_stream_response()`.
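The failure mode above can be reproduced in isolation: calling an async generator function with one more positional argument than its signature accepts raises `TypeError` at call time, before any iteration happens. The sketch below uses hypothetical, simplified stand-ins (`process_chat_completion_stream_response` here takes a stream and a formatter; the real llama-stack helper has a different body and argument types), purely to illustrate the arity mismatch and the fix.

```python
import asyncio

# Simplified stand-in for the real helper. Like the function in the
# traceback, it accepts exactly 2 positional arguments.
async def process_chat_completion_stream_response(stream, formatter):
    async for chunk in stream:
        yield chunk.upper()

async def fake_stream():
    # Hypothetical upstream token stream.
    for piece in ["hel", "lo"]:
        yield piece

async def main():
    # Buggy call: a third positional argument (the `request` object)
    # raises TypeError immediately -- async generator functions check
    # arity when called, not when iterated.
    try:
        process_chat_completion_stream_response(fake_stream(), None, "request")
    except TypeError as e:
        print(e)  # ... takes 2 positional arguments but 3 were given

    # Fixed call: the extra `request` argument is dropped, matching the
    # 2-argument signature, and the stream can be consumed normally.
    chunks = [c async for c in process_chat_completion_stream_response(fake_stream(), None)]
    print(chunks)

asyncio.run(main())
```

The fix in the commit is the same shape: delete the surplus positional argument at the call site so the call matches the helper's signature.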
| Name |
|---|
| agents |
| inference |
| memory |
| safety |
| telemetry |
| __init__.py |