llama-stack-mirror/llama_stack
Steve Grubb 122793ab92
Correct a traceback in vllm (#366)
File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/adapters/inference/vllm/vllm.py", line 136, in _stream_chat_completion
async for chunk in process_chat_completion_stream_response(
TypeError: process_chat_completion_stream_response() takes 2 positional arguments but 3 were given

This corrects the error by deleting the request variable from the call to process_chat_completion_stream_response.
2024-11-04 20:49:35 -08:00
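
The shape of the fix can be sketched as follows. This is a minimal, self-contained illustration of the failure and the corrected call, assuming a simplified two-argument signature for process_chat_completion_stream_response; the stand-in helper, fake_stream, and the formatter argument are hypothetical names used only for illustration and are not taken from the repository.

import asyncio

# Stand-in for the real helper in llama_stack, which (per the traceback above)
# accepts two positional arguments; the parameter names here are assumptions.
async def process_chat_completion_stream_response(stream, formatter):
    async for chunk in stream:
        yield chunk

# Hypothetical async generator standing in for the vLLM completion stream.
async def fake_stream():
    for token in ("hello", " world"):
        yield token

async def main():
    # Buggy call shape from the traceback: a third positional argument
    # (the request) is passed, raising
    # "TypeError: ... takes 2 positional arguments but 3 were given".
    #
    # async for chunk in process_chat_completion_stream_response(
    #     request, fake_stream(), formatter
    # ):
    #     ...
    #
    # Fixed call shape: the request variable is dropped, so only the two
    # expected positional arguments are passed.
    async for chunk in process_chat_completion_stream_response(fake_stream(), None):
        print(chunk)

asyncio.run(main())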
apis Fix shield_type and routing table breakage 2024-11-04 19:57:15 -08:00
cli Kill --name from llama stack build (#340) 2024-10-28 23:07:32 -07:00
distribution The server now depends on SQLite by default 2024-11-04 20:35:53 -08:00
providers Correct a traceback in vllm (#366) 2024-11-04 20:49:35 -08:00
scripts Add a test for CLI, but not fully done so disabled 2024-09-19 13:27:07 -07:00
templates update distributions compose/readme (#338) 2024-10-28 16:34:43 -07:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00