llama-stack-mirror/llama_stack/providers/adapters
Steve Grubb 122793ab92
Correct a traceback in vllm (#366)
File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/adapters/inference/vllm/vllm.py", line 136, in _stream_chat_completion
async for chunk in process_chat_completion_stream_response(
TypeError: process_chat_completion_stream_response() takes 2 positional arguments but 3 were given

This corrects the error by removing the extra `request` argument from the call; a minimal sketch of the broken and fixed call shapes follows below.
2024-11-04 20:49:35 -08:00
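For context, here is a minimal, runnable sketch of the call-shape bug and the fix. The function body, the `fake_stream` helper, the `formatter` callable, and the argument order of the broken call are illustrative stand-ins, not the actual llama_stack code; only the two-positional-argument signature and the resulting TypeError mirror the traceback above.

```python
import asyncio


# Illustrative stand-in for the real helper: like the actual function at
# the time of this fix, it accepts exactly two positional arguments.
async def process_chat_completion_stream_response(stream, formatter):
    async for chunk in stream:
        yield formatter(chunk)


async def fake_stream():
    # Stand-in for the chunks streamed back from the vLLM server.
    for token in ("Hello", ", ", "world"):
        yield token


async def main():
    request = object()  # stand-in for the ChatCompletionRequest

    # Broken call shape reported by the traceback: a third positional
    # argument (the request) is passed, and argument binding fails
    # before any chunk is produced.
    try:
        process_chat_completion_stream_response(fake_stream(), str.upper, request)
    except TypeError as exc:
        print(f"Broken call raises: {exc}")

    # Fixed call shape: the extra `request` argument is dropped.
    async for chunk in process_chat_completion_stream_response(fake_stream(), str.upper):
        print(chunk, end="")
    print()


asyncio.run(main())
```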
agents/       [API Updates] Model / shield / memory-bank routing + agent persistence + support for private headers (#92)   2024-09-23 14:22:22 -07:00
inference/    Correct a traceback in vllm (#366)                                                                            2024-11-04 20:49:35 -08:00
memory/       pgvector fixes (#369)                                                                                         2024-11-04 17:01:09 -08:00
safety/       Fix shield_type and routing table breakage                                                                    2024-11-04 19:57:15 -08:00
telemetry/    [API Updates] Model / shield / memory-bank routing + agent persistence + support for private headers (#92)    2024-09-23 14:22:22 -07:00
__init__.py   API Updates (#73)                                                                                             2024-09-17 19:51:35 -07:00