llama-stack-mirror/llama_stack/providers/adapters
Steve Grubb b6e2526f60 Correct a traceback in vllm
File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/adapters/inference/vllm/vllm.py", line 136, in _stream_chat_completion
async for chunk in process_chat_completion_stream_response(
TypeError: process_chat_completion_stream_response() takes 2 positional arguments but 3 were given

This corrects the error by deleting the extra request argument from the call.
2024-11-04 17:11:10 -05:00
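The failing call passed three positional arguments to a helper that accepts only two, so removing the extra request argument resolves the TypeError. A minimal sketch of the corrected call site, with surrounding names and the import path assumed for illustration rather than copied from the commit:

# Sketch only: names outside the helper call are illustrative.
# The helper is assumed to live in the shared OpenAI-compat utilities.
from llama_stack.providers.utils.inference.openai_compat import (
    process_chat_completion_stream_response,
)


async def _stream_chat_completion(self, request, stream):
    # Before the fix, `request` was passed as an extra positional argument:
    #   process_chat_completion_stream_response(request, stream, self.formatter)
    # which raised the TypeError shown above. Dropping `request` matches the
    # helper's two-argument signature (stream, formatter).
    async for chunk in process_chat_completion_stream_response(stream, self.formatter):
        yield chunk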
agents [API Updates] Model / shield / memory-bank routing + agent persistence + support for private headers (#92) 2024-09-23 14:22:22 -07:00
inference Correct a traceback in vllm 2024-11-04 17:11:10 -05:00
memory feat: Qdrant Vector index support (#221) 2024-10-22 12:50:19 -07:00
safety Remove "routing_table" and "routing_key" concepts for the user (#201) 2024-10-10 10:24:13 -07:00
telemetry [API Updates] Model / shield / memory-bank routing + agent persistence + support for private headers (#92) 2024-09-23 14:22:22 -07:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00