llama-stack-mirror/llama_stack/providers
| Name | Last commit | Date |
|---|---|---|
| adapters | Add vLLM inference provider for OpenAI compatible vLLM server (#178) | 2024-10-21 10:46:45 -07:00 |
| impls | Make all methods async def again; add completion() for meta-reference (#270) | 2024-10-21 10:46:40 -07:00 |
| registry | vllm | 2024-10-21 11:12:26 -07:00 |
| tests | Make all methods async def again; add completion() for meta-reference (#270) | 2024-10-21 10:46:40 -07:00 |
| utils | Remove request arg from chat completion response processing (#240) | 2024-10-15 13:03:17 -07:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| datatypes.py | Remove "routing_table" and "routing_key" concepts for the user (#201) | 2024-10-10 10:24:13 -07:00 |