llama-stack/llama_stack/providers/adapters/inference
Name         Last commit date            Last commit message
bedrock      2024-10-18 20:50:59 -07:00  Make all methods async def again; add completion() for meta-reference (#270)
databricks   2024-10-18 20:50:59 -07:00  Make all methods async def again; add completion() for meta-reference (#270)
fireworks    2024-10-18 20:50:59 -07:00  Make all methods async def again; add completion() for meta-reference (#270)
ollama       2024-10-21 22:26:33 -07:00  add completion() for ollama (#280)
sample       2024-10-10 10:24:13 -07:00  Remove "routing_table" and "routing_key" concepts for the user (#201)
tgi          2024-10-18 20:50:59 -07:00  Make all methods async def again; add completion() for meta-reference (#270)
together     2024-10-18 20:50:59 -07:00  Make all methods async def again; add completion() for meta-reference (#270)
vllm         2024-10-20 18:43:25 -07:00  Add vLLM inference provider for OpenAI compatible vLLM server (#178)
__init__.py  2024-09-17 19:51:35 -07:00  API Updates (#73)
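
The recurring commit messages above describe the shape these remote-inference adapters share: methods declared `async def`, plus a `completion()` endpoint alongside chat-style inference. The sketch below is purely illustrative, not the actual llama-stack adapter interface; the class, method, and parameter names are hypothetical stand-ins for what an async adapter exposing completion() and chat_completion() against a remote backend (such as an Ollama or OpenAI-compatible vLLM server) might look like.

# Hypothetical sketch only. Names and signatures are assumptions for
# illustration, not the real llama-stack API.
import asyncio
from dataclasses import dataclass


@dataclass
class CompletionResponse:
    text: str


class ExampleInferenceAdapter:
    """Toy adapter wrapping a remote inference backend reachable at base_url."""

    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    async def completion(self, model: str, prompt: str) -> CompletionResponse:
        # A real adapter would issue an async HTTP request to the backend here;
        # this toy version just echoes the prompt.
        await asyncio.sleep(0)
        return CompletionResponse(text=f"[{model}@{self.base_url}] echo: {prompt}")

    async def chat_completion(self, model: str, messages: list[dict]) -> CompletionResponse:
        # Flatten chat messages into a single prompt for the toy backend.
        prompt = "\n".join(m.get("content", "") for m in messages)
        return await self.completion(model, prompt)


async def main() -> None:
    adapter = ExampleInferenceAdapter(base_url="http://localhost:11434")
    resp = await adapter.completion(model="llama3", prompt="Hello")
    print(resp.text)


if __name__ == "__main__":
    asyncio.run(main())

Because every method is a coroutine, a server can multiplex many in-flight requests to these backends on a single event loop, which is the practical motivation for keeping the adapter surface fully async.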