llama-stack-mirror/llama_stack/providers
Ashwin Bharambe ade075152e
chore: kill inline::vllm (#2824)
Inline _inference_ providers haven't proved very useful -- they are rarely
used, and for good reason: it is almost never a good idea to bundle a
complex (often distributed) inference engine into a stateful front-end
server that is already serving many other things. Responsibility should be
split properly: run the engine as its own service and connect to it
through a remote provider.

See Discord discussion: 1395849853
2025-07-18 15:52:18 -07:00
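For deployments that relied on `inline::vllm`, the replacement is to run vLLM as a standalone service and register it via the remote adapter. A minimal `run.yaml` sketch of that shape, assuming the `remote::vllm` provider type and its `url`/`api_token`/`max_tokens` config fields (verify these against your llama-stack version):

```yaml
# Sketch: swap the removed inline provider for the remote vLLM adapter.
# The inference engine now lives in a separate process or deployment.
providers:
  inference:
    - provider_id: vllm
      provider_type: remote::vllm        # was: inline::vllm
      config:
        # OpenAI-compatible endpoint exposed by a standalone vLLM server
        url: http://localhost:8000/v1
        api_token: fake                  # placeholder; set if your server enforces auth
        max_tokens: 4096
```

The vLLM server itself is started and scaled independently (for example with `vllm serve <model>`), which keeps the heavyweight, often multi-GPU engine out of the stack server's process -- exactly the split of responsibility the commit argues for.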
Name          | Last commit                                                                                             | Date
inline/       | chore: kill inline::vllm (#2824)                                                                        | 2025-07-18 15:52:18 -07:00
registry/     | chore: kill inline::vllm (#2824)                                                                        | 2025-07-18 15:52:18 -07:00
remote/       | feat(ollama): periodically refresh models (#2805)                                                       | 2025-07-18 12:20:36 -07:00
utils/        | feat: create dynamic model registration for OpenAI and Llama compat remote inference providers (#2745)  | 2025-07-16 12:49:38 -04:00
__init__.py   | API Updates (#73)                                                                                       | 2024-09-17 19:51:35 -07:00
datatypes.py  | docs: auto generated documentation for providers (#2543)                                                | 2025-06-30 15:13:20 +02:00