llama-stack-mirror/llama_stack
Ashwin Bharambe ade075152e
chore: kill inline::vllm (#2824)
Inline _inference_ providers haven't proved to be very useful -- they
are rarely used. And for good reason -- it is almost never a good idea
to bundle a complex (distributed) inference engine into a distributed,
stateful front-end server that is already serving many other things.
Responsibility should be split properly.

See Discord discussion: 1395849853
2025-07-18 15:52:18 -07:00
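With the inline provider gone, inference is expected to run as its own service and be wired in as a remote provider. As a rough sketch only (the provider id, host, and port below are illustrative assumptions, not values taken from this commit), a run config would point the stack at an external vLLM server through the `remote::vllm` provider type:

```yaml
# Sketch of the inference section of a run.yaml after inline::vllm is removed.
# provider_id, host, and port are placeholders; set url to wherever your
# vLLM OpenAI-compatible server actually runs (it is launched separately).
providers:
  inference:
    - provider_id: vllm            # arbitrary local name (assumption)
      provider_type: remote::vllm  # remote adapter instead of inline::vllm
      config:
        url: http://localhost:8000/v1
```

This keeps the inference engine in its own process (or cluster), matching the split of responsibility described in the commit message.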
| Name | Last commit | Date |
|------|-------------|------|
| apis | feat(ollama): periodically refresh models (#2805) | 2025-07-18 12:20:36 -07:00 |
| cli | fix(cli): image name should not default to CONDA_DEFAULT_ENV (#2806) | 2025-07-17 16:40:35 -07:00 |
| distribution | feat(ollama): periodically refresh models (#2805) | 2025-07-18 12:20:36 -07:00 |
| models | chore(api): add mypy coverage to chat_format (#2654) | 2025-07-18 11:56:53 +02:00 |
| providers | chore: kill inline::vllm (#2824) | 2025-07-18 15:52:18 -07:00 |
| strong_typing | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| templates | chore: kill inline::vllm (#2824) | 2025-07-18 15:52:18 -07:00 |
| ui | fix: re-hydrate requirement and fix package (#2774) | 2025-07-16 05:46:15 -04:00 |
| __init__.py | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| env.py | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| log.py | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| schema_utils.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |