llama-stack-mirror/docs/source
Latest commit: ade075152e by Ashwin Bharambe, 2025-07-18 15:52:18 -07:00

chore: kill inline::vllm (#2824)
Inline _inference_ providers haven't proved very useful -- they are rarely
used. And for good reason: it is almost never a good idea to bundle a
complex (distributed) inference engine into a stateful front-end server
that is already serving many other things. Responsibility should be split
properly.

See Discord discussion: 1395849853
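
With the inline provider gone, the replacement pattern is to run vLLM as its own service and point the stack at it through the remote vLLM provider. A minimal run-config sketch follows; the provider id and endpoint URL are illustrative assumptions, not taken from this commit:

```yaml
# Sketch of a run.yaml fragment using the remote vLLM provider in place of
# the removed inline::vllm. The URL below is an assumed local vLLM endpoint;
# adjust it for your deployment.
providers:
  inference:
  - provider_id: vllm
    provider_type: remote::vllm
    config:
      url: http://localhost:8000/v1
```

This keeps the stack server out of the inference-serving business and lets the vLLM deployment scale and fail independently of the front-end.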
Name                   Last commit                                                             Last updated
advanced_apis          docs: Reorganize documentation on the webpage (#2651)                   2025-07-15 14:19:35 -07:00
building_applications  docs: Reorganize documentation on the webpage (#2651)                   2025-07-15 14:19:35 -07:00
concepts               docs: add missing bold title to match others (#2782)                    2025-07-16 18:05:48 +02:00
contributing           docs: revamp testing documentation (#2155)                              2025-05-13 11:28:29 -07:00
deploying              chore: update k8s template (#2786)                                      2025-07-16 15:07:26 -07:00
distributions          docs: add virtualenv instructions for running starter distro (#2780)    2025-07-18 09:07:43 -07:00
getting_started        docs: fix steps in the Quick Start Guide (#2800)                        2025-07-18 09:08:46 -07:00
providers              chore: kill inline::vllm (#2824)                                        2025-07-18 15:52:18 -07:00
references             docs: update outdated llama stack client documentation (#2758)          2025-07-15 11:49:59 -07:00
conf.py                docs: Reorganize documentation on the webpage (#2651)                   2025-07-15 14:19:35 -07:00
index.md               docs: Reorganize documentation on the webpage (#2651)                   2025-07-15 14:19:35 -07:00