llama-stack-mirror/docs/source/providers/inference
Latest commit ade075152e by Ashwin Bharambe (2025-07-18 15:52:18 -07:00):

chore: kill inline::vllm (#2824)
Inline _inference_ providers haven't proved very useful; they are rarely used. And for good reason: it is almost never a good idea to bundle a complex (distributed) inference engine into a distributed, stateful front-end server that is already serving many other things. Responsibility should be split properly: run the engine as its own service and point the stack at it over the network (see the sketch below).

See Discord discussion: 1395849853
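With the inline provider gone, vLLM is consumed through the `remote::vllm` provider instead. Below is a minimal sketch of what that looks like in a run config, assuming a standalone vLLM server is already listening on an OpenAI-compatible endpoint; the URL and model name here are placeholders, not values taken from this repo:

```yaml
# Sketch of a run.yaml fragment using the remote vLLM provider
# instead of the removed inline::vllm. Assumes vLLM was started
# separately, e.g.:  vllm serve meta-llama/Llama-3.1-8B-Instruct
providers:
  inference:
    - provider_id: vllm
      provider_type: remote::vllm
      config:
        # OpenAI-compatible endpoint of the standalone vLLM server
        # (placeholder URL; point this at wherever vLLM actually runs)
        url: http://localhost:8000/v1
```

This split keeps the stack server free of GPU and engine state, so the inference engine can be scaled, upgraded, and restarted independently of the front end.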
| File | Last commit | Date |
|------|-------------|------|
| index.md | chore: kill inline::vllm (#2824) | 2025-07-18 |
| inline_meta-reference.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| inline_sentence-transformers.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_anthropic.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_bedrock.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_cerebras-openai-compat.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_cerebras.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_databricks.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_fireworks-openai-compat.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_fireworks.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_gemini.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_groq-openai-compat.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_groq.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_hf_endpoint.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_hf_serverless.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_llama-openai-compat.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_nvidia.md | fix: allow default empty vars for conditionals (#2570) | 2025-07-01 |
| remote_ollama.md | feat(ollama): periodically refresh models (#2805) | 2025-07-18 |
| remote_openai.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_passthrough.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_runpod.md | feat: consolidate most distros into "starter" (#2516) | 2025-07-04 |
| remote_sambanova-openai-compat.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_sambanova.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_tgi.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_together-openai-compat.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_together.md | feat: consolidate most distros into "starter" (#2516) | 2025-07-04 |
| remote_vllm.md | docs: auto generated documentation for providers (#2543) | 2025-06-30 |
| remote_watsonx.md | fix: allow default empty vars for conditionals (#2570) | 2025-07-01 |