llama-stack/docs/source/distributions/self_hosted_distro

| File | Last commit | Date |
|------|-------------|------|
| bedrock.md | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| cerebras.md | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| dell-tgi.md | fix: docker run with --pull always to fetch the latest image (#1733) | 2025-03-20 15:35:48 -07:00 |
| dell.md | fix: docker run with --pull always to fetch the latest image (#1733) | 2025-03-20 15:35:48 -07:00 |
| fireworks.md | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| groq.md | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| meta-reference-gpu.md | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| meta-reference-quantized-gpu.md | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| nvidia.md | chore: add meta/llama-3.3-70b-instruct as supported nvidia inference provider model (#1985) | 2025-04-17 06:50:40 -07:00 |
| ollama.md | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| passthrough.md | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| remote-vllm.md | docs: add example for intel gpu in vllm remote (#1952) | 2025-04-15 07:56:23 -07:00 |
| sambanova.md | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| tgi.md | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| together.md | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
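Two of the commits above describe runnable behavior: #1733 (run docker with `--pull always` so the latest image is fetched) and #1734 (default to port 8321). A minimal sketch of a launch command combining the two follows; the image name `llamastack/distribution-ollama` and the `~/.llama` volume path are illustrative assumptions taken from the ollama distro docs, not confirmed by this listing.

```bash
# Sketch only; image name and volume path are assumptions.
# --pull always re-fetches the latest image on every run (per #1733);
# 8321 is the default Llama Stack server port (per #1734).
docker run -it --pull always \
  -p 8321:8321 \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-ollama \
  --port 8321
```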