# llama-stack/docs/source/distributions/self_hosted_distro
Latest commit `c4570bcb48` by Yuan Tang: docs: Add tips for debugging remote vLLM provider (#1992)
# What does this PR do?

This is helpful when debugging vLLM + Llama Stack issues that can arise after vLLM PR https://github.com/vllm-project/vllm/pull/15593.

---------

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-04-18 14:47:47 +02:00
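
Since the commit above adds debugging tips for the remote vLLM provider (see remote-vllm.md in the table below), here is a minimal sketch of one common first step: probing the vLLM server's OpenAI-compatible API directly before wiring it into Llama Stack. The base URL, API key, and model id are illustrative assumptions; substitute the values from your own deployment.

```python
# Minimal sketch: verify a remote vLLM server responds before debugging
# the Llama Stack remote-vllm provider that sits on top of it.
# Assumptions: vLLM's OpenAI-compatible server is reachable at
# http://localhost:8000/v1 (the `vllm serve` default) and the model id
# below is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# List the models the server actually serves; a model-id mismatch is a
# common source of errors when configuring the remote vLLM provider.
for model in client.models.list():
    print(model.id)

# One-token chat completion to confirm end-to-end inference works.
resp = client.chat.completions.create(
    model="meta-llama/Llama-3.2-3B-Instruct",  # hypothetical id; use one printed above
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=1,
)
print(resp.choices[0].message.content)
```

If this probe fails, the problem lies in the vLLM deployment itself rather than in the Llama Stack provider configuration.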
| File | Last commit | Date |
|------|-------------|------|
| bedrock.md | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| cerebras.md | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| dell-tgi.md | fix: docker run with --pull always to fetch the latest image (#1733) | 2025-03-20 15:35:48 -07:00 |
| dell.md | fix: docker run with --pull always to fetch the latest image (#1733) | 2025-03-20 15:35:48 -07:00 |
| fireworks.md | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| groq.md | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| meta-reference-gpu.md | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| meta-reference-quantized-gpu.md | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| nvidia.md | chore: add meta/llama-3.3-70b-instruct as supported nvidia inference provider model (#1985) | 2025-04-17 06:50:40 -07:00 |
| ollama.md | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| passthrough.md | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| remote-vllm.md | docs: Add tips for debugging remote vLLM provider (#1992) | 2025-04-18 14:47:47 +02:00 |
| sambanova.md | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| tgi.md | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| together.md | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |