llama-stack/docs/source/distributions
AlexHe99 983f6feeb8
docs: Update remote-vllm.md with AMD GPU vLLM server supported. (#1858)
Add content on how to use an AMD GPU as the vLLM server. Split the original section into two sub-chapters (a launch sketch for the AMD case follows the list):
1. AMD vLLM server
2. NVIDIA vLLM server (original)
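
For readers skimming this log, the gist of the new AMD sub-chapter is launching vLLM's OpenAI-compatible server inside a ROCm container. A minimal sketch is shown below; the `rocm/vllm-dev:main` image tag, model name, and port are illustrative assumptions rather than values quoted from remote-vllm.md:

```bash
# Minimal sketch of serving a model with vLLM on an AMD (ROCm) GPU.
# The container image, model name, and port are illustrative
# assumptions, not values taken from remote-vllm.md.
export INFERENCE_PORT=8000
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct

# /dev/kfd and /dev/dri expose the AMD GPU to the container, and
# --group-add video grants the container permission to use it.
docker run -it --rm \
  --ipc=host \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  -p "$INFERENCE_PORT:$INFERENCE_PORT" \
  rocm/vllm-dev:main \
  python -m vllm.entrypoints.openai.api_server \
    --model "$INFERENCE_MODEL" \
    --port "$INFERENCE_PORT"
```

Since vLLM serves an OpenAI-compatible API, a quick smoke test once the server is up is `curl http://localhost:8000/v1/models`. The NVIDIA sub-chapter differs mainly in the container image and the GPU flags (e.g. `--gpus all` in place of the ROCm device mappings).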


---------

Signed-off-by: Alex He <alehe@amd.com>
2025-04-08 21:35:32 -07:00
| Name | Last commit | Last updated |
|------|-------------|--------------|
| ondevice_distro | docs: Fix trailing whitespace error (#1669) | 2025-03-17 08:53:30 -07:00 |
| remote_hosted_distro | feat: Add nemo customizer (#1448) | 2025-03-25 11:01:10 -07:00 |
| self_hosted_distro | docs: Update remote-vllm.md with AMD GPU vLLM server supported. (#1858) | 2025-04-08 21:35:32 -07:00 |
| building_distro.md | fix: Use CONDA_DEFAULT_ENV presence as a flag to use conda mode (#1555) | 2025-03-27 17:13:22 -04:00 |
| configuration.md | docs: Updated documentation and Sphinx configuration (#1845) | 2025-03-31 13:08:05 -07:00 |
| importing_as_library.md | docs: update importing_as_library.md (#1863) | 2025-04-07 12:31:04 +02:00 |
| index.md | docs: Updated documentation and Sphinx configuration (#1845) | 2025-03-31 13:08:05 -07:00 |
| kubernetes_deployment.md | docs: Updated documentation and Sphinx configuration (#1845) | 2025-03-31 13:08:05 -07:00 |
| list_of_distributions.md | docs: Updated documentation and Sphinx configuration (#1845) | 2025-03-31 13:08:05 -07:00 |
| starting_llama_stack_server.md | docs: Updated documentation and Sphinx configuration (#1845) | 2025-03-31 13:08:05 -07:00 |