# llama-stack-mirror/docs/source

Latest commit 983f6feeb8 by AlexHe99 (2025-04-08 21:35:32 -07:00)
docs: Update remote-vllm.md with AMD GPU vLLM server supported. (#1858)
Add content on using an AMD GPU as the vLLM server, and split the original
section into two sub-chapters (a connectivity sketch follows the list):
1. AMD vLLM server
2. NVIDIA vLLM server (original)
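
Whichever GPU hosts it, the vLLM server exposes the same OpenAI-compatible
HTTP API, so one quick way to verify either setup is to send a single chat
request. Below is a minimal sketch, assuming vLLM is listening on
localhost:8000 and serving the hypothetical model ID
meta-llama/Llama-3.1-8B-Instruct (neither value comes from this PR):

```python
# Sanity-check a running vLLM server through its OpenAI-compatible endpoint.
# The base URL, API key, and model ID are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # hypothetical served model
    messages=[{"role": "user", "content": "Reply with one word: ready?"}],
)
print(response.choices[0].message.content)
```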

Signed-off-by: Alex He <alehe@amd.com>
| Name | Last commit | Date |
|------|-------------|------|
| building_applications | docs: Minor updates to docs to make them a little friendlier to new users (#1871) | 2025-04-04 08:10:35 -04:00 |
| concepts | docs: fix typos in evaluation concepts (#1745) | 2025-03-21 12:00:53 -07:00 |
| contributing | docs: Updating docs to source from CONTRIBUTING.md (#1850) | 2025-04-01 14:50:04 +02:00 |
| distributions | docs: Update remote-vllm.md with AMD GPU vLLM server supported. (#1858) | 2025-04-08 21:35:32 -07:00 |
| getting_started | docs: fixing sphinx imports (#1884) | 2025-04-05 14:21:45 -07:00 |
| introduction | docs: Remove mentions of focus on Llama models (#1690) | 2025-03-19 00:17:22 -04:00 |
| playground | chore: clean up distro doc (#1804) | 2025-03-27 12:12:14 -07:00 |
| providers | docs: Document sqlite-vec faiss comparison (#1821) | 2025-03-28 17:41:33 +01:00 |
| references | feat(api): (1/n) datasets api clean up (#1573) | 2025-03-17 16:55:45 -07:00 |
| conf.py | feat: introduce llama4 support (#1877) | 2025-04-05 11:53:35 -07:00 |
| index.md | docs: llama4 getting started nb (#1878) | 2025-04-06 18:51:34 -07:00 |