llama-stack/llama_stack
AlexHe99 983f6feeb8
docs: Update remote-vllm.md with AMD GPU vLLM server supported. (#1858)
Add content on using an AMD GPU as the vLLM server, and split the original
section into two subsections (an illustrative AMD launch sketch follows the list):
1. AMD vLLM server
2. NVIDIA vLLM server (original)
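
For context, the AMD path described by the new subsection comes down to launching vLLM's OpenAI-compatible server from a ROCm container. Below is a minimal sketch, not the exact text of the PR: the `rocm/vllm` image tag, model name, and port are illustrative placeholders rather than values taken from remote-vllm.md.

```bash
# Sketch: serve a model with vLLM on an AMD GPU via a ROCm container.
# /dev/kfd and /dev/dri are the standard ROCm device nodes that must be
# exposed to the container; image tag, model, and port are placeholders.
export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
export VLLM_PORT=8000

docker run -it --rm \
  --ipc=host \
  --group-add video \
  --device=/dev/kfd \
  --device=/dev/dri \
  -p "$VLLM_PORT:$VLLM_PORT" \
  rocm/vllm:latest \
  python -m vllm.entrypoints.openai.api_server \
    --model "$INFERENCE_MODEL" \
    --port "$VLLM_PORT"
```

Whichever vendor's GPU serves the model, the remote-vllm distribution should only need the resulting endpoint (e.g. `VLLM_URL=http://localhost:$VLLM_PORT/v1`), which is why the split affects only the server-setup subsections.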

---------

Signed-off-by: Alex He <alehe@amd.com>
2025-04-08 21:35:32 -07:00
| Name | Last commit | Date |
|------|-------------|------|
| `apis` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `cli` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `distribution` | refactor: move missing tests to test directory (#1892) | 2025-04-08 18:54:00 -07:00 |
| `models` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `providers` | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| `strong_typing` | chore: more mypy checks (ollama, vllm, ...) (#1777) | 2025-04-01 17:12:39 +02:00 |
| `templates` | docs: Update remote-vllm.md with AMD GPU vLLM server supported. (#1858) | 2025-04-08 21:35:32 -07:00 |
| `__init__.py` | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| `env.py` | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| `log.py` | chore: Remove style tags from log formatter (#1808) | 2025-03-27 10:18:21 -04:00 |
| `schema_utils.py` | chore: make mypy happy with webmethod (#1758) | 2025-03-22 08:17:23 -07:00 |