Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-31 04:20:02 +00:00)
Add content for using an AMD GPU as the vLLM server. Split the original section into two sub-chapters: 1. AMD vLLM server, 2. NVIDIA vLLM server (original).
Directory contents:

- ondevice_distro/
- remote_hosted_distro/
- self_hosted_distro/
- building_distro.md
- configuration.md
- importing_as_library.md
- index.md
- kubernetes_deployment.md
- list_of_distributions.md
- starting_llama_stack_server.md
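One reason the split into AMD and NVIDIA sub-chapters only affects the server side: vLLM exposes an OpenAI-compatible HTTP API regardless of GPU vendor, so client code is identical either way. Below is a minimal client sketch, assuming a vLLM server is already running at `http://localhost:8000/v1` and serving `meta-llama/Llama-3.2-3B-Instruct`; both the URL and the model name are placeholder assumptions, not values taken from this listing.

```python
# Minimal sketch: query a running vLLM server through its OpenAI-compatible
# endpoint. Assumes `pip install openai` and a vLLM server started separately
# (on AMD or NVIDIA hardware -- the client code is the same either way).
from openai import OpenAI

# Hypothetical values: adjust the base URL and model to match your server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Say hello from vLLM."}],
)
print(response.choices[0].message.content)
```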