# What does this PR do?

The vLLM website just added a [new index page for installation on different hardware accelerators](https://docs.vllm.ai/en/latest/getting_started/installation.html). This PR adds a link to that page, along with additional edits to make sure readers are aware that the use of GPUs on this page is for demonstration purposes only.

This closes https://github.com/meta-llama/llama-stack/issues/1813.

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>