llama-stack/docs/source/distributions
Yuan Tang 1be66d754e
docs: Redirect instructions for additional hardware accelerators for remote vLLM provider (#1923)
# What does this PR do?

vLLM website just added a [new index page for installing for different
hardware
accelerators](https://docs.vllm.ai/en/latest/getting_started/installation.html).
This PR adds a link to that page, with additional edits to make sure
readers are aware that the use of GPUs on this page is for
demonstration purposes only.

This closes https://github.com/meta-llama/llama-stack/issues/1813.

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-04-10 10:04:17 +02:00
| Name | Last commit | Date |
|------|-------------|------|
| ondevice_distro | docs: Fix trailing whitespace error (#1669) | 2025-03-17 08:53:30 -07:00 |
| remote_hosted_distro | feat: Add nemo customizer (#1448) | 2025-03-25 11:01:10 -07:00 |
| self_hosted_distro | docs: Redirect instructions for additional hardware accelerators for remote vLLM provider (#1923) | 2025-04-10 10:04:17 +02:00 |
| building_distro.md | fix: Use CONDA_DEFAULT_ENV presence as a flag to use conda mode (#1555) | 2025-03-27 17:13:22 -04:00 |
| configuration.md | docs: Updated documentation and Sphinx configuration (#1845) | 2025-03-31 13:08:05 -07:00 |
| importing_as_library.md | docs: update importing_as_library.md (#1863) | 2025-04-07 12:31:04 +02:00 |
| index.md | docs: Updated documentation and Sphinx configuration (#1845) | 2025-03-31 13:08:05 -07:00 |
| kubernetes_deployment.md | docs: Avoid bash script syntax highlighting for dark mode (#1918) | 2025-04-09 15:43:43 -07:00 |
| list_of_distributions.md | docs: Updated documentation and Sphinx configuration (#1845) | 2025-03-31 13:08:05 -07:00 |
| starting_llama_stack_server.md | docs: Updated documentation and Sphinx configuration (#1845) | 2025-03-31 13:08:05 -07:00 |