llama-stack/llama_stack
Yuan Tang 1be66d754e
docs: Redirect instructions for additional hardware accelerators for remote vLLM provider (#1923)
# What does this PR do?

The vLLM website just added a [new index page for installation on different
hardware
accelerators](https://docs.vllm.ai/en/latest/getting_started/installation.html).
This PR adds a link to that page, along with edits to make clear that the
GPUs used on this page are for demonstration purposes only.

This closes https://github.com/meta-llama/llama-stack/issues/1813.
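
Regardless of which accelerator backend vLLM is installed for, it exposes the
same OpenAI-compatible API that the remote vLLM provider consumes. Below is a
minimal sketch of smoke-testing such an endpoint before wiring it into the
provider; the host/port, model name, and `vllm serve` invocation are
illustrative assumptions, not something this PR prescribes:

```python
# Sketch: verify a vLLM OpenAI-compatible endpoint is reachable before
# pointing the remote vLLM provider at it. Assumes a server started with,
# e.g., `vllm serve meta-llama/Llama-3.2-3B-Instruct` on any supported
# accelerator; host, port, and model name here are illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
    api_key="EMPTY",                      # vLLM ignores the key unless auth is configured
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-3B-Instruct",  # must match the model vLLM is serving
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```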

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-04-10 10:04:17 +02:00
| Name | Last commit message | Last commit date |
|---|---|---|
| `apis` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `cli` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `distribution` | chore: simplify running the demo UI (#1907) | 2025-04-09 11:22:29 -07:00 |
| `models` | fix: on-the-fly int4 quantize parameter (#1920) | 2025-04-09 15:00:12 -07:00 |
| `providers` | fix: use ollama list to find models (#1854) | 2025-04-09 10:34:26 +02:00 |
| `strong_typing` | chore: more mypy checks (ollama, vllm, ...) (#1777) | 2025-04-01 17:12:39 +02:00 |
| `templates` | docs: Redirect instructions for additional hardware accelerators for remote vLLM provider (#1923) | 2025-04-10 10:04:17 +02:00 |
| `__init__.py` | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| `env.py` | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| `log.py` | chore: Remove style tags from log formatter (#1808) | 2025-03-27 10:18:21 -04:00 |
| `schema_utils.py` | chore: make mypy happy with webmethod (#1758) | 2025-03-22 08:17:23 -07:00 |