llama-stack-mirror/llama_stack
Derek Higgins ec4c04fa2d docs: Fix missing --gpu all flag in Docker run commands
Adding the --gpu all flag to Docker run commands
for meta-reference-gpu distributions ensures models are
loaded onto the GPU instead of the CPU.

Fixes: #1798

Signed-off-by: Derek Higgins <derekh@redhat.com>
2025-04-25 12:38:37 +01:00
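For context, the kind of command this fix targets looks like the sketch below. The image name, port, and model are illustrative rather than taken from this listing; note also that the Docker CLI itself spells the GPU passthrough flag `--gpus all`.

```bash
# Illustrative docker run for a meta-reference-gpu distribution.
# Without GPU passthrough the container sees no GPUs and inference
# falls back to CPU; Docker's flag to expose all host GPUs is --gpus all.
docker run -it \
  --gpus all \
  -p 8321:8321 \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-meta-reference-gpu \
  --port 8321 \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
```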
apis feat(agents): add agent naming functionality (#1922) 2025-04-17 07:02:47 -07:00
cli feat: include run.yaml in the container image (#2005) 2025-04-24 11:29:53 +02:00
distribution feat: include run.yaml in the container image (#2005) 2025-04-24 11:29:53 +02:00
models fix: OAI compat endpoint for meta reference inference provider (#1962) 2025-04-17 11:16:04 -07:00
providers fix: Added lazy initialization of the remote vLLM client to avoid issues with expired asyncio event loop (#1969) 2025-04-23 15:33:19 +02:00
strong_typing chore: more mypy checks (ollama, vllm, ...) (#1777) 2025-04-01 17:12:39 +02:00
templates docs: Fix missing --gpu all flag in Docker run commands 2025-04-25 12:38:37 +01:00
__init__.py export LibraryClient 2024-12-13 12:08:00 -08:00
env.py refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) 2025-03-04 14:53:47 -08:00
log.py chore: Remove style tags from log formatter (#1808) 2025-03-27 10:18:21 -04:00
schema_utils.py fix: dont check protocol compliance for experimental methods 2025-04-12 16:26:32 -07:00