Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-28 12:02:00 +00:00).
Latest commit: Adding the `--gpus all` flag to Docker run commands for the meta-reference-gpu distributions ensures models are loaded onto the GPU instead of the CPU. Fixes: #1798. Signed-off-by: Derek Higgins <derekh@redhat.com>
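For context, a minimal sketch of what such a command looks like is shown below. The `--gpus all` flag is the standard Docker CLI option for exposing host GPUs to a container; the image name, port, model id, and the `--port`/`--env` server arguments are assumptions drawn from the distribution docs rather than a verbatim copy of them, so check the self_hosted_distro pages for the exact invocation.

```bash
# Sketch: run the meta-reference-gpu distribution image with all host GPUs
# exposed to the container. Values below (port, model id, image name) are
# illustrative assumptions; consult the self_hosted_distro docs for specifics.
export LLAMA_STACK_PORT=8321
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct

docker run -it \
  --gpus all \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-meta-reference-gpu \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL
```

Without `--gpus all`, the container has no access to the host GPUs, so the inference provider falls back to loading the model on the CPU, which is the behavior reported in #1798.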
Directory contents:

- ondevice_distro/
- remote_hosted_distro/
- self_hosted_distro/
- building_distro.md
- configuration.md
- importing_as_library.md
- index.md
- kubernetes_deployment.md
- list_of_distributions.md
- starting_llama_stack_server.md