mirror of https://github.com/meta-llama/llama-stack.git
synced 2025-12-30 17:03:12 +00:00

fix: rewording

Signed-off-by: Jeff MAURY <jmaury@redhat.com>

This commit is contained in:
parent dd86427ce3
commit aa68e98b7a

5 changed files with 11 additions and 13 deletions
@@ -14,7 +14,7 @@ The `llamastack/distribution-{{ name }}` distribution consists of the following
 
 {{ providers_table }}
 
-You should use this distribution if you have a regular desktop machine without very powerful GPUs. Of course, if you have powerful GPUs, you can still continue using this distribution since Ollama supports GPU acceleration.
+You should use this distribution if you have a regular desktop machine without very powerful GPUs. Of course, if you have powerful GPUs, you can still continue using this distribution since Podman AI Lab supports GPU acceleration.
 
 {% if run_config_env_vars %}
 ### Environment Variables
@@ -29,7 +29,7 @@ The following environment variables can be configured:
 
 ## Setting up Podman AI Lab server
 
-Please check the [Podman AI Lab Documentation](https://github.com/containers/podman-desktop-extension-ai-lab) on how to install and run Ollama. After installing Ollama, you need to run `ollama serve` to start the server.
+Please check the [Podman AI Lab Documentation](https://github.com/containers/podman-desktop-extension-ai-lab) on how to install and run Podman AI Lab.
 
 
 If you are using Llama Stack Safety / Shield APIs, you will also need to pull and run the safety model.
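An aside, not part of this commit: since the reworded text no longer tells the reader to start a server by hand (`ollama serve` is gone), it can help to sanity-check that a model service is reachable before pointing Llama Stack at it. The sketch below assumes you have created and started a model service in the Podman AI Lab UI and that it exposes an OpenAI-compatible endpoint; the port is a placeholder for whatever port your service was assigned.

```bash
# Hypothetical smoke test (not from this diff): verify the Podman AI Lab
# model service answers on its OpenAI-compatible endpoint. Replace 10434
# with the port shown for your service in the Podman AI Lab UI.
curl http://localhost:10434/v1/models
```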
@@ -37,7 +37,6 @@ If you are using Llama Stack Safety / Shield APIs, you will also need to pull an
 ```bash
 export SAFETY_MODEL="meta-llama/Llama-Guard-3-1B"
 
-# ollama names this model differently, and we must use the ollama name when loading the model
 export PODMAN_AI_LAB_SAFETY_MODEL="llama-guard3:1b"
 ```
 
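The deleted comment explained why two variables exist: `SAFETY_MODEL` is the identifier Llama Stack registers the model under, while `PODMAN_AI_LAB_SAFETY_MODEL` is the backend's own tag for the same weights. A hypothetical companion pair for the inference model, mirroring that pattern, might look like the sketch below; these names are an assumption for illustration and do not appear in this commit.

```bash
# Assumed pattern only (not from this diff): the Stack-side model ID and
# the backend-side tag refer to the same model under two naming schemes.
export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"   # Llama Stack name (assumed)
export PODMAN_AI_LAB_INFERENCE_MODEL="llama3.2:3b"          # backend tag (assumed)
```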
@@ -71,7 +70,7 @@ docker run \
   -it \
   -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
   -v ~/.llama:/root/.llama \
-  -v ./llama_stack/templates/ollama/run-with-safety.yaml:/root/my-run.yaml \
+  -v ./llama_stack/templates/podman-ai-lab/run-with-safety.yaml:/root/my-run.yaml \
   llamastack/distribution-{{ name }} \
   --yaml-config /root/my-run.yaml \
   --port $LLAMA_STACK_PORT \
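For context, this hunk changes only the `-v` mount line. Read together with the unchanged lines around it, the full command would look roughly like the sketch below; the port value and the trailing `--env` wiring are assumptions based on the safety-model variables earlier in this file, not something this diff shows.

```bash
# Sketch of the complete command around the changed mount; LLAMA_STACK_PORT
# and the --env flag are assumed, not shown in this diff.
export LLAMA_STACK_PORT=5001   # assumed example port

docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  -v ./llama_stack/templates/podman-ai-lab/run-with-safety.yaml:/root/my-run.yaml \
  llamastack/distribution-{{ name }} \
  --yaml-config /root/my-run.yaml \
  --port $LLAMA_STACK_PORT \
  --env SAFETY_MODEL=$SAFETY_MODEL   # assumed: env wiring for the shield model
```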