feat: RamaLama Documentation and Templates
RamaLama is a fully open source AI model tool that facilitates local management of AI models. https://github.com/containers/ramalama It supports pulling models from HuggingFace, Ollama, OCI images, and via the file://, http://, and https:// URI schemes. It uses the llama.cpp and vLLM AI engines for running the models, and it defaults to running models inside containers. Signed-off-by: Charlie Doern <cdoern@redhat.com>
This commit is contained in: parent 4de45560bf, commit c9a41288a3
14 changed files with 1331 additions and 354 deletions
docs/source/distributions/self_hosted_distro/ramalama.md (new file, 194 lines)
@@ -0,0 +1,194 @@
---
orphan: true
---
<!-- This file was auto-generated by distro_codegen.py, please edit source -->

# RamaLama Distribution

```{toctree}
:maxdepth: 2
:hidden:

self
```

The `llamastack/distribution-ramalama` distribution consists of the following provider configurations.

| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| datasetio | `remote::huggingface`, `inline::localfs` |
| eval | `inline::meta-reference` |
| inference | `remote::ramalama` |
| safety | `inline::llama-guard` |
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::code-interpreter`, `inline::rag-runtime` |
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |

Use this distribution if you have a regular desktop machine without very powerful GPUs. If you do have powerful GPUs, you can still use this distribution, since RamaLama supports GPU acceleration.

### Environment Variables

The following environment variables can be configured:

- `LLAMA_STACK_PORT`: Port for the Llama Stack distribution server (default: `8321`)
- `RAMALAMA_URL`: URL of the RamaLama server (default: `http://0.0.0.0:8080/v1`)
- `INFERENCE_MODEL`: Inference model loaded into the RamaLama server (default: `meta-llama/Llama-3.2-3B-Instruct`)
- `SAFETY_MODEL`: Safety model loaded into the RamaLama server (default: `meta-llama/Llama-Guard-3-1B`)

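For example, to override the defaults before starting the stack (the values shown below are simply the defaults listed above):

```bash
# Values shown are the documented defaults; adjust as needed for your setup.
export LLAMA_STACK_PORT=8321
export RAMALAMA_URL=http://0.0.0.0:8080/v1
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
export SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
```
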
## Setting up RamaLama server

Please check the [RamaLama Documentation](https://github.com/containers/ramalama) for how to install and run RamaLama. After installing RamaLama, run `ramalama serve` to start the server.

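As a rough sketch, one common installation route is shown below; installation methods vary by platform (PyPI, distro packages, or an install script), so treat this as an example to verify against the RamaLama documentation rather than the canonical instructions:

```bash
# One possible install path, assuming pip is available; distro packages and
# install scripts are also documented in the RamaLama repository.
pip install ramalama
```
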
To load models, run:

```bash
# The model name as RamaLama knows it
export RAMALAMA_INFERENCE_MODEL="llama3.2:3b-instruct-fp16"

# The model identifier Llama Stack will use (see the note below about macOS paths)
export INFERENCE_MODEL="~/path_to_model/meta-llama/Llama-3.2-3B-Instruct"

ramalama serve $RAMALAMA_INFERENCE_MODEL
```

On macOS, RamaLama requires the inference model to be the fully qualified path to the model on disk; on Linux, the model name alone is sufficient.

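Once `ramalama serve` is running, you can sanity-check that it is reachable at the URL Llama Stack will use. The check below assumes the default port 8080 and that the server exposes the usual OpenAI-compatible `/v1/models` endpoint; adjust if your setup differs:

```bash
# Should return a JSON listing that includes the model you served above
curl http://0.0.0.0:8080/v1/models
```
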
If you are using Llama Stack Safety / Shield APIs, you will also need to pull and run the safety model.

```bash
export SAFETY_MODEL="meta-llama/Llama-Guard-3-1B"

# ramalama names this model differently, and we must use the ramalama name when loading the model
export RAMALAMA_SAFETY_MODEL="llama-guard3:1b"
ramalama run $RAMALAMA_SAFETY_MODEL --keepalive 60m
```

## Running Llama Stack

Now you are ready to run Llama Stack with RamaLama as the inference provider. You can do this via Conda, or via Podman or Docker using the pre-built image.

### Via Podman

This method allows you to get started quickly without having to build the distribution code.

```bash
export LLAMA_STACK_PORT=5001
podman run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama:z \
  llamastack/distribution-ramalama \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env RAMALAMA_URL=http://host.containers.internal:8080/v1
```

If you are using Llama Stack Safety / Shield APIs, use:

```bash
# You need a local checkout of llama-stack to run this, get it using
# git clone https://github.com/meta-llama/llama-stack.git
cd /path/to/llama-stack

podman run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama:z \
  -v ./llama_stack/templates/ramalama/run-with-safety.yaml:/root/my-run.yaml:z \
  llamastack/distribution-ramalama \
  --yaml-config /root/my-run.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env SAFETY_MODEL=$SAFETY_MODEL \
  --env RAMALAMA_URL=http://host.containers.internal:8080/v1
```

### Via Docker

This method allows you to get started quickly without having to build the distribution code.

```bash
export LLAMA_STACK_PORT=5001
docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-ramalama \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env RAMALAMA_URL=http://host.docker.internal:8080/v1
```

If you are using Llama Stack Safety / Shield APIs, use:

```bash
# You need a local checkout of llama-stack to run this, get it using
# git clone https://github.com/meta-llama/llama-stack.git
cd /path/to/llama-stack

docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  -v ./llama_stack/templates/ramalama/run-with-safety.yaml:/root/my-run.yaml \
  llamastack/distribution-ramalama \
  --yaml-config /root/my-run.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env SAFETY_MODEL=$SAFETY_MODEL \
  --env RAMALAMA_URL=http://host.docker.internal:8080/v1
```

### Via Conda

Make sure you have done `uv pip install llama-stack` and have the Llama Stack CLI available.

```bash
export LLAMA_STACK_PORT=5001

llama stack build --template ramalama --image-type conda
llama stack run ./run.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env RAMALAMA_URL=http://0.0.0.0:8080/v1
```

If you are using Llama Stack Safety / Shield APIs, use:

```bash
llama stack run ./run-with-safety.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env SAFETY_MODEL=$SAFETY_MODEL \
  --env RAMALAMA_URL=http://0.0.0.0:8080/v1
```

### (Optional) Update Model Serving Configuration

```{note}
Please check the [model_aliases](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/inference/ramalama/ramalama.py#L45) for the supported RamaLama models.
```

To serve a new model with `ramalama`:

```bash
ramalama run <model_name>
```

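For example, using one of the model names that appears elsewhere in this guide (any supported RamaLama model name works here):

```bash
ramalama run llama3.2:3b-instruct-fp16
```
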
To make sure that the model is being served correctly, run `ramalama ps` to get a list of models being served by ramalama.

```
$ ramalama ps

NAME                       ID            SIZE    PROCESSOR    UNTIL
llama3.1:8b-instruct-fp16  4aacac419454  17 GB   100% GPU     4 minutes from now
```

To verify that the model served by ramalama is correctly connected to the Llama Stack server:

```bash
$ llama-stack-client models list
+----------------------+----------------------+---------------+-------------------------------------------------+
| identifier           | llama_model          | provider_id   | metadata                                        |
+======================+======================+===============+=================================================+
| Llama3.1-8B-Instruct | Llama3.1-8B-Instruct | ramalama0     | {'ramalama_model': 'llama3.1:8b-instruct-fp16'} |
+----------------------+----------------------+---------------+-------------------------------------------------+
```

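As a final end-to-end check, you can send a test prompt through the stack. The sketch below assumes the `configure` and `inference chat-completion` subcommands are available in your installed `llama-stack-client` version; the prompt itself is arbitrary:

```bash
# Point the client at your running stack (port from LLAMA_STACK_PORT)
llama-stack-client configure --endpoint http://localhost:$LLAMA_STACK_PORT

# Send a test prompt through the ramalama-backed inference provider
llama-stack-client inference chat-completion \
  --message "Hello! Which model are you?"
```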