---
orphan: true
---
# Cerebras Distribution

```{toctree}
:maxdepth: 2
:hidden:

self
```
The `llamastack/distribution-cerebras` distribution consists of the following provider configurations:
| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| inference | `remote::cerebras` |
| memory | `inline::meta-reference` |
| safety | `inline::llama-guard` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::code-interpreter`, `inline::memory-runtime` |
### Environment Variables
The following environment variables can be configured:
- `LLAMA_STACK_PORT`: Port for the Llama Stack distribution server (default: `5001`)
- `CEREBRAS_API_KEY`: Cerebras API Key (default: empty)
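For example, you can export both variables in your shell before starting the server (the key value below is a placeholder; substitute your own):

```bash
# Set the server port and the Cerebras API key for the current shell.
# Replace the placeholder with your actual key from cloud.cerebras.ai.
export LLAMA_STACK_PORT=5001
export CEREBRAS_API_KEY=your-cerebras-api-key
```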
### Models
The following models are available by default:
- `meta-llama/Llama-3.1-8B-Instruct` (`llama3.1-8b`)
- `meta-llama/Llama-3.3-70B-Instruct` (`llama-3.3-70b`)
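Once the server is running (see below), you can confirm which models are registered. A minimal sketch, assuming the `llama-stack-client` CLI is installed (e.g., via `pip install llama-stack-client`):

```bash
# Point the client at the locally running distribution, then list models.
llama-stack-client configure --endpoint http://localhost:5001
llama-stack-client models list
```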
### Prerequisite: API Keys

Make sure you have access to a Cerebras API key. You can get one by visiting [cloud.cerebras.ai](https://cloud.cerebras.ai).
## Running Llama Stack with Cerebras

You can run the distribution either via Conda (building the code yourself) or via Docker, which uses a pre-built image.
### Via Docker

This method allows you to get started quickly without having to build the distribution code.

```bash
LLAMA_STACK_PORT=5001
docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ./run.yaml:/root/my-run.yaml \
  llamastack/distribution-cerebras \
  --yaml-config /root/my-run.yaml \
  --port $LLAMA_STACK_PORT \
  --env CEREBRAS_API_KEY=$CEREBRAS_API_KEY
```
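The command above mounts a local `run.yaml` into the container. If you don't have one yet, one possible starting point (an assumption about your setup; adjust the path to your checkout) is the Cerebras template config shipped in the llama-stack source tree:

```bash
# Copy the Cerebras template run configuration from a llama-stack checkout.
# The path reflects the repository layout; adjust if it has moved.
cp /path/to/llama-stack/llama_stack/templates/cerebras/run.yaml ./run.yaml
```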
### Via Conda

```bash
llama stack build --template cerebras --image-type conda
llama stack run ./run.yaml \
  --port 5001 \
  --env CEREBRAS_API_KEY=$CEREBRAS_API_KEY
```
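With either method, a quick way to confirm the server came up is a plain TCP check on the configured port (no Llama Stack API assumed):

```bash
# Simple check that the distribution server is accepting connections.
nc -z localhost 5001 && echo "Llama Stack server is listening on port 5001"
```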