# Ollama Distribution

The `llamastack/distribution-ollama` distribution consists of the following provider configurations.

| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|----------------- |---------------- |---------------- |---------------------------------- |---------------- |---------------- |
| **Provider(s)** | remote::ollama | meta-reference | remote::pgvector, remote::chroma | remote::ollama | meta-reference |

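If the distribution image is published on a registry under the same name (as is typically the case on Docker Hub), you can prefetch it before bringing up one of the compose files below:
```
# Assumes the image is published under the distribution name shown above.
docker pull llamastack/distribution-ollama
```
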
### Start a Distribution (Single Node GPU)

> [!NOTE]
> This assumes you have access to a GPU, since it will start an Ollama server with access to your GPU.

```
$ cd llama-stack/distribution/ollama/gpu
$ ls
compose.yaml run.yaml
$ docker compose up
```

You will see outputs similar to the following ---
```
[ollama] | [GIN] 2024/10/18 - 21:19:41 | 200 | 226.841µs | ::1 | GET "/api/ps"
[ollama] | [GIN] 2024/10/18 - 21:19:42 | 200 | 60.908µs | ::1 | GET "/api/ps"
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://[::]:5000 (Press CTRL+C to quit)
[llamastack] | Resolved 12 providers
[llamastack] | inner-inference => ollama0
[llamastack] | models => __routing_table__
[llamastack] | inference => __autorouted__
```
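
Once the stack reports that Uvicorn is running on port 5000, you can send a quick smoke-test request. The sketch below assumes the `/inference/chat_completion` route and a model identifier that has already been registered with your stack; adjust both to match your llama-stack version and `run.yaml`:
```
# Assumes the /inference/chat_completion route and a registered model id -- adjust for your setup.
curl -X POST http://localhost:5000/inference/chat_completion \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Llama3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": false
  }'
```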

To kill the server:
```
docker compose down
```

### Start the Distribution (Single Node CPU)

> [!NOTE]
> This will start an Ollama server with CPU only. Please see the [Ollama documentation](https://github.com/ollama/ollama) for details on serving models with CPU only.

```
$ cd llama-stack/distribution/ollama/cpu
$ ls
compose.yaml run.yaml
$ docker compose up
```

### (Alternative) ollama run + llama stack run

If you wish to separately spin up an Ollama server and connect it with Llama Stack, you may use the following commands.

#### Start Ollama server
- Please check the [Ollama documentation](https://github.com/ollama/ollama) for more details.

**Via Docker**
```
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
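
Once the container is up, you can confirm the Ollama server is reachable on the published port (11434 above); the `/api/ps` endpoint seen in the logs earlier lists the models currently loaded. You can also pull a model inside the container with `docker exec` (the model tag below is only an example):
```
# Check the Ollama HTTP API on the published port (see the GET /api/ps lines in the logs above).
curl http://localhost:11434/api/ps
# Optionally pull a model inside the container; the tag is only an example.
docker exec -it ollama ollama pull llama3.1:8b
```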

**Via CLI**
```
ollama run <model_id>
```
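
For example, to serve an 8B instruct model (the tag below is illustrative; use whichever model your `run.yaml` expects):
```
# Example only -- use whichever model your run.yaml expects.
ollama run llama3.1:8b-instruct-fp16
```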

#### Start Llama Stack server pointing to Ollama server

**Via Docker**
```
docker run --network host -it -p 5000:5000 \
  -v ~/.llama:/root/.llama \
  -v ./ollama-run.yaml:/root/llamastack-run-ollama.yaml \
  --gpus=all \
  llamastack-local-cpu \
  --yaml_config /root/llamastack-run-ollama.yaml
```

Make sure the inference provider in your `ollama-run.yaml` file points to the correct Ollama endpoint, e.g.
```
inference:
  - provider_id: ollama0
    provider_type: remote::ollama
    config:
      url: http://127.0.0.1:11434
```

**Via Conda**

```
llama stack build --config ./build.yaml
llama stack run ./gpu/run.yaml
```