docs: update ollama doc url

Signed-off-by: reidliu <reid201711@gmail.com>

commit 5743988e34 (parent 6033e6893e)
3 changed files with 3 additions and 3 deletions
@@ -130,7 +130,7 @@ llama stack run ./run-with-safety.yaml \
 ### (Optional) Update Model Serving Configuration
 
 ```{note}
-Please check the [model_entries](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/inference/ollama/ollama.py#L45) for the supported Ollama models.
+Please check the [model_entries](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/inference/ollama/models.py) for the supported Ollama models.
 ```
 
 To serve a new model with `ollama`
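The context line this hunk ends on ("To serve a new model with `ollama`") is truncated by the diff; the command it introduces is not shown. As a minimal sketch, assuming `llama3.1:8b-instruct-fp16` is one of the tags listed in the linked `models.py` entries, serving a new model would look like:

```
# Pull a model tag and serve it; the tag here is an assumption for
# illustration -- substitute any tag from the model_entries list.
ollama pull llama3.1:8b-instruct-fp16
ollama run llama3.1:8b-instruct-fp16 --keepalive -1m
```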
@@ -40,7 +40,7 @@ If you're looking for more specific topics, we have a [Zero to Hero Guide](#next
 ollama run llama3.2:3b-instruct-fp16 --keepalive -1m
 ```
 **Note**:
-- The supported models for llama stack for now is listed in [here](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/inference/ollama/ollama.py#L43)
+- The supported models for llama stack for now is listed in [here](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/inference/ollama/models.py)
 - `keepalive -1m` is used so that ollama continues to keep the model in memory indefinitely. Otherwise, ollama frees up memory and you would have to run `ollama run` again.
 
 ---
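For context on the `keepalive` note in this hunk: ollama's `--keepalive` flag takes a duration, where a negative value such as `-1m` keeps the model loaded indefinitely and a positive value unloads it after that much idle time. Both forms, using the model tag from the hunk above:

```
# Keep the model in memory indefinitely (any negative duration works).
ollama run llama3.2:3b-instruct-fp16 --keepalive -1m

# Or keep it loaded for 30 minutes of idle time, then free the memory.
ollama run llama3.2:3b-instruct-fp16 --keepalive 30m
```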
@@ -119,7 +119,7 @@ llama stack run ./run-with-safety.yaml \
 ### (Optional) Update Model Serving Configuration
 
 ```{note}
-Please check the [model_entries](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/inference/ollama/ollama.py#L45) for the supported Ollama models.
+Please check the [model_entries](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/inference/ollama/models.py) for the supported Ollama models.
 ```
 
 To serve a new model with `ollama`
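After updating the model serving configuration described in these hunks, the standard ollama subcommands can confirm what is pulled and what is currently loaded:

```
# Models downloaded locally.
ollama list

# Models currently loaded in memory (useful to verify keepalive behavior).
ollama ps
```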