This commit is contained in:
Xi Yan 2024-11-03 11:23:11 -08:00
parent c7f87fcbe7
commit 70dea317fc
2 changed files with 2 additions and 2 deletions

@@ -81,7 +81,7 @@ llama stack run ./gpu/run.yaml
docker run --network host -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./gpu/run.yaml:/root/llamastack-run-ollama.yaml --gpus=all llamastack/distribution-ollama --yaml_config /root/llamastack-run-ollama.yaml
```
-Make sure in you `run.yaml` file, you inference provider is pointing to the correct Ollama endpoint. E.g.
+Make sure in your `run.yaml` file, your inference provider is pointing to the correct Ollama endpoint. E.g.
```
inference:
- provider_id: ollama0
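The hunk's context cuts off after `provider_id`. For illustration only, a fuller `run.yaml` inference entry for Ollama might look like the sketch below; the `provider_type` and `config` field names are assumptions based on typical llama-stack distribution configs, not taken from this commit, and the URL uses Ollama's conventional default port:

```yaml
inference:
  - provider_id: ollama0
    provider_type: remote::ollama   # assumed field name, not shown in this diff
    config:
      url: http://127.0.0.1:11434   # Ollama's conventional default endpoint
```

The point of the doc fix above is that this `url` must match wherever your Ollama server is actually listening.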


@@ -20,7 +20,7 @@ The `llamastack/distribution-together` distribution consists of the following pr
$ cd distributions/together && docker compose up
```
-Make sure in you `run.yaml` file, you inference provider is pointing to the correct Together URL server endpoint. E.g.
+Make sure in your `run.yaml` file, your inference provider is pointing to the correct Together URL server endpoint. E.g.
```
inference:
- provider_id: together
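As with the Ollama hunk, the context ends at `provider_id`. A hypothetical fuller entry for the Together provider is sketched below; everything past `provider_id` (the `provider_type` value, `config` keys, and API-key placeholder) is an assumption for illustration, not content from this commit:

```yaml
inference:
  - provider_id: together
    provider_type: remote::together   # assumed field name, not shown in this diff
    config:
      url: https://api.together.xyz/v1
      api_key: <YOUR_TOGETHER_API_KEY>  # placeholder; supply your own key
```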