add more distro templates (#279)

* verify dockers

* together distro verified

* readme

* fireworks distro

* fireworks compose up

* fireworks verified
Author: Xi Yan 2024-10-21 18:15:08 -07:00 (committed by GitHub)
parent cf27d19dd5
commit 4d2bd2d39e
18 changed files with 265 additions and 42 deletions

@@ -71,10 +71,10 @@ ollama run <model_id>
**Via Docker**
```
-docker run --network host -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./ollama-run.yaml:/root/llamastack-run-ollama.yaml --gpus=all llamastack-local-cpu --yaml_config /root/llamastack-run-ollama.yaml
+docker run --network host -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./gpu/run.yaml:/root/llamastack-run-ollama.yaml --gpus=all distribution-ollama --yaml_config /root/llamastack-run-ollama.yaml
```
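Before mounting a run config into the container, it can help to sanity-check that the inference provider's endpoint URL is at least well-formed. A minimal sketch (the `check_endpoint` helper is hypothetical, and Ollama's default port 11434 is an assumption, not taken from the diff above):

```python
from urllib.parse import urlparse

# Hypothetical helper: validate the `url` field of an inference provider
# entry from a run.yaml before passing the file to the container.
def check_endpoint(url: str) -> bool:
    parsed = urlparse(url)
    # Require an http(s) scheme and a non-empty host.
    return parsed.scheme in ("http", "https") and bool(parsed.hostname)

print(check_endpoint("http://127.0.0.1:11434"))  # well-formed endpoint -> True
print(check_endpoint("not-a-url"))               # missing scheme/host -> False
```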
-Make sure in you `ollama-run.yaml` file, you inference provider is pointing to the correct Ollama endpoint. E.g.
+Make sure that in your `run.yaml` file, your inference provider is pointing to the correct Ollama endpoint. E.g.
```
inference:
- provider_id: ollama0