Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-07-29 15:23:51 +00:00)
Merge branch 'main' into evals_9

commit e2aa592e26
2 changed files with 3 additions and 3 deletions
````diff
@@ -30,7 +30,7 @@ inference:
       api_key: <optional api key>
 ```
 
-### (Alternative) TGI server + llama stack run (Single Node GPU)
+### (Alternative) llama stack run (Single Node CPU)
 
 ```
 docker run --network host -it -p 5000:5000 -v ./run.yaml:/root/my-run.yaml --gpus=all llamastack/distribution-fireworks --yaml_config /root/my-run.yaml
````
````diff
@@ -33,7 +33,7 @@ inference:
       api_key: <optional api key>
 ```
 
-### (Alternative) TGI server + llama stack run (Single Node GPU)
+### (Alternative) llama stack run (Single Node CPU)
 
 ```
 docker run --network host -it -p 5000:5000 -v ./run.yaml:/root/my-run.yaml --gpus=all llamastack/distribution-together --yaml_config /root/my-run.yaml
````
````diff
@@ -52,7 +52,7 @@ inference:
 Together distribution comes with weaviate as Memory provider. We also need to configure the remote weaviate API key and URL in `run.yaml` to get memory API.
 ```
 memory:
-  - provider_id: meta0
+  - provider_id: weaviate0
     provider_type: remote::weaviate
     config:
       weaviate_api_key: <ENTER_WEAVIATE_API_KEY>
````
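For context, the memory provider entry this diff touches would sit in `run.yaml` roughly as sketched below. Only the fields shown in the diff are confirmed; the URL field is a hypothetical placeholder, included because the surrounding prose says a remote weaviate URL must also be configured.

```yaml
# Sketch of the memory section of run.yaml after this commit.
# Field names beyond those visible in the diff are assumptions.
memory:
  - provider_id: weaviate0
    provider_type: remote::weaviate
    config:
      weaviate_api_key: <ENTER_WEAVIATE_API_KEY>
      # The prose mentions a remote URL as well; this field name
      # is a hypothetical placeholder, not taken from the diff.
      weaviate_url: <ENTER_WEAVIATE_URL>
```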