forked from phoenix-oss/llama-stack-mirror
docs
This commit is contained in:
parent cc61fd8083
commit b0b9c905b3

1 changed file with 3 additions and 3 deletions
@ -245,7 +245,7 @@ $ llama stack build --template meta-reference-gpu --image-type conda
$ llama stack run ~/.llama/distributions/llamastack-meta-reference-gpu/meta-reference-gpu-run.yaml
```
Note: If you wish to use pgvector or chromadb as the memory provider, you may need to update the generated `run.yaml` file to point to the desired memory provider; see [Memory Providers](https://llama-stack.readthedocs.io/en/latest/api_providers/memory_api.html) for more details. Alternatively, comment out the pgvector or chromadb memory provider in the `run.yaml` file to use the default inline memory provider, keeping only the following section:
```
memory:
- provider_id: faiss-0
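To make the note above concrete, a minimal sketch of the kept section in `run.yaml` (the commented-out remote-provider lines and the placeholder comments are illustrative assumptions, not the generated file's exact contents):

```yaml
memory:
  # Comment out any remote providers to fall back to the default inline provider:
  # - provider_id: pgvector-0
  #   ...
  # - provider_id: chromadb-0
  #   ...
  - provider_id: faiss-0
    # keep the remaining generated fields for this provider as-is
```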
@ -286,7 +286,7 @@ inference:
$ llama stack run ~/.llama/distributions/llamastack-tgi/tgi-run.yaml
```
Note: If you wish to use pgvector or chromadb as the memory provider, you may need to update the generated `run.yaml` file to point to the desired memory provider; see [Memory Providers](https://llama-stack.readthedocs.io/en/latest/api_providers/memory_api.html) for more details. Alternatively, comment out the pgvector or chromadb memory provider in the `run.yaml` file to use the default inline memory provider, keeping only the following section:
```
memory:
- provider_id: faiss-0
@ -334,7 +334,7 @@ llama stack build --template ollama --image-type conda
llama stack run ~/.llama/distributions/llamastack-ollama/ollama-run.yaml
```
Note: If you wish to use pgvector or chromadb as the memory provider, you may need to update the generated `run.yaml` file to point to the desired memory provider; see [Memory Providers](https://llama-stack.readthedocs.io/en/latest/api_providers/memory_api.html) for more details. Alternatively, comment out the pgvector or chromadb memory provider in the `run.yaml` file to use the default inline memory provider, keeping only the following section:
```
memory:
- provider_id: faiss-0