Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-10-16 06:53:47 +00:00)
Update provider_type -> inline::llama-guard in templates, update run.yaml
This commit is contained in:
parent 15ffceb533 / commit 4971113f92
24 changed files with 121 additions and 98 deletions
@@ -36,9 +36,9 @@ the provider types (implementations) you want to use for these APIs.

 Tip: use <TAB> to see options for the providers.

 > Enter provider for API inference: meta-reference
-> Enter provider for API safety: meta-reference
+> Enter provider for API safety: inline::llama-guard
 > Enter provider for API agents: meta-reference
-> Enter provider for API memory: meta-reference
+> Enter provider for API memory: inline::faiss
 > Enter provider for API datasetio: meta-reference
 > Enter provider for API scoring: meta-reference
 > Enter provider for API eval: meta-reference
@@ -203,8 +203,8 @@ distribution_spec:
   description: Like local, but use ollama for running LLM inference
   providers:
     inference: remote::ollama
-    memory: meta-reference
-    safety: meta-reference
+    memory: inline::faiss
+    safety: inline::llama-guard
     agents: meta-reference
     telemetry: meta-reference
 image_type: conda
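For context, applying the second hunk yields a build spec like the following. This is a sketch assembled only from the context and added lines visible in the diff above; surrounding keys and exact indentation in the real file may differ:

```yaml
distribution_spec:
  description: Like local, but use ollama for running LLM inference
  providers:
    inference: remote::ollama
    # memory and safety now name concrete inline providers
    # instead of the generic meta-reference type
    memory: inline::faiss
    safety: inline::llama-guard
    agents: meta-reference
    telemetry: meta-reference
image_type: conda
```

The change replaces the catch-all `meta-reference` provider type for memory and safety with explicit implementations (`inline::faiss` for vector memory, `inline::llama-guard` for safety shields), matching the renamed provider types in the templates.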
|
Loading…
Add table
Add a link
Reference in a new issue