Update provider_type -> inline::llama-guard in templates, update run.yaml

Ashwin Bharambe 2024-11-11 09:12:17 -08:00
parent 15ffceb533
commit 4971113f92
24 changed files with 121 additions and 98 deletions


@@ -36,9 +36,9 @@ the provider types (implementations) you want to use for these APIs.
 Tip: use <TAB> to see options for the providers.
 > Enter provider for API inference: meta-reference
-> Enter provider for API safety: meta-reference
+> Enter provider for API safety: inline::llama-guard
 > Enter provider for API agents: meta-reference
-> Enter provider for API memory: meta-reference
+> Enter provider for API memory: inline::faiss
 > Enter provider for API datasetio: meta-reference
 > Enter provider for API scoring: meta-reference
 > Enter provider for API eval: meta-reference
@@ -203,8 +203,8 @@ distribution_spec:
   description: Like local, but use ollama for running LLM inference
   providers:
     inference: remote::ollama
-    memory: meta-reference
-    safety: meta-reference
+    memory: inline::faiss
+    safety: inline::llama-guard
     agents: meta-reference
     telemetry: meta-reference
 image_type: conda
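
The run.yaml changes mentioned in the commit title are not shown in the hunks above, but the same rename applies there. A minimal sketch of the affected providers entries follows, assuming the provider_id / provider_type / config layout used by Llama Stack run configs; the provider_id values and empty config blocks are illustrative, not copied from this diff:

# Hypothetical run.yaml excerpt -- shows only the provider_type rename
providers:
  safety:
    - provider_id: llama-guard            # illustrative id, not from the diff
      provider_type: inline::llama-guard  # was: meta-reference before this commit
      config: {}
  memory:
    - provider_id: faiss                  # illustrative id, not from the diff
      provider_type: inline::faiss        # was: meta-reference before this commit
      config: {}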