Mirror of https://github.com/meta-llama/llama-stack.git
Splits the meta-reference safety implementation into three distinct providers:

- inline::llama-guard
- inline::prompt-guard
- inline::code-scanner

Note that this PR is a backward-incompatible change to the llama stack server. I have added a `deprecation_error` field to `ProviderSpec` -- the server reads it and immediately barfs. This is used to direct the user with a specific message on what action to perform. An automagical "config upgrade" is a bit too much work to implement right now :/

(Note that we will be gradually prefixing all inline providers with `inline::` -- I am only doing this for this set of new providers because otherwise existing configuration files would break even more badly.)
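For illustration, here is a minimal sketch (not the actual llama-stack code) of how a `deprecation_error` field on a provider spec can make the server fail fast with an actionable message. The dataclass shape and the `resolve_provider` helper are assumptions made for this example:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ProviderSpec:
    api: str
    provider_type: str
    # When set, the provider can no longer be instantiated; the message
    # tells the user exactly how to fix their config.
    deprecation_error: Optional[str] = None


def resolve_provider(spec: ProviderSpec) -> ProviderSpec:
    # Fail fast at server startup instead of silently running a
    # provider that no longer exists.
    if spec.deprecation_error:
        raise ValueError(
            f"Provider `{spec.provider_type}` for API `{spec.api}` is "
            f"deprecated: {spec.deprecation_error}"
        )
    return spec


# The old monolithic safety provider now directs users to the three
# new inline:: providers introduced by this PR.
old_safety = ProviderSpec(
    api="safety",
    provider_type="meta-reference",
    deprecation_error=(
        "the meta-reference safety provider has been split; use "
        "inline::llama-guard, inline::prompt-guard, or "
        "inline::code-scanner instead"
    ),
)

try:
    resolve_provider(old_safety)
except ValueError as err:
    print(err)  # prints the actionable upgrade message
```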
13 lines
397 B
YAML
```yaml
name: meta-reference-gpu
distribution_spec:
  docker_image: pytorch/pytorch:2.5.0-cuda12.4-cudnn9-runtime
  description: Use code from `llama_stack` itself to serve all llama stack APIs
  providers:
    inference: meta-reference
    memory:
    - meta-reference
    - remote::chromadb
    - remote::pgvector
    safety: inline::llama-guard
    agents: meta-reference
    telemetry: meta-reference
```
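As a usage note, a minimal sketch (assuming PyYAML; the file name `meta-reference-gpu.yaml` and the validation logic are hypothetical, not part of this PR) of loading the spec above and rejecting a pre-split safety provider name:

```python
import yaml

# The three providers this PR splits safety into.
NEW_SAFETY_PROVIDERS = {
    "inline::llama-guard",
    "inline::prompt-guard",
    "inline::code-scanner",
}

# Hypothetical file name for the build spec shown above.
with open("meta-reference-gpu.yaml") as f:
    build = yaml.safe_load(f)

safety = build["distribution_spec"]["providers"]["safety"]
if safety not in NEW_SAFETY_PROVIDERS:
    raise SystemExit(
        f"safety provider `{safety}` is deprecated; "
        f"use one of {sorted(NEW_SAFETY_PROVIDERS)}"
    )
print(f"safety provider `{safety}` is up to date")
```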