Split safety into (llama-guard, prompt-guard, code-scanner) (#400)

Splits the meta-reference safety implementation into three distinct providers:

- inline::llama-guard
- inline::prompt-guard
- inline::code-scanner

Note that this PR is a backward-incompatible change to the llama stack server. I have added a deprecation_error field to ProviderSpec -- the server reads it and immediately errors out with that message, pointing the user to the specific action they need to take. An automagical "config upgrade" is a bit too much work to implement right now :/
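For a rough idea of the mechanism, here is a minimal sketch (not the actual llama-stack code) of how a deprecation_error on a provider spec could be declared and then enforced at server startup. The class shape, module paths, and message wording below are illustrative assumptions, not copied from the repository:

```python
from typing import List, Optional

from pydantic import BaseModel


class InlineProviderSpec(BaseModel):
    api: str
    provider_type: str
    module: str = ""
    deprecation_error: Optional[str] = None  # when set, the provider can no longer be used


DEPRECATION_MESSAGE = """
Provider `meta-reference` for the `safety` API has been split up.
- if you use Llama Guard, switch to the `inline::llama-guard` provider
- if you use Prompt Guard, switch to the `inline::prompt-guard` provider
- if you use Code Scanner, switch to the `inline::code-scanner` provider
"""


def available_safety_providers() -> List[InlineProviderSpec]:
    return [
        # old provider type kept only so the server can emit a helpful error
        InlineProviderSpec(
            api="safety",
            provider_type="meta-reference",
            deprecation_error=DEPRECATION_MESSAGE,
        ),
        InlineProviderSpec(
            api="safety",
            provider_type="inline::llama-guard",
            module="llama_stack.providers.inline.safety.llama_guard",
        ),
        InlineProviderSpec(
            api="safety",
            provider_type="inline::prompt-guard",
            module="llama_stack.providers.inline.safety.prompt_guard",
        ),
        InlineProviderSpec(
            api="safety",
            provider_type="inline::code-scanner",
            module="llama_stack.providers.inline.safety.code_scanner",
        ),
    ]


def resolve_provider(spec: InlineProviderSpec) -> None:
    # the server checks the field at startup and refuses to proceed
    if spec.deprecation_error:
        raise RuntimeError(spec.deprecation_error)
```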

(Note that we will gradually prefix all inline providers with inline:: -- I am only doing this for this set of new providers for now, because otherwise existing configuration files would break even more badly.)
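For illustration only, this is roughly what a safety provider entry could look like under the new naming, shown here as a Python dict (actual run configs are YAML, and the exact keys are assumptions):

```python
# Hypothetical provider entry using the new provider_type naming; keys are illustrative.
safety_provider = {
    "provider_id": "llama-guard",
    "provider_type": "inline::llama-guard",  # previously: "meta-reference"
    "config": {},
}
```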
Ashwin Bharambe 2024-11-11 09:29:18 -08:00 committed by GitHub
parent 6d38b1690b
commit c1f7ba3aed
47 changed files with 464 additions and 500 deletions

@@ -18,7 +18,7 @@ DEFAULT_PROVIDER_COMBINATIONS = [
     pytest.param(
         {
             "inference": "meta_reference",
-            "safety": "meta_reference",
+            "safety": "llama_guard",
             "memory": "meta_reference",
             "agents": "meta_reference",
         },
@@ -28,7 +28,7 @@ DEFAULT_PROVIDER_COMBINATIONS = [
     pytest.param(
         {
             "inference": "ollama",
-            "safety": "meta_reference",
+            "safety": "llama_guard",
             "memory": "meta_reference",
             "agents": "meta_reference",
         },
@@ -38,7 +38,7 @@ DEFAULT_PROVIDER_COMBINATIONS = [
     pytest.param(
         {
             "inference": "together",
-            "safety": "meta_reference",
+            "safety": "llama_guard",
             # make this work with Weaviate which is what the together distro supports
             "memory": "meta_reference",
             "agents": "meta_reference",