llama-stack/llama_stack/providers/remote/inference
Ashwin Bharambe c1f7ba3aed
Split safety into (llama-guard, prompt-guard, code-scanner) (#400)
Splits the meta-reference safety implementation into three distinct providers:

- inline::llama-guard
- inline::prompt-guard
- inline::code-scanner

Note that this PR is a backward-incompatible change to the llama stack server. I have added a deprecation_error field to ProviderSpec -- the server reads it and immediately barfs. This is used to direct the user, via a specific message, to the action they need to perform. An automagical "config upgrade" is a bit too much work to implement right now :/

(Note that we will be gradually prefixing all inline providers with inline:: -- I am only doing this for this set of new providers because otherwise existing configuration files will break even more badly.)
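A minimal sketch of how a deprecation_error field on a provider spec can fail the server fast with a pointed migration message. The field name deprecation_error and the inline:: provider IDs come from this PR description; the class shape, the validate_provider helper, and the exact message text are illustrative assumptions, not the actual llama-stack code.

```python
# Hypothetical sketch: ProviderSpec shape and validate_provider are
# assumptions for illustration; deprecation_error is the field this PR adds.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ProviderSpec:
    provider_type: str
    # When set, the server refuses to start and surfaces this message.
    deprecation_error: Optional[str] = None


def validate_provider(spec: ProviderSpec) -> None:
    # Fail fast: a deprecated spec tells the user exactly what to change.
    if spec.deprecation_error is not None:
        raise ValueError(spec.deprecation_error)


# The old monolithic safety provider, now split into three inline providers.
legacy = ProviderSpec(
    provider_type="meta-reference",
    deprecation_error=(
        "the meta-reference safety provider has been split; use "
        "inline::llama-guard, inline::prompt-guard, or "
        "inline::code-scanner instead"
    ),
)

try:
    validate_provider(legacy)
except ValueError as e:
    print(f"server startup failed: {e}")
```

The point of the pattern is that an incompatible config change surfaces as one actionable error at startup, rather than a confusing failure deep inside the safety implementation.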
2024-11-11 09:29:18 -08:00
bedrock      Split safety into (llama-guard, prompt-guard, code-scanner) (#400)  2024-11-11 09:29:18 -08:00
databricks   impls -> inline, adapters -> remote (#381)                          2024-11-06 14:54:05 -08:00
fireworks    [LlamaStack][Fireworks] Update client and add unittest (#390)       2024-11-07 10:11:28 -08:00
ollama       Split safety into (llama-guard, prompt-guard, code-scanner) (#400)  2024-11-11 09:29:18 -08:00
sample       migrate model to Resource and new registration signature (#410)     2024-11-08 16:12:57 -08:00
tgi          migrate model to Resource and new registration signature (#410)     2024-11-08 16:12:57 -08:00
together     fix together inference validator (#393)                             2024-11-07 11:31:53 -08:00
vllm         migrate model to Resource and new registration signature (#410)     2024-11-08 16:12:57 -08:00
__init__.py  impls -> inline, adapters -> remote (#381)                          2024-11-06 14:54:05 -08:00