llama-stack-mirror/llama_stack/providers/remote/inference/bedrock
File         Last commit                                                                   Date
__init__.py  Split safety into (llama-guard, prompt-guard, code-scanner) (#400)            2024-11-11 09:29:18 -08:00
bedrock.py   Add unsupported OpenAI mixin to all remaining inference providers             2025-04-09 15:47:02 -04:00
config.py    Update more distribution docs to be simpler and partially codegen'ed          2024-11-20 22:03:44 -08:00
models.py    refactor: move all llama code to models/llama out of meta reference (#1887)   2025-04-07 15:03:58 -07:00