Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-03 18:00:36 +00:00.
Implements an AWS Bedrock inference provider using the OpenAI-compatible endpoint for Llama models available through Bedrock.

Changes:

- Add BedrockInferenceAdapter using the OpenAIMixin base
- Configure region-specific endpoint URLs
- Add NotImplementedError stubs for unsupported endpoints
- Implement authentication error handling with helpful messages
- Remove unused models.py file
- Add comprehensive unit tests (12 total)
- Add provider registry configuration
Files in this directory:

- bedrock/
- test_bedrock_adapter.py
- test_bedrock_config.py
- test_inference_client_caching.py
- test_litellm_openai_mixin.py
- test_openai_base_url_config.py
- test_remote_vllm.py