llama-stack-mirror/tests/unit/providers
skamenan7 4ff367251f feat: add OpenAI-compatible Bedrock provider with error handling
Implements an AWS Bedrock inference provider that uses Bedrock's
OpenAI-compatible endpoint for the Llama models it hosts.

Changes:
- Add BedrockInferenceAdapter using OpenAIMixin base
- Configure region-specific endpoint URLs
- Add NotImplementedError stubs for unsupported endpoints
- Implement authentication error handling with helpful messages
- Remove unused models.py file
- Add comprehensive unit tests (12 total)
- Add provider registry configuration
2025-11-05 15:14:58 -05:00
Contents:
  batches/         feat(stores)!: use backend storage references instead of configs (#3697)           2025-10-20 13:20:09 -07:00
  files/           feat(stores)!: use backend storage references instead of configs (#3697)           2025-10-20 13:20:09 -07:00
  inference/       feat: add OpenAI-compatible Bedrock provider with error handling                   2025-11-05 15:14:58 -05:00
  inline/          feat: Add responses and safety impl extra_body (#3781)                             2025-10-15 15:01:37 -07:00
  nvidia/          chore(test): migrate unit tests from unittest to pytest nvidia test eval (#3249)   2025-11-04 10:29:07 +01:00
  utils/           fix: allowed_models config did not filter models (#4030)                           2025-11-03 11:43:39 -08:00
  vector_io/       fix!: remove chunk_id property from Chunk class (#3954)                            2025-10-29 18:59:59 -07:00
  test_bedrock.py  feat: add OpenAI-compatible Bedrock provider with error handling                   2025-11-05 15:14:58 -05:00
  test_configs.py  chore(rename): move llama_stack.distribution to llama_stack.core (#2975)           2025-07-30 23:30:53 -07:00