llama-stack-mirror/tests/unit/providers/inference
skamenan7 4ff367251f feat: add OpenAI-compatible Bedrock provider with error handling
Implements an AWS Bedrock inference provider that uses Bedrock's
OpenAI-compatible endpoint to serve the Llama models available through Bedrock.

Changes:
- Add BedrockInferenceAdapter using OpenAIMixin base
- Configure region-specific endpoint URLs
- Add NotImplementedError stubs for unsupported endpoints
- Implement authentication error handling with helpful messages
- Remove unused models.py file
- Add comprehensive unit tests (12 total)
- Add provider registry configuration
2025-11-05 15:14:58 -05:00
bedrock/                            fix: use lambda pattern for bedrock config env vars (#3307)                          2025-09-05 10:45:11 +02:00
test_bedrock_adapter.py             feat: add OpenAI-compatible Bedrock provider with error handling                     2025-11-05 15:14:58 -05:00
test_bedrock_config.py              feat: add OpenAI-compatible Bedrock provider with error handling                     2025-11-05 15:14:58 -05:00
test_inference_client_caching.py    feat: add provider data keys for Cerebras, Databricks, NVIDIA, and RunPod (#3734)    2025-10-27 13:09:35 -07:00
test_litellm_openai_mixin.py        fix(tests): reduce some test noise (#3825)                                           2025-10-16 09:52:16 -07:00
test_openai_base_url_config.py      chore: turn OpenAIMixin into a pydantic.BaseModel (#3671)                            2025-10-06 11:33:19 -04:00
test_remote_vllm.py                 feat(api)!: BREAKING CHANGE: support passing extra_body through to providers (#3777) 2025-10-10 16:21:44 -07:00