Mirror of https://github.com/meta-llama/llama-stack.git
fix: refactor auth and improve error handling for Bedrock provider
Refactor to use auth_credential for consistent credential management and improve error handling with defensive checks.

Changes:
- Use auth_credential instead of api_key for better credential handling
- Simplify model availability check to accept all pre-registered models
- Guard metrics collection when usage data is missing in responses
- Add debug logging for better troubleshooting of API issues
- Update unit tests for auth_credential refactoring
Parent: dc27537cce
Commit: 454aeaaf3e
6 changed files with 51 additions and 40 deletions
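The defensive checks called out in the commit message can be made concrete with a short, illustrative sketch: skip metrics collection when a response carries no usage data, and add debug logging around that path. This is a minimal sketch assuming an OpenAI-compatible response dict; the function and logger names are hypothetical and do not mirror the actual llama-stack source.

```python
import logging

logger = logging.getLogger(__name__)


def record_token_metrics(response: dict) -> None:
    """Illustrative sketch only; not the actual provider implementation."""
    # Defensive check: some Bedrock responses may omit usage data entirely,
    # so skip metrics collection instead of failing on a missing key.
    usage = response.get("usage")
    if usage is None:
        logger.debug("No usage data in response; skipping metrics collection")
        return

    prompt_tokens = usage.get("prompt_tokens", 0)
    completion_tokens = usage.get("completion_tokens", 0)
    logger.debug(
        "Recording token metrics: prompt=%s completion=%s",
        prompt_tokens,
        completion_tokens,
    )
    # ...emit prompt_tokens / completion_tokens to the metrics sink here...
```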
@@ -16,7 +16,7 @@ AWS Bedrock inference provider using OpenAI compatible endpoint.

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
-| `api_key` | `str \| None` | No | | Amazon Bedrock API key |
+| `api_key` | `pydantic.types.SecretStr \| None` | No | | Authentication credential for the provider |
| `region_name` | `<class 'str'>` | No | us-east-2 | AWS Region for the Bedrock Runtime endpoint |

## Sample Configuration
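The sample configuration itself is not included in this excerpt. As a rough sketch only, the fields documented in the table above could be modeled with pydantic along the following lines; the class name, defaults, and example values are assumptions for illustration, not the provider's actual config class.

```python
from pydantic import BaseModel, Field, SecretStr


class BedrockConfigSketch(BaseModel):
    """Hypothetical model mirroring the documented fields; illustration only."""

    allowed_models: list[str] | None = Field(
        default=None,
        description="Models to register with the model registry; None allows all.",
    )
    refresh_models: bool = Field(
        default=False,
        description="Whether to refresh models periodically from the provider",
    )
    # SecretStr keeps the credential out of repr() and accidental logging.
    api_key: SecretStr | None = Field(
        default=None, description="Authentication credential for the provider"
    )
    region_name: str = Field(
        default="us-east-2",
        description="AWS Region for the Bedrock Runtime endpoint",
    )


# Example: values as they might come from a run config file.
config = BedrockConfigSketch(api_key="dummy-key", region_name="us-east-2")
print(config.api_key)  # prints '**********' because SecretStr masks the value
```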