Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-13 15:52:37 +00:00)
Remote provider authentication errors (401/403) were being converted to 500 Internal Server Error, hiding the real cause from users. The server now checks whether an exception carries a `status_code` attribute and, if so, preserves it (see the sketch after the test plan below). This fixes authentication error handling for all remote inference providers that use the OpenAI SDK (groq, openai, together, fireworks, etc.) and for similar provider SDKs.

Before:
- HTTP 500: "Internal server error: An unexpected error occurred."

After:
- HTTP 401: "Error code: 401 - Invalid API Key"

Fixes #2990

Test Plan:
1. Build the stack: `llama stack build --image-type venv --providers inference=remote::groq`
2. Start the stack: `llama stack run`
3. Send a request with an invalid API key via the `x-llamastack-provider-data` header
4. Verify the response is 401 with the provider's error message (not 500)
5. Repeat for the openai, together, and fireworks providers
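A minimal sketch of the described check, assuming a FastAPI-style server such as llama-stack's; the function name and fallback message here are illustrative, not the repository's actual code:

```python
from fastapi import HTTPException


def translate_exception(exc: Exception) -> HTTPException:
    # OpenAI-style SDK errors (e.g. openai.APIStatusError) expose a
    # `status_code` attribute; preserve it instead of collapsing to 500.
    status_code = getattr(exc, "status_code", None)
    if isinstance(status_code, int):
        return HTTPException(status_code=status_code, detail=str(exc))
    # Truly unexpected errors still fall back to a generic 500.
    return HTTPException(
        status_code=500,
        detail="Internal server error: An unexpected error occurred.",
    )
```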
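A hedged reproduction of test-plan steps 3 and 4 in Python; the port, route, model id, and the `groq_api_key` provider-data key are assumptions based on common llama-stack defaults, not confirmed by this commit:

```python
import json

import requests

# Step 3: send a request whose per-request provider credentials are invalid.
resp = requests.post(
    "http://localhost:8321/v1/openai/v1/chat/completions",  # assumed default route/port
    headers={
        # Key name "groq_api_key" is an assumption for the groq provider.
        "x-llamastack-provider-data": json.dumps({"groq_api_key": "invalid-key"}),
    },
    json={
        "model": "groq/llama-3.3-70b-versatile",  # illustrative model id
        "messages": [{"role": "user", "content": "hello"}],
    },
)

# Step 4: with the fix, the provider's 401 is surfaced instead of a generic 500.
assert resp.status_code == 401, f"expected 401, got {resp.status_code}: {resp.text}"
print(resp.status_code, resp.text)
```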