Ashwin Bharambe 8fe4a216b5
fix(inference): propagate 401/403 errors from remote providers (#3762)
## Summary
Fixes #2990

Remote provider authentication errors (401/403) were being converted to
500 Internal Server Error, preventing users from understanding why their
requests failed.

## The Problem
When a request with an invalid API key was sent to a remote provider:
- The provider correctly returned 401 with error details
- Llama Stack's `translate_exception()` didn't recognize provider SDK
exceptions
- The request fell through to the generic 500 error handler
- The user received: "Internal server error: An unexpected error occurred."

## The Fix
Added a handler in `translate_exception()` that checks for exceptions
carrying a `status_code` attribute and preserves the original HTTP status
code and error message.
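
A minimal sketch of the new branch, assuming FastAPI's `HTTPException` as
the translation target (the real `translate_exception()` covers more cases;
this shows only the `status_code` path):

```python
# Sketch only: simplified from llama_stack's server-side exception handling.
from fastapi import HTTPException


def translate_exception(exc: Exception) -> HTTPException:
    # OpenAI-style provider SDKs (groq, together, fireworks, openai)
    # raise exceptions that carry the upstream HTTP status here.
    status_code = getattr(exc, "status_code", None)
    if isinstance(status_code, int) and 400 <= status_code < 600:
        # Preserve the provider's status and message instead of
        # collapsing them into a generic 500.
        return HTTPException(status_code=status_code, detail=str(exc))

    # Fall through to the existing generic handler.
    return HTTPException(
        status_code=500,
        detail="Internal server error: An unexpected error occurred.",
    )
```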

**Before:**
```json
HTTP 500
{"detail": "Internal server error: An unexpected error occurred."}
```

**After:**
```json
HTTP 401
{"detail": "Error code: 401 - {'error': {'message': 'Invalid API Key', 'type': 'invalid_request_error', 'code': 'invalid_api_key'}}"}
```

## Tested With
- groq: 401 "Invalid API Key"
- openai: 401 "Incorrect API key provided"
- together: 401 "Invalid API key provided"
- fireworks: 403 "unauthorized"

## Test Plan

**Automated test script:**
https://gist.github.com/ashwinb/1199dd7585ffa3f4be67b111cc65f2f3

The test script (condensed in the sketch after this list):
1. Builds separate stacks for each provider
2. Registers models (with validation temporarily disabled for testing)
3. Sends requests with invalid API keys via the
`x-llamastack-provider-data` header
4. Verifies HTTP status codes are 401/403 (not 500)
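
A hypothetical condensation of steps 3 and 4 (the gist is the authoritative
script; the model name and provider-data field mirror the manual example
below):

```python
# Hypothetical condensation of the gist's core assertion; not the actual
# script. Assumes a stack running locally on the default port.
import requests

resp = requests.post(
    "http://localhost:8321/v1/chat/completions",
    headers={"x-llamastack-provider-data": '{"groq_api_key": "invalid-key"}'},
    json={
        "model": "groq/llama3-70b-8192",
        "messages": [{"role": "user", "content": "test"}],
    },
)
# Before the fix this was 500; after, the provider's 401/403 comes through.
assert resp.status_code in (401, 403), resp.status_code
```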

**Results before fix:** All providers returned 500  
**Results after fix:** All providers correctly return 401/403

**Manual verification:**
```bash
# 1. Build stack
llama stack build --image-type venv --providers inference=remote::groq

# 2. Start stack
llama stack run

# 3. Send request with invalid API key
curl http://localhost:8321/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H 'x-llamastack-provider-data: {"groq_api_key": "invalid-key"}' \
  -d '{"model": "groq/llama3-70b-8192", "messages": [{"role": "user", "content": "test"}]}'

# Expected: HTTP 401 with provider error message (not 500)
```
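
Adding `-i` to the curl invocation prints the response status line, which
makes the 401-vs-500 distinction visible without inspecting the body.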

## Impact
- Works with all remote providers using OpenAI SDK (groq, openai,
together, fireworks, etc.)
- Works with any provider SDK whose exceptions carry a `status_code`
attribute (illustrated in the sketch below)
- No breaking changes; only error responses are affected
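
As an illustration of that pattern, here is how a bad key surfaces through
the OpenAI SDK (a sketch that makes a real, failing API call; error classes
are from the v1 `openai` package):

```python
# Sketch: OpenAI-SDK exceptions expose `status_code`, the attribute the
# new handler keys on.
import openai

client = openai.OpenAI(api_key="invalid-key")
try:
    client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "test"}],
    )
except openai.APIStatusError as e:
    print(e.status_code)  # 401, now propagated instead of a 500
```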