llama-stack-mirror/llama_stack/core
Ashwin Bharambe 8fe4a216b5
fix(inference): propagate 401/403 errors from remote providers (#3762)
## Summary
Fixes #2990

Remote provider authentication errors (401/403) were being converted to
500 Internal Server Error, preventing users from understanding why their
requests failed.

## The Problem
When a request with an invalid API key was sent to a remote provider:
- The provider correctly returned 401 with error details
- Llama Stack's `translate_exception()` didn't recognize provider SDK exceptions
- The exception fell through to the generic 500 error handler
- The user received: "Internal server error: An unexpected error occurred."

## The Fix
Added a handler in `translate_exception()` that checks for exceptions with
a `status_code` attribute and preserves the original HTTP status code
and error message.
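
A minimal sketch of the check (assuming FastAPI's `HTTPException` as the translated error type; the real handler in `server` coexists with the existing handlers and may differ in details):

```python
# Sketch only: duck-typed check for provider SDK exceptions that carry an
# upstream HTTP status (e.g. openai.APIStatusError and its subclasses).
from fastapi import HTTPException

def translate_exception(exc: Exception) -> HTTPException:
    ...  # existing handlers for Llama Stack's own exception types
    status_code = getattr(exc, "status_code", None)
    if isinstance(status_code, int) and 400 <= status_code < 600:
        # Preserve the provider's status and message instead of a blanket 500.
        return HTTPException(status_code=status_code, detail=str(exc))
    return HTTPException(
        status_code=500,
        detail="Internal server error: An unexpected error occurred.",
    )
```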

**Before:**
```json
HTTP 500
{"detail": "Internal server error: An unexpected error occurred."}
```

**After:**
```json
HTTP 401
{"detail": "Error code: 401 - {'error': {'message': 'Invalid API Key', 'type': 'invalid_request_error', 'code': 'invalid_api_key'}}"}
```

## Tested With
- groq: 401 "Invalid API Key"
- openai: 401 "Incorrect API key provided"
- together: 401 "Invalid API key provided"
- fireworks: 403 "unauthorized"

## Test Plan

**Automated test script:**
https://gist.github.com/ashwinb/1199dd7585ffa3f4be67b111cc65f2f3

The test script:
1. Builds separate stacks for each provider
2. Registers models (with validation temporarily disabled for testing)
3. Sends requests with invalid API keys via the `x-llamastack-provider-data` header
4. Verifies HTTP status codes are 401/403 (not 500)
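
Step 4 boils down to a check like the following (a minimal sketch using `requests`, assuming a stack running on localhost:8321 with the groq provider, as in the manual steps below):

```python
# Sketch of the script's core assertion: an invalid per-request API key must
# surface the provider's 401/403, not a generic 500.
import json
import requests

resp = requests.post(
    "http://localhost:8321/v1/chat/completions",
    headers={"x-llamastack-provider-data": json.dumps({"groq_api_key": "invalid-key"})},
    json={
        "model": "groq/llama3-70b-8192",
        "messages": [{"role": "user", "content": "test"}],
    },
)
assert resp.status_code in (401, 403), f"expected 401/403, got {resp.status_code}"
```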

**Results before fix:** All providers returned 500  
**Results after fix:** All providers correctly return 401/403

**Manual verification:**
```bash
# 1. Build stack
llama stack build --image-type venv --providers inference=remote::groq

# 2. Start stack
llama stack run

# 3. Send request with invalid API key
curl http://localhost:8321/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H 'x-llamastack-provider-data: {"groq_api_key": "invalid-key"}' \
  -d '{"model": "groq/llama3-70b-8192", "messages": [{"role": "user", "content": "test"}]}'

# Expected: HTTP 401 with provider error message (not 500)
```

## Impact
- Works with all remote providers using OpenAI SDK (groq, openai,
together, fireworks, etc.)
- Works with any provider SDK whose exceptions expose a `status_code`
attribute (as illustrated below)
- No breaking changes - only affects error responses
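
For example, on a 401 the OpenAI Python SDK raises `openai.AuthenticationError`, a subclass of `openai.APIStatusError`, which exposes the upstream status via its `status_code` attribute — exactly what the duck-typed check keys on:

```python
# Illustrative only: how an OpenAI-SDK-style exception satisfies the
# `status_code` check without Llama Stack importing the provider SDK.
import openai

client = openai.OpenAI(api_key="invalid-key")
try:
    client.chat.completions.create(
        model="gpt-4o-mini",  # any model; the bad key is rejected first
        messages=[{"role": "user", "content": "test"}],
    )
except openai.APIStatusError as e:
    print(e.status_code)  # 401 -> preserved by translate_exception()
```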
2025-10-09 18:34:39 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| `access_control` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `conversations` | chore: require valid logging category (#3712) | 2025-10-08 11:10:33 +02:00 |
| `prompts` | feat: Adding OpenAI Prompts API (#3319) | 2025-09-08 11:05:13 -04:00 |
| `routers` | fix: Update watsonx.ai provider to use LiteLLM mixin and list all models (#3674) | 2025-10-08 07:29:43 -04:00 |
| `routing_tables` | feat: make object registration idempotent (#3752) | 2025-10-09 17:04:28 -07:00 |
| `server` | fix(inference): propagate 401/403 errors from remote providers (#3762) | 2025-10-09 18:34:39 -07:00 |
| `store` | feat: make object registration idempotent (#3752) | 2025-10-09 17:04:28 -07:00 |
| `ui` | feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) | 2025-10-02 15:12:03 -07:00 |
| `utils` | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |
| `__init__.py` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `build.py` | feat(distro): no huggingface provider for starter (#3258) | 2025-08-26 14:06:36 -07:00 |
| `build_container.sh` | chore: use uvicorn to start llama stack server everywhere (#3625) | 2025-10-06 14:27:40 +02:00 |
| `build_venv.sh` | fix(ci, tests): ensure uv environments in CI are kosher, record tests (#3193) | 2025-08-18 17:02:24 -07:00 |
| `client.py` | feat: introduce API leveling, post_training, eval to v1alpha (#3449) | 2025-09-26 16:18:07 +02:00 |
| `common.sh` | refactor: remove Conda support from Llama Stack (#2969) | 2025-08-02 15:52:59 -07:00 |
| `configure.py` | chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) | 2025-08-20 07:15:35 -04:00 |
| `datatypes.py` | feat: Add OpenAI Conversations API (#3429) | 2025-10-03 08:47:18 -07:00 |
| `distribution.py` | feat: allow for multiple external provider specs (#3341) | 2025-10-06 15:26:38 +02:00 |
| `external.py` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `id_generation.py` | feat(tests): make inference_recorder into api_recorder (include tool_invoke) (#3403) | 2025-10-09 14:27:51 -07:00 |
| `inspect.py` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `library_client.py` | feat(api): add extra_body parameter support with shields example (#3670) | 2025-10-03 13:25:09 -07:00 |
| `providers.py` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `request_headers.py` | chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) | 2025-08-20 07:15:35 -04:00 |
| `resolver.py` | feat: Add OpenAI Conversations API (#3429) | 2025-10-03 08:47:18 -07:00 |
| `stack.py` | feat(tests): make inference_recorder into api_recorder (include tool_invoke) (#3403) | 2025-10-09 14:27:51 -07:00 |
| `start_stack.sh` | chore!: remove --env from llama stack run (#3711) | 2025-10-07 20:58:15 -07:00 |
| `testing_context.py` | feat(tests): make inference_recorder into api_recorder (include tool_invoke) (#3403) | 2025-10-09 14:27:51 -07:00 |