llama-stack-mirror/llama_stack
Ben Browning 43d4b7527b fix: Return HTTP 400 for OpenAI API validation errors
When clients called the OpenAI API with invalid input that wasn't
caught by our own Pydantic API validation but instead only caught by
the backend inference provider, that provider returned an HTTP 400
error. However, we wrapped that in an HTTP 500 error, obscuring the
actual issue from calling clients and triggering the OpenAI client's
retry logic.

This change adjusts our existing `translate_exception` method in
`server.py` to wrap `openai.BadRequestError` as HTTP 400 errors,
passing through the string representation of the error message to the
calling user so they can see the actual input validation error and
correct it. I tried changing this in a few other places, but
ultimately `translate_exception` was the only real place to handle
this for both streaming and non-streaming requests across all
inference providers that use the OpenAI server APIs.
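As a rough illustration of the kind of mapping described above, here is a minimal sketch assuming a FastAPI-style `HTTPException`; it is not the literal `server.py` code:

```
# Hypothetical sketch of the translate_exception behavior described above,
# assuming FastAPI's HTTPException; not the literal server.py implementation.
from fastapi import HTTPException
from openai import BadRequestError


def translate_exception(exc: Exception) -> HTTPException:
    if isinstance(exc, BadRequestError):
        # Surface the provider's validation message as an HTTP 400 so the
        # client sees the real input error instead of an opaque 500.
        return HTTPException(status_code=400, detail=str(exc))
    # Anything else still falls through to a generic 500.
    return HTTPException(status_code=500, detail="Internal server error")
```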

This also tightens up our validation a bit for the OpenAI chat
completions API, to catch empty `messages` parameters, invalid
`tool_choice` parameters, invalid `tools` items, or passing
`tool_choice` when `tools` isn't given.
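As a sketch of what those checks amount to (parameter names follow the OpenAI chat completions API; the helper itself is illustrative, not the actual Llama Stack validation code):

```
# Illustrative only: the kinds of checks described above.
def validate_chat_completion_params(messages, tools=None, tool_choice=None):
    if not messages:
        raise ValueError("messages must be a non-empty list")
    if tool_choice is not None and not tools:
        raise ValueError("tool_choice was given but no tools were provided")
    for tool in tools or []:
        if not isinstance(tool, dict) or "type" not in tool:
            raise ValueError(f"invalid tool definition: {tool!r}")
    if tool_choice is not None and not isinstance(tool_choice, (str, dict)):
        raise ValueError("tool_choice must be a string or an object")
```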

Lastly, this extends our OpenAI API chat completions verifications to
also check for consistent input validation across providers. Providers
behind Llama Stack should automatically pass all the new tests due to
the input validation added here, but some of the providers fail this
test when not run behind Llama Stack due to differences in how they
handle input validation and errors.
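To give a feel for what the new verification tests check (the test name and the `openai_client`/`model` fixtures here are illustrative, not the exact test code), a case looks roughly like:

```
# Illustrative shape of a verification test: invalid input should surface
# as a BadRequestError (HTTP 400), not a 500. Fixture names are hypothetical.
import openai
import pytest


def test_chat_completion_with_empty_messages_returns_400(openai_client, model):
    with pytest.raises(openai.BadRequestError):
        openai_client.chat.completions.create(model=model, messages=[])
```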

To test this, start an OpenAI API verification stack:

```
llama stack run --image-type venv tests/verifications/openai-api-verification-run.yaml
```

Then, run the new verification tests with your provider(s) of choice:

```
python -m pytest -s -v \
  tests/verifications/openai_api/test_chat_completion.py \
  --provider openai-llama-stack

python -m pytest -s -v \
  tests/verifications/openai_api/test_chat_completion.py \
  --provider together-llama-stack
```

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-21 22:07:04 -04:00
| Name | Latest commit | Date |
|------|---------------|------|
| apis | feat(agents): add agent naming functionality (#1922) | 2025-04-17 07:02:47 -07:00 |
| cli | feat: allow building distro with external providers (#1967) | 2025-04-18 17:18:28 +02:00 |
| distribution | fix: Return HTTP 400 for OpenAI API validation errors | 2025-04-21 22:07:04 -04:00 |
| models | fix: OAI compat endpoint for meta reference inference provider (#1962) | 2025-04-17 11:16:04 -07:00 |
| providers | fix: OpenAI Completions API and Fireworks (#1997) | 2025-04-21 11:49:12 -07:00 |
| strong_typing | chore: more mypy checks (ollama, vllm, ...) (#1777) | 2025-04-01 17:12:39 +02:00 |
| templates | feat: Update NVIDIA to GA docs; remove notebook reference until ready (#1999) | 2025-04-18 19:13:18 -04:00 |
| __init__.py | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| env.py | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| log.py | chore: Remove style tags from log formatter (#1808) | 2025-03-27 10:18:21 -04:00 |
| schema_utils.py | fix: dont check protocol compliance for experimental methods | 2025-04-12 16:26:32 -07:00 |