llama-stack-mirror/tests/unit/server
Ben Browning cfa4b61a01 fix: Additional streaming error handling
This expands the `test_sse` test suite and fixes some edge-case bugs
in our SSE error handling, ensuring streaming clients always get a
proper error response.

First, we handle the case where a client disconnects before we
actually start streaming the response back. Previously we only handled
the case where a client disconnected as we were streaming the
response, but there was an edge case where a client disconnecting
before we streamed any response back did not trigger our logic to
cleanly handle that disconnect.
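A minimal sketch of what handling both disconnect cases might look like. The `sse_stream` wrapper and `is_disconnected` callback below are hypothetical illustrations (in the real server the disconnect check comes from the ASGI request), not the actual llama-stack code:

```python
import asyncio

async def sse_stream(event_gen, is_disconnected):
    """Yield SSE-formatted events, handling client disconnects both
    before and during streaming (hypothetical sketch)."""
    # Edge case fixed here: the client may already be gone before we
    # emit the first event, not only mid-stream.
    if await is_disconnected():
        return
    async for event in event_gen:
        # Mid-stream disconnect: stop cleanly instead of erroring.
        if await is_disconnected():
            return
        yield f"data: {event}\n\n"

async def main():
    async def events():
        for i in range(3):
            yield i

    async def still_connected():
        return False

    async def already_gone():
        return True

    streamed = [c async for c in sse_stream(events(), still_connected)]
    dropped = [c async for c in sse_stream(events(), already_gone)]
    return streamed, dropped

streamed, dropped = asyncio.run(main())
print(streamed)  # three "data: N" events
print(dropped)   # [] -- a pre-stream disconnect yields nothing
```

The key point is the early `is_disconnected()` check before the `async for` loop ever runs; without it, the pre-stream disconnect never triggers the cleanup path.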

Second, we handle the case where an error is thrown from the server
before the actual async generator gets created from the provider. This
happens in scenarios like the newly merged OpenAI API input
validation, where we eagerly raise validation errors before returning
the async generator object that streams the responses back.
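That failure mode can be sketched as follows. The `sse_or_error` wrapper, `valid_request`, and `invalid_request` names are hypothetical stand-ins for the server's dispatch logic and a provider endpoint, assuming the provider runs validation before returning its async generator:

```python
import asyncio
import json

async def sse_or_error(make_generator):
    """Stream SSE events; if an error is raised *before* the async
    generator object exists, emit a single SSE error event instead of
    crashing the stream (hypothetical sketch)."""
    try:
        # Eager input validation can raise right here, before any
        # async generator has been created.
        gen = make_generator()
    except Exception as exc:
        payload = json.dumps({"error": {"message": str(exc)}})
        yield f"data: {payload}\n\n"
        return
    async for event in gen:
        yield f"data: {event}\n\n"

def valid_request():
    async def gen():
        yield "chunk"
    return gen()

def invalid_request():
    # Simulates eager input validation failing before streaming starts.
    raise ValueError("invalid input")

async def collect(factory):
    return [c async for c in sse_or_error(factory)]

ok = asyncio.run(collect(valid_request))
err = asyncio.run(collect(invalid_request))
print(ok)   # ["data: chunk\n\n"]
print(err)  # one SSE error event instead of an unhandled exception
```

Because the wrapper is itself an async generator, the streaming client still receives a well-formed SSE response either way.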

Tested via:

```
python -m pytest -s -v tests/unit/server/test_sse.py
```

Both test cases failed before this change and pass afterwards. They
were written after experimenting with real clients that misbehaved,
randomly disconnecting or sending invalid input in streaming mode;
those experiments surfaced these two cases where our error handling
broke down.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-23 13:04:16 -04:00
| File | Last commit | Date |
|------|-------------|------|
| test_access_control.py | feat(server): add attribute based access control for resources (#1703) | 2025-03-19 21:28:52 -07:00 |
| test_auth.py | feat(server): add attribute based access control for resources (#1703) | 2025-03-19 21:28:52 -07:00 |
| test_replace_env_vars.py | refactor: tests/unittests -> tests/unit; tests/api -> tests/integration | 2025-03-04 09:57:00 -08:00 |
| test_resolver.py | test: first unit test for resolver (#1475) | 2025-03-07 10:20:51 -08:00 |
| test_sse.py | fix: Additional streaming error handling | 2025-04-23 13:04:16 -04:00 |