Ben Browning a713221280 fix: Responses API: handle type=None in streaming tool calls
In the Responses API, we convert incoming response requests into chat
completion requests. When streaming the chunks of those chat completion
requests, inference providers that use OpenAI clients often return
`type=None` in the tool call parts of the response. This causes issues
when we dump and load those chunks into our pydantic models, because
`type` cannot be `None` in the Responses API model we load them into.

So, strip the `type` field, if present, from those chat completion tool
call results before dumping and loading them as our typed pydantic
models, which then apply their default value for `type`.
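
A minimal sketch of that stripping step, with a hypothetical helper name (`_sanitize_tool_call`); only the pattern of dropping a `None` "type" mirrors the actual fix, which lives in the Responses API streaming path:

```python
# Hypothetical helper; not the actual llama-stack code.
def _sanitize_tool_call(tool_call: dict) -> dict:
    """Drop a None 'type' from a streamed tool call chunk so that
    validating it into the typed pydantic model applies the model's
    default for 'type' instead of failing on None."""
    if tool_call.get("type") is None:
        tool_call = {k: v for k, v in tool_call.items() if k != "type"}
    return tool_call


# Example chunk shaped like what an OpenAI client might stream:
chunk = {"type": None, "function": {"name": "get_weather", "arguments": "{"}}
assert "type" not in _sanitize_tool_call(chunk)
```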

This was found via manual testing of the Responses API with codex,
where I was hitting errors in some tool call situations. I added a
unit test that simulates this scenario and verifies the fix, and
re-verified manually with codex.
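
A hedged sketch of what such a unit test could look like; `ToolCallDelta` below is a stand-in for the real Responses API pydantic types, not llama-stack code:

```python
import pytest
from pydantic import BaseModel, ValidationError


class ToolCallDelta(BaseModel):
    type: str = "function"  # the Responses API model disallows None here
    arguments: str = ""


def test_none_type_fails_without_stripping():
    # An unmodified chunk with type=None fails validation.
    with pytest.raises(ValidationError):
        ToolCallDelta.model_validate({"type": None, "arguments": "{}"})


def test_none_type_validates_after_stripping():
    # Stripping the None "type" lets the model default apply.
    chunk = {"type": None, "arguments": "{}"}
    chunk.pop("type", None)
    delta = ToolCallDelta.model_validate(chunk)
    assert delta.type == "function"
```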

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-05-14 16:31:23 -04:00
| Path | Latest commit | Date |
| --- | --- | --- |
| `cli` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `distribution` | chore: Add fixtures to conftest.py (#2067) | 2025-05-06 13:57:48 +02:00 |
| `models` | feat: support '-' in tool names (#1807) | 2025-04-12 14:23:03 -07:00 |
| `providers` | fix: Responses API: handle type=None in streaming tool calls | 2025-05-14 16:31:23 -04:00 |
| `rag` | fix: raise an error when no vector DB IDs are provided to the RAG tool (#1911) | 2025-05-12 11:25:13 +02:00 |
| `registry` | chore: Add fixtures to conftest.py (#2067) | 2025-05-06 13:57:48 +02:00 |
| `server` | chore: Add fixtures to conftest.py (#2067) | 2025-05-06 13:57:48 +02:00 |
| `__init__.py` | chore: Add fixtures to conftest.py (#2067) | 2025-05-06 13:57:48 +02:00 |
| `conftest.py` | chore: Add fixtures to conftest.py (#2067) | 2025-05-06 13:57:48 +02:00 |
| `fixtures.py` | chore: Add fixtures to conftest.py (#2067) | 2025-05-06 13:57:48 +02:00 |
| `README.md` | docs: revamp testing documentation (#2155) | 2025-05-13 11:28:29 -07:00 |

# Llama Stack Unit Tests

You can run the unit tests with:

```bash
source .venv/bin/activate
./scripts/unit-tests.sh [PYTEST_ARGS]
```

Any additional arguments are passed to pytest. For example, you can specify a test directory, a specific test file, or any pytest flags (e.g., `-vvv` for verbosity). If no test directory is specified, it defaults to `tests/unit`, e.g.:

```bash
./scripts/unit-tests.sh tests/unit/registry/test_registry.py -vvv
```

If you'd like to run against a non-default version of Python (the default is currently 3.10), pass the `PYTHON_VERSION` variable as follows:

```bash
source .venv/bin/activate
PYTHON_VERSION=3.13 ./scripts/unit-tests.sh
```