llama-stack-mirror/tests/unit
Shabana Baig add64e8e2a
feat: Add instructions parameter in response object (#3741)
# Problem
The current inline provider appends the user-provided instructions to the
messages as a system prompt, but the returned response object does not
contain the instructions field specified in the OpenAI Responses spec.

# What does this PR do?
This pull request adds the instructions field to the response object
definition and updates the inline provider accordingly. It also ensures
that instructions from a previous response are not carried over to the
next response (as specified in the OpenAI spec).
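
As a rough sketch of the intended behavior, using the OpenAI-compatible Responses API (the base URL, API key, and model id below are placeholder assumptions, not values from this PR):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")  # placeholder endpoint

first = client.responses.create(
    model="llama3.2-3b",  # placeholder model id
    instructions="Answer in one short sentence.",
    input="What is Llama Stack?",
)
# The response object now echoes the instructions it was created with.
print(first.instructions)  # "Answer in one short sentence."

# A follow-up chained via previous_response_id does not inherit the earlier
# instructions unless they are passed again explicitly.
second = client.responses.create(
    model="llama3.2-3b",
    previous_response_id=first.id,
    input="Tell me more.",
)
print(second.instructions)  # None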

Closes [#3566](https://github.com/llamastack/llama-stack/issues/3566)

## Test Plan

- Tested manually that the model response changes according to the supplied
instructions field.
- Added a unit test to check that instructions from a previous response are
not carried over to the next response (see the sketch after this list).
- Added integration tests to check the instructions parameter in the
returned response object.
- Added new recordings for the integration tests.
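
A sketch of the unit-test idea from the second bullet. The fixture name responses_impl and its setup are hypothetical stand-ins for the real provider test harness; treat the method name as an assumption as well:

async def test_instructions_not_carried_over(responses_impl):
    # First response is created with explicit instructions.
    first = await responses_impl.create_openai_response(
        model="test-model",
        input="hello",
        instructions="Be terse.",
    )
    assert first.instructions == "Be terse."

    # Chaining from it without instructions must not inherit them.
    second = await responses_impl.create_openai_response(
        model="test-model",
        input="and again",
        previous_response_id=first.id,
    )
    assert second.instructions is None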

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-10-20 13:10:37 -07:00
..
cli chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
conversations feat: Add OpenAI Conversations API (#3429) 2025-10-03 08:47:18 -07:00
core feat: Enable setting a default embedding model in the stack (#3803) 2025-10-14 18:25:13 -07:00
distribution refactor(build): rework CLI commands and build process (1/2) (#2974) 2025-10-17 19:52:14 -07:00
files chore(files tests): update files integration tests and fix inline::localfs (#3195) 2025-08-20 14:22:40 -04:00
models feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) 2025-10-02 15:12:03 -07:00
prompts/prompts feat: Adding OpenAI Prompts API (#3319) 2025-09-08 11:05:13 -04:00
providers feat: Add instructions parameter in response object (#3741) 2025-10-20 13:10:37 -07:00
rag feat(api)!: support extra_body to embeddings and vector_stores APIs (#3794) 2025-10-12 19:01:52 -07:00
registry refactor: replace default all-MiniLM-L6-v2 embedding model by nomic-embed-text-v1.5 in Llama Stack (#3183) 2025-10-14 10:44:20 -04:00
server fix: nested claims mapping in OAuth2 token validation (#3814) 2025-10-20 12:34:55 -07:00
tools feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) 2025-10-02 15:12:03 -07:00
utils feat: Allow :memory: for kvstore (#3696) 2025-10-13 11:19:27 +02:00
__init__.py chore: Add fixtures to conftest.py (#2067) 2025-05-06 13:57:48 +02:00
conftest.py fix(tests): reduce some test noise (#3825) 2025-10-16 09:52:16 -07:00
fixtures.py chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
README.md test: Measure and track code coverage (#2636) 2025-07-18 18:08:36 +02:00

Llama Stack Unit Tests

Unit tests verify individual components and functions in isolation. They are fast, reliable, and don't require external services.

Prerequisites

  1. Python Environment: Ensure you have Python 3.12+ installed
  2. uv Package Manager: Install uv if not already installed

Run the unit tests with:

./scripts/unit-tests.sh [PYTEST_ARGS]

Any additional arguments are passed to pytest. For example, you can specify a test directory, a specific test file, or any pytest flags (e.g., -vvv for verbosity). If no test directory is specified, it defaults to tests/unit, e.g.:

./scripts/unit-tests.sh tests/unit/registry/test_registry.py -vvv

If you'd like to run against a non-default version of Python (the default is 3.12), pass the PYTHON_VERSION environment variable as follows:

source .venv/bin/activate
PYTHON_VERSION=3.13 ./scripts/unit-tests.sh

Test Configuration

  • Test Discovery: Tests are automatically discovered in the tests/unit/ directory
  • Async Support: Tests use --asyncio-mode=auto for automatic async test handling (see the sketch after this list)
  • Coverage: Tests generate coverage reports in htmlcov/ directory
  • Python Version: Defaults to Python 3.12, but can be overridden with PYTHON_VERSION environment variable
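
Because --asyncio-mode=auto is enabled, plain async def tests are collected and run without any marker. A minimal sketch (the test name and body are illustrative only):

import asyncio

# No @pytest.mark.asyncio decorator needed: asyncio-mode=auto handles it.
async def test_example_async_behavior():
    await asyncio.sleep(0)  # stand-in for awaiting real async code
    assert True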

Coverage Reports

After running tests, you can view coverage reports:

# Open HTML coverage report in browser
open htmlcov/index.html  # macOS
xdg-open htmlcov/index.html  # Linux
start htmlcov/index.html  # Windows