rewrote all slop

commit 1e2bbd08da (parent f4281ce66a)
9 changed files with 452 additions and 930 deletions

@@ -1,9 +1,64 @@
# Llama Stack Tests

There are two obvious types of tests:

Llama Stack has multiple layers of testing done to ensure continuous functionality and prevent regressions to the codebase.

| Type | Location | Purpose |
|------|----------|---------|
| **Unit** | [`tests/unit/`](unit/README.md) | Fast, isolated component testing |
| **Integration** | [`tests/integration/`](integration/README.md) | End-to-end workflows with record-replay |

| Testing Type | Details |
|--------------|---------|
| Unit | [unit/README.md](unit/README.md) |
| Integration | [integration/README.md](integration/README.md) |
| Verification | [verifications/README.md](verifications/README.md) |

Both have their place. For unit tests, it is important to create minimal mocks and instead rely more on "fakes". Mocks are too brittle. In either case, tests must be very fast and reliable.

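To illustrate the "fakes over mocks" guidance, here is a small, hypothetical example (not code from this repository): the test depends on a tiny in-memory fake provider rather than a method-by-method mock, so it survives refactors while still letting you observe usage.

```python
# Hypothetical sketch: FakeInferenceProvider and ChatService are illustrative
# names, not Llama Stack APIs.
class FakeInferenceProvider:
    """Acts like a real provider but returns deterministic, canned output."""

    def __init__(self, reply: str = "canned reply"):
        self.reply = reply
        self.calls: list[str] = []

    def chat_completion(self, prompt: str) -> str:
        self.calls.append(prompt)
        return self.reply


class ChatService:
    """Code under test: depends only on the provider's interface."""

    def __init__(self, provider):
        self.provider = provider

    def ask(self, question: str) -> str:
        return self.provider.chat_completion(question).strip()


def test_ask_strips_whitespace():
    fake = FakeInferenceProvider(reply="  hello  ")
    assert ChatService(fake).ask("hi") == "hello"   # assert behavior, not call order
    assert fake.calls == ["hi"]                     # the fake still records usage
```
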
### Record-replay for integration tests

Testing AI applications end-to-end creates some challenges:

- **API costs** accumulate quickly during development and CI
- **Non-deterministic responses** make tests unreliable
- **Multiple providers** require testing the same logic across different APIs

Our solution: **Record real API responses once, replay them for fast, deterministic tests.** This is better than mocking because AI APIs have complex response structures and streaming behavior. Mocks can miss edge cases that real APIs exhibit, and a single test can exercise the underlying APIs in multiple complex ways, which makes them very hard to mock well.

This gives you:

- Cost control - No repeated API calls during development
- Speed - Instant test execution with cached responses
- Reliability - Consistent results regardless of external service state
- Provider coverage - Same tests work across OpenAI, Anthropic, local models, etc.

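To make the record-replay idea concrete, here is a minimal sketch of the mechanism. It is not the actual Llama Stack recorder; it only assumes what this document already describes: requests are keyed deterministically and responses are stored as JSON files under a recordings directory.

```python
# Hypothetical sketch of record-replay; not the real Llama Stack implementation.
import hashlib
import json
from pathlib import Path


def request_key(endpoint: str, body: dict) -> str:
    """Derive a stable key from the request so replays are deterministic."""
    canonical = json.dumps({"endpoint": endpoint, "body": body}, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]


def call_with_recording(endpoint: str, body: dict, mode: str, recordings: Path, live_call):
    """Replay a cached response if available; otherwise call the live API and optionally save it."""
    path = recordings / "responses" / f"{request_key(endpoint, body)}.json"
    if mode == "replay":
        return json.loads(path.read_text())        # no network call at all
    response = live_call(endpoint, body)           # real API call (live/record modes)
    if mode == "record":
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(json.dumps(response, indent=2))
    return response
```
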
### Testing Quick Start

You can run the unit tests with:

```bash
uv run --group unit pytest -sv tests/unit/
```

For running integration tests, you must provide a few things:

- A stack config. This is a pointer to a stack. You have a few ways to point to a stack (see the examples after this list):
  - **`server:<config>`** - automatically start a server with the given config (e.g., `server:starter`). This provides one-step testing by auto-starting the server if the port is available, or reusing an existing server if already running.
  - **`server:<config>:<port>`** - same as above but with a custom port (e.g., `server:starter:8322`)
  - a URL which points to a Llama Stack distribution server
  - a distribution name (e.g., `starter`) or a path to a `run.yaml` file
  - a comma-separated list of api=provider pairs, e.g. `inference=fireworks,safety=llama-guard,agents=meta-reference`. This is most useful for testing a single API surface.
- Whether you are using replay or live mode for inference. This is specified with the `LLAMA_STACK_TEST_INFERENCE_MODE` environment variable. The default mode currently is "live" -- that is certainly surprising, but we will fix this soon.
- Any API keys you need should be set in the environment, or can be passed in with the `--env` option.

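For illustration, here are hypothetical invocations covering each form of stack config described above (the URL, port, and provider names are placeholders; substitute your own):

```bash
# Distribution name
pytest -sv tests/integration/inference --stack-config=starter

# Auto-start a server from the 'starter' config, optionally on a custom port
pytest -sv tests/integration/inference --stack-config=server:starter
pytest -sv tests/integration/inference --stack-config=server:starter:8322

# An already-running distribution server (placeholder URL)
pytest -sv tests/integration/inference --stack-config=http://localhost:8321

# A list of api=provider pairs
pytest -sv tests/integration/inference \
  --stack-config=inference=fireworks,safety=llama-guard,agents=meta-reference
```
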
You can run the integration tests in replay mode with:

```bash
# Run all tests with existing recordings
LLAMA_STACK_TEST_INFERENCE_MODE=replay \
  LLAMA_STACK_TEST_RECORDING_DIR=tests/integration/recordings \
  uv run --group test \
  pytest -sv tests/integration/ --stack-config=starter
```

If you don't specify `LLAMA_STACK_TEST_INFERENCE_MODE`, tests default to "live" mode -- that is, they will make real API calls:

```bash
# Test against live APIs
FIREWORKS_API_KEY=your_key pytest -sv tests/integration/inference --stack-config=starter
```

### Next Steps

- [Integration Testing Guide](integration/README.md) - Detailed usage and configuration
- [Unit Testing Guide](unit/README.md) - Fast component testing

@@ -1,6 +1,23 @@

# Llama Stack Integration Tests

# Integration Testing Guide

We use `pytest` for parameterizing and running tests. You can see all options with:

Integration tests verify complete workflows across different providers using Llama Stack's record-replay system.

## Quick Start

```bash
# Run all integration tests with existing recordings
uv run pytest tests/integration/

# Test against live APIs with auto-server
export FIREWORKS_API_KEY=your_key
pytest tests/integration/inference/ \
  --stack-config=server:fireworks \
  --text-model=meta-llama/Llama-3.1-8B-Instruct
```

## Configuration Options

You can see all options with:

```bash
cd tests/integration


@@ -114,3 +131,86 @@ pytest -s -v tests/integration/vector_io/ \
  --stack-config=inference=sentence-transformers,vector_io=sqlite-vec \
  --embedding-model=$EMBEDDING_MODELS
```

## Recording Modes

The testing system supports three modes controlled by environment variables:

### LIVE Mode (Default)
Tests make real API calls:

```bash
LLAMA_STACK_TEST_INFERENCE_MODE=live pytest tests/integration/
```

### RECORD Mode
Captures API interactions for later replay:

```bash
LLAMA_STACK_TEST_INFERENCE_MODE=record \
  LLAMA_STACK_TEST_RECORDING_DIR=./recordings \
  pytest tests/integration/inference/test_new_feature.py
```

### REPLAY Mode
Uses cached responses instead of making API calls:

```bash
LLAMA_STACK_TEST_INFERENCE_MODE=replay \
  LLAMA_STACK_TEST_RECORDING_DIR=./recordings \
  pytest tests/integration/
```

## Managing Recordings

### Viewing Recordings
```bash
# See what's recorded
sqlite3 recordings/index.sqlite "SELECT endpoint, model, timestamp FROM recordings;"

# Inspect specific response
cat recordings/responses/abc123.json | jq '.'
```

### Re-recording Tests
```bash
# Re-record specific tests
rm -rf recordings/
LLAMA_STACK_TEST_INFERENCE_MODE=record pytest tests/integration/test_modified.py
```

## Writing Tests

### Basic Test Pattern
```python
def test_basic_completion(llama_stack_client, text_model_id):
    response = llama_stack_client.inference.completion(
        model_id=text_model_id,
        content=CompletionMessage(role="user", content="Hello"),
    )

    # Test structure, not AI output quality
    assert response.completion_message is not None
    assert isinstance(response.completion_message.content, str)
    assert len(response.completion_message.content) > 0
```

### Provider-Specific Tests
```python
def test_asymmetric_embeddings(llama_stack_client, embedding_model_id):
    if embedding_model_id not in MODELS_SUPPORTING_TASK_TYPE:
        pytest.skip(f"Model {embedding_model_id} doesn't support task types")

    query_response = llama_stack_client.inference.embeddings(
        model_id=embedding_model_id,
        contents=["What is machine learning?"],
        task_type="query"
    )

    assert query_response.embeddings is not None
```

## Best Practices

- **Test API contracts, not AI output quality** - Focus on response structure, not content
- **Use existing recordings for development** - Fast iteration without API costs
- **Record new interactions only when needed** - Adding new functionality
- **Test across providers** - Ensure compatibility
- **Commit recordings to version control** - Deterministic CI builds (see the example workflow below)

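For example, a re-record-and-commit workflow might look like the following (the test path and commit message are placeholders; the environment variables and recordings directory follow the examples earlier in this guide):

```bash
# Re-record the affected tests against live APIs
LLAMA_STACK_TEST_INFERENCE_MODE=record \
  LLAMA_STACK_TEST_RECORDING_DIR=./recordings \
  pytest tests/integration/inference/test_new_feature.py

# Commit the updated recordings so CI replays them deterministically
git add recordings/
git commit -m "Update recordings for new inference feature"
```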