rewrote all slop

Ashwin Bharambe 2025-08-14 16:51:13 -07:00
parent f4281ce66a
commit 1e2bbd08da
9 changed files with 452 additions and 930 deletions


@@ -1,6 +1,23 @@
# Integration Testing Guide

Integration tests verify complete workflows across different providers using Llama Stack's record-replay system.
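Conceptually, record-replay reduces each request to a deterministic key, stores the live response under that key, and serves the stored response back on later runs. A minimal sketch of the idea (illustrative only, not Llama Stack's actual implementation, which persists to SQLite and JSON as shown later):

```python
import hashlib
import json

# Illustrative record-replay core: not Llama Stack's actual code
recordings: dict[str, dict] = {}


def request_key(endpoint: str, body: dict) -> str:
    # Canonicalize the request so equivalent calls hash identically
    canonical = json.dumps({"endpoint": endpoint, "body": body}, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()


def call(endpoint: str, body: dict, mode: str) -> dict:
    key = request_key(endpoint, body)
    if mode == "replay":
        return recordings[key]  # cached response, no API call
    response = {"content": "stub"}  # stand-in for a live API call
    if mode == "record":
        recordings[key] = response
    return response
```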
## Quick Start
```bash
# Run all integration tests with existing recordings
uv run pytest tests/integration/
# Test against live APIs with auto-server
export FIREWORKS_API_KEY=your_key
pytest tests/integration/inference/ \
--stack-config=server:fireworks \
--text-model=meta-llama/Llama-3.1-8B-Instruct
```
## Configuration Options
You can see all options with:
```bash
cd tests/integration
@@ -114,3 +131,86 @@ pytest -s -v tests/integration/vector_io/ \
--stack-config=inference=sentence-transformers,vector_io=sqlite-vec \
--embedding-model=$EMBEDDING_MODELS
```
## Recording Modes
The testing system supports three modes controlled by environment variables:
### LIVE Mode (Default)
Tests make real API calls:
```bash
LLAMA_STACK_TEST_INFERENCE_MODE=live pytest tests/integration/
```
### RECORD Mode
Captures API interactions for later replay:
```bash
LLAMA_STACK_TEST_INFERENCE_MODE=record \
LLAMA_STACK_TEST_RECORDING_DIR=./recordings \
pytest tests/integration/inference/test_new_feature.py
```
### REPLAY Mode
Uses cached responses instead of making API calls:
```bash
LLAMA_STACK_TEST_INFERENCE_MODE=replay \
LLAMA_STACK_TEST_RECORDING_DIR=./recordings \
pytest tests/integration/
```
## Managing Recordings
### Viewing Recordings
```bash
# See what's recorded
sqlite3 recordings/index.sqlite "SELECT endpoint, model, timestamp FROM recordings;"
# Inspect a specific response
cat recordings/responses/abc123.json | jq '.'
```
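The same information is available programmatically. A small sketch using only the Python standard library, assuming the `recordings/` layout and index schema shown above:

```python
import json
import sqlite3
from pathlib import Path

recordings = Path("recordings")

# List every recorded interaction from the index
conn = sqlite3.connect(recordings / "index.sqlite")
for endpoint, model, timestamp in conn.execute(
    "SELECT endpoint, model, timestamp FROM recordings"
):
    print(f"{timestamp}  {endpoint}  {model}")
conn.close()

# Show the top-level keys of each stored response body
for response_file in sorted((recordings / "responses").glob("*.json")):
    body = json.loads(response_file.read_text())
    print(response_file.name, sorted(body))
```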
### Re-recording Tests
```bash
# Delete stale recordings, then re-record the affected tests
rm -rf recordings/
LLAMA_STACK_TEST_INFERENCE_MODE=record \
LLAMA_STACK_TEST_RECORDING_DIR=./recordings \
pytest tests/integration/test_modified.py
```
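To invalidate only a subset instead of wiping everything, you can prune matching index rows and their response files before re-recording. A sketch that assumes a hypothetical `response_file` column linking index rows to the JSON files; the query above only showed `endpoint`, `model`, and `timestamp`, so inspect your index schema first:

```python
import sqlite3
from pathlib import Path

recordings = Path("recordings")
conn = sqlite3.connect(recordings / "index.sqlite")

# 'response_file' is hypothetical: adjust to the actual index schema
rows = conn.execute(
    "SELECT response_file FROM recordings WHERE endpoint LIKE ?",
    ("%inference%",),
).fetchall()
for (response_file,) in rows:
    (recordings / "responses" / response_file).unlink(missing_ok=True)

conn.execute("DELETE FROM recordings WHERE endpoint LIKE ?", ("%inference%",))
conn.commit()
conn.close()
```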
## Writing Tests
### Basic Test Pattern
```python
def test_basic_completion(llama_stack_client, text_model_id):
    response = llama_stack_client.inference.completion(
        model_id=text_model_id,
        content="Hello",
    )
    # Test structure, not AI output quality
    assert response.content is not None
    assert isinstance(response.content, str)
    assert len(response.content) > 0
```
### Provider-Specific Tests
```python
def test_asymmetric_embeddings(llama_stack_client, embedding_model_id):
    # Skip models that don't distinguish query vs. document embeddings
    if embedding_model_id not in MODELS_SUPPORTING_TASK_TYPE:
        pytest.skip(f"Model {embedding_model_id} doesn't support task types")

    query_response = llama_stack_client.inference.embeddings(
        model_id=embedding_model_id,
        contents=["What is machine learning?"],
        task_type="query",
    )

    assert query_response.embeddings is not None
```
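Streaming endpoints deserve the same structural treatment. A sketch of a streaming test, assuming the client exposes `chat_completion(..., stream=True)` as an iterator of event chunks; verify the chunk shape against your client version:

```python
def test_streaming_chat_completion(llama_stack_client, text_model_id):
    chunks = list(
        llama_stack_client.inference.chat_completion(
            model_id=text_model_id,
            messages=[{"role": "user", "content": "Hello"}],
            stream=True,
        )
    )
    # Contract: the stream yields at least one chunk, each carrying an event
    assert len(chunks) > 0
    assert all(chunk.event is not None for chunk in chunks)
```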
## Best Practices
- **Test API contracts, not AI output quality** - Focus on response structure, not content (see the sketch after this list)
- **Use existing recordings for development** - Fast iteration without API costs
- **Record new interactions only when needed** - Adding new functionality
- **Test across providers** - Ensure compatibility
- **Commit recordings to version control** - Deterministic CI builds
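As an example of the first point, a contract-focused embedding test asserts on shape and consistency while staying silent on the generated values (a sketch reusing the fixtures from the examples above):

```python
def test_embeddings_contract(llama_stack_client, embedding_model_id):
    response = llama_stack_client.inference.embeddings(
        model_id=embedding_model_id,
        contents=["first input", "second input"],
    )
    # Contract: one vector per input, all with the same dimensionality
    assert len(response.embeddings) == 2
    assert len({len(vector) for vector in response.embeddings}) == 1
    # Deliberately no assertions about the embedding values themselves
```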