llama-stack-mirror/tests/unit/utils
ehhuang e980436a2e
chore: introduce write queue for inference_store (#3383)
# What does this PR do?
Adds a background write-worker queue for inference-store writes, so slow database writes no longer block request processing.
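
The queue itself is a standard producer/consumer pattern: request handlers enqueue the record and return immediately, while a fixed pool of background workers drains the queue into the store. A minimal sketch of that pattern, assuming asyncio and a hypothetical `store.write` coroutine (names are illustrative, not the actual llama-stack API):

```python
import asyncio
from typing import Any

class InferenceWriteQueue:
    """Buffers inference-record writes so request handling never
    blocks on slow store I/O. Must be constructed inside a running
    event loop, since it spawns worker tasks."""

    def __init__(self, store: Any, num_workers: int = 4, max_pending: int = 1000) -> None:
        self._store = store
        self._queue: asyncio.Queue = asyncio.Queue(maxsize=max_pending)
        self._workers = [asyncio.create_task(self._worker()) for _ in range(num_workers)]

    async def enqueue(self, record: dict) -> None:
        # Backpressure: if the queue is full this awaits rather than
        # dropping the write, bounding memory use under load.
        await self._queue.put(record)

    async def _worker(self) -> None:
        while True:
            record = await self._queue.get()
            try:
                await self._store.write(record)  # hypothetical store API
            except Exception:
                pass  # real code would log the failure instead of swallowing it
            finally:
                self._queue.task_done()

    async def flush(self) -> None:
        # Wait for all pending writes to land (useful in tests and shutdown).
        await self._queue.join()
```

The key design choice is that the request path only pays the cost of an in-memory `put`, while the bounded queue size still applies backpressure if the store falls far behind.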

## Test Plan

Benchmark:
```
cd docs/source/distributions/k8s-benchmark
# start mock server
python openai-mock-server.py --port 8000
# start stack server
LLAMA_STACK_LOGGING="all=WARNING" uv run --with llama-stack python -m llama_stack.core.server.server docs/source/distributions/k8s-benchmark/stack_run_config.yaml
# run benchmark script
uv run python3 benchmark.py --duration 120 --concurrent 50 --base-url=http://localhost:8321/v1/openai/v1 --model=vllm-inference/meta-llama/Llama-3.2-3B-Instruct
```
## Results: RPS improved from 21 to 57
2025-09-10 11:57:42 -07:00
| Name | Last commit | Last commit date |
|------|-------------|------------------|
| inference | chore: introduce write queue for inference_store (#3383) | 2025-09-10 11:57:42 -07:00 |
| responses | chore: default to pytest asyncio-mode=auto (#2730) | 2025-07-11 13:00:24 -07:00 |
| sqlstore | chore(dev): add inequality support to sqlstore where clause (#3272) | 2025-08-28 14:49:36 -07:00 |
| test_authorized_sqlstore.py | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |