llama-stack-mirror/llama_stack/core/routers

Latest commit e980436a2e by ehhuang (2025-09-10 11:57:42 -07:00): chore: introduce write queue for inference_store (#3383)
# What does this PR do?
Adds a write worker queue for inference store writes. This keeps slow
store writes from overwhelming request processing.
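
The mechanism is a classic producer/consumer decoupling: request handlers enqueue a record and return immediately, while a background worker task drains the queue and performs the slow store write. Below is a minimal sketch of that pattern, not the PR's actual implementation; `InferenceWriteQueue`, `store_write`, and the method names are hypothetical placeholders.

```python
import asyncio
from typing import Any, Callable, Coroutine, Optional


class InferenceWriteQueue:
    """Sketch: decouple request handling from slow store writes via a worker task."""

    def __init__(
        self,
        store_write: Callable[[Any], Coroutine[Any, Any, None]],
        max_pending: int = 1000,
    ) -> None:
        self._store_write = store_write  # assumed async persistence callable
        # A bounded queue applies backpressure if the store falls far behind.
        self._queue: asyncio.Queue = asyncio.Queue(maxsize=max_pending)
        self._worker: Optional[asyncio.Task] = None

    async def start(self) -> None:
        self._worker = asyncio.create_task(self._drain())

    async def enqueue(self, record: Any) -> None:
        # The request path pays only for the enqueue, not the store write.
        await self._queue.put(record)

    async def _drain(self) -> None:
        while True:
            record = await self._queue.get()
            try:
                await self._store_write(record)
            except Exception as exc:  # one bad write must not kill the worker
                print(f"inference store write failed: {exc}")
            finally:
                self._queue.task_done()

    async def stop(self) -> None:
        await self._queue.join()  # flush pending writes before shutdown
        if self._worker is not None:
            self._worker.cancel()


async def demo() -> None:
    async def slow_write(record: Any) -> None:  # stands in for the real store
        await asyncio.sleep(0.1)

    wq = InferenceWriteQueue(slow_write)
    await wq.start()
    await wq.enqueue({"completion": "..."})  # returns immediately
    await wq.stop()


asyncio.run(demo())
```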

## Test Plan

Benchmark:
```
cd docs/source/distributions/k8s-benchmark
# start mock server
python openai-mock-server.py --port 8000
# start stack server
LLAMA_STACK_LOGGING="all=WARNING" uv run --with llama-stack python -m llama_stack.core.server.server docs/source/distributions/k8s-benchmark/stack_run_config.yaml
# run benchmark script
uv run python3 benchmark.py --duration 120 --concurrent 50 --base-url=http://localhost:8321/v1/openai/v1 --model=vllm-inference/meta-llama/Llama-3.2-3B-Instruct
```
## Results

RPS improved from 21 to 57.
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | chore: introduce write queue for inference_store (#3383) | 2025-09-10 11:57:42 -07:00 |
| `datasets.py` | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |
| `eval_scoring.py` | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |
| `inference.py` | chore: introduce write queue for inference_store (#3383) | 2025-09-10 11:57:42 -07:00 |
| `safety.py` | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |
| `tool_runtime.py` | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |
| `vector_io.py` | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |