llama-stack-mirror/llama_stack
Eric Huang e721ca9730 chore: introduce write queue for inference_store
# What does this PR do?
Adds a write worker queue for writes to the inference store. This keeps slow inference-store writes from overwhelming request processing.
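
The core idea is a small asyncio queue drained by background worker tasks. The sketch below is not the PR's code; it is a minimal illustration assuming an asyncio server and a store object exposing an async `store_chat_completion` method (both names illustrative).

```python
import asyncio


class InferenceWriteQueue:
    """Decouple request handling from slow inference-store writes: handlers
    enqueue completed records and return immediately; worker tasks drain the
    queue in the background and perform the actual database writes."""

    def __init__(self, store, max_pending: int = 1000, num_workers: int = 4):
        self._store = store  # assumed to expose async store_chat_completion(...)
        self._queue: asyncio.Queue = asyncio.Queue(maxsize=max_pending)
        self._num_workers = num_workers
        self._workers: list[asyncio.Task] = []

    async def start(self) -> None:
        # Spawn workers once an event loop is running (e.g. at server startup).
        self._workers = [
            asyncio.create_task(self._worker()) for _ in range(self._num_workers)
        ]

    async def enqueue(self, completion, input_messages) -> None:
        # Called from the request path; blocks only when max_pending is reached,
        # which applies backpressure instead of growing memory without bound.
        await self._queue.put((completion, input_messages))

    async def _worker(self) -> None:
        while True:
            completion, input_messages = await self._queue.get()
            try:
                await self._store.store_chat_completion(completion, input_messages)
            except Exception:
                pass  # a failed write should be logged, not fail the request
            finally:
                self._queue.task_done()

    async def flush(self) -> None:
        # Wait for all pending writes to land, e.g. before shutdown or in tests.
        await self._queue.join()
```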

## Test Plan

Benchmark:
```
cd docs/source/distributions/k8s-benchmark
# start mock server
python openai-mock-server.py --port 8000
# start stack server (from the repo root)
uv run --with llama-stack python -m llama_stack.core.server.server docs/source/distributions/k8s-benchmark/stack_run_config.yaml
# run benchmark script
uv run python3 benchmark.py --duration 120 --concurrent 50 --base-url=http://localhost:8321/v1/openai/v1 --model=vllm-inference/meta-llama/Llama-3.2-3B-Instruct
```
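
The numbers below are produced by `benchmark.py`. The snippet here is not that script, only a minimal sketch of the kind of measurement it reports (per-request latency, time to first token, percentiles) against an OpenAI-compatible endpoint, reusing the base URL and model from the command above.

```python
# Minimal sketch (not the repo's benchmark.py) of measuring per-request latency
# and time-to-first-token (TTFT) for concurrent streaming chat completions.
import asyncio
import statistics
import time

from openai import AsyncOpenAI

BASE_URL = "http://localhost:8321/v1/openai/v1"
MODEL = "vllm-inference/meta-llama/Llama-3.2-3B-Instruct"


async def one_request(client: AsyncOpenAI, latencies: list[float], ttfts: list[float]) -> None:
    start = time.perf_counter()
    first_chunk_at = None
    stream = await client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": "Write a short haiku about queues."}],
        stream=True,
    )
    async for _ in stream:
        if first_chunk_at is None:
            first_chunk_at = time.perf_counter()
    end = time.perf_counter()
    latencies.append(end - start)
    ttfts.append((first_chunk_at or end) - start)


async def main(concurrent: int = 50) -> None:
    client = AsyncOpenAI(base_url=BASE_URL, api_key="none")
    latencies: list[float] = []
    ttfts: list[float] = []
    await asyncio.gather(*(one_request(client, latencies, ttfts) for _ in range(concurrent)))
    pct = statistics.quantiles(latencies, n=100)  # pct[i - 1] approximates Pi
    print(f"Mean: {statistics.mean(latencies):.3f}s  P50: {pct[49]:.3f}s  P99: {pct[98]:.3f}s")
    print(f"TTFT mean: {statistics.mean(ttfts):.3f}s")


if __name__ == "__main__":
    asyncio.run(main())
```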


Before:

```
============================================================
BENCHMARK RESULTS

Response Time Statistics:
  Mean: 1.111s
  Median: 0.982s
  Min: 0.466s
  Max: 15.190s
  Std Dev: 1.091s

Percentiles:
  P50: 0.982s
  P90: 1.281s
  P95: 1.439s
  P99: 5.476s

Time to First Token (TTFT) Statistics:
  Mean: 0.474s
  Median: 0.347s
  Min: 0.175s
  Max: 15.129s
  Std Dev: 0.819s

TTFT Percentiles:
  P50: 0.347s
  P90: 0.661s
  P95: 0.762s
  P99: 2.788s

Streaming Statistics:
  Mean chunks per response: 67.2
  Total chunks received: 122154
============================================================
Total time: 120.00s
Concurrent users: 50
Total requests: 1919
Successful requests: 1819
Failed requests: 100
Success rate: 94.8%
Requests per second: 15.16

Errors (showing first 5):
  Request error:
  Request error:
  Request error:
  Request error:
  Request error:
Benchmark completed.
Stopping server (PID: 679)...
Server stopped.
```


After:

```
============================================================
BENCHMARK RESULTS

Response Time Statistics:
  Mean: 1.085s
  Median: 1.089s
  Min: 0.451s
  Max: 2.002s
  Std Dev: 0.212s

Percentiles:
  P50: 1.089s
  P90: 1.343s
  P95: 1.409s
  P99: 1.617s

Time to First Token (TTFT) Statistics:
  Mean: 0.407s
  Median: 0.361s
  Min: 0.182s
  Max: 1.178s
  Std Dev: 0.175s

TTFT Percentiles:
  P50: 0.361s
  P90: 0.644s
  P95: 0.744s
  P99: 0.932s

Streaming Statistics:
  Mean chunks per response: 66.8
  Total chunks received: 367240
============================================================
Total time: 120.00s
Concurrent users: 50
Total requests: 5495
Successful requests: 5495
Failed requests: 0
Success rate: 100.0%
Requests per second: 45.79
Benchmark completed.
Stopping server (PID: 97169)...
Server stopped.
```
2025-09-10 11:50:06 -07:00
apis feat: Adding OpenAI Prompts API (#3319) 2025-09-08 11:05:13 -04:00
cli feat: include a default inference store during llama stack build (#3373) 2025-09-09 15:54:58 -07:00
core chore: introduce write queue for inference_store 2025-09-10 11:50:06 -07:00
distributions fix: Fix locations of distrubution runtime directories (#3336) 2025-09-05 14:09:36 +02:00
models refactor(logging): rename llama_stack logger categories (#3065) 2025-08-21 17:31:04 -07:00
providers chore: introduce write queue for inference_store 2025-09-10 11:50:06 -07:00
strong_typing chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
testing fix: environment variable typo in inference recorder error message (#3374) 2025-09-08 17:51:38 +02:00
ui build: Bump version to 0.2.21 2025-09-08 22:30:03 +00:00
__init__.py chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
env.py refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) 2025-03-04 14:53:47 -08:00
log.py chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) 2025-08-20 07:15:35 -04:00
schema_utils.py feat(auth): API access control (#2822) 2025-07-24 15:30:48 -07:00