chore: introduce write queue for inference_store (Eric Huang, e721ca9730)
# What does this PR do?
Adds a write worker queue for inference store writes. This prevents slow inference-store writes from overwhelming request processing.
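
A minimal sketch of the idea, assuming an asyncio-based server. The names (`InferenceWriteQueue`, `write_chat_completion`, `max_pending`) are illustrative, not the actual implementation:

```python
import asyncio
from typing import Any


class InferenceWriteQueue:
    """Buffers inference-store writes and flushes them from a background worker,
    so request handlers never block on a slow store."""

    def __init__(self, store: Any, max_pending: int = 10_000) -> None:
        self._store = store
        self._queue: asyncio.Queue = asyncio.Queue(maxsize=max_pending)
        self._worker_task: asyncio.Task | None = None

    async def start(self) -> None:
        self._worker_task = asyncio.create_task(self._worker())

    async def enqueue(self, record: dict) -> None:
        # Called on the request path; returns as soon as the record is queued.
        await self._queue.put(record)

    async def _worker(self) -> None:
        while True:
            record = await self._queue.get()
            try:
                # Hypothetical store method; the real write call lives in the inference store.
                await self._store.write_chat_completion(record)
            finally:
                self._queue.task_done()

    async def stop(self) -> None:
        # Drain pending writes, then cancel the worker.
        await self._queue.join()
        if self._worker_task:
            self._worker_task.cancel()
```

A bounded queue keeps memory use in check and applies backpressure if the store falls far behind the request rate.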

## Test Plan

Benchmark:
```
cd docs/source/distributions/k8s-benchmark
# start mock server
python openai-mock-server.py --port 8000
# start stack server (run from the repo root)
uv run --with llama-stack python -m llama_stack.core.server.server docs/source/distributions/k8s-benchmark/stack_run_config.yaml
# run benchmark script
uv run python3 benchmark.py --duration 120 --concurrent 50 --base-url=http://localhost:8321/v1/openai/v1 --model=vllm-inference/meta-llama/Llama-3.2-3B-Instruct
```
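
For context, a rough sketch of the kind of measurement `benchmark.py` performs: streaming chat completions against the OpenAI-compatible endpoint, with TTFT taken as the time to the first streamed chunk. The actual script in `docs/source/distributions/k8s-benchmark/` is authoritative; the `api_key` value below is a placeholder for the mock server:

```python
import asyncio
import statistics
import time

from openai import AsyncOpenAI


async def timed_request(client: AsyncOpenAI, model: str) -> tuple[float, float, int]:
    """Return (response_time, ttft, chunk_count) for one streaming chat completion."""
    start = time.perf_counter()
    ttft = None
    chunks = 0
    stream = await client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Hello"}],
        stream=True,
    )
    async for _chunk in stream:
        if ttft is None:
            ttft = time.perf_counter() - start  # time to first token
        chunks += 1
    return time.perf_counter() - start, ttft or 0.0, chunks


async def main() -> None:
    client = AsyncOpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="placeholder")
    model = "vllm-inference/meta-llama/Llama-3.2-3B-Instruct"
    # One wave of 50 concurrent requests; the real script keeps this up for --duration seconds.
    results = await asyncio.gather(*(timed_request(client, model) for _ in range(50)))
    response_times = [r[0] for r in results]
    print(f"Median response time: {statistics.median(response_times):.3f}s")


asyncio.run(main())
```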


Before:

```
============================================================
BENCHMARK RESULTS

Response Time Statistics:
  Mean: 1.111s
  Median: 0.982s
  Min: 0.466s
  Max: 15.190s
  Std Dev: 1.091s

Percentiles:
  P50: 0.982s
  P90: 1.281s
  P95: 1.439s
  P99: 5.476s

Time to First Token (TTFT) Statistics:
  Mean: 0.474s
  Median: 0.347s
  Min: 0.175s
  Max: 15.129s
  Std Dev: 0.819s

TTFT Percentiles:
  P50: 0.347s
  P90: 0.661s
  P95: 0.762s
  P99: 2.788s

Streaming Statistics:
  Mean chunks per response: 67.2
  Total chunks received: 122154
============================================================
Total time: 120.00s
Concurrent users: 50
Total requests: 1919
Successful requests: 1819
Failed requests: 100
Success rate: 94.8%
Requests per second: 15.16

Errors (showing first 5):
  Request error:
  Request error:
  Request error:
  Request error:
  Request error:
Benchmark completed.
Stopping server (PID: 679)...
Server stopped.
```


After:

```
============================================================
BENCHMARK RESULTS

Response Time Statistics:
  Mean: 1.085s
  Median: 1.089s
  Min: 0.451s
  Max: 2.002s
  Std Dev: 0.212s

Percentiles:
  P50: 1.089s
  P90: 1.343s
  P95: 1.409s
  P99: 1.617s

Time to First Token (TTFT) Statistics:
  Mean: 0.407s
  Median: 0.361s
  Min: 0.182s
  Max: 1.178s
  Std Dev: 0.175s

TTFT Percentiles:
  P50: 0.361s
  P90: 0.644s
  P95: 0.744s
  P99: 0.932s

Streaming Statistics:
  Mean chunks per response: 66.8
  Total chunks received: 367240
============================================================
Total time: 120.00s
Concurrent users: 50
Total requests: 5495
Successful requests: 5495
Failed requests: 0
Success rate: 100.0%
Requests per second: 45.79
Benchmark completed.
Stopping server (PID: 97169)...
Server stopped.
```
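
Net effect: throughput roughly triples (15.16 → 45.79 requests/s), the response-time tail collapses (max 15.190s → 2.002s, P99 5.476s → 1.617s), and failures drop from 100 of 1919 requests to 0 of 5495.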