chore: introduce write queue for inference_store

# What does this PR do?
Adds a write worker queue for inference store writes, so that slow inference writes no longer overwhelm request processing.
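
Conceptually, the request path now just enqueues the completed inference record and returns, while a small pool of background worker tasks drains the queue and performs the slow store writes. Below is a minimal sketch of that pattern, assuming an asyncio-based store; the class, method, and parameter names (`InferenceWriteQueue`, `enqueue`, `num_writers`, `store.write`) are illustrative, not the actual llama-stack API.

```python
# Illustrative sketch only: names here are hypothetical, not the llama-stack API.
import asyncio
from typing import Any


class InferenceWriteQueue:
    def __init__(self, store: Any, num_writers: int = 4, max_size: int = 10_000) -> None:
        self._store = store
        self._num_writers = num_writers
        self._queue: asyncio.Queue[Any] = asyncio.Queue(maxsize=max_size)
        self._workers: list[asyncio.Task] = []

    async def start(self) -> None:
        # Spawn a small pool of background tasks that drain the queue.
        self._workers = [asyncio.create_task(self._worker()) for _ in range(self._num_writers)]

    async def enqueue(self, record: Any) -> None:
        # Called from the request path; returns as soon as the record is queued,
        # blocking only if the queue is full (backpressure).
        await self._queue.put(record)

    async def _worker(self) -> None:
        while True:
            record = await self._queue.get()
            try:
                # The slow write happens here, off the request path.
                await self._store.write(record)
            except Exception:
                # A failed write should not kill the worker task.
                pass
            finally:
                self._queue.task_done()

    async def flush(self) -> None:
        # Wait for all pending writes to complete (e.g. on shutdown or in tests).
        await self._queue.join()

    async def shutdown(self) -> None:
        await self.flush()
        for task in self._workers:
            task.cancel()
```

On the request path, `await queue.enqueue(record)` replaces a direct `await store.write(record)`, which is what keeps slow writes off the request-handling critical path.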

## Test Plan

Benchmark:
```
cd docs/source/distributions/k8s-benchmark
# start mock server
python openai-mock-server.py --port 8000
# start stack server
uv run --with llama-stack python -m llama_stack.core.server.server docs/source/distributions/k8s-benchmark/stack_run_config.yaml
# run benchmark script
uv run python3 benchmark.py --duration 120 --concurrent 50 --base-url=http://localhost:8321/v1/openai/v1 --model=vllm-inference/meta-llama/Llama-3.2-3B-Instruct
```
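
For context on what the numbers below measure: the benchmark runs N concurrent users, each streaming chat completions against the stack's OpenAI-compatible endpoint for a fixed duration, and records time-to-first-token (TTFT) and total response time per request. A rough sketch of that measurement loop is shown here; it is illustrative, not the repo's benchmark.py, though the endpoint and model values mirror the command above.

```python
# Rough sketch of the measurement loop, not the repo's benchmark.py.
import asyncio
import time

from openai import AsyncOpenAI  # assumes the OpenAI-compatible endpoint shown above

BASE_URL = "http://localhost:8321/v1/openai/v1"
MODEL = "vllm-inference/meta-llama/Llama-3.2-3B-Instruct"


async def user_loop(client: AsyncOpenAI, deadline: float, results: list[tuple[float, float]]) -> None:
    while time.time() < deadline:
        start = time.time()
        ttft = None
        stream = await client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": "Write a short poem."}],
            stream=True,
        )
        async for _chunk in stream:
            if ttft is None:
                ttft = time.time() - start  # time to first streamed chunk
        results.append((ttft or 0.0, time.time() - start))


async def main(duration: int = 120, concurrent: int = 50) -> None:
    client = AsyncOpenAI(base_url=BASE_URL, api_key="not-needed")  # placeholder key
    results: list[tuple[float, float]] = []
    deadline = time.time() + duration
    await asyncio.gather(*(user_loop(client, deadline, results) for _ in range(concurrent)))
    print(f"requests: {len(results)}, rps: {len(results) / duration:.2f}")


if __name__ == "__main__":
    asyncio.run(main())
```

Percentiles, TTFT statistics, and RPS in the reports below are aggregates over the per-request samples collected this way.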


Before:

```
============================================================
BENCHMARK RESULTS

Response Time Statistics:
  Mean: 1.111s
  Median: 0.982s
  Min: 0.466s
  Max: 15.190s
  Std Dev: 1.091s

Percentiles:
  P50: 0.982s
  P90: 1.281s
  P95: 1.439s
  P99: 5.476s

Time to First Token (TTFT) Statistics:
  Mean: 0.474s
  Median: 0.347s
  Min: 0.175s
  Max: 15.129s
  Std Dev: 0.819s

TTFT Percentiles:
  P50: 0.347s
  P90: 0.661s
  P95: 0.762s
  P99: 2.788s

Streaming Statistics:
  Mean chunks per response: 67.2
  Total chunks received: 122154
============================================================
Total time: 120.00s
Concurrent users: 50
Total requests: 1919
Successful requests: 1819
Failed requests: 100
Success rate: 94.8%
Requests per second: 15.16

Errors (showing first 5):
  Request error:
  Request error:
  Request error:
  Request error:
  Request error:
Benchmark completed.
Stopping server (PID: 679)...
Server stopped.
```


After:

```
============================================================
BENCHMARK RESULTS

Response Time Statistics:
  Mean: 1.085s
  Median: 1.089s
  Min: 0.451s
  Max: 2.002s
  Std Dev: 0.212s

Percentiles:
  P50: 1.089s
  P90: 1.343s
  P95: 1.409s
  P99: 1.617s

Time to First Token (TTFT) Statistics:
  Mean: 0.407s
  Median: 0.361s
  Min: 0.182s
  Max: 1.178s
  Std Dev: 0.175s

TTFT Percentiles:
  P50: 0.361s
  P90: 0.644s
  P95: 0.744s
  P99: 0.932s

Streaming Statistics:
  Mean chunks per response: 66.8
  Total chunks received: 367240
============================================================
Total time: 120.00s
Concurrent users: 50
Total requests: 5495
Successful requests: 5495
Failed requests: 0
Success rate: 100.0%
Requests per second: 45.79
Benchmark completed.
Stopping server (PID: 97169)...
Server stopped.
```

The benchmark script portion of the diff (the summary counters move to the end of the report, and the periodic progress line now includes RPS):

```
@@ -58,14 +58,6 @@ class BenchmarkStats:
         print(f"\n{'='*60}")
         print(f"BENCHMARK RESULTS")
         print(f"{'='*60}")
-        print(f"Total time: {total_time:.2f}s")
-        print(f"Concurrent users: {self.concurrent_users}")
-        print(f"Total requests: {self.total_requests}")
-        print(f"Successful requests: {self.success_count}")
-        print(f"Failed requests: {len(self.errors)}")
-        print(f"Success rate: {success_rate:.1f}%")
-        print(f"Requests per second: {self.success_count / total_time:.2f}")
         print(f"\nResponse Time Statistics:")
         print(f"  Mean: {statistics.mean(self.response_times):.3f}s")
@@ -106,6 +98,15 @@ class BenchmarkStats:
         print(f"  Mean chunks per response: {statistics.mean(self.chunks_received):.1f}")
         print(f"  Total chunks received: {sum(self.chunks_received)}")
         print(f"{'='*60}")
+        print(f"Total time: {total_time:.2f}s")
+        print(f"Concurrent users: {self.concurrent_users}")
+        print(f"Total requests: {self.total_requests}")
+        print(f"Successful requests: {self.success_count}")
+        print(f"Failed requests: {len(self.errors)}")
+        print(f"Success rate: {success_rate:.1f}%")
+        print(f"Requests per second: {self.success_count / total_time:.2f}")
         if self.errors:
             print(f"\nErrors (showing first 5):")
             for error in self.errors[:5]:
@@ -215,7 +216,7 @@ class LlamaStackBenchmark:
                 await asyncio.sleep(1)  # Report every second
                 if time.time() >= last_report_time + 10:  # Report every 10 seconds
                     elapsed = time.time() - stats.start_time
-                    print(f"Completed: {stats.total_requests} requests in {elapsed:.1f}s")
+                    print(f"Completed: {stats.total_requests} requests in {elapsed:.1f}s, RPS: {stats.total_requests / elapsed:.1f}")
                     last_report_time = time.time()
             except asyncio.CancelledError:
                 break
```