llama-stack-mirror/llama_stack/providers

Latest commit: ehhuang · e980436a2e · 2025-09-10 11:57:42 -07:00

chore: introduce write queue for inference_store (#3383)
# What does this PR do?
Adds a background write-worker queue for inference-store writes, so slow
database writes no longer block request processing.
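
As a rough sketch of the pattern (not the actual code from this PR), a pool of
asyncio workers draining a bounded queue might look like the following; all
names here (`InferenceWriteQueue`, `store.write`, etc.) are hypothetical:

```
import asyncio
from typing import Any


class InferenceWriteQueue:
    """Buffers inference-store writes so request handlers never block on the DB."""

    def __init__(self, store: Any, num_workers: int = 4, max_pending: int = 1000) -> None:
        self._store = store
        self._num_workers = num_workers
        self._queue: asyncio.Queue = asyncio.Queue(maxsize=max_pending)
        self._workers: list[asyncio.Task] = []

    def start(self) -> None:
        # Must be called from a running event loop (e.g. at server startup).
        self._workers = [
            asyncio.create_task(self._worker()) for _ in range(self._num_workers)
        ]

    async def enqueue(self, record: dict) -> None:
        # Request path: returns as soon as the record is queued; applies
        # backpressure only once max_pending writes are outstanding.
        await self._queue.put(record)

    async def _worker(self) -> None:
        while True:
            record = await self._queue.get()
            try:
                await self._store.write(record)  # hypothetical slow DB write
            except Exception:
                pass  # a real implementation would log and possibly retry
            finally:
                self._queue.task_done()

    async def flush(self) -> None:
        # Wait until every queued write has landed (useful in tests/shutdown).
        await self._queue.join()
```

With this shape, the request handler awaits only the enqueue, so throughput is
bounded by the worker pool rather than by per-request write latency.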

## Test Plan

Benchmark:
```
cd docs/source/distributions/k8s-benchmark
# start mock server
python openai-mock-server.py --port 8000
# start stack server
LLAMA_STACK_LOGGING="all=WARNING" uv run --with llama-stack python -m llama_stack.core.server.server stack_run_config.yaml
# run benchmark script
uv run python3 benchmark.py --duration 120 --concurrent 50 --base-url=http://localhost:8321/v1/openai/v1 --model=vllm-inference/meta-llama/Llama-3.2-3B-Instruct
```
## Results: RPS improved from 21 to 57
| Path | Last commit | Date |
| --- | --- | --- |
| inline | feat: Add vector_db_id to chunk metadata (#3304) | 2025-09-10 11:19:21 +02:00 |
| registry | chore: update the vertexai inference impl to use openai-python for openai-compat functions (#3377) | 2025-09-10 15:39:29 +02:00 |
| remote | ci: Re-enable pre-commit to fail (#3399) | 2025-09-10 10:00:46 -04:00 |
| utils | chore: introduce write queue for inference_store (#3383) | 2025-09-10 11:57:42 -07:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| datatypes.py | feat: create unregister shield API endpoint in Llama Stack (#2853) | 2025-08-05 07:33:46 -07:00 |