# What does this PR do?

## Test Plan

```
cd /docs/source/distributions/k8s-benchmark

# start mock server
python openai-mock-server.py --port 8000

# start stack server
uv run --with llama-stack python -m llama_stack.core.server.server docs/source/distributions/k8s-benchmark/stack_run_config.yaml

# run benchmark script
uv run python3 benchmark.py --duration 30 --concurrent 50 --base-url=http://localhost:8321/v1/openai/v1 --model=vllm-inference/meta-llama/Llama-3.2-3B-Instruct
```

Before:

```
============================================================
BENCHMARK RESULTS
============================================================
Total time: 30.00s
Concurrent users: 50
Total requests: 1267
Successful requests: 1267
Failed requests: 0
Success rate: 100.0%
Requests per second: 42.23
```

After:

```
============================================================
BENCHMARK RESULTS
============================================================
Total time: 30.00s
Concurrent users: 50
Total requests: 1449
Successful requests: 1449
Failed requests: 0
Success rate: 100.0%
Requests per second: 48.30
```
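For context, here is a minimal sketch of the kind of loop a benchmark like `benchmark.py` runs: N concurrent workers issue chat completions against an OpenAI-compatible endpoint for a fixed duration, then report throughput. This is illustrative only, not the actual script; it assumes the `openai` Python client is installed and reuses the endpoint and model flags from the test plan above.

```python
# Hypothetical benchmark sketch; the real script lives at
# docs/source/distributions/k8s-benchmark/benchmark.py.
import asyncio
import time

from openai import AsyncOpenAI  # assumes `pip install openai`

BASE_URL = "http://localhost:8321/v1/openai/v1"  # stack server from the test plan
MODEL = "vllm-inference/meta-llama/Llama-3.2-3B-Instruct"
DURATION_S = 30
CONCURRENCY = 50


async def worker(client: AsyncOpenAI, deadline: float, counts: dict) -> None:
    # Each worker fires chat completions back-to-back until the deadline.
    while time.monotonic() < deadline:
        try:
            await client.chat.completions.create(
                model=MODEL,
                messages=[{"role": "user", "content": "Hello"}],
                max_tokens=16,
            )
            counts["ok"] += 1
        except Exception:
            counts["failed"] += 1


async def main() -> None:
    client = AsyncOpenAI(base_url=BASE_URL, api_key="dummy")
    deadline = time.monotonic() + DURATION_S
    counts = {"ok": 0, "failed": 0}
    start = time.monotonic()
    await asyncio.gather(
        *(worker(client, deadline, counts) for _ in range(CONCURRENCY))
    )
    elapsed = time.monotonic() - start
    total = counts["ok"] + counts["failed"]
    print(f"Total requests: {total}")
    print(f"Failed requests: {counts['failed']}")
    print(f"Requests per second: {total / elapsed:.2f}")


if __name__ == "__main__":
    asyncio.run(main())
```

Under this setup, "Requests per second" is simply completed requests divided by wall-clock time, which is why the Before/After numbers above (42.23 vs 48.30 RPS over the same 30s window and 50 users) directly reflect the server-side improvement.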