# llama-stack-mirror/docs/source/distributions/k8s-benchmark

Commit d6ae54723d by ehhuang: chore: setup for performance benchmarking (#3096)
# What does this PR do?
1. Adds a simple mock OpenAI-compatible server that serves chat completions.
2. Adds a benchmark server deployment in EKS that includes the mock inference server.
3. Adds a [Locust](https://locust.io/) file for load testing.
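A mock OpenAI-compatible chat-completions endpoint can be sketched with the standard library alone. This is a minimal illustration, not the actual `openai-mock-server.py` in this directory; the response-builder helper and port are assumptions:

```python
import json
import time
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer


def mock_chat_completion(body: dict) -> dict:
    """Build a minimal OpenAI-compatible chat.completion response
    (hypothetical helper; the real mock server may differ)."""
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex[:12]}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": body.get("model", "mock-model"),
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": "Mock response."},
                "finish_reason": "stop",
            }
        ],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }


class MockHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Only the chat-completions route is mocked.
        if self.path != "/v1/chat/completions":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        payload = json.dumps(mock_chat_completion(body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)


# To serve (blocks forever):
# HTTPServer(("0.0.0.0", 8080), MockHandler).serve_forever()
```

Because the response is canned, the server's latency is negligible, so the benchmark isolates the stack's own overhead rather than inference time.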

## Test Plan

```bash
bash apply.sh
kubectl port-forward service/locust-web-ui 8089:8089
```

Then go to http://localhost:8089 to start a load test.
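Each simulated Locust user presumably posts a standard chat-completion body to the stack. A hedged sketch of that request payload, assuming the OpenAI chat API shape (`chat_payload` is a hypothetical helper, and the real `locustfile.py` may differ):

```python
def chat_payload(model: str = "mock-model") -> dict:
    """Hypothetical request body for POST /v1/chat/completions;
    the fields mirror the OpenAI chat API."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": "Benchmark prompt"}],
        "stream": False,
    }
```

Inside a Locust `HttpUser` task, this would typically be sent with `self.client.post("/v1/chat/completions", json=chat_payload())`, and Locust aggregates throughput and latency per endpoint in the web UI.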

<img width="1392" height="334" alt="image"
src="https://github.com/user-attachments/assets/d6aa3deb-583a-42ed-889b-751262b8e91c"
/>
<img width="1362" height="881" alt="image"
src="https://github.com/user-attachments/assets/6a28b9b4-05e6-44e2-b504-07e60c12d35e"
/>
Committed 2025-08-13 10:58:22 -07:00.

## Files

- `apply.sh`
- `locust-k8s.yaml`
- `locustfile.py`
- `openai-mock-deployment.yaml`
- `openai-mock-server.py`
- `stack-configmap.yaml`
- `stack-k8s.yaml.template`
- `stack_run_config.yaml`