Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-08-16 06:27:58 +00:00
# What does this PR do?

1. Add a simple mock openai-compat server that serves chat/completion requests.
2. Add a benchmark server in EKS that includes the mock inference server.
3. Add a locust (https://locust.io/) file for load testing.

## Test Plan

    bash apply.sh
    kubectl port-forward service/locust-web-ui 8089:8089

Go to localhost:8089 to start a load test.

<img width="1392" height="334" alt="image" src="https://github.com/user-attachments/assets/d6aa3deb-583a-42ed-889b-751262b8e91c" />
<img width="1362" height="881" alt="image" src="https://github.com/user-attachments/assets/6a28b9b4-05e6-44e2-b504-07e60c12d35e" />
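The PR's actual mock server implementation is not shown here, but the idea is straightforward: an HTTP endpoint that accepts OpenAI-style `/v1/chat/completions` requests and returns a canned, correctly-shaped response so the benchmark exercises the stack without real inference. A minimal stdlib-only sketch (the handler class, helper names, and port are illustrative assumptions, not the PR's code):

```python
# Hypothetical sketch of a mock openai-compat chat completions server.
# The real PR implementation may differ; this only shows the response shape.
import json
import time
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer


def mock_chat_completion(model: str, content: str = "This is a mock response.") -> dict:
    """Build a payload shaped like OpenAI's /v1/chat/completions response."""
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex[:24]}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model,
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": content},
                "finish_reason": "stop",
            }
        ],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }


class MockOpenAIHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/chat/completions":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(mock_chat_completion(request.get("model", "mock-model"))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def run(port: int = 8080):
    # Serve until interrupted; the load generator points at this port.
    HTTPServer(("0.0.0.0", port), MockOpenAIHandler).serve_forever()
```

Because the response echoes the requested `model` and follows the standard schema, any OpenAI-compatible client (including the locust load test) can hit it unmodified.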
Files in this directory:

- apply.sh
- chroma-k8s.yaml.template
- hf-token-secret.yaml.template
- ingress-k8s.yaml.template
- postgres-k8s.yaml.template
- stack-configmap.yaml
- stack-k8s.yaml.template
- stack_run_config.yaml
- ui-k8s.yaml.template
- vllm-k8s.yaml.template
- vllm-safety-k8s.yaml.template