# llama-stack-mirror/docs/source/distributions
**Latest commit:** chore: setup for performance benchmarking (#3096) by ehhuang (d6ae54723d)
## What does this PR do?
1. Added a simple mock OpenAI-compatible server that serves chat completions (sketched below).
2. Added a benchmark server deployment in EKS that includes the mock inference server.
3. Added a Locust (https://locust.io/) file for load testing.
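
As a rough illustration only (not the actual implementation in this repo), a mock OpenAI-compatible chat completions endpoint can be a small FastAPI app like the sketch below; the module layout, canned response fields, and fixed content are assumptions:

```python
# Hypothetical minimal mock of an OpenAI-compatible /v1/chat/completions endpoint.
# Payload shape and field values are assumptions; the real mock server may differ.
import time
import uuid

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class ChatCompletionRequest(BaseModel):
    model: str
    messages: list[dict]
    stream: bool = False


@app.post("/v1/chat/completions")
async def chat_completions(request: ChatCompletionRequest):
    # Return a canned, OpenAI-shaped response so the benchmark measures
    # stack and network overhead rather than real inference latency.
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": request.model,
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": "Mock response."},
                "finish_reason": "stop",
            }
        ],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }
```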

## Test Plan
1. Run `bash apply.sh`.
2. Run `kubectl port-forward service/locust-web-ui 8089:8089`.
3. Open http://localhost:8089 to start a load test (a sketch of the Locust user class follows).
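
The Locust side can be sketched as a single `HttpUser` that posts chat completion requests; the request path, model name, payload, and wait times below are illustrative assumptions, not the repository's actual locustfile:

```python
# Hypothetical locustfile for a chat-completions load test.
# Adjust the path, model name, and payload to match the deployed benchmark stack.
from locust import HttpUser, task, between


class ChatCompletionUser(HttpUser):
    # Simulated users pause between requests to approximate client behavior.
    wait_time = between(0.5, 2)

    @task
    def chat_completion(self):
        self.client.post(
            "/v1/chat/completions",
            json={
                "model": "mock-model",
                "messages": [{"role": "user", "content": "Hello!"}],
                "stream": False,
            },
        )
```

With the web UI port-forwarded as above, the target host, user count, and spawn rate are typically set from the UI at localhost:8089 before starting the run.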

<img width="1392" height="334" alt="image"
src="https://github.com/user-attachments/assets/d6aa3deb-583a-42ed-889b-751262b8e91c"
/>
<img width="1362" height="881" alt="image"
src="https://github.com/user-attachments/assets/6a28b9b4-05e6-44e2-b504-07e60c12d35e"
/>
Committed: 2025-08-13 10:58:22 -07:00
## Contents

| Name | Last commit | Date |
|------|-------------|------|
| eks | fix: update k8s templates (#2645) | 2025-07-08 15:57:01 -07:00 |
| k8s | chore: setup for performance benchmarking (#3096) | 2025-08-13 10:58:22 -07:00 |
| k8s-benchmark | chore: setup for performance benchmarking (#3096) | 2025-08-13 10:58:22 -07:00 |
| ondevice_distro | docs: remove pure venv references (#3047) | 2025-08-06 10:42:34 -07:00 |
| remote_hosted_distro | refactor: remove Conda support from Llama Stack (#2969) | 2025-08-02 15:52:59 -07:00 |
| self_hosted_distro | docs: fix the docs for NVIDIA Inference Provider (#3055) | 2025-08-08 11:27:55 +02:00 |
| building_distro.md | fix(docs): update llama stack build CLI doc (#3050) | 2025-08-06 09:32:09 -07:00 |
| configuration.md | refactor: remove Conda support from Llama Stack (#2969) | 2025-08-02 15:52:59 -07:00 |
| customizing_run_yaml.md | docs: clarify run.yaml files are starting points for customization (#2746) | 2025-07-14 09:53:13 -07:00 |
| importing_as_library.md | chore: rename templates to distributions (#3035) | 2025-08-04 11:34:17 -07:00 |
| index.md | docs: part 1 - fix warnings in documentation generation (#2861) | 2025-07-30 10:50:10 -07:00 |
| list_of_distributions.md | fix: Restore the nvidia distro (#2639) | 2025-07-07 15:50:05 -07:00 |
| starting_llama_stack_server.md | refactor: remove Conda support from Llama Stack (#2969) | 2025-08-02 15:52:59 -07:00 |