The RFC Specification (OpenAPI format) is generated from the set of API endpoints located in `llama_stack/distribution/server/endpoints.py` using the `generate.py` utility.
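The output of the generator is a standard OpenAPI document. As a rough illustration of the shape of such a document (the title, path, and operation below are illustrative placeholders, not the real Llama Stack endpoints), a minimal sketch using only the standard library:

```python
import json

# Minimal OpenAPI 3.x skeleton, illustrating the general structure the
# generator emits: a top-level version, an info block, and a paths map.
# The endpoint and title here are hypothetical examples.
spec = {
    "openapi": "3.1.0",
    "info": {"title": "Llama Stack Specification", "version": "v1"},
    "paths": {
        "/v1/health": {
            "get": {"responses": {"200": {"description": "OK"}}}
        }
    },
}

# The real script serializes the full spec to YAML/JSON on disk.
print(json.dumps(spec, indent=2))
```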

Please install the following packages before running the script:

```
pip install fire PyYAML llama-models
```

Then simply run `sh run_openapi_generator.sh`.