The RFC Specification (OpenAPI format) is generated from the set of API endpoints located in `llama_stack/distribution/server/endpoints.py` using the `generate.py` utility.

Please install the following packages before running the script:

    pip install fire PyYAML llama-models

Then simply run `sh run_openapi_generator.sh`.
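
As a quick sanity check after the generator finishes, you can load the emitted spec with PyYAML (installed above) and list the documented routes. This is only an illustrative sketch: the filename `llama-stack-spec.yaml` is an assumption, so substitute whatever output path the generator script actually reports.

```python
# Sketch: sanity-check the generated OpenAPI spec with PyYAML.
# The filename "llama-stack-spec.yaml" is an assumption for illustration;
# use the path that run_openapi_generator.sh actually writes.
import yaml

with open("llama-stack-spec.yaml") as f:
    spec = yaml.safe_load(f)

print("OpenAPI version:", spec.get("openapi"))
print("Title:", spec.get("info", {}).get("title"))

# List each documented path and the HTTP methods it exposes.
for path, methods in sorted(spec.get("paths", {}).items()):
    print(path, ", ".join(sorted(methods)))
```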