The RFC Specification (OpenAPI format) is generated from the set of API endpoints located in `llama_stack/[<subdir>]/api/endpoints.py` using the `generate.py` utility.
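For orientation, an endpoint definition in such a file is essentially a typed method the generator can introspect. The sketch below is illustrative only: the `webmethod` decorator, its import path, the `Health` protocol, and the route are assumptions about the repo's conventions, not a verbatim excerpt.

```python
# Illustrative sketch only -- the import path, decorator, and route below
# are assumptions about llama_stack's API conventions, not a verbatim excerpt.
from typing import Protocol

from llama_models.schema_utils import webmethod  # assumed import location


class Health(Protocol):
    # The generator introspects type-annotated methods like this one to
    # emit the corresponding OpenAPI operation.
    @webmethod(route="/health")  # assumed route
    async def health(self) -> dict: ...
```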
Please install the following packages before running the script:

```
pip install python-openapi json-strong-typing fire PyYAML llama-models
```
Then simply run:

```
sh run_openapi_generator.sh <OUTPUT_DIR>
```
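For example, to write the generated spec into a local `docs/resources` directory (the output path here is just an illustration, not a required location):

```
sh run_openapi_generator.sh docs/resources
```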