llama-models should have extremely minimal cruft. Its sole purpose should be didactic: show the simplest implementation of the llama models and document the prompt formats, etc.

This PR is the complement to https://github.com/meta-llama/llama-models/pull/279

## Test Plan

Ensure all `llama` CLI `model` sub-commands work:

```bash
llama model list
llama model download --model-id ...
llama model prompt-format -m ...
```

Ran tests:

```bash
cd tests/client-sdk
LLAMA_STACK_CONFIG=fireworks pytest -s -v inference/
LLAMA_STACK_CONFIG=fireworks pytest -s -v vector_io/
LLAMA_STACK_CONFIG=fireworks pytest -s -v agents/
```

Created a fresh venv with `uv venv && source .venv/bin/activate`, then ran `llama stack build --template fireworks --image-type venv` followed by `llama stack run together --image-type venv`, and confirmed the server runs.

Also checked that the OpenAPI generator can run and that there is no change in the generated files as a result:

```bash
cd docs/openapi_generator
sh run_openapi_generator.sh
```
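As an aside on the pytest invocations above: the client-sdk suites select the backend via the `LLAMA_STACK_CONFIG` environment variable. Below is a minimal sketch of that selection pattern, assuming a simple pytest fixture; the fixture and test names are hypothetical, not the actual suite in `tests/client-sdk`.

```python
# Hypothetical sketch: how a client-sdk test might consume LLAMA_STACK_CONFIG.
# The real tests live in tests/client-sdk and exercise inference/, vector_io/,
# and agents/ against the configured backend.
import os

import pytest


@pytest.fixture(scope="session")
def stack_config() -> str:
    """Return the backend name from LLAMA_STACK_CONFIG, or skip the test."""
    config = os.environ.get("LLAMA_STACK_CONFIG")
    if config is None:
        pytest.skip("LLAMA_STACK_CONFIG is not set (e.g. 'fireworks')")
    return config


def test_backend_is_configured(stack_config: str) -> None:
    # Placeholder assertion; real tests would call the configured stack.
    assert stack_config
```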
The RFC Specification (OpenAPI format) is generated from the set of API endpoints located in `llama_stack/[<subdir>]/api/endpoints.py` using the `generate.py` utility.
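For intuition, here is a minimal sketch of the underlying idea: deriving an OpenAPI operation from a typed Python endpoint signature. This is not the actual `generate.py` (which builds on the strong-typing machinery installed below); every name and convention in the sketch (the query-parameter mapping, `endpoint_to_operation`, the example endpoint) is illustrative only.

```python
# Hypothetical sketch of the idea behind generate.py: walk typed endpoint
# definitions and emit a minimal OpenAPI "paths" object from their signatures.
import inspect
import json
from typing import Callable, get_type_hints

# Illustrative subset of Python-type -> OpenAPI-schema-type mappings.
_SCHEMA_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}


def endpoint_to_operation(path: str, func: Callable) -> dict:
    """Derive a minimal OpenAPI operation from a function's type hints."""
    hints = get_type_hints(func)
    params = [
        {
            "name": name,
            "in": "query",
            "required": True,
            "schema": {"type": _SCHEMA_TYPES.get(hint, "object")},
        }
        for name, hint in hints.items()
        if name != "return"
    ]
    return {
        path: {
            "get": {
                "summary": inspect.getdoc(func) or "",
                "parameters": params,
                "responses": {"200": {"description": "OK"}},
            }
        }
    }


# Example endpoint definition, loosely analogous to what an endpoints.py
# module might declare.
def list_models(limit: int) -> str:
    """List available models."""
    return "..."


if __name__ == "__main__":
    spec = {
        "openapi": "3.0.0",
        "paths": endpoint_to_operation("/models/list", list_models),
    }
    print(json.dumps(spec, indent=2))
```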
Please install the following packages before running the script:

```bash
pip install python-openapi json-strong-typing fire PyYAML llama-models
```
Then simply run:

```bash
sh run_openapi_generator.sh <OUTPUT_DIR>
```