llama-stack/rfcs/openapi_generator

The RFC Specification (OpenAPI format) is generated from the set of API endpoints defined in `llama_toolchain/<subdir>/api/endpoints.py` using the `generate.py` utility.
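As a rough illustration of the approach (this is a simplified sketch, not the actual `generate.py` — the endpoint function and path prefix below are hypothetical), the generator introspects type-annotated endpoint definitions and emits the corresponding OpenAPI structures:

```python
# Sketch: derive a minimal OpenAPI "paths" object from type-annotated
# endpoint functions. Not the real generate.py; for illustration only.
import inspect
from typing import get_type_hints

# Hypothetical endpoint, standing in for llama_toolchain/<subdir>/api/endpoints.py
def chat_completion(model: str, messages: list) -> dict:
    """Generate a chat completion for the given messages."""
    ...

def build_paths(endpoints):
    paths = {}
    for fn in endpoints:
        hints = get_type_hints(fn)
        hints.pop("return", None)  # drop the return annotation
        params = [
            {"name": name, "in": "query", "schema": {"type": py_type.__name__}}
            for name, py_type in hints.items()
        ]
        # Route naming here is illustrative, not the real URL scheme.
        paths[f"/inference/{fn.__name__}"] = {
            "post": {"summary": inspect.getdoc(fn), "parameters": params}
        }
    return paths

spec = {"openapi": "3.1.0", "paths": build_paths([chat_completion])}
```

The real generator handles request/response schemas, SSE special cases, and more, but the core idea is the same: the Python API definitions are the single source of truth for the spec.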

Please install the following packages before running the script:

```shell
pip install python-openapi json-strong-typing fire PyYAML llama-models
```

Then simply run:

```shell
sh run_openapi_generator.sh <OUTPUT_DIR>
```