# What does this PR do?

Converts openai(_chat)_completions params to a pydantic `BaseModel` to reduce code duplication across all providers.

## Test Plan

CI

---

[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/llamastack/llama-stack/pull/3761).
* #3777
* __->__ #3761
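As a rough illustration of the change, the sketch below groups the usual chat-completion keyword arguments into a single pydantic model that a provider method can accept. The names used here (`OpenAIChatCompletionParams`, `ExampleProvider`) are hypothetical placeholders, not the classes introduced by this PR.

```python
# Illustrative sketch only: the model and provider names are hypothetical,
# not the actual llama-stack classes touched by this PR.
from typing import Any

from pydantic import BaseModel


class OpenAIChatCompletionParams(BaseModel):
    """Groups the long openai_chat_completion argument list into one validated object."""

    model: str
    messages: list[dict[str, Any]]
    temperature: float | None = None
    top_p: float | None = None
    max_tokens: int | None = None
    stream: bool = False


class ExampleProvider:
    async def openai_chat_completion(self, params: OpenAIChatCompletionParams) -> dict[str, Any]:
        # Each provider receives a single params object instead of repeating
        # the same keyword arguments in every adapter signature.
        return params.model_dump(exclude_none=True)
```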
This directory contains:

- `pyopenapi`
- `generate.py`
- `README.md`
- `run_openapi_generator.sh`
The RFC Specification (OpenAPI format) is generated from the set of API endpoints located in `llama_stack.core/server/endpoints.py` using the `generate.py` utility.
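To make the endpoints-to-spec idea concrete, here is a conceptual sketch of what such a generator does; this is not the actual `generate.py`, and the `Endpoint` record and `build_openapi_spec` helper are invented names for illustration only.

```python
# Conceptual sketch of an "endpoints -> OpenAPI" generator; NOT the real generate.py.
import json
from dataclasses import dataclass


@dataclass
class Endpoint:
    route: str
    method: str
    operation_id: str


def build_openapi_spec(endpoints: list[Endpoint]) -> dict:
    """Assemble a minimal OpenAPI document from a list of endpoint records."""
    paths: dict = {}
    for ep in endpoints:
        paths.setdefault(ep.route, {})[ep.method.lower()] = {
            "operationId": ep.operation_id,
            "responses": {"200": {"description": "OK"}},
        }
    return {
        "openapi": "3.1.0",
        "info": {"title": "Llama Stack API", "version": "v1"},
        "paths": paths,
    }


if __name__ == "__main__":
    spec = build_openapi_spec(
        [Endpoint("/v1/chat/completions", "POST", "openai_chat_completion")]
    )
    print(json.dumps(spec, indent=2))
```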