This directory contains:

- openapi.yml
- openapi.stainless.yml
- README.md
These are the source-of-truth configuration files used to generate the Llama Stack client SDKs via Stainless.
- openapi.yml: the OpenAPI specification for the Llama Stack API.
- openapi.stainless.yml: the Stainless configuration that instructs Stainless how to generate the client SDKs.
A small side note: notice the .yml suffixes; Stainless typically uses that suffix for its configuration files.
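For orientation, here is a minimal sketch of the general shape a Stainless configuration file takes. The package names, resource names, endpoint, and URL below are hypothetical placeholders, not taken from the actual openapi.stainless.yml:

```yaml
# Hypothetical sketch of a Stainless configuration file.
# All names and URLs are illustrative placeholders, not
# copied from the real openapi.stainless.yml.
organization:
  name: llama-stack-client             # placeholder organization name
targets:
  python:
    package_name: llama_stack_client   # placeholder Python package
  node:
    package_name: llama-stack-client   # placeholder npm package
environments:
  production: https://example.com      # placeholder base URL
resources:
  models:
    # Map SDK methods to operations defined in openapi.yml.
    methods:
      list: get /v1/models             # placeholder endpoint
```

The point of the split is that openapi.yml describes the HTTP API itself, while the Stainless configuration layers SDK-specific decisions (package names, resource grouping, target languages) on top of it.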
These files go hand-in-hand. As of now, only openapi.yml is generated automatically, using the run_openapi_generator.sh script; openapi.stainless.yml is maintained by hand.
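To make the relationship concrete, below is a minimal sketch of the kind of document that ends up in openapi.yml. The endpoint and schema shown are hypothetical examples, not the real Llama Stack spec:

```yaml
# Minimal illustrative OpenAPI 3.x skeleton -- the endpoint and
# schema names are hypothetical, not taken from openapi.yml.
openapi: 3.1.0
info:
  title: Llama Stack API        # illustrative title
  version: 0.0.1                # illustrative version
paths:
  /v1/models:                   # hypothetical endpoint
    get:
      summary: List available models
      operationId: listModels
      responses:
        "200":
          description: The list of known models
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: "#/components/schemas/Model"
components:
  schemas:
    Model:                      # hypothetical schema
      type: object
      properties:
        identifier:
          type: string
      required:
        - identifier
```

Stainless reads operations like `get /v1/models` from this spec and, guided by openapi.stainless.yml, turns them into typed SDK methods in each target language.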