
This directory contains the source-of-truth configuration files used to generate the client SDKs via Stainless.

  • openapi.yml: the OpenAPI specification for the Llama Stack API (a quick inspection sketch follows this list).
  • config.yml: the configuration that tells Stainless how to generate the client SDKs from that specification.
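
For orientation, openapi.yml is a standard OpenAPI document, so it can be inspected with ordinary YAML tooling. Here is a minimal sketch, assuming PyYAML is installed; the fields read below are standard OpenAPI keys, not anything specific to this repository:

```python
import yaml

# Load the generated OpenAPI spec and print a few standard top-level fields.
with open("client-sdks/stainless/openapi.yml") as f:
    spec = yaml.safe_load(f)

info = spec.get("info", {})
print(f"title:   {info.get('title')}")
print(f"version: {info.get('version')}")
print(f"paths:   {len(spec.get('paths', {}))} endpoints")
```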

A small side note: these files use the .yml suffix (rather than .yaml) because that is the suffix Stainless typically uses for its configuration files.

These files go hand-in-hand. Both openapi.yml and config.yml are generated by scripts/run_openapi_generator.sh:

  • openapi.yml comes from the FastAPI-based generator.
  • config.yml is rendered from scripts/openapi_generator/stainless_config/config_data.py, so the Stainless config stays in lock-step with the spec (a sketch of this render pattern follows the list).
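
The rendering step for config.yml follows a common pattern: keep the configuration as Python data and serialize it to YAML. Below is a minimal sketch of that pattern, assuming PyYAML; the variable names and keys are hypothetical stand-ins, not the actual contents of config_data.py:

```python
import yaml

# Hypothetical, simplified stand-in for the data that
# scripts/openapi_generator/stainless_config/config_data.py provides.
CONFIG_DATA = {
    "organization": {"name": "llama-stack"},  # illustrative values only
    "targets": {
        "python": {"package_name": "llama_stack_client"},
    },
}


def render_config(path: str) -> None:
    """Serialize the in-memory config to YAML, mirroring the render
    step described above (names here are assumptions)."""
    with open(path, "w") as f:
        yaml.safe_dump(CONFIG_DATA, f, sort_keys=False)


render_config("client-sdks/stainless/config.yml")
```

Because the Python data is the single source, regenerating via scripts/run_openapi_generator.sh keeps config.yml consistent with the spec instead of letting the two drift apart through hand edits.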