These are the source-of-truth configuration files used by Stainless to generate the Llama Stack client SDKs.

  • openapi.yml: the OpenAPI specification for the Llama Stack API.
  • openapi.stainless.yml: the Stainless configuration that tells Stainless how to generate the client SDKs (see the sketch below).
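
For orientation, the Stainless configuration maps operations in openapi.yml onto SDK resources and methods. The fragment below is only a hypothetical sketch of that shape: the organization name, package name, resource, and path are invented for illustration, and the authoritative schema is whatever Stainless currently documents. The real openapi.stainless.yml in this directory is the source of truth.

```yaml
# Hypothetical illustration only; see openapi.stainless.yml for the real configuration.
organization:
  name: llama-stack                     # assumed organization name
targets:
  python:
    package_name: llama_stack_client    # assumed package name
resources:
  models:
    methods:
      list: get /v1/models              # maps an SDK method onto an OpenAPI operation
```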

A small side note: the files use the .yml suffix because Stainless typically expects that suffix for its configuration files.

These files go hand in hand. As of now, only openapi.yml is generated automatically, via the run_openapi_generator.sh script.
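
A minimal sketch of the regeneration workflow, assuming the script is invoked from the repository root and writes the spec into this directory (both are assumptions; check the script itself for the actual invocation):

```sh
# Regenerate openapi.yml (the exact invocation and script location are assumptions).
./run_openapi_generator.sh

# Review the regenerated spec before committing it alongside openapi.stainless.yml.
git diff client-sdks/stainless/openapi.yml
```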