These are the source-of-truth configuration files that Stainless uses to generate the client SDKs.

  • openapi.yml: the OpenAPI specification for the Llama Stack API.
  • openapi.stainless.yml: the Stainless configuration that instructs Stainless how to generate the client SDKs.

A small side note: these files use the .yml suffix because that is the suffix Stainless typically uses for its configuration files.

These files go hand in hand. As of now, only openapi.yml is generated automatically, via the run_openapi_generator.sh script.
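Because openapi.yml is regenerated rather than hand-edited, a lightweight structural check can catch a broken generator run before the spec reaches Stainless. The sketch below is purely illustrative (the `validate_spec` helper and the inline spec fragment are hypothetical, not part of this repo); it checks only the top-level keys any OpenAPI 3.x document must carry:

```python
def validate_spec(spec: dict) -> list[str]:
    """Return a list of problems found in a minimal OpenAPI 3.x document."""
    problems = []
    # OpenAPI 3.x documents declare their version in the 'openapi' field.
    if not str(spec.get("openapi", "")).startswith("3."):
        problems.append("missing or non-3.x 'openapi' version field")
    # The 'info' object must carry at least a title and a version.
    info = spec.get("info", {})
    if not info.get("title") or not info.get("version"):
        problems.append("'info' must carry 'title' and 'version'")
    # A spec with no paths would generate an empty SDK.
    if not spec.get("paths"):
        problems.append("'paths' is missing or empty")
    return problems


# Stand-in for the parsed contents of openapi.yml (illustrative values only).
spec = {
    "openapi": "3.1.0",
    "info": {"title": "Llama Stack API", "version": "0.0.1"},
    "paths": {"/v1/example": {"post": {}}},
}

print(validate_spec(spec))  # an empty list means the fragment passes
```

In a real pre-commit hook, the `spec` dict would instead come from parsing openapi.yml with a YAML library, but the validation logic stays the same.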