llama-stack-mirror/client-sdks/stainless
Sébastien Han 30cab02083
chore: refactor Batches protocol to use request models
This commit refactors the Batches protocol to use Pydantic request
models for both create_batch and list_batches methods, improving
consistency, readability, and maintainability.

- create_batch now accepts a single CreateBatchRequest parameter instead
  of individual arguments. This aligns the protocol with FastAPI’s
  request model pattern, allowing the router to pass the request object
  directly without unpacking parameters. Provider implementations now
  access fields via request.input_file_id, request.endpoint, etc.

- list_batches now accepts a single ListBatchesRequest parameter,
  replacing individual query parameters. The model includes after and
  limit fields with proper OpenAPI descriptions. FastAPI automatically
  parses query parameters into the model for GET requests, keeping
  router code clean. Provider implementations access fields via
  request.after and request.limit.

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-11-20 16:00:34 +01:00
  • config.yml — feat(openapi): switch to fastapi-based generator (#3944), 2025-11-14 15:53:53 -08:00
  • openapi.yml — chore: refactor Batches protocol to use request models, 2025-11-20 16:00:34 +01:00
  • README.md — feat(openapi): switch to fastapi-based generator (#3944), 2025-11-14 15:53:53 -08:00

These are the source-of-truth configuration files used to generate the Llama Stack client SDKs via Stainless.

  • openapi.yml: the OpenAPI specification for the Llama Stack API.
  • config.yml: the Stainless configuration that instructs Stainless how to generate the client SDKs.

A small side note: both files use the .yml suffix, since that is the suffix Stainless typically uses for its configuration files.

These files go hand in hand. As of now, only openapi.yml is generated automatically, via the scripts/run_openapi_generator.sh script.
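
For orientation, the list-batches portion of openapi.yml might look roughly like the fragment below. The path, operation ID, and descriptions here are illustrative guesses, not copied from the generated file; the after/limit query parameters mirror the ListBatchesRequest fields described in the commit above.

```yaml
paths:
  /v1/batches:
    get:
      operationId: list_batches
      parameters:
        - name: after
          in: query
          required: false
          schema:
            type: string
          description: Return batches after this batch ID.
        - name: limit
          in: query
          required: false
          schema:
            type: integer
          description: Maximum number of batches to return.
```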