llama-stack-mirror/docs/static
97b345b3f8 feat: Add max_output_tokens to Response API
Abhishek Bongale, 2025-11-07 13:28:59 +00:00

OpenAI's Responses API exposes a max_output_tokens field (Completions has the
analogous max_tokens). The field is currently missing from both the create
request and the response object in Llama Stack's Responses API.

This PR adds it.

fixes: #3562
Signed-off-by: Abhishek Bongale <abhishekbongale@outlook.com>
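
For context on what the new field does in practice, here is a minimal sketch of calling the Responses API with max_output_tokens once this change is in place. It uses the OpenAI Python client against a locally running Llama Stack server, since Llama Stack serves OpenAI-compatible routes; the base URL, the dummy API key, and the model name are assumptions for illustration, not values taken from this repository.

```python
# Hypothetical usage sketch, not the PR's test code. Base URL, API key,
# and model name are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

# Cap generation at 128 output tokens via the newly plumbed-through field.
response = client.responses.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    input="Summarize what max_output_tokens does in one sentence.",
    max_output_tokens=128,
)

# With this PR, the field is also echoed back on the response object,
# matching OpenAI's Responses schema.
print(response.max_output_tokens)  # -> 128
print(response.output_text)
```

max_output_tokens is an upper bound on how many tokens the model may generate for a single response; after this change the same value appears on both the create request and the returned response object.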
Name                                  Last commit                                                                 Date
img                                   docs: update OG image (#3669)                                               2025-10-03 10:22:54 -07:00
providers/vector_io                   docs: static content migration (#3535)                                      2025-09-24 14:08:50 -07:00
deprecated-llama-stack-spec.yaml      chore!: remove all deprecated routes (including /openai/v1/ ones) (#4054)   2025-11-03 19:00:59 -08:00
experimental-llama-stack-spec.yaml    chore!: remove the agents (sessions and turns) API (#4055)                  2025-11-04 09:38:39 -08:00
llama-stack-spec.yaml                 feat: Add max_output_tokens to Response API                                 2025-11-07 13:28:59 +00:00
remote_or_local.gif                   docs: static content migration (#3535)                                      2025-09-24 14:08:50 -07:00
safety_system.webp                    docs: static content migration (#3535)                                      2025-09-24 14:08:50 -07:00
site.webmanifest                      docs: add favicon and mobile styling (#3650)                                2025-10-02 10:42:54 +02:00
stainless-llama-stack-spec.yaml       feat: Add max_output_tokens to Response API                                 2025-11-07 13:28:59 +00:00