llama-stack-mirror/docs/static
Luis Tomas Bolivar f18b5eb537
fix: Avoid BadRequestError due to invalid max_tokens (#3667)
This patch ensures that if max_tokens is not defined, it is set to None
instead of 0 when calling openai_chat_completion. This way, providers
(such as Gemini) that cannot handle `max_tokens = 0` will not fail.

Issue: #3666
2025-10-27 09:27:21 -07:00
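
The change described above amounts to a small normalization step. Below is a minimal sketch (hypothetical helper and parameter names, not the actual patch) of how an unset max_tokens can be coerced to None before an openai_chat_completion-style call, so that providers such as Gemini never receive `max_tokens = 0`:

```python
# Minimal sketch, not the actual patch: normalize an unset or zero
# max_tokens to None so the downstream provider never sees max_tokens = 0.

def normalize_max_tokens(max_tokens: int | None) -> int | None:
    """Return None when max_tokens is unset or 0, otherwise pass it through."""
    return max_tokens if max_tokens else None


# Usage sketch (illustrative values only):
request_kwargs = {
    "model": "gemini-1.5-pro",                        # example provider/model
    "messages": [{"role": "user", "content": "Hi"}],
    "max_tokens": normalize_max_tokens(0),            # -> None instead of 0
}
```
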
| Name | Last commit | Date |
|------|-------------|------|
| img | docs: update OG image (#3669) | 2025-10-03 10:22:54 -07:00 |
| providers/vector_io | docs: static content migration (#3535) | 2025-09-24 14:08:50 -07:00 |
| deprecated-llama-stack-spec.html | fix: Avoid BadRequestError due to invalid max_tokens (#3667) | 2025-10-27 09:27:21 -07:00 |
| deprecated-llama-stack-spec.yaml | fix: Avoid BadRequestError due to invalid max_tokens (#3667) | 2025-10-27 09:27:21 -07:00 |
| experimental-llama-stack-spec.html | fix: Avoid BadRequestError due to invalid max_tokens (#3667) | 2025-10-27 09:27:21 -07:00 |
| experimental-llama-stack-spec.yaml | fix: Avoid BadRequestError due to invalid max_tokens (#3667) | 2025-10-27 09:27:21 -07:00 |
| llama-stack-spec.html | chore: support default model in moderations API (#3890) | 2025-10-23 16:03:53 -07:00 |
| llama-stack-spec.yaml | chore: support default model in moderations API (#3890) | 2025-10-23 16:03:53 -07:00 |
| remote_or_local.gif | docs: static content migration (#3535) | 2025-09-24 14:08:50 -07:00 |
| safety_system.webp | docs: static content migration (#3535) | 2025-09-24 14:08:50 -07:00 |
| site.webmanifest | docs: add favicon and mobile styling (#3650) | 2025-10-02 10:42:54 +02:00 |
| stainless-llama-stack-spec.html | fix: Avoid BadRequestError due to invalid max_tokens (#3667) | 2025-10-27 09:27:21 -07:00 |
| stainless-llama-stack-spec.yaml | fix: Avoid BadRequestError due to invalid max_tokens (#3667) | 2025-10-27 09:27:21 -07:00 |