llama-stack-mirror/client-sdks
Luis Tomas Bolivar f7c2973aa5 fix: Avoid BadRequestError due to invalid max_tokens (#3667)
This patch ensures that if max_tokens is not defined, it is set to None
instead of 0 when calling openai_chat_completion. This way, providers
(like Gemini) that cannot handle `max_tokens = 0` will not fail.

Issue: #3666
2025-10-30 14:23:22 -07:00
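A minimal sketch of the idea behind the fix, assuming a hypothetical helper that builds the request parameters (the function name and signature are illustrative, not the repo's actual code): an unset or zero limit is normalized to None so the request omits the cap instead of sending `max_tokens = 0`.

```python
from typing import Any, Optional

def chat_completion_params(
    model: str,
    messages: list[dict[str, str]],
    max_tokens: Optional[int] = None,
) -> dict[str, Any]:
    # Some providers (e.g. Gemini) reject max_tokens=0 with a
    # BadRequestError, so normalize an unset/zero value to None.
    return {
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens if max_tokens else None,
    }

# With no limit configured, max_tokens comes through as None, not 0.
print(chat_completion_params("gemini-1.5-flash", [{"role": "user", "content": "hi"}]))
```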
stainless fix: Avoid BadRequestError due to invalid max_tokens (#3667) 2025-10-30 14:23:22 -07:00