llama-stack-mirror/llama_stack
Luis Tomas Bolivar f18b5eb537
fix: Avoid BadRequestError due to invalid max_tokens (#3667)
This patch ensures that if max_tokens is not defined, it is set to None
instead of 0 when calling openai_chat_completion. This way, providers
(like gemini) that cannot handle `max_tokens = 0` will not fail.

Issue: #3666
2025-10-27 09:27:21 -07:00
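The gist of the fix, as a minimal sketch: when building the kwargs for openai_chat_completion, an unset max_tokens (which otherwise defaults to 0) is passed through as None. The helper and dataclass names below are hypothetical stand-ins for illustration, not the actual llama_stack code.

```python
# Illustrative sketch of the max_tokens handling described in #3667.
# SamplingParams and to_openai_chat_params are hypothetical names, not
# the real llama_stack identifiers.
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class SamplingParams:
    # max_tokens defaults to 0 when the caller does not specify a limit.
    max_tokens: int = 0
    temperature: float = 1.0


def to_openai_chat_params(params: SamplingParams) -> dict[str, Any]:
    """Build kwargs for an openai_chat_completion-style call."""
    # Pass None instead of 0 so providers that reject max_tokens=0
    # (e.g. gemini) do not raise a BadRequestError.
    max_tokens: Optional[int] = params.max_tokens or None
    return {
        "max_tokens": max_tokens,
        "temperature": params.temperature,
    }


if __name__ == "__main__":
    print(to_openai_chat_params(SamplingParams()))                 # max_tokens=None
    print(to_openai_chat_params(SamplingParams(max_tokens=256)))   # max_tokens=256
```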
apis fix: Avoid BadRequestError due to invalid max_tokens (#3667) 2025-10-27 09:27:21 -07:00
cli chore(telemetry): code cleanup (#3897) 2025-10-23 23:13:02 -07:00
core chore(telemetry): code cleanup (#3897) 2025-10-23 23:13:02 -07:00
distributions chore: support default model in moderations API (#3890) 2025-10-23 16:03:53 -07:00
models chore: remove dead code (#3729) 2025-10-07 20:26:02 -07:00
providers chore(telemetry): code cleanup (#3897) 2025-10-23 23:13:02 -07:00
strong_typing chore: refactor (chat)completions endpoints to use shared params struct (#3761) 2025-10-10 15:46:34 -07:00
testing feat(ci): add support for docker:distro in tests (#3832) 2025-10-16 19:33:13 -07:00
ui chore(ui-deps): bump @types/node from 24.8.1 to 24.9.1 in /llama_stack/ui (#3912) 2025-10-26 23:48:00 -04:00
__init__.py chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
env.py refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) 2025-03-04 14:53:47 -08:00
log.py chore(telemetry): code cleanup (#3897) 2025-10-23 23:13:02 -07:00
schema_utils.py fix(auth): allow unauthenticated access to health and version endpoints (#3736) 2025-10-10 13:41:43 -07:00