llama-stack-mirror/docs/static
Luis Tomas Bolivar f7c2973aa5 fix: Avoid BadRequestError due to invalid max_tokens (#3667)
This patch ensures that if max_tokens is not defined, it is set to None
instead of 0 when calling openai_chat_completion. This way, providers
(like Gemini) that cannot handle `max_tokens = 0` will not fail.

Issue: #3666
2025-10-30 14:23:22 -07:00
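
A minimal sketch of the idea behind this fix, assuming a hypothetical parameter-building helper (the function name and argument shape below are illustrative, not the actual llama-stack code): when the caller never supplies max_tokens, the value forwarded to openai_chat_completion should be None (or the field omitted) rather than 0, since providers such as Gemini reject `max_tokens = 0` with a BadRequestError.

```python
from typing import Any, Optional


def build_chat_completion_params(
    model: str,
    messages: list[dict[str, Any]],
    max_tokens: Optional[int] = None,
) -> dict[str, Any]:
    """Illustrative helper: assemble kwargs for an openai_chat_completion call.

    If max_tokens was never set (None, or 0 from a default-initialized
    config), forward None so providers like Gemini never see max_tokens=0.
    """
    params: dict[str, Any] = {"model": model, "messages": messages}
    # Treat 0 as "unset": pass None instead of an invalid 0.
    params["max_tokens"] = max_tokens if max_tokens else None
    return params


# Example: no max_tokens given -> the request carries None, not 0.
print(build_chat_completion_params("gemini-1.5-pro", [{"role": "user", "content": "hi"}]))
# {'model': 'gemini-1.5-pro', 'messages': [...], 'max_tokens': None}
```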
Name                                Last commit message                                                 Date
img                                 docs: update OG image (#3669)                                       2025-10-03 10:22:54 -07:00
providers/vector_io                 docs: static content migration (#3535)                              2025-09-24 14:08:50 -07:00
deprecated-llama-stack-spec.html    fix: Avoid BadRequestError due to invalid max_tokens (#3667)        2025-10-30 14:23:22 -07:00
deprecated-llama-stack-spec.yaml    fix: Avoid BadRequestError due to invalid max_tokens (#3667)        2025-10-30 14:23:22 -07:00
experimental-llama-stack-spec.html  fix: Avoid BadRequestError due to invalid max_tokens (#3667)        2025-10-30 14:23:22 -07:00
experimental-llama-stack-spec.yaml  fix: Avoid BadRequestError due to invalid max_tokens (#3667)        2025-10-30 14:23:22 -07:00
llama-stack-spec.html               revert: "chore(cleanup)!: remove tool_runtime.rag_tool" (#3877)     2025-10-21 11:22:06 -07:00
llama-stack-spec.yaml               revert: "chore(cleanup)!: remove tool_runtime.rag_tool" (#3877)     2025-10-21 11:22:06 -07:00
remote_or_local.gif                 docs: static content migration (#3535)                              2025-09-24 14:08:50 -07:00
safety_system.webp                  docs: static content migration (#3535)                              2025-09-24 14:08:50 -07:00
site.webmanifest                    docs: add favicon and mobile styling (#3650)                        2025-10-02 10:42:54 +02:00
stainless-llama-stack-spec.html     fix: Avoid BadRequestError due to invalid max_tokens (#3667)        2025-10-30 14:23:22 -07:00
stainless-llama-stack-spec.yaml     fix: Avoid BadRequestError due to invalid max_tokens (#3667)        2025-10-30 14:23:22 -07:00