llama-stack-mirror/llama_stack
Michael Clifford 093881071a
fix: add max_tokens slider to playground tools page (#1958)
# What does this PR do?

This PR adds a `max_tokens` slider to the playground tools page. I have
found that in some instances the llama stack server throws a 500 error
if the `max_tokens` value is not explicitly set in the agent's
`sampling_params`. This PR reuses the `max_tokens` slider implementation
from the chat page and includes it on the tools page.
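For context, here is a minimal sketch of the pattern this change follows, assuming the playground's Streamlit UI and the keyword-argument `Agent` constructor from `llama_stack_client`; the slider bounds, default value, model id, and server URL below are illustrative, not taken from the PR:

```python
import streamlit as st
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent

# Assumed local llama stack server; adjust the URL for your deployment.
client = LlamaStackClient(base_url="http://localhost:8321")

# Sidebar slider mirroring the chat page's max_tokens control
# (min/max/default here are illustrative).
max_tokens = st.sidebar.slider(
    "Max Tokens",
    min_value=1,
    max_value=4096,
    value=512,
    help="Maximum number of tokens to generate in the response.",
)

# Passing max_tokens explicitly in sampling_params avoids the 500 error
# described above when the agent invokes a tool.
agent = Agent(
    client,
    model="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model id
    instructions="You are a helpful assistant.",
    sampling_params={
        "strategy": {"type": "greedy"},
        "max_tokens": max_tokens,
    },
)
```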


## Test Plan
1. Without these changes, attempting to call a tool from the playground
tools page fails with `500: Internal server error: An unexpected error
occurred`.
2. With these changes, the same tool call returns the expected output.

Signed-off-by: Michael Clifford <mcliffor@redhat.com>
2025-04-15 09:11:08 -07:00
| Path | Last commit | Date |
| --- | --- | --- |
| apis | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| cli | feat: add --providers to llama stack build (#1718) | 2025-04-15 14:17:03 +02:00 |
| distribution | fix: add max_tokens slider to playground tools page (#1958) | 2025-04-15 09:11:08 -07:00 |
| models | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| providers | feat: allow ollama to use 'latest' if available but not specified (#1903) | 2025-04-14 09:03:54 -07:00 |
| strong_typing | chore: more mypy checks (ollama, vllm, ...) (#1777) | 2025-04-01 17:12:39 +02:00 |
| templates | docs: add example for intel gpu in vllm remote (#1952) | 2025-04-15 07:56:23 -07:00 |
| __init__.py | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| env.py | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| log.py | chore: Remove style tags from log formatter (#1808) | 2025-03-27 10:18:21 -04:00 |
| schema_utils.py | fix: dont check protocol compliance for experimental methods | 2025-04-12 16:26:32 -07:00 |