llama-stack-mirror/tests/unit/providers
Jaideep Rao 4aa586d7af Fix: Ensure that tool calls with no arguments get handled correctly #3560
When a model decides to use an MCP tool call that requires no arguments, it sets the arguments field to None. This causes validation errors, because the field gets removed when parsed by an OpenAI-compatible inference provider such as vLLM.
This PR ensures that, as soon as the tool call arguments are accumulated while streaming, we check whether any tool call's function arguments are set to None; if they are, we replace them with "{}".

Closes #3456

Added a new unit test to verify that tool calls with function arguments set to None are handled correctly.
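
The normalization described above can be sketched roughly as follows. This is a minimal illustration, not the actual llama-stack code: the class and function names (`ToolCallFunction`, `normalize_tool_call_args`) are hypothetical stand-ins for the accumulated streaming tool-call structures.

```python
# Hypothetical sketch of the fix: after tool call deltas are accumulated
# from the stream, replace any function arguments that are None with "{}"
# so OpenAI-compatible providers (e.g. vLLM) do not reject the request.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ToolCallFunction:
    name: str
    arguments: Optional[str]  # JSON-encoded args; None when the model sent none


def normalize_tool_call_args(
    functions: list[ToolCallFunction],
) -> list[ToolCallFunction]:
    """Replace None arguments with an empty JSON object string."""
    for fn in functions:
        if fn.arguments is None:
            fn.arguments = "{}"
    return functions
```

A unit test along the lines of the one added here would assert that a no-argument tool call comes out with `arguments == "{}"` while calls that already carry arguments are left untouched.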

Signed-off-by: Jaideep Rao <jrao@redhat.com>
2025-09-30 07:58:03 -04:00
agent fix: adding mime type of application/json support (#3452) 2025-09-29 11:27:31 -07:00
agents Fix: Ensure that tool calls with no arguments get handled correctly #3560 2025-09-30 07:58:03 -04:00
batches feat(batches, completions): add /v1/completions support to /v1/batches (#3309) 2025-09-05 11:59:57 -07:00
files feat(files): fix expires_after API shape (#3604) 2025-09-29 21:29:15 -07:00
inference feat: add static embedding metadata to dynamic model listings for providers using OpenAIMixin (#3547) 2025-09-25 17:17:00 -04:00
inline fix: mcp tool with array type should include items (#3602) 2025-09-29 23:11:41 -07:00
nvidia feat: add static embedding metadata to dynamic model listings for providers using OpenAIMixin (#3547) 2025-09-25 17:17:00 -04:00
utils feat(internal): add image_url download feature to OpenAIMixin (#3516) 2025-09-26 17:32:16 -04:00
vector_io chore(api): remove deprecated embeddings impls (#3301) 2025-09-29 14:45:09 -04:00
test_bedrock.py fix: AWS Bedrock inference profile ID conversion for region-specific endpoints (#3386) 2025-09-11 11:41:53 +02:00
test_configs.py chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00