# Problem

The Responses API uses the max_tool_calls parameter to limit the number of tool calls that can be generated in a response. Currently, the Llama Stack implementation of the Responses API does not support this parameter.

# What does this PR do?

This pull request adds the max_tool_calls field to the response object definition and updates the inline provider. It also ensures that:

- the total number of calls to built-in and MCP tools does not exceed max_tool_calls
- an error is thrown if max_tool_calls < 1 (behavior seen with the OpenAI Responses API, but we can change this if needed)

Closes #[3563](https://github.com/llamastack/llama-stack/issues/3563)

## Test Plan

- Tested manually for changes in the model response w.r.t. the supplied max_tool_calls field.
- Added integration tests for an invalid max_tool_calls parameter.
- Added integration tests checking the max_tool_calls parameter with built-in and function tools.
- Added integration tests checking the max_tool_calls parameter in the returned response object.
- Recorded OpenAI Responses API behavior using a sample script: https://github.com/s-akhtar-baig/llama-stack-examples/blob/main/responses/src/max_tool_calls.py

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
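For context, here is a minimal sketch of how a client might exercise the new parameter through the OpenAI-compatible endpoint. The base URL, model id, and tool type below are placeholders for a local Llama Stack deployment, not part of this PR:

```python
# Minimal sketch: capping tool calls via the OpenAI-compatible Responses API.
# The base_url, model id, and tool type are placeholders; adjust to your setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

response = client.responses.create(
    model="meta-llama/Llama-3.3-70B-Instruct",  # placeholder model id
    input="Search for the latest Llama Stack release notes.",
    tools=[{"type": "web_search"}],  # built-in tool; MCP tool calls count too
    max_tool_calls=1,                # cap on total built-in/MCP tool calls
)
print(response.output_text)

# Values below 1 are rejected, matching the OpenAI Responses API behavior:
try:
    client.responses.create(
        model="meta-llama/Llama-3.3-70B-Instruct",
        input="hi",
        max_tool_calls=0,
    )
except Exception as err:
    print(f"rejected as expected: {err}")
```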
This directory contains the source-of-truth configuration files used by Stainless to generate the client SDKs:

- `openapi.yml`: the OpenAPI specification for the Llama Stack API.
- `config.yml`: the Stainless configuration that instructs Stainless how to generate the client SDKs.

A small side note: both files use the `.yml` suffix, which Stainless typically prefers for its configuration files.

The two files go hand-in-hand. As of now, only `openapi.yml` is generated automatically, via the `run_openapi_generator.sh` script.
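Since the two files must stay consistent, here is a sketch of a quick local sanity check one might run after regenerating the spec. The helper below is hypothetical and not part of the repo; it only assumes both files parse as YAML:

```python
# Hypothetical local sanity check (not part of the repo): after running
# run_openapi_generator.sh, confirm both files still parse as YAML and
# print a rough summary of what each declares.
import yaml

with open("openapi.yml") as f:
    spec = yaml.safe_load(f)
with open("config.yml") as f:
    stainless_config = yaml.safe_load(f)

print(f"openapi.yml declares {len(spec.get('paths', {}))} endpoint paths")
print(f"config.yml top-level sections: {sorted(stainless_config)}")
```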