feat(tools): use { input_schema, output_schema } for ToolDefinition

Ashwin Bharambe 2025-09-30 19:13:15 -07:00
parent 42414a1a1b
commit 139320e19f
20 changed files with 1989 additions and 386 deletions


@ -1,12 +1,12 @@
---
description: "Agents API for creating and interacting with agentic systems.
Main functionalities provided by this API:
- Create agents with specific instructions and ability to use tools.
- Interactions with agents are grouped into sessions (\"threads\"), and each interaction is called a \"turn\".
- Agents can be provided with various tools (see the ToolGroups and ToolRuntime APIs for more details).
- Agents can be provided with various shields (see the Safety API for more details).
- Agents can also use Memory to retrieve information from knowledge bases. See the RAG Tool and Vector IO APIs for more details."
sidebar_label: Agents
title: Agents
---
@ -17,11 +17,11 @@ title: Agents
Agents API for creating and interacting with agentic systems.
Main functionalities provided by this API:
- Create agents with specific instructions and ability to use tools.
- Interactions with agents are grouped into sessions ("threads"), and each interaction is called a "turn".
- Agents can be provided with various tools (see the ToolGroups and ToolRuntime APIs for more details).
- Agents can be provided with various shields (see the Safety API for more details).
- Agents can also use Memory to retrieve information from knowledge bases. See the RAG Tool and Vector IO APIs for more details.
This section contains documentation for all available providers for the **agents** API.


@ -1,14 +1,14 @@
---
description: "The Batches API enables efficient processing of multiple requests in a single operation,
particularly useful for processing large datasets, batch evaluation workflows, and
cost-effective inference at scale.
The API is designed to allow use of openai client libraries for seamless integration.
This API provides the following extensions:
- idempotent batch creation
Note: This API is currently under active development and may undergo changes."
sidebar_label: Batches
title: Batches
---
@ -18,14 +18,14 @@ title: Batches
## Overview
The Batches API enables efficient processing of multiple requests in a single operation,
particularly useful for processing large datasets, batch evaluation workflows, and
cost-effective inference at scale.
The API is designed to allow use of openai client libraries for seamless integration.
This API provides the following extensions:
- idempotent batch creation
Note: This API is currently under active development and may undergo changes.
This section contains documentation for all available providers for the **batches** API.


@ -1,9 +1,9 @@
---
description: "Llama Stack Inference API for generating completions, chat completions, and embeddings.
This API provides the raw interface to the underlying models. Two kinds of models are supported:
- LLM models: these models generate \"raw\" and \"chat\" (conversational) completions.
- Embedding models: these models generate embeddings to be used for semantic search."
sidebar_label: Inference
title: Inference
---
@ -14,8 +14,8 @@ title: Inference
Llama Stack Inference API for generating completions, chat completions, and embeddings.
This API provides the raw interface to the underlying models. Two kinds of models are supported:
- LLM models: these models generate "raw" and "chat" (conversational) completions.
- Embedding models: these models generate embeddings to be used for semantic search.
This section contains documentation for all available providers for the **inference** API.


@ -7383,12 +7383,57 @@
"type": "string",
"description": "(Optional) Human-readable description of what the tool does"
},
"parameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ToolParameter"
"input_schema": {
"type": "object",
"additionalProperties": {
"oneOf": [
{
"type": "null"
},
{
"type": "boolean"
},
{
"type": "number"
},
{
"type": "string"
},
{
"type": "array"
},
{
"type": "object"
}
]
},
"description": "(Optional) List of parameters this tool accepts"
"description": "(Optional) JSON Schema for tool inputs (MCP inputSchema)"
},
"output_schema": {
"type": "object",
"additionalProperties": {
"oneOf": [
{
"type": "null"
},
{
"type": "boolean"
},
{
"type": "number"
},
{
"type": "string"
},
{
"type": "array"
},
{
"type": "object"
}
]
},
"description": "(Optional) JSON Schema for tool outputs (MCP outputSchema)"
},
"metadata": {
"type": "object",
@ -7424,68 +7469,6 @@
"title": "ToolDef",
"description": "Tool definition used in runtime contexts."
},
"ToolParameter": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Name of the parameter"
},
"parameter_type": {
"type": "string",
"description": "Type of the parameter (e.g., string, integer)"
},
"description": {
"type": "string",
"description": "Human-readable description of what the parameter does"
},
"required": {
"type": "boolean",
"default": true,
"description": "Whether this parameter is required for tool invocation"
},
"items": {
"type": "object",
"description": "Type of the elements when parameter_type is array"
},
"title": {
"type": "string",
"description": "(Optional) Title of the parameter"
},
"default": {
"oneOf": [
{
"type": "null"
},
{
"type": "boolean"
},
{
"type": "number"
},
{
"type": "string"
},
{
"type": "array"
},
{
"type": "object"
}
],
"description": "(Optional) Default value for the parameter if not provided"
}
},
"additionalProperties": false,
"required": [
"name",
"parameter_type",
"description",
"required"
],
"title": "ToolParameter",
"description": "Parameter definition for a tool."
},
"TopKSamplingStrategy": {
"type": "object",
"properties": {
@ -13132,6 +13115,68 @@
"title": "Tool",
"description": "A tool that can be invoked by agents."
},
"ToolParameter": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Name of the parameter"
},
"parameter_type": {
"type": "string",
"description": "Type of the parameter (e.g., string, integer)"
},
"description": {
"type": "string",
"description": "Human-readable description of what the parameter does"
},
"required": {
"type": "boolean",
"default": true,
"description": "Whether this parameter is required for tool invocation"
},
"items": {
"type": "object",
"description": "Type of the elements when parameter_type is array"
},
"title": {
"type": "string",
"description": "(Optional) Title of the parameter"
},
"default": {
"oneOf": [
{
"type": "null"
},
{
"type": "boolean"
},
{
"type": "number"
},
{
"type": "string"
},
{
"type": "array"
},
{
"type": "object"
}
],
"description": "(Optional) Default value for the parameter if not provided"
}
},
"additionalProperties": false,
"required": [
"name",
"parameter_type",
"description",
"required"
],
"title": "ToolParameter",
"description": "Parameter definition for a tool."
},
"ToolGroup": {
"type": "object",
"properties": {


@ -5322,12 +5322,30 @@ components:
type: string
description: >-
(Optional) Human-readable description of what the tool does
parameters:
type: array
items:
$ref: '#/components/schemas/ToolParameter'
input_schema:
type: object
additionalProperties:
oneOf:
- type: 'null'
- type: boolean
- type: number
- type: string
- type: array
- type: object
description: >-
(Optional) List of parameters this tool accepts
(Optional) JSON Schema for tool inputs (MCP inputSchema)
output_schema:
type: object
additionalProperties:
oneOf:
- type: 'null'
- type: boolean
- type: number
- type: string
- type: array
- type: object
description: >-
(Optional) JSON Schema for tool outputs (MCP outputSchema)
metadata:
type: object
additionalProperties:
@ -5346,50 +5364,6 @@ components:
title: ToolDef
description: >-
Tool definition used in runtime contexts.
ToolParameter:
type: object
properties:
name:
type: string
description: Name of the parameter
parameter_type:
type: string
description: >-
Type of the parameter (e.g., string, integer)
description:
type: string
description: >-
Human-readable description of what the parameter does
required:
type: boolean
default: true
description: >-
Whether this parameter is required for tool invocation
items:
type: object
description: >-
Type of the elements when parameter_type is array
title:
type: string
description: (Optional) Title of the parameter
default:
oneOf:
- type: 'null'
- type: boolean
- type: number
- type: string
- type: array
- type: object
description: >-
(Optional) Default value for the parameter if not provided
additionalProperties: false
required:
- name
- parameter_type
- description
- required
title: ToolParameter
description: Parameter definition for a tool.
TopKSamplingStrategy:
type: object
properties:
@ -9667,6 +9641,50 @@ components:
- parameters
title: Tool
description: A tool that can be invoked by agents.
ToolParameter:
type: object
properties:
name:
type: string
description: Name of the parameter
parameter_type:
type: string
description: >-
Type of the parameter (e.g., string, integer)
description:
type: string
description: >-
Human-readable description of what the parameter does
required:
type: boolean
default: true
description: >-
Whether this parameter is required for tool invocation
items:
type: object
description: >-
Type of the elements when parameter_type is array
title:
type: string
description: (Optional) Title of the parameter
default:
oneOf:
- type: 'null'
- type: boolean
- type: number
- type: string
- type: array
- type: object
description: >-
(Optional) Default value for the parameter if not provided
additionalProperties: false
required:
- name
- parameter_type
- description
- required
title: ToolParameter
description: Parameter definition for a tool.
ToolGroup:
type: object
properties:


@ -27,14 +27,12 @@ from llama_stack.models.llama.datatypes import (
StopReason,
ToolCall,
ToolDefinition,
ToolParamDefinition,
ToolPromptFormat,
)
from llama_stack.providers.utils.telemetry.trace_protocol import trace_protocol
from llama_stack.schema_utils import json_schema_type, register_schema, webmethod
register_schema(ToolCall)
register_schema(ToolParamDefinition)
register_schema(ToolDefinition)
from enum import StrEnum


@ -65,13 +65,15 @@ class ToolDef(BaseModel):
:param name: Name of the tool
:param description: (Optional) Human-readable description of what the tool does
:param parameters: (Optional) List of parameters this tool accepts
:param input_schema: (Optional) JSON Schema for tool inputs (MCP inputSchema)
:param output_schema: (Optional) JSON Schema for tool outputs (MCP outputSchema)
:param metadata: (Optional) Additional metadata about the tool
"""
name: str
description: str | None = None
parameters: list[ToolParameter] | None = None
input_schema: dict[str, Any] | None = None
output_schema: dict[str, Any] | None = None
metadata: dict[str, Any] | None = None


@ -257,7 +257,7 @@ def create_dynamic_typed_route(func: Any, method: str, route: str) -> Callable:
return result
except Exception as e:
if logger.isEnabledFor(logging.DEBUG):
if logger.isEnabledFor(logging.INFO):
logger.exception(f"Error executing endpoint {route=} {method=}")
else:
logger.error(f"Error executing endpoint {route=} {method=}: {str(e)}")


@ -88,19 +88,11 @@ class StopReason(Enum):
out_of_tokens = "out_of_tokens"
class ToolParamDefinition(BaseModel):
param_type: str
description: str | None = None
required: bool | None = True
items: Any | None = None
title: str | None = None
default: Any | None = None
class ToolDefinition(BaseModel):
tool_name: BuiltinTool | str
description: str | None = None
parameters: dict[str, ToolParamDefinition] | None = None
input_schema: dict[str, Any] | None = None
output_schema: dict[str, Any] | None = None
@field_validator("tool_name", mode="before")
@classmethod


@ -18,7 +18,6 @@ from typing import Any
from llama_stack.apis.inference import (
BuiltinTool,
ToolDefinition,
ToolParamDefinition,
)
from .base import PromptTemplate, PromptTemplateGeneratorBase
@ -101,11 +100,8 @@ class JsonCustomToolGenerator(PromptTemplateGeneratorBase):
{# manually setting up JSON because jinja sorts keys in unexpected ways -#}
{%- set tname = t.tool_name -%}
{%- set tdesc = t.description -%}
{%- set tparams = t.parameters -%}
{%- set required_params = [] -%}
{%- for name, param in tparams.items() if param.required == true -%}
{%- set _ = required_params.append(name) -%}
{%- endfor -%}
{%- set tprops = t.input_schema.get('properties', {}) -%}
{%- set required_params = t.input_schema.get('required', []) -%}
{
"type": "function",
"function": {
@ -114,11 +110,11 @@ class JsonCustomToolGenerator(PromptTemplateGeneratorBase):
"parameters": {
"type": "object",
"properties": [
{%- for name, param in tparams.items() %}
{%- for name, param in tprops.items() %}
{
"{{name}}": {
"type": "object",
"description": "{{param.description}}"
"description": "{{param.get('description', '')}}"
}
}{% if not loop.last %},{% endif %}
{%- endfor %}
@ -143,17 +139,19 @@ class JsonCustomToolGenerator(PromptTemplateGeneratorBase):
ToolDefinition(
tool_name="trending_songs",
description="Returns the trending songs on a Music site",
parameters={
"n": ToolParamDefinition(
param_type="int",
description="The number of songs to return",
required=True,
),
"genre": ToolParamDefinition(
param_type="str",
description="The genre of the songs to return",
required=False,
),
input_schema={
"type": "object",
"properties": {
"n": {
"type": "int",
"description": "The number of songs to return",
},
"genre": {
"type": "str",
"description": "The genre of the songs to return",
},
},
"required": ["n"],
},
),
]
@ -170,11 +168,14 @@ class FunctionTagCustomToolGenerator(PromptTemplateGeneratorBase):
{#- manually setting up JSON because jinja sorts keys in unexpected ways -#}
{%- set tname = t.tool_name -%}
{%- set tdesc = t.description -%}
{%- set modified_params = t.parameters.copy() -%}
{%- for key, value in modified_params.items() -%}
{%- if 'default' in value -%}
{%- set _ = value.pop('default', None) -%}
{%- set tprops = t.input_schema.get('properties', {}) -%}
{%- set modified_params = {} -%}
{%- for key, value in tprops.items() -%}
{%- set param_copy = value.copy() -%}
{%- if 'default' in param_copy -%}
{%- set _ = param_copy.pop('default', None) -%}
{%- endif -%}
{%- set _ = modified_params.update({key: param_copy}) -%}
{%- endfor -%}
{%- set tparams = modified_params | tojson -%}
Use the function '{{ tname }}' to '{{ tdesc }}':
@ -205,17 +206,19 @@ class FunctionTagCustomToolGenerator(PromptTemplateGeneratorBase):
ToolDefinition(
tool_name="trending_songs",
description="Returns the trending songs on a Music site",
parameters={
"n": ToolParamDefinition(
param_type="int",
description="The number of songs to return",
required=True,
),
"genre": ToolParamDefinition(
param_type="str",
description="The genre of the songs to return",
required=False,
),
input_schema={
"type": "object",
"properties": {
"n": {
"type": "int",
"description": "The number of songs to return",
},
"genre": {
"type": "str",
"description": "The genre of the songs to return",
},
},
"required": ["n"],
},
),
]
@ -255,11 +258,8 @@ class PythonListCustomToolGenerator(PromptTemplateGeneratorBase): # noqa: N801
{# manually setting up JSON because jinja sorts keys in unexpected ways -#}
{%- set tname = t.tool_name -%}
{%- set tdesc = t.description -%}
{%- set tparams = t.parameters -%}
{%- set required_params = [] -%}
{%- for name, param in tparams.items() if param.required == true -%}
{%- set _ = required_params.append(name) -%}
{%- endfor -%}
{%- set tprops = t.input_schema.get('properties', {}) -%}
{%- set required_params = t.input_schema.get('required', []) -%}
{
"name": "{{tname}}",
"description": "{{tdesc}}",
@ -267,11 +267,11 @@ class PythonListCustomToolGenerator(PromptTemplateGeneratorBase): # noqa: N801
"type": "dict",
"required": {{ required_params | tojson }},
"properties": {
{%- for name, param in tparams.items() %}
{%- for name, param in tprops.items() %}
"{{name}}": {
"type": "{{param.param_type}}",
"description": "{{param.description}}"{% if param.default %},
"default": "{{param.default}}"{% endif %}
"type": "{{param.get('type', 'string')}}",
"description": "{{param.get('description', '')}}"{% if param.get('default') %},
"default": "{{param.get('default')}}"{% endif %}
}{% if not loop.last %},{% endif %}
{%- endfor %}
}
@ -299,18 +299,20 @@ class PythonListCustomToolGenerator(PromptTemplateGeneratorBase): # noqa: N801
ToolDefinition(
tool_name="get_weather",
description="Get weather info for places",
parameters={
"city": ToolParamDefinition(
param_type="string",
description="The name of the city to get the weather for",
required=True,
),
"metric": ToolParamDefinition(
param_type="string",
description="The metric for weather. Options are: celsius, fahrenheit",
required=False,
default="celsius",
),
input_schema={
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "The name of the city to get the weather for",
},
"metric": {
"type": "string",
"description": "The metric for weather. Options are: celsius, fahrenheit",
"default": "celsius",
},
},
"required": ["city"],
},
),
]
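The Jinja lookups above (`t.input_schema.get('properties', {})` and `.get('required', [])`) have a direct plain-Python equivalent, shown here as a sketch against the same `trending_songs`-style schema used in the examples:

```python
# Example input_schema in the shape this diff introduces:
input_schema = {
    "type": "object",
    "properties": {
        "n": {"type": "int", "description": "The number of songs to return"},
        "genre": {"type": "str", "description": "The genre of the songs to return"},
    },
    "required": ["n"],
}

# Equivalent of the template logic: properties and the required list are
# read straight off the schema instead of being rebuilt from per-parameter
# `required` flags.
tprops = input_schema.get("properties", {})
required_params = input_schema.get("required", [])
```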


@ -13,7 +13,7 @@
import textwrap
from llama_stack.apis.inference import ToolDefinition, ToolParamDefinition
from llama_stack.apis.inference import ToolDefinition
from llama_stack.models.llama.llama3.prompt_templates.base import (
PromptTemplate,
PromptTemplateGeneratorBase,
@ -81,11 +81,8 @@ class PythonListCustomToolGenerator(PromptTemplateGeneratorBase): # noqa: N801
{# manually setting up JSON because jinja sorts keys in unexpected ways -#}
{%- set tname = t.tool_name -%}
{%- set tdesc = t.description -%}
{%- set tparams = t.parameters -%}
{%- set required_params = [] -%}
{%- for name, param in tparams.items() if param.required == true -%}
{%- set _ = required_params.append(name) -%}
{%- endfor -%}
{%- set tprops = t.input_schema.get('properties', {}) -%}
{%- set required_params = t.input_schema.get('required', []) -%}
{
"name": "{{tname}}",
"description": "{{tdesc}}",
@ -93,11 +90,11 @@ class PythonListCustomToolGenerator(PromptTemplateGeneratorBase): # noqa: N801
"type": "dict",
"required": {{ required_params | tojson }},
"properties": {
{%- for name, param in tparams.items() %}
{%- for name, param in tprops.items() %}
"{{name}}": {
"type": "{{param.param_type}}",
"description": "{{param.description}}"{% if param.default %},
"default": "{{param.default}}"{% endif %}
"type": "{{param.get('type', 'string')}}",
"description": "{{param.get('description', '')}}"{% if param.get('default') %},
"default": "{{param.get('default')}}"{% endif %}
}{% if not loop.last %},{% endif %}
{%- endfor %}
}
@ -119,18 +116,20 @@ class PythonListCustomToolGenerator(PromptTemplateGeneratorBase): # noqa: N801
ToolDefinition(
tool_name="get_weather",
description="Get weather info for places",
parameters={
"city": ToolParamDefinition(
param_type="string",
description="The name of the city to get the weather for",
required=True,
),
"metric": ToolParamDefinition(
param_type="string",
description="The metric for weather. Options are: celsius, fahrenheit",
required=False,
default="celsius",
),
input_schema={
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "The name of the city to get the weather for",
},
"metric": {
"type": "string",
"description": "The metric for weather. Options are: celsius, fahrenheit",
"default": "celsius",
},
},
"required": ["city"],
},
),
]


@ -54,7 +54,6 @@ from llama_stack.apis.inference import (
StopReason,
SystemMessage,
ToolDefinition,
ToolParamDefinition,
ToolResponse,
ToolResponseMessage,
UserMessage,
@ -790,20 +789,38 @@ class ChatAgent(ShieldRunnerMixin):
for tool_def in self.agent_config.client_tools:
if tool_name_to_def.get(tool_def.name, None):
raise ValueError(f"Tool {tool_def.name} already exists")
# Build JSON Schema from tool parameters
properties = {}
required = []
for param in tool_def.parameters:
param_schema = {
"type": param.parameter_type,
"description": param.description,
}
if param.default is not None:
param_schema["default"] = param.default
if param.items is not None:
param_schema["items"] = param.items
if param.title is not None:
param_schema["title"] = param.title
properties[param.name] = param_schema
if param.required:
required.append(param.name)
input_schema = {
"type": "object",
"properties": properties,
"required": required,
}
tool_name_to_def[tool_def.name] = ToolDefinition(
tool_name=tool_def.name,
description=tool_def.description,
parameters={
param.name: ToolParamDefinition(
param_type=param.parameter_type,
description=param.description,
required=param.required,
items=param.items,
title=param.title,
default=param.default,
)
for param in tool_def.parameters
},
input_schema=input_schema,
)
for toolgroup_name_with_maybe_tool_name in agent_config_toolgroups:
toolgroup_name, input_tool_name = self._parse_toolgroup_name(toolgroup_name_with_maybe_tool_name)
@ -835,20 +852,37 @@ class ChatAgent(ShieldRunnerMixin):
if tool_name_to_def.get(identifier, None):
raise ValueError(f"Tool {identifier} already exists")
if identifier:
# Build JSON Schema from tool parameters
properties = {}
required = []
for param in tool_def.parameters:
param_schema = {
"type": param.parameter_type,
"description": param.description,
}
if param.default is not None:
param_schema["default"] = param.default
if param.items is not None:
param_schema["items"] = param.items
if param.title is not None:
param_schema["title"] = param.title
properties[param.name] = param_schema
if param.required:
required.append(param.name)
input_schema = {
"type": "object",
"properties": properties,
"required": required,
}
tool_name_to_def[tool_def.identifier] = ToolDefinition(
tool_name=identifier,
description=tool_def.description,
parameters={
param.name: ToolParamDefinition(
param_type=param.parameter_type,
description=param.description,
required=param.required,
items=param.items,
title=param.title,
default=param.default,
)
for param in tool_def.parameters
},
input_schema=input_schema,
)
tool_name_to_args[tool_def.identifier] = toolgroup_to_args.get(toolgroup_name, {})
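The inline "Build JSON Schema from tool parameters" loop appears twice in this hunk; its shape can be sketched as a standalone helper. The parameter dicts here are assumed stand-ins for the old per-parameter objects (`name`, `parameter_type`, `description`, `required`, `default`, `items`, `title`):

```python
from typing import Any


def build_input_schema(parameters: list[dict[str, Any]]) -> dict[str, Any]:
    """Collapse legacy per-parameter definitions into one JSON Schema object,
    mirroring the inline loop in this diff."""
    properties: dict[str, Any] = {}
    required: list[str] = []
    for param in parameters:
        schema: dict[str, Any] = {
            "type": param["parameter_type"],
            "description": param.get("description", ""),
        }
        # Optional attributes are copied over only when present.
        for key in ("default", "items", "title"):
            if param.get(key) is not None:
                schema[key] = param[key]
        properties[param["name"]] = schema
        if param.get("required", True):
            required.append(param["name"])
    return {"type": "object", "properties": properties, "required": required}
```

Factoring the loop out like this would also remove the duplication between the client-tools and toolgroup branches.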


@ -62,22 +62,38 @@ def convert_tooldef_to_chat_tool(tool_def):
ChatCompletionToolParam suitable for OpenAI chat completion
"""
from llama_stack.models.llama.datatypes import ToolDefinition, ToolParamDefinition
from llama_stack.models.llama.datatypes import ToolDefinition
from llama_stack.providers.utils.inference.openai_compat import convert_tooldef_to_openai_tool
# Build JSON Schema from tool parameters
properties = {}
required = []
for param in tool_def.parameters:
param_schema = {
"type": param.parameter_type,
"description": param.description,
}
if param.default is not None:
param_schema["default"] = param.default
if param.items is not None:
param_schema["items"] = param.items
properties[param.name] = param_schema
if param.required:
required.append(param.name)
input_schema = {
"type": "object",
"properties": properties,
"required": required,
}
internal_tool_def = ToolDefinition(
tool_name=tool_def.name,
description=tool_def.description,
parameters={
param.name: ToolParamDefinition(
param_type=param.parameter_type,
description=param.description,
required=param.required,
default=param.default,
items=param.items,
)
for param in tool_def.parameters
},
input_schema=input_schema,
)
return convert_tooldef_to_openai_tool(internal_tool_def)
@ -526,22 +542,37 @@ class StreamingResponseOrchestrator:
from openai.types.chat import ChatCompletionToolParam
from llama_stack.apis.tools import Tool
from llama_stack.models.llama.datatypes import ToolDefinition, ToolParamDefinition
from llama_stack.models.llama.datatypes import ToolDefinition
from llama_stack.providers.utils.inference.openai_compat import convert_tooldef_to_openai_tool
def make_openai_tool(tool_name: str, tool: Tool) -> ChatCompletionToolParam:
# Build JSON Schema from tool parameters
properties = {}
required = []
for param in tool.parameters:
param_schema = {
"type": param.parameter_type,
"description": param.description,
}
if param.default is not None:
param_schema["default"] = param.default
properties[param.name] = param_schema
if param.required:
required.append(param.name)
input_schema = {
"type": "object",
"properties": properties,
"required": required,
}
tool_def = ToolDefinition(
tool_name=tool_name,
description=tool.description,
parameters={
param.name: ToolParamDefinition(
param_type=param.parameter_type,
description=param.description,
required=param.required,
default=param.default,
)
for param in tool.parameters
},
input_schema=input_schema,
)
return convert_tooldef_to_openai_tool(tool_def)


@ -127,7 +127,6 @@ from llama_stack.models.llama.datatypes import (
StopReason,
ToolCall,
ToolDefinition,
ToolParamDefinition,
)
from llama_stack.providers.utils.inference.prompt_adapter import (
convert_image_content_to_url,
@ -747,14 +746,8 @@ def convert_tooldef_to_openai_tool(tool: ToolDefinition) -> dict:
ToolDefinition:
tool_name: str | BuiltinTool
description: Optional[str]
parameters: Optional[Dict[str, ToolParamDefinition]]
ToolParamDefinition:
param_type: str
description: Optional[str]
required: Optional[bool]
default: Optional[Any]
input_schema: Optional[Dict[str, Any]] # JSON Schema
output_schema: Optional[Dict[str, Any]] # JSON Schema (not used by OpenAI)
OpenAI spec -
@ -763,20 +756,11 @@ def convert_tooldef_to_openai_tool(tool: ToolDefinition) -> dict:
"function": {
"name": tool_name,
"description": description,
"parameters": {
"type": "object",
"properties": {
param_name: {
"type": param_type,
"description": description,
"default": default,
},
...
},
"required": [param_name, ...],
},
"parameters": {<JSON Schema>},
},
}
NOTE: OpenAI does not support output_schema, so it is dropped here.
"""
out = {
"type": "function",
@ -785,37 +769,19 @@ def convert_tooldef_to_openai_tool(tool: ToolDefinition) -> dict:
function = out["function"]
if isinstance(tool.tool_name, BuiltinTool):
function.update(name=tool.tool_name.value) # TODO(mf): is this sufficient?
function["name"] = tool.tool_name.value
else:
function.update(name=tool.tool_name)
function["name"] = tool.tool_name
if tool.description:
function.update(description=tool.description)
function["description"] = tool.description
if tool.parameters:
parameters = {
"type": "object",
"properties": {},
}
properties = parameters["properties"]
required = []
for param_name, param in tool.parameters.items():
properties[param_name] = to_openai_param_type(param.param_type)
if param.description:
properties[param_name].update(description=param.description)
if param.default:
properties[param_name].update(default=param.default)
if param.items:
properties[param_name].update(items=param.items)
if param.title:
properties[param_name].update(title=param.title)
if param.required:
required.append(param_name)
if tool.input_schema:
# Pass through the entire JSON Schema as-is
function["parameters"] = tool.input_schema
if required:
parameters.update(required=required)
function.update(parameters=parameters)
# NOTE: OpenAI does not support output_schema, so we drop it here
# It's stored in LlamaStack for validation and other provider usage
return out
@ -876,22 +842,12 @@ def _convert_openai_request_tools(tools: list[dict[str, Any]] | None = None) ->
tool_fn = tool.get("function", {})
tool_name = tool_fn.get("name", None)
tool_desc = tool_fn.get("description", None)
tool_params = tool_fn.get("parameters", None)
lls_tool_params = {}
if tool_params is not None:
tool_param_properties = tool_params.get("properties", {})
for tool_param_key, tool_param_value in tool_param_properties.items():
tool_param_def = ToolParamDefinition(
param_type=str(tool_param_value.get("type", None)),
description=tool_param_value.get("description", None),
)
lls_tool_params[tool_param_key] = tool_param_def
lls_tool = ToolDefinition(
tool_name=tool_name,
description=tool_desc,
parameters=lls_tool_params,
input_schema=tool_params, # Pass through entire JSON Schema
)
lls_tools.append(lls_tool)
return lls_tools
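The reverse direction in this hunk is the same idea: an incoming OpenAI request tool keeps its full `parameters` JSON Schema as `input_schema` instead of being lossily re-parsed into per-parameter definitions. A sketch with plain dicts standing in for the pydantic models:

```python
from typing import Any


def openai_tool_to_lls(tool: dict[str, Any]) -> dict[str, Any]:
    """Sketch of _convert_openai_request_tools for a single tool: the entire
    `parameters` schema is carried over verbatim as input_schema."""
    fn = tool.get("function", {})
    return {
        "tool_name": fn.get("name"),
        "description": fn.get("description"),
        "input_schema": fn.get("parameters"),  # full JSON Schema, untouched
    }
```

Note the old code dropped everything but `type` and `description` per property; the pass-through keeps `required`, defaults, and nested schemas.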


@ -20,7 +20,6 @@ from llama_stack.apis.tools import (
ListToolDefsResponse,
ToolDef,
ToolInvocationResult,
ToolParameter,
)
from llama_stack.core.datatypes import AuthenticationRequiredError
from llama_stack.log import get_logger
@ -113,24 +112,12 @@ async def list_mcp_tools(endpoint: str, headers: dict[str, str]) -> ListToolDefs
async with client_wrapper(endpoint, headers) as session:
tools_result = await session.list_tools()
for tool in tools_result.tools:
parameters = []
for param_name, param_schema in tool.inputSchema.get("properties", {}).items():
parameters.append(
ToolParameter(
name=param_name,
parameter_type=param_schema.get("type", "string"),
description=param_schema.get("description", ""),
required="default" not in param_schema,
items=param_schema.get("items", None),
title=param_schema.get("title", None),
default=param_schema.get("default", None),
)
)
tools.append(
ToolDef(
name=tool.name,
description=tool.description,
parameters=parameters,
input_schema=tool.inputSchema,
output_schema=getattr(tool, "outputSchema", None),
metadata={
"endpoint": endpoint,
},
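The MCP change above replaces lossy per-property extraction with a verbatim copy of the server's `inputSchema`. A sketch with a hypothetical MCP tool record (the real objects come from the MCP client session, so this dict shape is an assumption):

```python
# Hypothetical MCP tool listing entry:
mcp_tool = {
    "name": "search",
    "inputSchema": {"type": "object", "properties": {"q": {"type": "string"}}},
    # outputSchema may be absent on servers that predate the field,
    # hence the getattr(..., None) in the diff; None models that here.
    "outputSchema": None,
}

tool_def = {
    "name": mcp_tool["name"],
    "input_schema": mcp_tool["inputSchema"],        # passed through verbatim
    "output_schema": mcp_tool.get("outputSchema"),  # None when unsupported
}
```

The old path also inferred `required` from the absence of a `default`, which mislabeled optional-without-default parameters; the pass-through avoids that entirely.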


@ -0,0 +1,369 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
"""
Integration tests for inference/chat completion with JSON Schema-based tools.
Tests that tools pass through correctly to various LLM providers.
"""
import json
import pytest
from llama_stack import LlamaStackAsLibraryClient
from llama_stack.models.llama.datatypes import ToolDefinition
from tests.common.mcp import make_mcp_server
AUTH_TOKEN = "test-token"
class TestChatCompletionWithTools:
"""Test chat completion with tools that have complex schemas."""
def test_simple_tool_call(self, llama_stack_client, text_model_id):
"""Test basic tool calling with simple input schema."""
tools = [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get weather for a location",
"parameters": {
"type": "object",
"properties": {"location": {"type": "string", "description": "City name"}},
"required": ["location"],
},
},
}
]
response = llama_stack_client.chat.completions.create(
model=text_model_id,
messages=[{"role": "user", "content": "What's the weather in San Francisco?"}],
tools=tools,
)
assert response is not None
def test_tool_with_complex_schema(self, llama_stack_client, text_model_id):
"""Test tool calling with complex schema including $ref and $defs."""
tools = [
{
"type": "function",
"function": {
"name": "book_flight",
"description": "Book a flight",
"parameters": {
"type": "object",
"properties": {
"flight": {"$ref": "#/$defs/FlightInfo"},
"passenger": {"$ref": "#/$defs/Passenger"},
},
"required": ["flight", "passenger"],
"$defs": {
"FlightInfo": {
"type": "object",
"properties": {
"from": {"type": "string"},
"to": {"type": "string"},
"date": {"type": "string", "format": "date"},
},
},
"Passenger": {
"type": "object",
"properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
},
},
},
},
}
]
response = llama_stack_client.chat.completions.create(
model=text_model_id,
messages=[{"role": "user", "content": "Book a flight from SFO to JFK for John Doe"}],
tools=tools,
)
# The key test: No errors during schema processing
# The LLM received a valid, complete schema with $ref/$defs
assert response is not None
class TestOpenAICompatibility:
"""Test OpenAI-compatible endpoints with new schema format."""
def test_openai_chat_completion_with_tools(self, compat_client, text_model_id):
"""Test OpenAI-compatible chat completion with tools."""
from openai import OpenAI
if not isinstance(compat_client, OpenAI):
pytest.skip("OpenAI client required")
tools = [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get weather information",
"parameters": {
"type": "object",
"properties": {"location": {"type": "string", "description": "City name"}},
"required": ["location"],
},
},
}
]
response = compat_client.chat.completions.create(
model=text_model_id, messages=[{"role": "user", "content": "What's the weather in Tokyo?"}], tools=tools
)
assert response is not None
assert response.choices is not None
def test_openai_format_preserves_complex_schemas(self, compat_client, text_model_id):
"""Test that complex schemas work through OpenAI-compatible API."""
from openai import OpenAI
if not isinstance(compat_client, OpenAI):
pytest.skip("OpenAI client required")
tools = [
{
"type": "function",
"function": {
"name": "process_data",
"description": "Process structured data",
"parameters": {
"type": "object",
"properties": {"data": {"$ref": "#/$defs/DataObject"}},
"$defs": {
"DataObject": {
"type": "object",
"properties": {"values": {"type": "array", "items": {"type": "number"}}},
}
},
},
},
}
]
response = compat_client.chat.completions.create(
model=text_model_id, messages=[{"role": "user", "content": "Process this data"}], tools=tools
)
assert response is not None
class TestMCPToolsInChatCompletion:
"""Test using MCP tools in chat completion."""
@pytest.fixture
def mcp_with_schemas(self):
"""MCP server for chat completion tests."""
from mcp.server.fastmcp import Context
async def calculate(x: float, y: float, operation: str, ctx: Context) -> float:
ops = {"add": x + y, "sub": x - y, "mul": x * y, "div": x / y if y != 0 else None}
return ops.get(operation, 0)
with make_mcp_server(required_auth_token=AUTH_TOKEN, tools={"calculate": calculate}) as server:
yield server
def test_mcp_tools_in_inference(self, llama_stack_client, text_model_id, mcp_with_schemas):
"""Test that MCP tools can be used in inference."""
if not isinstance(llama_stack_client, LlamaStackAsLibraryClient):
pytest.skip("Library client required for local MCP server")
test_toolgroup_id = "mcp::calc"
uri = mcp_with_schemas["server_url"]
try:
llama_stack_client.toolgroups.unregister(toolgroup_id=test_toolgroup_id)
except Exception:
pass
llama_stack_client.toolgroups.register(
toolgroup_id=test_toolgroup_id,
provider_id="model-context-protocol",
mcp_endpoint=dict(uri=uri),
)
provider_data = {"mcp_headers": {uri: {"Authorization": f"Bearer {AUTH_TOKEN}"}}}
auth_headers = {
"X-LlamaStack-Provider-Data": json.dumps(provider_data),
}
# Get the tools from MCP
tools_response = llama_stack_client.tool_runtime.list_runtime_tools(
tool_group_id=test_toolgroup_id,
extra_headers=auth_headers,
)
# Convert to OpenAI format for inference
tools = []
for tool in tools_response.data:
tools.append(
{
"type": "function",
"function": {
"name": tool.name,
"description": tool.description,
"parameters": tool.input_schema if hasattr(tool, "input_schema") else {},
},
}
)
# Use in chat completion
response = llama_stack_client.chat.completions.create(
model=text_model_id,
messages=[{"role": "user", "content": "Calculate 5 + 3"}],
tools=tools,
)
# Schema should have been passed through correctly
assert response is not None
class TestProviderSpecificBehavior:
"""Test provider-specific handling of schemas."""
def test_openai_provider_drops_output_schema(self, llama_stack_client, text_model_id):
"""Test that OpenAI provider doesn't send output_schema (API limitation)."""
# This is more of a documentation test
# OpenAI API doesn't support output schemas, so we drop them
_tool = ToolDefinition(
tool_name="test",
input_schema={"type": "object", "properties": {"x": {"type": "string"}}},
output_schema={"type": "object", "properties": {"y": {"type": "number"}}},
)
# When this tool is sent to OpenAI provider, output_schema is dropped
# But input_schema is preserved
# This test documents the expected behavior
# We can't easily test this without mocking, but the unit tests cover it
pass
def test_gemini_array_support(self):
"""Test that Gemini receives array schemas correctly (issue from commit 65f7b81e)."""
# This was the original bug that led to adding 'items' field
# Now with full JSON Schema pass-through, arrays should work
tool = ToolDefinition(
tool_name="tag_processor",
input_schema={
"type": "object",
"properties": {"tags": {"type": "array", "items": {"type": "string"}, "description": "List of tags"}},
},
)
# With new approach, the complete schema with items is preserved
assert tool.input_schema["properties"]["tags"]["type"] == "array"
assert tool.input_schema["properties"]["tags"]["items"]["type"] == "string"
class TestStreamingWithTools:
"""Test streaming chat completion with tools."""
def test_streaming_tool_calls(self, llama_stack_client, text_model_id):
"""Test that tool schemas work correctly in streaming mode."""
tools = [
{
"type": "function",
"function": {
"name": "get_time",
"description": "Get current time",
"parameters": {"type": "object", "properties": {"timezone": {"type": "string"}}},
},
}
]
response_stream = llama_stack_client.chat.completions.create(
model=text_model_id,
messages=[{"role": "user", "content": "What time is it in UTC?"}],
tools=tools,
stream=True,
)
# Should be able to iterate through stream
chunks = []
for chunk in response_stream:
chunks.append(chunk)
# Should have received at least one chunk
assert len(chunks) > 0
class TestEdgeCases:
"""Test edge cases in inference with tools."""
def test_tool_without_schema(self, llama_stack_client, text_model_id):
"""Test tool with no input_schema."""
tools = [
{
"type": "function",
"function": {
"name": "no_args_tool",
"description": "Tool with no arguments",
"parameters": {"type": "object", "properties": {}},
},
}
]
response = llama_stack_client.chat.completions.create(
model=text_model_id,
messages=[{"role": "user", "content": "Call the no args tool"}],
tools=tools,
)
assert response is not None
def test_multiple_tools_with_different_schemas(self, llama_stack_client, text_model_id):
"""Test multiple tools with different schema complexities."""
tools = [
{
"type": "function",
"function": {
"name": "simple",
"parameters": {"type": "object", "properties": {"x": {"type": "string"}}},
},
},
{
"type": "function",
"function": {
"name": "complex",
"parameters": {
"type": "object",
"properties": {"data": {"$ref": "#/$defs/Complex"}},
"$defs": {
"Complex": {
"type": "object",
"properties": {"nested": {"type": "array", "items": {"type": "number"}}},
}
},
},
},
},
{
"type": "function",
"function": {
"name": "with_output",
"parameters": {"type": "object", "properties": {"input": {"type": "string"}}},
},
},
]
response = llama_stack_client.chat.completions.create(
model=text_model_id,
messages=[{"role": "user", "content": "Use one of the available tools"}],
tools=tools,
)
# All tools should have been processed without errors
assert response is not None


@@ -0,0 +1,478 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
"""
Integration tests for MCP tools with complex JSON Schema support.
Tests $ref, $defs, and other JSON Schema features through MCP integration.
"""
import json
import pytest
from llama_stack import LlamaStackAsLibraryClient
from tests.common.mcp import make_mcp_server
AUTH_TOKEN = "test-token"
@pytest.fixture(scope="function")
def mcp_server_with_complex_schemas():
"""MCP server with tools that have complex schemas including $ref and $defs."""
from mcp.server.fastmcp import Context
async def book_flight(flight: dict, passengers: list[dict], payment: dict, ctx: Context) -> dict:
"""
Book a flight with passenger and payment information.
This tool uses JSON Schema $ref and $defs for type reuse.
"""
return {
"booking_id": "BK12345",
"flight": flight,
"passengers": passengers,
"payment": payment,
"status": "confirmed",
}
async def process_order(order_data: dict, ctx: Context) -> dict:
"""
Process an order with nested address information.
Uses nested objects and $ref.
"""
return {"order_id": "ORD789", "status": "processing", "data": order_data}
async def flexible_contact(contact_info: str, ctx: Context) -> dict:
"""
Accept flexible contact (email or phone).
Uses anyOf schema.
"""
if "@" in contact_info:
return {"type": "email", "value": contact_info}
else:
return {"type": "phone", "value": contact_info}
# Manually attach complex schemas to the functions
# (FastMCP might not support this by default, so this is test setup)
# For MCP, we need to set the schema via tool annotations
# This is test infrastructure to force specific schemas
tools = {"book_flight": book_flight, "process_order": process_order, "flexible_contact": flexible_contact}
# Note: In real MCP implementation, we'd configure these schemas properly
# For testing, we may need to mock or extend the MCP server setup
with make_mcp_server(required_auth_token=AUTH_TOKEN, tools=tools) as server_info:
yield server_info
@pytest.fixture(scope="function")
def mcp_server_with_output_schemas():
"""MCP server with tools that have output schemas defined."""
from mcp.server.fastmcp import Context
async def get_weather(location: str, ctx: Context) -> dict:
"""
Get weather with structured output.
Has both input and output schemas.
"""
return {"temperature": 72.5, "conditions": "Sunny", "humidity": 45, "wind_speed": 10.2}
async def calculate(x: float, y: float, operation: str, ctx: Context) -> dict:
"""
Perform calculation with validated output.
"""
operations = {"add": x + y, "subtract": x - y, "multiply": x * y, "divide": x / y if y != 0 else None}
result = operations.get(operation)
return {"result": result, "operation": operation}
tools = {"get_weather": get_weather, "calculate": calculate}
with make_mcp_server(required_auth_token=AUTH_TOKEN, tools=tools) as server_info:
yield server_info
class TestMCPSchemaPreservation:
"""Test that MCP tool schemas are preserved correctly."""
def test_mcp_tools_list_with_schemas(self, llama_stack_client, mcp_server_with_complex_schemas):
"""Test listing MCP tools preserves input_schema."""
if not isinstance(llama_stack_client, LlamaStackAsLibraryClient):
pytest.skip("Library client required for local MCP server")
test_toolgroup_id = "mcp::complex"
uri = mcp_server_with_complex_schemas["server_url"]
# Clean up any existing registration
try:
llama_stack_client.toolgroups.unregister(toolgroup_id=test_toolgroup_id)
except Exception:
pass
# Register MCP toolgroup
llama_stack_client.toolgroups.register(
toolgroup_id=test_toolgroup_id,
provider_id="model-context-protocol",
mcp_endpoint=dict(uri=uri),
)
provider_data = {"mcp_headers": {uri: {"Authorization": f"Bearer {AUTH_TOKEN}"}}}
auth_headers = {
"X-LlamaStack-Provider-Data": json.dumps(provider_data),
}
# List runtime tools
response = llama_stack_client.tool_runtime.list_runtime_tools(
tool_group_id=test_toolgroup_id,
extra_headers=auth_headers,
)
tools = response.data
assert len(tools) > 0
# Check each tool has input_schema
for tool in tools:
assert hasattr(tool, "input_schema")
# Schema might be None or a dict depending on tool
if tool.input_schema is not None:
assert isinstance(tool.input_schema, dict)
# Should have basic JSON Schema structure
if "properties" in tool.input_schema:
assert "type" in tool.input_schema
def test_mcp_schema_with_refs_preserved(self, llama_stack_client, mcp_server_with_complex_schemas):
"""Test that $ref and $defs in MCP schemas are preserved."""
if not isinstance(llama_stack_client, LlamaStackAsLibraryClient):
pytest.skip("Library client required for local MCP server")
test_toolgroup_id = "mcp::complex"
uri = mcp_server_with_complex_schemas["server_url"]
# Register
try:
llama_stack_client.toolgroups.unregister(toolgroup_id=test_toolgroup_id)
except Exception:
pass
llama_stack_client.toolgroups.register(
toolgroup_id=test_toolgroup_id,
provider_id="model-context-protocol",
mcp_endpoint=dict(uri=uri),
)
provider_data = {"mcp_headers": {uri: {"Authorization": f"Bearer {AUTH_TOKEN}"}}}
auth_headers = {
"X-LlamaStack-Provider-Data": json.dumps(provider_data),
}
# List tools
response = llama_stack_client.tool_runtime.list_runtime_tools(
tool_group_id=test_toolgroup_id,
extra_headers=auth_headers,
)
# Find book_flight tool (which should have $ref/$defs)
book_flight_tool = next((t for t in response.data if t.name == "book_flight"), None)
if book_flight_tool and book_flight_tool.input_schema:
# If the MCP server provides $defs, they should be preserved
# This is the KEY test for the bug fix
schema = book_flight_tool.input_schema
# Check if schema has properties (might vary based on MCP implementation)
if "properties" in schema:
# Verify schema structure is preserved (exact structure depends on MCP server)
assert isinstance(schema["properties"], dict)
# If $defs are present, verify they're preserved
if "$defs" in schema:
assert isinstance(schema["$defs"], dict)
# Each definition should be a dict
for _def_name, def_schema in schema["$defs"].items():
assert isinstance(def_schema, dict)
def test_mcp_output_schema_preserved(self, llama_stack_client, mcp_server_with_output_schemas):
"""Test that MCP outputSchema is preserved."""
if not isinstance(llama_stack_client, LlamaStackAsLibraryClient):
pytest.skip("Library client required for local MCP server")
test_toolgroup_id = "mcp::with_output"
uri = mcp_server_with_output_schemas["server_url"]
try:
llama_stack_client.toolgroups.unregister(toolgroup_id=test_toolgroup_id)
except Exception:
pass
llama_stack_client.toolgroups.register(
toolgroup_id=test_toolgroup_id,
provider_id="model-context-protocol",
mcp_endpoint=dict(uri=uri),
)
provider_data = {"mcp_headers": {uri: {"Authorization": f"Bearer {AUTH_TOKEN}"}}}
auth_headers = {
"X-LlamaStack-Provider-Data": json.dumps(provider_data),
}
response = llama_stack_client.tool_runtime.list_runtime_tools(
tool_group_id=test_toolgroup_id,
extra_headers=auth_headers,
)
# Find get_weather tool
weather_tool = next((t for t in response.data if t.name == "get_weather"), None)
if weather_tool:
# Check if output_schema field exists and is preserved
assert hasattr(weather_tool, "output_schema")
# If MCP server provides output schema, it should be preserved
if weather_tool.output_schema is not None:
assert isinstance(weather_tool.output_schema, dict)
# Should have JSON Schema structure
if "properties" in weather_tool.output_schema:
assert "type" in weather_tool.output_schema
class TestMCPToolInvocation:
"""Test invoking MCP tools with complex schemas."""
def test_invoke_mcp_tool_with_nested_data(self, llama_stack_client, mcp_server_with_complex_schemas):
"""Test invoking MCP tool that expects nested object structure."""
if not isinstance(llama_stack_client, LlamaStackAsLibraryClient):
pytest.skip("Library client required for local MCP server")
test_toolgroup_id = "mcp::complex"
uri = mcp_server_with_complex_schemas["server_url"]
try:
llama_stack_client.toolgroups.unregister(toolgroup_id=test_toolgroup_id)
except Exception:
pass
llama_stack_client.toolgroups.register(
toolgroup_id=test_toolgroup_id,
provider_id="model-context-protocol",
mcp_endpoint=dict(uri=uri),
)
provider_data = {"mcp_headers": {uri: {"Authorization": f"Bearer {AUTH_TOKEN}"}}}
auth_headers = {
"X-LlamaStack-Provider-Data": json.dumps(provider_data),
}
# Invoke tool with complex nested data
result = llama_stack_client.tool_runtime.invoke_tool(
tool_name="process_order",
kwargs={
"order_data": {
"items": [{"name": "Widget", "quantity": 2}, {"name": "Gadget", "quantity": 1}],
"shipping": {"address": {"street": "123 Main St", "city": "San Francisco", "zipcode": "94102"}},
}
},
extra_headers=auth_headers,
)
# Should succeed without schema validation errors
assert result.content is not None
assert result.error_message is None
def test_invoke_with_flexible_schema(self, llama_stack_client, mcp_server_with_complex_schemas):
"""Test invoking tool with anyOf schema (flexible input)."""
if not isinstance(llama_stack_client, LlamaStackAsLibraryClient):
pytest.skip("Library client required for local MCP server")
test_toolgroup_id = "mcp::complex"
uri = mcp_server_with_complex_schemas["server_url"]
try:
llama_stack_client.toolgroups.unregister(toolgroup_id=test_toolgroup_id)
except Exception:
pass
llama_stack_client.toolgroups.register(
toolgroup_id=test_toolgroup_id,
provider_id="model-context-protocol",
mcp_endpoint=dict(uri=uri),
)
provider_data = {"mcp_headers": {uri: {"Authorization": f"Bearer {AUTH_TOKEN}"}}}
auth_headers = {
"X-LlamaStack-Provider-Data": json.dumps(provider_data),
}
# Test with email format
result_email = llama_stack_client.tool_runtime.invoke_tool(
tool_name="flexible_contact",
kwargs={"contact_info": "user@example.com"},
extra_headers=auth_headers,
)
assert result_email.error_message is None
# Test with phone format
result_phone = llama_stack_client.tool_runtime.invoke_tool(
tool_name="flexible_contact",
kwargs={"contact_info": "+15551234567"},
extra_headers=auth_headers,
)
assert result_phone.error_message is None
class TestAgentWithMCPTools:
"""Test agents using MCP tools with complex schemas."""
def test_agent_with_complex_mcp_tool(self, llama_stack_client, text_model_id, mcp_server_with_complex_schemas):
"""Test agent can use MCP tools with $ref/$defs schemas."""
if not isinstance(llama_stack_client, LlamaStackAsLibraryClient):
pytest.skip("Library client required for local MCP server")
from llama_stack_client import Agent
test_toolgroup_id = "mcp::complex"
uri = mcp_server_with_complex_schemas["server_url"]
try:
llama_stack_client.toolgroups.unregister(toolgroup_id=test_toolgroup_id)
except Exception:
pass
llama_stack_client.toolgroups.register(
toolgroup_id=test_toolgroup_id,
provider_id="model-context-protocol",
mcp_endpoint=dict(uri=uri),
)
provider_data = {"mcp_headers": {uri: {"Authorization": f"Bearer {AUTH_TOKEN}"}}}
auth_headers = {
"X-LlamaStack-Provider-Data": json.dumps(provider_data),
}
# Create agent with MCP tools
agent = Agent(
client=llama_stack_client,
model=text_model_id,
instructions="You are a helpful assistant that can process orders and book flights.",
tools=[test_toolgroup_id],
)
session_id = agent.create_session("test-session-complex")
# Ask agent to use a tool with complex schema
response = agent.create_turn(
session_id=session_id,
messages=[
{"role": "user", "content": "Process an order with 2 widgets going to 123 Main St, San Francisco"}
],
stream=False,
extra_headers=auth_headers,
)
steps = response.steps
# Verify agent was able to call the tool
# (The LLM should have been able to understand the schema and formulate a valid call)
tool_execution_steps = [s for s in steps if s.step_type == "tool_execution"]
# Agent might or might not call the tool depending on the model
# But if it does, there should be no errors
for step in tool_execution_steps:
if step.tool_responses:
for tool_response in step.tool_responses:
assert tool_response.content is not None
class TestSchemaValidation:
"""Test schema validation (future feature)."""
def test_invalid_input_rejected(self, llama_stack_client, mcp_server_with_complex_schemas):
"""Test that invalid input is rejected (if validation is implemented)."""
# This test documents expected behavior once we add input validation
# For now, it may pass invalid data through
if not isinstance(llama_stack_client, LlamaStackAsLibraryClient):
pytest.skip("Library client required for local MCP server")
test_toolgroup_id = "mcp::complex"
uri = mcp_server_with_complex_schemas["server_url"]
try:
llama_stack_client.toolgroups.unregister(toolgroup_id=test_toolgroup_id)
except Exception:
pass
llama_stack_client.toolgroups.register(
toolgroup_id=test_toolgroup_id,
provider_id="model-context-protocol",
mcp_endpoint=dict(uri=uri),
)
provider_data = {"mcp_headers": {uri: {"Authorization": f"Bearer {AUTH_TOKEN}"}}}
auth_headers = {
"X-LlamaStack-Provider-Data": json.dumps(provider_data),
}
# Try to invoke with completely wrong data type
# Once validation is added, this should raise an error
try:
llama_stack_client.tool_runtime.invoke_tool(
tool_name="process_order",
kwargs={"order_data": "this should be an object not a string"},
extra_headers=auth_headers,
)
# For now, this might succeed (no validation)
# After adding validation, we'd expect a ValidationError
except Exception:
# Expected once validation is implemented
pass
class TestOutputValidation:
"""Test output schema validation (future feature)."""
def test_output_matches_schema(self, llama_stack_client, mcp_server_with_output_schemas):
"""Test that tool output is validated against output_schema (if implemented)."""
if not isinstance(llama_stack_client, LlamaStackAsLibraryClient):
pytest.skip("Library client required for local MCP server")
test_toolgroup_id = "mcp::with_output"
uri = mcp_server_with_output_schemas["server_url"]
try:
llama_stack_client.toolgroups.unregister(toolgroup_id=test_toolgroup_id)
except Exception:
pass
llama_stack_client.toolgroups.register(
toolgroup_id=test_toolgroup_id,
provider_id="model-context-protocol",
mcp_endpoint=dict(uri=uri),
)
provider_data = {"mcp_headers": {uri: {"Authorization": f"Bearer {AUTH_TOKEN}"}}}
auth_headers = {
"X-LlamaStack-Provider-Data": json.dumps(provider_data),
}
# Invoke tool
result = llama_stack_client.tool_runtime.invoke_tool(
tool_name="get_weather",
kwargs={"location": "San Francisco"},
extra_headers=auth_headers,
)
# Tool should return valid output
assert result.error_message is None
assert result.content is not None
# Once output validation is implemented, the system would check
# that result.content matches the tool's output_schema

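The validation these placeholder tests anticipate could begin as a shallow check of `kwargs` against the tool's `input_schema`. A stdlib-only, illustrative sketch; a real implementation would more likely use a full JSON Schema validator such as the `jsonschema` package:

```python
# Hypothetical shallow validator: checks required keys and top-level
# property types only, returning a list of error strings.
def check_top_level_types(kwargs: dict, schema: dict) -> list[str]:
    errors = []
    type_map = {"object": dict, "array": list, "string": str,
                "integer": int, "number": (int, float), "boolean": bool}
    for name in schema.get("required", []):
        if name not in kwargs:
            errors.append(f"missing required argument: {name}")
    for name, value in kwargs.items():
        expected = schema.get("properties", {}).get(name, {}).get("type")
        py_type = type_map.get(expected)
        if py_type is not None and not isinstance(value, py_type):
            errors.append(f"{name}: expected {expected}")
    return errors

schema = {"type": "object",
          "properties": {"order_data": {"type": "object"}},
          "required": ["order_data"]}
# The string input from the test above would be flagged:
assert check_top_level_types({"order_data": "not an object"}, schema) == ["order_data: expected object"]
assert check_top_level_types({"order_data": {"items": []}}, schema) == []
```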

@@ -18,7 +0,6 @@ from llama_stack.apis.inference (
from llama_stack.models.llama.datatypes import (
BuiltinTool,
ToolDefinition,
ToolParamDefinition,
ToolPromptFormat,
)
from llama_stack.providers.utils.inference.prompt_adapter import (
@@ -75,12 +74,15 @@ async def test_system_custom_only():
ToolDefinition(
tool_name="custom1",
description="custom1 tool",
parameters={
"param1": ToolParamDefinition(
param_type="str",
description="param1 description",
required=True,
),
input_schema={
"type": "object",
"properties": {
"param1": {
"type": "string",
"description": "param1 description",
},
},
"required": ["param1"],
},
)
],
@@ -107,12 +109,15 @@ async def test_system_custom_and_builtin():
ToolDefinition(
tool_name="custom1",
description="custom1 tool",
parameters={
"param1": ToolParamDefinition(
param_type="str",
description="param1 description",
required=True,
),
input_schema={
"type": "object",
"properties": {
"param1": {
"type": "string",
"description": "param1 description",
},
},
"required": ["param1"],
},
),
],
@@ -148,12 +153,15 @@ async def test_completion_message_encoding():
ToolDefinition(
tool_name="custom1",
description="custom1 tool",
parameters={
"param1": ToolParamDefinition(
param_type="str",
description="param1 description",
required=True,
),
input_schema={
"type": "object",
"properties": {
"param1": {
"type": "string",
"description": "param1 description",
},
},
"required": ["param1"],
},
),
],
@@ -227,12 +235,15 @@ async def test_replace_system_message_behavior_custom_tools():
ToolDefinition(
tool_name="custom1",
description="custom1 tool",
parameters={
"param1": ToolParamDefinition(
param_type="str",
description="param1 description",
required=True,
),
input_schema={
"type": "object",
"properties": {
"param1": {
"type": "string",
"description": "param1 description",
},
},
"required": ["param1"],
},
),
],
@@ -264,12 +275,15 @@ async def test_replace_system_message_behavior_custom_tools_with_template():
ToolDefinition(
tool_name="custom1",
description="custom1 tool",
parameters={
"param1": ToolParamDefinition(
param_type="str",
description="param1 description",
required=True,
),
input_schema={
"type": "object",
"properties": {
"param1": {
"type": "string",
"description": "param1 description",
},
},
"required": ["param1"],
},
),
],


@@ -0,0 +1,381 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
"""
Unit tests for OpenAI compatibility tool conversion.
Tests convert_tooldef_to_openai_tool with new JSON Schema approach.
"""
from llama_stack.models.llama.datatypes import BuiltinTool, ToolDefinition
from llama_stack.providers.utils.inference.openai_compat import convert_tooldef_to_openai_tool
class TestSimpleSchemaConversion:
"""Test basic schema conversions to OpenAI format."""
def test_simple_tool_conversion(self):
"""Test conversion of simple tool with basic input schema."""
tool = ToolDefinition(
tool_name="get_weather",
description="Get weather information",
input_schema={
"type": "object",
"properties": {"location": {"type": "string", "description": "City name"}},
"required": ["location"],
},
)
result = convert_tooldef_to_openai_tool(tool)
# Check OpenAI structure
assert result["type"] == "function"
assert "function" in result
function = result["function"]
assert function["name"] == "get_weather"
assert function["description"] == "Get weather information"
# Check parameters are passed through
assert "parameters" in function
assert function["parameters"] == tool.input_schema
assert function["parameters"]["type"] == "object"
assert "location" in function["parameters"]["properties"]
def test_tool_without_description(self):
"""Test tool conversion without description."""
tool = ToolDefinition(tool_name="test_tool", input_schema={"type": "object", "properties": {}})
result = convert_tooldef_to_openai_tool(tool)
assert result["function"]["name"] == "test_tool"
assert "description" not in result["function"]
assert "parameters" in result["function"]
def test_builtin_tool_conversion(self):
"""Test conversion of BuiltinTool enum."""
tool = ToolDefinition(
tool_name=BuiltinTool.code_interpreter,
description="Run Python code",
input_schema={"type": "object", "properties": {"code": {"type": "string"}}},
)
result = convert_tooldef_to_openai_tool(tool)
# BuiltinTool should be converted to its value
assert result["function"]["name"] == "code_interpreter"
class TestComplexSchemaConversion:
"""Test conversion of complex JSON Schema features."""
def test_schema_with_refs_and_defs(self):
"""Test that $ref and $defs are passed through to OpenAI."""
tool = ToolDefinition(
tool_name="book_flight",
description="Book a flight",
input_schema={
"type": "object",
"properties": {
"flight": {"$ref": "#/$defs/FlightInfo"},
"passengers": {"type": "array", "items": {"$ref": "#/$defs/Passenger"}},
"payment": {"$ref": "#/$defs/Payment"},
},
"required": ["flight", "passengers", "payment"],
"$defs": {
"FlightInfo": {
"type": "object",
"properties": {
"from": {"type": "string", "description": "Departure airport"},
"to": {"type": "string", "description": "Arrival airport"},
"date": {"type": "string", "format": "date"},
},
"required": ["from", "to", "date"],
},
"Passenger": {
"type": "object",
"properties": {"name": {"type": "string"}, "age": {"type": "integer", "minimum": 0}},
"required": ["name", "age"],
},
"Payment": {
"type": "object",
"properties": {
"method": {"type": "string", "enum": ["credit_card", "debit_card"]},
"amount": {"type": "number", "minimum": 0},
},
},
},
},
)
result = convert_tooldef_to_openai_tool(tool)
params = result["function"]["parameters"]
# Verify $defs are preserved
assert "$defs" in params
assert "FlightInfo" in params["$defs"]
assert "Passenger" in params["$defs"]
assert "Payment" in params["$defs"]
# Verify $ref are preserved
assert params["properties"]["flight"]["$ref"] == "#/$defs/FlightInfo"
assert params["properties"]["passengers"]["items"]["$ref"] == "#/$defs/Passenger"
assert params["properties"]["payment"]["$ref"] == "#/$defs/Payment"
# Verify nested schema details are preserved
assert params["$defs"]["FlightInfo"]["properties"]["date"]["format"] == "date"
assert params["$defs"]["Passenger"]["properties"]["age"]["minimum"] == 0
assert params["$defs"]["Payment"]["properties"]["method"]["enum"] == ["credit_card", "debit_card"]
def test_anyof_schema_conversion(self):
"""Test conversion of anyOf schemas."""
tool = ToolDefinition(
tool_name="flexible_input",
input_schema={
"type": "object",
"properties": {
"contact": {
"anyOf": [
{"type": "string", "format": "email"},
{"type": "string", "pattern": "^\\+?[0-9]{10,15}$"},
],
"description": "Email or phone number",
}
},
},
)
result = convert_tooldef_to_openai_tool(tool)
contact_schema = result["function"]["parameters"]["properties"]["contact"]
assert "anyOf" in contact_schema
assert len(contact_schema["anyOf"]) == 2
assert contact_schema["anyOf"][0]["format"] == "email"
assert "pattern" in contact_schema["anyOf"][1]
def test_nested_objects_conversion(self):
"""Test conversion of deeply nested objects."""
tool = ToolDefinition(
tool_name="nested_data",
input_schema={
"type": "object",
"properties": {
"user": {
"type": "object",
"properties": {
"profile": {
"type": "object",
"properties": {
"name": {"type": "string"},
"settings": {
"type": "object",
"properties": {"theme": {"type": "string", "enum": ["light", "dark"]}},
},
},
}
},
}
},
},
)
result = convert_tooldef_to_openai_tool(tool)
# Navigate deep structure
user_schema = result["function"]["parameters"]["properties"]["user"]
profile_schema = user_schema["properties"]["profile"]
settings_schema = profile_schema["properties"]["settings"]
theme_schema = settings_schema["properties"]["theme"]
assert theme_schema["enum"] == ["light", "dark"]
def test_array_schemas_with_constraints(self):
"""Test conversion of array schemas with constraints."""
tool = ToolDefinition(
tool_name="list_processor",
input_schema={
"type": "object",
"properties": {
"items": {
"type": "array",
"items": {
"type": "object",
"properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
"required": ["id"],
},
"minItems": 1,
"maxItems": 100,
"uniqueItems": True,
}
},
},
)
result = convert_tooldef_to_openai_tool(tool)
items_schema = result["function"]["parameters"]["properties"]["items"]
assert items_schema["type"] == "array"
assert items_schema["minItems"] == 1
assert items_schema["maxItems"] == 100
assert items_schema["uniqueItems"] is True
assert items_schema["items"]["type"] == "object"
class TestOutputSchemaHandling:
"""Test that output_schema is correctly handled (or dropped) for OpenAI."""
def test_output_schema_is_dropped(self):
"""Test that output_schema is NOT included in OpenAI format (API limitation)."""
tool = ToolDefinition(
tool_name="calculator",
description="Perform calculation",
input_schema={"type": "object", "properties": {"x": {"type": "number"}, "y": {"type": "number"}}},
output_schema={"type": "object", "properties": {"result": {"type": "number"}}, "required": ["result"]},
)
result = convert_tooldef_to_openai_tool(tool)
# OpenAI doesn't support output schema
assert "outputSchema" not in result["function"]
assert "responseSchema" not in result["function"]
assert "output_schema" not in result["function"]
# But input schema should be present
assert "parameters" in result["function"]
assert result["function"]["parameters"] == tool.input_schema
def test_only_output_schema_no_input(self):
"""Test tool with only output_schema (unusual but valid)."""
tool = ToolDefinition(
tool_name="no_input_tool",
description="Tool with no inputs",
output_schema={"type": "object", "properties": {"timestamp": {"type": "string"}}},
)
result = convert_tooldef_to_openai_tool(tool)
        # Whether "parameters" is omitted or set to an empty object schema when
        # input_schema is None is an implementation detail; output_schema must
        # never leak through either way.
        assert "outputSchema" not in result["function"]
class TestEdgeCases:
"""Test edge cases and error conditions."""
def test_tool_with_no_schemas(self):
"""Test tool with neither input nor output schema."""
tool = ToolDefinition(tool_name="schemaless_tool", description="Tool without schemas")
result = convert_tooldef_to_openai_tool(tool)
assert result["function"]["name"] == "schemaless_tool"
assert result["function"]["description"] == "Tool without schemas"
        # Whether "parameters" is absent or an empty object schema here is an
        # implementation detail; this test only pins down name and description.
def test_empty_input_schema(self):
"""Test tool with empty object schema."""
tool = ToolDefinition(tool_name="no_params", input_schema={"type": "object", "properties": {}})
result = convert_tooldef_to_openai_tool(tool)
assert result["function"]["parameters"]["type"] == "object"
assert result["function"]["parameters"]["properties"] == {}
def test_schema_with_additional_properties(self):
"""Test that additionalProperties is preserved."""
tool = ToolDefinition(
tool_name="flexible_tool",
input_schema={
"type": "object",
"properties": {"known_field": {"type": "string"}},
"additionalProperties": True,
},
)
result = convert_tooldef_to_openai_tool(tool)
assert result["function"]["parameters"]["additionalProperties"] is True
def test_schema_with_pattern_properties(self):
"""Test that patternProperties is preserved."""
tool = ToolDefinition(
tool_name="pattern_tool",
input_schema={"type": "object", "patternProperties": {"^[a-z]+$": {"type": "string"}}},
)
result = convert_tooldef_to_openai_tool(tool)
assert "patternProperties" in result["function"]["parameters"]
def test_schema_identity(self):
"""Test that converted schema is identical to input (no lossy conversion)."""
original_schema = {
"type": "object",
"properties": {"complex": {"$ref": "#/$defs/Complex"}},
"$defs": {
"Complex": {
"type": "object",
"properties": {"nested": {"anyOf": [{"type": "string"}, {"type": "number"}]}},
}
},
"required": ["complex"],
"additionalProperties": False,
}
tool = ToolDefinition(tool_name="test", input_schema=original_schema)
result = convert_tooldef_to_openai_tool(tool)
# Converted parameters should be EXACTLY the same as input
assert result["function"]["parameters"] == original_schema
class TestConversionConsistency:
"""Test consistency across multiple conversions."""
def test_multiple_tools_with_shared_defs(self):
"""Test converting multiple tools that could share definitions."""
tool1 = ToolDefinition(
tool_name="tool1",
input_schema={
"type": "object",
"properties": {"data": {"$ref": "#/$defs/Data"}},
"$defs": {"Data": {"type": "object", "properties": {"x": {"type": "number"}}}},
},
)
tool2 = ToolDefinition(
tool_name="tool2",
input_schema={
"type": "object",
"properties": {"info": {"$ref": "#/$defs/Data"}},
"$defs": {"Data": {"type": "object", "properties": {"y": {"type": "string"}}}},
},
)
result1 = convert_tooldef_to_openai_tool(tool1)
result2 = convert_tooldef_to_openai_tool(tool2)
# Each tool maintains its own $defs independently
assert result1["function"]["parameters"]["$defs"]["Data"]["properties"]["x"]["type"] == "number"
assert result2["function"]["parameters"]["$defs"]["Data"]["properties"]["y"]["type"] == "string"
def test_conversion_is_pure(self):
"""Test that conversion doesn't modify the original tool."""
        import copy

        original_schema = {
            "type": "object",
            "properties": {"x": {"type": "string"}},
            "$defs": {"T": {"type": "number"}},
        }
        # Deep copy: a shallow .copy() shares the nested dicts, so an impure
        # converter mutating them would go undetected by the equality check below.
        tool = ToolDefinition(tool_name="test", input_schema=copy.deepcopy(original_schema))
# Convert
convert_tooldef_to_openai_tool(tool)
# Original tool should be unchanged
assert tool.input_schema == original_schema
assert "$defs" in tool.input_schema
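The purity test above guards against in-place mutation. A minimal sketch of the defensive-copy pattern that guarantees purity (an assumption for illustration; the real converter may simply never mutate, and the injected key here is hypothetical):

```python
import copy
from typing import Any


def convert_params_safely(input_schema: dict[str, Any]) -> dict[str, Any]:
    """Sketch: if a converter must post-process the schema, deep-copying
    first keeps the caller's ToolDefinition untouched."""
    params = copy.deepcopy(input_schema)  # never alias the caller's dict
    params.setdefault("additionalProperties", False)  # hypothetical tweak
    return params
```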

# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
"""
Unit tests for JSON Schema-based tool definitions.
Tests the new input_schema and output_schema fields.
"""
from pydantic import ValidationError
from llama_stack.apis.tools import ToolDef
from llama_stack.models.llama.datatypes import BuiltinTool, ToolDefinition
class TestToolDefValidation:
"""Test ToolDef validation with JSON Schema."""
def test_simple_input_schema(self):
"""Test ToolDef with simple input schema."""
tool = ToolDef(
name="get_weather",
description="Get weather information",
input_schema={
"type": "object",
"properties": {"location": {"type": "string", "description": "City name"}},
"required": ["location"],
},
)
assert tool.name == "get_weather"
assert tool.input_schema["type"] == "object"
assert "location" in tool.input_schema["properties"]
assert tool.output_schema is None
def test_input_and_output_schema(self):
"""Test ToolDef with both input and output schemas."""
tool = ToolDef(
name="calculate",
description="Perform calculation",
input_schema={
"type": "object",
"properties": {"x": {"type": "number"}, "y": {"type": "number"}},
"required": ["x", "y"],
},
output_schema={"type": "object", "properties": {"result": {"type": "number"}}, "required": ["result"]},
)
assert tool.input_schema is not None
assert tool.output_schema is not None
assert "result" in tool.output_schema["properties"]
def test_schema_with_refs_and_defs(self):
"""Test that $ref and $defs are preserved in schemas."""
tool = ToolDef(
name="book_flight",
description="Book a flight",
input_schema={
"type": "object",
"properties": {
"flight": {"$ref": "#/$defs/FlightInfo"},
"passengers": {"type": "array", "items": {"$ref": "#/$defs/Passenger"}},
},
"$defs": {
"FlightInfo": {
"type": "object",
"properties": {"from": {"type": "string"}, "to": {"type": "string"}},
},
"Passenger": {
"type": "object",
"properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
},
},
},
)
# Verify $defs are preserved
assert "$defs" in tool.input_schema
assert "FlightInfo" in tool.input_schema["$defs"]
assert "Passenger" in tool.input_schema["$defs"]
# Verify $ref are preserved
assert tool.input_schema["properties"]["flight"]["$ref"] == "#/$defs/FlightInfo"
assert tool.input_schema["properties"]["passengers"]["items"]["$ref"] == "#/$defs/Passenger"
def test_output_schema_with_refs(self):
"""Test that output_schema also supports $ref and $defs."""
tool = ToolDef(
name="search",
description="Search for items",
input_schema={"type": "object", "properties": {"query": {"type": "string"}}},
output_schema={
"type": "object",
"properties": {"results": {"type": "array", "items": {"$ref": "#/$defs/SearchResult"}}},
"$defs": {
"SearchResult": {
"type": "object",
"properties": {"title": {"type": "string"}, "score": {"type": "number"}},
}
},
},
)
assert "$defs" in tool.output_schema
assert "SearchResult" in tool.output_schema["$defs"]
def test_complex_json_schema_features(self):
"""Test various JSON Schema features are preserved."""
tool = ToolDef(
name="complex_tool",
description="Tool with complex schema",
input_schema={
"type": "object",
"properties": {
# anyOf
"contact": {
"anyOf": [
{"type": "string", "format": "email"},
{"type": "string", "pattern": "^\\+?[0-9]{10,15}$"},
]
},
# enum
"status": {"type": "string", "enum": ["pending", "approved", "rejected"]},
# nested objects
"address": {
"type": "object",
"properties": {
"street": {"type": "string"},
"city": {"type": "string"},
"zipcode": {"type": "string", "pattern": "^[0-9]{5}$"},
},
"required": ["street", "city"],
},
# array with constraints
"tags": {
"type": "array",
"items": {"type": "string"},
"minItems": 1,
"maxItems": 10,
"uniqueItems": True,
},
},
},
)
# Verify anyOf
assert "anyOf" in tool.input_schema["properties"]["contact"]
# Verify enum
assert tool.input_schema["properties"]["status"]["enum"] == ["pending", "approved", "rejected"]
# Verify nested object
assert tool.input_schema["properties"]["address"]["type"] == "object"
assert "zipcode" in tool.input_schema["properties"]["address"]["properties"]
# Verify array constraints
tags_schema = tool.input_schema["properties"]["tags"]
assert tags_schema["minItems"] == 1
assert tags_schema["maxItems"] == 10
assert tags_schema["uniqueItems"] is True
def test_invalid_json_schema_raises_error(self):
"""Test that invalid JSON Schema raises validation error."""
        # TODO: ToolDef does not yet validate that input_schema is a valid
        # JSON Schema; Pydantic accepts any dict, so this construction
        # currently succeeds. Once validation is added, tighten this test to
        # assert that ValidationError is raised.
try:
ToolDef(
name="bad_tool",
input_schema={
"type": "invalid_type", # Not a valid JSON Schema type
"properties": "not_an_object", # Should be an object
},
)
# For now this passes, but shouldn't after we add validation
except ValidationError:
pass # Expected once validation is added
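One possible shape for the validation the TODO above refers to, as a deliberately tiny structural check (an illustrative sketch only, not full meta-schema validation; a real implementation would more likely validate against the JSON Schema meta-schema):

```python
# The seven primitive type names JSON Schema allows in "type".
_JSON_SCHEMA_TYPES = {"object", "array", "string", "number", "integer", "boolean", "null"}


def looks_like_json_schema(schema: dict) -> bool:
    """Sketch: reject the two malformations the placeholder test constructs --
    an unknown "type" name and a non-object "properties" value."""
    declared = schema.get("type")
    if isinstance(declared, str) and declared not in _JSON_SCHEMA_TYPES:
        return False
    props = schema.get("properties")
    if props is not None and not isinstance(props, dict):
        return False
    return True
```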
class TestToolDefinitionValidation:
"""Test ToolDefinition (internal) validation with JSON Schema."""
def test_simple_tool_definition(self):
"""Test ToolDefinition with simple schema."""
tool = ToolDefinition(
tool_name="get_time",
description="Get current time",
input_schema={"type": "object", "properties": {"timezone": {"type": "string"}}},
)
assert tool.tool_name == "get_time"
assert tool.input_schema is not None
def test_builtin_tool_with_schema(self):
"""Test ToolDefinition with BuiltinTool enum."""
tool = ToolDefinition(
tool_name=BuiltinTool.code_interpreter,
description="Run Python code",
input_schema={"type": "object", "properties": {"code": {"type": "string"}}, "required": ["code"]},
output_schema={"type": "object", "properties": {"output": {"type": "string"}, "error": {"type": "string"}}},
)
assert isinstance(tool.tool_name, BuiltinTool)
assert tool.input_schema is not None
assert tool.output_schema is not None
def test_tool_definition_with_refs(self):
"""Test ToolDefinition preserves $ref/$defs."""
tool = ToolDefinition(
tool_name="process_data",
input_schema={
"type": "object",
"properties": {"data": {"$ref": "#/$defs/DataObject"}},
"$defs": {
"DataObject": {
"type": "object",
"properties": {
"id": {"type": "integer"},
"values": {"type": "array", "items": {"type": "number"}},
},
}
},
},
)
assert "$defs" in tool.input_schema
assert tool.input_schema["properties"]["data"]["$ref"] == "#/$defs/DataObject"
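Since the tests above only verify that local `#/$defs/...` pointers are preserved, here is a helper sketch of how a consumer could resolve them (an assumption for illustration, not part of `ToolDefinition`; local refs only, and JSON Pointer escaping of `~` and `/` is ignored):

```python
from typing import Any


def resolve_local_ref(root_schema: dict[str, Any], ref: str) -> Any:
    """Resolve a local "#/..."-style $ref against the schema it appears in."""
    if not ref.startswith("#/"):
        raise ValueError(f"only local refs supported, got {ref!r}")
    node: Any = root_schema
    for part in ref[2:].split("/"):
        node = node[part]  # walk one JSON-pointer segment at a time
    return node
```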
class TestSchemaEquivalence:
"""Test that schemas remain unchanged through serialization."""
def test_schema_roundtrip(self):
"""Test that schemas survive model_dump/model_validate roundtrip."""
original = ToolDef(
name="test",
input_schema={
"type": "object",
"properties": {"x": {"$ref": "#/$defs/X"}},
"$defs": {"X": {"type": "string"}},
},
)
# Serialize and deserialize
dumped = original.model_dump()
restored = ToolDef(**dumped)
# Schemas should be identical
assert restored.input_schema == original.input_schema
assert "$defs" in restored.input_schema
assert restored.input_schema["properties"]["x"]["$ref"] == "#/$defs/X"
def test_json_serialization(self):
"""Test JSON serialization preserves schema."""
import json
tool = ToolDef(
name="test",
input_schema={
"type": "object",
"properties": {"a": {"type": "string"}},
"$defs": {"T": {"type": "number"}},
},
output_schema={"type": "object", "properties": {"b": {"$ref": "#/$defs/T"}}},
)
# Serialize to JSON and back
json_str = tool.model_dump_json()
parsed = json.loads(json_str)
restored = ToolDef(**parsed)
assert restored.input_schema == tool.input_schema
assert restored.output_schema == tool.output_schema
assert "$defs" in restored.input_schema
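The roundtrip tests above hold because, for plain dict-valued fields like `input_schema`, serializing and re-validating reduces to ordinary JSON encode/decode, which is lossless for JSON-native values. In particular, `$`-prefixed keys such as `$defs` and `$ref` are ordinary strings and need no special casing:

```python
import json

# Any JSON-native dict survives a dumps/loads cycle unchanged, "$" keys included.
schema = {
    "type": "object",
    "properties": {"a": {"$ref": "#/$defs/T"}},
    "$defs": {"T": {"type": "number"}},
}
restored = json.loads(json.dumps(schema))
assert restored == schema
```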
class TestBackwardsCompatibility:
"""Test handling of legacy code patterns."""
def test_none_schemas(self):
"""Test tools with no schemas (legacy case)."""
tool = ToolDef(name="legacy_tool", description="Tool without schemas", input_schema=None, output_schema=None)
assert tool.input_schema is None
assert tool.output_schema is None
def test_metadata_preserved(self):
"""Test that metadata field still works."""
tool = ToolDef(
name="test", input_schema={"type": "object"}, metadata={"endpoint": "http://example.com", "version": "1.0"}
)
assert tool.metadata["endpoint"] == "http://example.com"
assert tool.metadata["version"] == "1.0"