This is a sweeping change that cleans up some gunk around our "Tool" definitions.

First, we had two types, `Tool` and `ToolDef`. The former was a "Resource" type for the registry, but we stopped registering individual tools in the Registry long ago (only ToolGroups are registered). The latter was for specifying tools for the Agents API. This PR removes the former and adds an optional `toolgroup_id` field to the latter.

Second, as pointed out by @bbrowning in https://github.com/llamastack/llama-stack/pull/3003#issuecomment-3245270132, we were doing a lossy conversion from the full JSON schema in the MCP tool specification into our ToolDefinition before sending it to the model. There is no need for this: we do not execute anything ourselves, but merely pass the tool definition through to the chat completions API, which supports full JSON schemas. Because we did this conversion (and did it poorly), we ran into limitations such as array items not being supported and $refs not being resolved. To fix this, we replaced the `parameters` field with `{ input_schema, output_schema }`, both of which can be full-blown JSON schemas.

Finally, some types in our llama-related chat format conversion needed cleanup; we take this opportunity to clean those up as well.

This PR is a substantial breaking change to the API. However, given our window for introducing breaking changes, this suits us just fine. I will land a concurrent `llama-stack-client` change as well, since the API shapes are changing.
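For illustration, here is a minimal sketch of the reshaped `ToolDef`, assuming pydantic-style models as used throughout llama-stack. The `toolgroup_id`, `input_schema`, and `output_schema` fields come from this PR; the exact base class, defaults, and comments below are illustrative rather than the landed definition.

from typing import Any

from pydantic import BaseModel


class ToolDef(BaseModel):
    # Sketch only -- not the exact definition landed by this PR.
    name: str
    description: str | None = None
    # Full JSON Schemas, passed through to the chat completions API
    # verbatim (no lossy conversion dropping array items, $refs, etc.).
    input_schema: dict[str, Any] | None = None
    output_schema: dict[str, Any] | None = None
    # Optional back-pointer, now that the standalone `Tool` resource is gone.
    toolgroup_id: str | None = None

With this shape, an MCP tool's inputSchema can be attached unmodified, e.g. `ToolDef(name="get_boiling_point", input_schema=mcp_tool.inputSchema)`.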
{
  "request": {
    "method": "POST",
    "url": "http://localhost:11434/api/generate",
    "headers": {},
    "body": {
      "model": "llama3.2:3b-instruct-fp16",
      "raw": true,
      "prompt": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant. You have access to functions, but you should only use them if they are required.\nYou are an expert in composing functions. You are given a question and a set of possible functions.\nBased on the question, you may or may not need to make one function/tool call to achieve the purpose.\n\nIf you decide to invoke any of the function(s), you MUST put it in the format of [func_name1(params_name1=params_value1, params_name2=params_value2...), func_name2(params)]\nIf you decide to invoke a function, you SHOULD NOT include any other text in the response. besides the function call in the above format.\nFor a boolean parameter, be sure to use `True` or `False` (capitalized) for the value.\n\n\nHere is a list of functions in JSON format that you can invoke.\n\n[\n    {\n        \"name\": \"get_boiling_point\",\n        \"description\": \"Returns the boiling point of a liquid in Celcius or Fahrenheit.\",\n        \"parameters\": {\n            \"type\": \"dict\",\n            \"required\": [\"liquid_name\"],\n            \"properties\": {\n                \"liquid_name\": {\n                    \"type\": \"str\",\n                    \"description\": \"The name of the liquid\"\n                },\n                \"celcius\": {\n                    \"type\": \"bool\",\n                    \"description\": \"Whether to return the boiling point in Celcius\"\n                }\n            }\n        }\n    }\n]\n\nYou can answer general questions or invoke tools when necessary.\nIn addition to tool calls, you should also augment your responses by using the tool outputs.\nYou are a helpful assistant<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nWhat is the boiling point of the liquid polyjuice in celsius?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n[get_boiling_point(liquid_name=\"polyjuice\", celcius=True)]<|eot_id|><|start_header_id|>ipython<|end_header_id|>\n\n-100<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
      "options": {
        "temperature": 0.0001,
        "top_p": 0.9
      },
      "stream": true
    },
    "endpoint": "/api/generate",
    "model": "llama3.2:3b-instruct-fp16"
  },
  "response": {
    "body": [
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-10-02T02:54:55.86924Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": "The",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-10-02T02:54:55.911521Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": " boiling",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-10-02T02:54:55.95324Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": " point",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-10-02T02:54:55.996666Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": " of",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-10-02T02:54:56.038076Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": " poly",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-10-02T02:54:56.079306Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": "ju",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-10-02T02:54:56.121626Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": "ice",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-10-02T02:54:56.162658Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": " is",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-10-02T02:54:56.203804Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": " -",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-10-02T02:54:56.245419Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": "100",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-10-02T02:54:56.286364Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": "\u00b0C",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-10-02T02:54:56.327683Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": ".",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-10-02T02:54:56.369528Z",
          "done": true,
          "done_reason": "stop",
          "total_duration": 708500166,
          "load_duration": 138748458,
          "prompt_eval_count": 392,
          "prompt_eval_duration": 68099125,
          "eval_count": 13,
          "eval_duration": 500834417,
          "response": "",
          "thinking": null,
          "context": null
        }
      }
    ],
    "is_streaming": true
  }
}
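The recording above captures one streamed call to Ollama's /api/generate endpoint; concatenating the `response` field of each chunk reproduces the full completion "The boiling point of polyjuice is -100\u00b0C." Below is a rough sketch of the equivalent live call with the ollama Python client. The prompt is abbreviated to the bare user question here, whereas the recording sends the fully templated Llama 3.2 prompt with raw=True.

import ollama

# Mirrors the recorded request (model and sampling options taken from it).
stream = ollama.generate(
    model="llama3.2:3b-instruct-fp16",
    prompt="What is the boiling point of the liquid polyjuice in celsius?",
    options={"temperature": 0.0001, "top_p": 0.9},
    stream=True,
)

# Each chunk is an ollama._types.GenerateResponse. Intermediate chunks carry
# a token fragment in `response` with done=False; the final chunk has
# done=True, done_reason="stop", and the eval/timing counters filled in.
text = "".join(chunk.response for chunk in stream)
print(text)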