Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-08-21 17:33:12 +00:00)
# What does this PR do?

Recording tests has become a nightmare. This is the first part of making that process simpler by making it _less_ automatic; I tried to be too clever earlier. It simplifies the record-integration-tests workflow to use workflow dispatch inputs instead of PR labels. No more opaque stuff: just go to the GitHub UI and run the workflow with inputs. I will soon add a helper script for this as well.

Other changes to help re-run just the small set of things you need to re-record:
- Replaces the `test-types` JSON array parameter with a more intuitive `test-subdirs` comma-separated list. The JSON array was only there to feed the matrix.
- Adds a new `test-pattern` parameter to allow filtering tests with pytest's `-k` option (see the sketch after the test plan).

## Test Plan

Note that this PR is in a fork, not the source repository.
- Replay tests on this PR are green.
- Manually [ran](1699856292) the replay workflow with a test-subdir and test-pattern filter; it worked.
- Manually [ran](4819508034) the **record** workflow with a simple pattern; it worked and updated _this_ PR.

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
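As a rough illustration, here is a minimal sketch of how the `test-subdirs` and `test-pattern` inputs could be turned into a pytest invocation. The input names come from this PR; the directory layout, defaults, and helper function below are assumptions for illustration, not the actual workflow code.

```python
# Hypothetical helper: build a pytest command from the two dispatch inputs.
# `test-subdirs` is a comma-separated list of subdirectories under the
# integration test tree (assumed layout); `test-pattern` is passed straight
# through to pytest's -k filter.
import shlex


def build_pytest_command(test_subdirs: str, test_pattern: str = "") -> list[str]:
    # "inference, agents" -> ["tests/integration/inference", "tests/integration/agents"]
    subdirs = [s.strip() for s in test_subdirs.split(",") if s.strip()]
    cmd = ["pytest", "-v"] + [f"tests/integration/{s}" for s in subdirs]
    if test_pattern:
        cmd += ["-k", test_pattern]
    return cmd


if __name__ == "__main__":
    cmd = build_pytest_command("inference", "weather and not streaming")
    print(shlex.join(cmd))  # hand this to subprocess.run to actually execute
```

A plain comma-separated string plus a raw `-k` expression keeps the dispatch form easy to fill in by hand, which is the point of dropping the JSON array.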
221 lines · 8.1 KiB · JSON
{
  "request": {
    "method": "POST",
    "url": "http://localhost:11434/api/generate",
    "headers": {},
    "body": {
      "model": "llama3.2:3b-instruct-fp16",
      "raw": true,
      "prompt": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant. You have access to functions, but you should only use them if they are required.\nYou are an expert in composing functions. You are given a question and a set of possible functions.\nBased on the question, you may or may not need to make one function/tool call to achieve the purpose.\n\nIf you decide to invoke any of the function(s), you MUST put it in the format of [func_name1(params_name1=params_value1, params_name2=params_value2...), func_name2(params)]\nIf you decide to invoke a function, you SHOULD NOT include any other text in the response. besides the function call in the above format.\nFor a boolean parameter, be sure to use `True` or `False` (capitalized) for the value.\n\n\nHere is a list of functions in JSON format that you can invoke.\n\n[\n {\n \"name\": \"get_weather\",\n \"description\": \"Get the current weather\",\n \"parameters\": {\n \"type\": \"dict\",\n \"required\": [\"location\"],\n \"properties\": {\n \"location\": {\n \"type\": \"string\",\n \"description\": \"The city and state (both required), e.g. San Francisco, CA.\"\n }\n }\n }\n }\n]\n\nYou can answer general questions or invoke tools when necessary.\nIn addition to tool calls, you should also augment your responses by using the tool outputs.\nPretend you are a weather assistant.\nYou MUST use one of the provided functions/tools to answer the user query.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nWhat's the weather like in San Francisco, CA?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
      "options": {
        "temperature": 0.0
      },
      "stream": true
    },
    "endpoint": "/api/generate",
    "model": "llama3.2:3b-instruct-fp16"
  },
  "response": {
    "body": [
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-08-15T20:24:49.18651486Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": "[",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-08-15T20:24:49.370611348Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": "get",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-08-15T20:24:49.557000029Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": "_weather",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-08-15T20:24:49.746777116Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": "(location",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-08-15T20:24:49.942233333Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": "=\"",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-08-15T20:24:50.126788846Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": "San",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-08-15T20:24:50.311346131Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": " Francisco",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-08-15T20:24:50.501507173Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": ",",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-08-15T20:24:50.692296777Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": " CA",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-08-15T20:24:50.878846539Z",
          "done": false,
          "done_reason": null,
          "total_duration": null,
          "load_duration": null,
          "prompt_eval_count": null,
          "prompt_eval_duration": null,
          "eval_count": null,
          "eval_duration": null,
          "response": "\")]",
          "thinking": null,
          "context": null
        }
      },
      {
        "__type__": "ollama._types.GenerateResponse",
        "__data__": {
          "model": "llama3.2:3b-instruct-fp16",
          "created_at": "2025-08-15T20:24:51.063200561Z",
          "done": true,
          "done_reason": "stop",
          "total_duration": 33982453650,
          "load_duration": 2909001805,
          "prompt_eval_count": 341,
          "prompt_eval_duration": 29194357307,
          "eval_count": 11,
          "eval_duration": 1878247732,
          "response": "",
          "thinking": null,
          "context": null
        }
      }
    ],
    "is_streaming": true
  }
}
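For reference, the streamed `response` fragments in this recording concatenate to the model's raw tool call. Below is a minimal sketch of loading such a recording and joining the chunks; the file name is hypothetical, but the JSON layout matches the recording above.

```python
# Join the streamed chunks from a recording shaped like the one above.
import json

with open("recording.json") as f:  # hypothetical file name
    record = json.load(f)

chunks = record["response"]["body"]
text = "".join(c["__data__"]["response"] for c in chunks)
print(text)  # [get_weather(location="San Francisco, CA")]

final = chunks[-1]["__data__"]
print(final["done_reason"], final["eval_count"])  # stop 11
```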