llama-stack-mirror/tests/integration/recordings/responses/cb3df2a1dc22.json
Ashwin Bharambe c3d3a0b833
feat(tests): auto-merge all model list responses and unify recordings (#3320)
Previously, one needed to specify record-replay-related environment variables to run integration tests. We could not use defaults because integration tests may run against Ollama instances serving different models. For example, text and vision tests needed separate Ollama instances, because a single instance typically cannot serve both models under the standard CI worker configuration on GitHub. As a result, the `client.list()` response returned by the Ollama client would differ between these runs, and we would end up overwriting responses.

This PR "solves" it by adding a small amount of complexity -- we store
model list responses specially, keyed by the hashes of the models they
return. At replay time, we merge all of them and pretend that we have
the union of all models available.
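
A minimal sketch of the idea in Python, assuming a hypothetical `RECORDINGS_DIR` layout and helper names; the real implementation lives in llama-stack's record-replay machinery:

```python
import hashlib
import json
from pathlib import Path

# Assumed location of the recorded responses; illustrative only.
RECORDINGS_DIR = Path("tests/integration/recordings/responses")

def record_model_list(models: list[dict]) -> None:
    """Store a model-list response keyed by a hash of the model IDs it
    contains, so recordings from differently-configured Ollama instances
    don't overwrite each other."""
    ids = sorted(m["id"] for m in models)
    key = hashlib.sha256(json.dumps(ids).encode()).hexdigest()[:12]
    path = RECORDINGS_DIR / f"models-{key}.json"
    path.write_text(json.dumps(models, indent=2))

def replay_model_list() -> list[dict]:
    """Merge every recorded model-list response and return the union,
    pretending all models were available from a single instance."""
    merged: dict[str, dict] = {}
    for path in RECORDINGS_DIR.glob("models-*.json"):
        for model in json.loads(path.read_text()):
            merged[model["id"]] = model  # de-duplicate by model ID
    return list(merged.values())
```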

## Test Plan

Re-recorded all the tests using `scripts/integration-tests.sh
--inference-mode record`, including the vision tests.
2025-09-03 11:33:03 -07:00


{
  "request": {
    "method": "POST",
    "url": "http://0.0.0.0:11434/v1/v1/chat/completions",
    "headers": {},
    "body": {
      "model": "llama3.2:3b-instruct-fp16",
      "messages": [
        {
          "role": "user",
          "content": "Test OpenAI telemetry creation"
        }
      ],
      "stream": false
    },
    "endpoint": "/v1/chat/completions",
    "model": "llama3.2:3b-instruct-fp16"
  },
  "response": {
    "body": {
      "__type__": "openai.types.chat.chat_completion.ChatCompletion",
      "__data__": {
        "id": "chatcmpl-271",
        "choices": [
          {
            "finish_reason": "stop",
            "index": 0,
            "logprobs": null,
            "message": {
"content": "# OpenAI Telemetry Creation Testing\n\nThis guide provides a test environment for creating and analyzing telemetries using the OpenAI API.\n\n## Prerequisites\n\n- Python 3.8 or higher\n- `requests` library\n- An OpenAI API key (create an account on the [OpenAI website](https://openai.com/))\n\n## Step 1: Install Required Libraries\n\nFirst, install the required libraries by running the following command in your terminal:\n\n```bash\npip install requests\n```\n\n## Step 2: Create a Telemetry Object\n\nCreate a new Python file (e.g., `telemetry.py`) and import the required library. Define a telemetry object with the relevant data:\n\n```python\n# Import the requests library\nimport requests\n\nclass Telemetry:\n def __init__(self, api_key):\n \"\"\"\n Initialize the telemetry object.\n\n Args:\n api_key (str): The OpenAI API key.\n \"\"\"\n\n self.api_key = api_key\n\n def create_telemetry(self, id, data):\n \"\"\"\n Create a new telemetry entry using the OpenAI API.\n\n Args:\n id (str): The ID of the model or dataset.\n data (dict): The telemetry data to be created.\n\n Returns:\n dict: The response from the OpenAI API.\n\n Raises:\n ValueError: If the request fails.\n \"\"\"\n\n url = f\"https://api.openai.com/v1/models/{id}/telemetry\"\n\n headers = {\n \"Authorization\": self.api_key,\n \"Content-Type\": \"application/json\",\n }\n\n telemetry_data = {\"events\": data}\n\n response = requests.post(url, json=telemetry_data, headers=headers)\n\n if not response.ok:\n raise ValueError(\"Failed to create telemetry\")\n\n return response.json()\n```\n\n## Step 3: Usage Example\n\nHere's an example usage of the `Telemetry` class:\n\n```python\n# Create a new Telemetry object with your OpenAI API key\ntelemetry = Telemetry(\n \"YOUR_OPENAI_API_KEY_HERE\"\n)\n\n# Define the telemetry data\ndata = {\"event\": \"example_event\"}\n\n# Create a new telemetry entry\nid = \"my_model_id\" # Replace with your model or dataset ID\n\ntry:\n result = telemetry.create_telemetry(id, data)\n print(result)\nexcept ValueError as e:\n print(e)\n```\n\nThis code creates a new `Telemetry` object, defines some sample telemetry data, and uses the `create_telemetry` method to create a new telemetry entry. The response from the OpenAI API is printed out.\n\nNote: Replace `\"YOUR_OPENAI_API_KEY_HERE\"` with your actual OpenAI API key.\n\n## Conclusion\n\nThis guide provides a basic example of how to create telemetries using the OpenAI API. You can modify the code and implement additional features as needed for your project.\n\nStay updated on our latest tutorials and guides:\n\n* [Check out our Discord channel](link): https://discord.gg/openai-exists\n\nHappy coding!",
"refusal": null,
"role": "assistant",
"annotations": null,
"audio": null,
"function_call": null,
"tool_calls": null
}
}
],
"created": 1756921299,
"model": "llama3.2:3b-instruct-fp16",
"object": "chat.completion",
"service_tier": null,
"system_fingerprint": "fp_ollama",
"usage": {
"completion_tokens": 633,
"prompt_tokens": 30,
"total_tokens": 663,
"completion_tokens_details": null,
"prompt_tokens_details": null
}
}
},
"is_streaming": false
}
}
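
The recording's filename (`cb3df2a1dc22.json`) looks like a truncated hash of the request, which is how replay can locate the right response without making a network call. A rough sketch of that lookup, with the hash inputs and truncation length as assumptions rather than the library's actual scheme:

```python
import hashlib
import json
from pathlib import Path

# Assumed location of the recorded responses; illustrative only.
RECORDINGS_DIR = Path("tests/integration/recordings/responses")

def request_key(method: str, url: str, body: dict) -> str:
    """Derive a stable key from the request. The exact fields hashed and
    the 12-character truncation are assumptions for illustration."""
    payload = json.dumps({"method": method, "url": url, "body": body}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

def replay(method: str, url: str, body: dict) -> dict:
    """Return the recorded response body for a request, or raise if the
    request was never recorded."""
    path = RECORDINGS_DIR / f"{request_key(method, url, body)}.json"
    if not path.exists():
        raise FileNotFoundError(f"No recording for request: {path.name}")
    return json.loads(path.read_text())["response"]["body"]
```

Because the request body (model, messages, `stream` flag) feeds the key, any change to the test's prompt produces a new recording file rather than silently reusing a stale one.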