llama-stack-mirror/tests/integration/recordings/responses/53365c6ae29c.json
Matthew Farrellee 65e01b5684 feat: together now supports base64 embedding encoding (#3559)
# What does this PR do?

Use Together's new base64 embedding encoding support.
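
For reference, here is a minimal sketch of what this enables, assuming an OpenAI-compatible client pointed at Together's endpoint (the API key placeholder and embedding model name are illustrative, not taken from this PR). With `encoding_format="base64"`, the server returns each embedding as a base64 string of packed float32 values instead of a JSON list of floats:

```python
# Hedged sketch, not the code in this PR: request base64-encoded embeddings
# from Together through the OpenAI-compatible embeddings API.
import base64
import struct

from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key="YOUR_TOGETHER_API_KEY",  # illustrative placeholder
)

resp = client.embeddings.create(
    model="togethercomputer/m2-bert-80M-32k-retrieval",  # illustrative model
    input="Test OpenAI telemetry creation",
    encoding_format="base64",  # the capability this PR starts relying on
)

# When base64 is requested explicitly, `embedding` arrives as a base64
# string of packed little-endian float32 values, not a list of floats.
raw = base64.b64decode(resp.data[0].embedding)
vector = struct.unpack(f"<{len(raw) // 4}f", raw)
print(len(vector), vector[:4])
```

Base64 transport avoids serializing thousands of floats as JSON text, so responses are smaller on the wire.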

## Test Plan

Recordings for:

```
./scripts/integration-tests.sh --stack-config server:ci-tests --suite base --setup together --subdirs inference --pattern openai
```
2025-09-26 16:05:52 +02:00


{
  "request": {
    "method": "POST",
    "url": "https://api.together.xyz/v1/v1/chat/completions",
    "headers": {},
    "body": {
      "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo-Free",
      "messages": [
        {
          "role": "user",
          "content": "Test OpenAI telemetry creation"
        }
      ],
      "stream": false
    },
    "endpoint": "/v1/chat/completions",
    "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo-Free"
  },
  "response": {
    "body": {
      "__type__": "openai.types.chat.chat_completion.ChatCompletion",
      "__data__": {
        "id": "oCfxQwj-4Yz4kd-984c2bd8ba58901d",
        "choices": [
          {
            "finish_reason": "stop",
            "index": 0,
            "logprobs": null,
            "message": {
              "content": "To test OpenAI telemetry creation, let's break down the process into steps that can help ensure telemetry data is correctly created and utilized. Telemetry in the context of AI, such as OpenAI, typically refers to the automated process of collecting, transmitting, and analyzing data from remote sources, in this case, user interactions with AI models. Here\u2019s how you might approach testing telemetry creation:\n\n### 1. **Define Telemetry Requirements**\n- **Identify Data Points:** Determine what data points are crucial for your analysis. This could include user input, model responses, interaction timestamps, user IDs, etc.\n- **Purpose of Telemetry:** Understand why you are collecting this data. Is it for model improvement, user experience enhancement, or security monitoring?\n\n### 2. **Implement Data Collection**\n- **API Integration:** If you're using OpenAI's API, ensure you have the necessary permissions and access to collect the required data. OpenAI provides APIs for interacting with their models, and you can collect telemetry data through these interactions.\n- **Logging Mechanisms:** Implement logging mechanisms in your application to capture relevant data. This could involve server-side logging for API calls or client-side logging for user interactions.\n\n### 3. **Data Transmission**\n- **Secure Data Transfer:** Ensure that the data collected is transmitted securely to your analytics or storage system. HTTPS is a standard for secure data transmission over the internet.\n- **Data Format:** Decide on a data format for transmission. JSON is commonly used due to its readability and ease of parsing.\n\n### 4. **Data Analysis**\n- **Analytics Tools:** Utilize appropriate analytics tools or platforms to process and analyze the collected data. This could range from simple statistical analysis to complex machine learning models.\n- **Visualization:** Use data visualization techniques to represent the insights gained from the telemetry data. This helps in understanding trends, patterns, and areas for improvement.\n\n### 5. **Testing the Telemetry System**\n- **Mock Data:** Test your telemetry system with mock data to ensure it can collect, transmit, and analyze data correctly without affecting real users.\n- **Real-World Testing:** Once the system seems to work with mock data, gradually introduce it to real-world scenarios, starting with a small user base to identify and fix any issues that arise.\n- **Feedback Loop:** Implement a feedback loop to continuously monitor the telemetry system's performance and the insights it provides, making adjustments as necessary.\n\n### Example of Testing with Mock Data\nIf you're testing an OpenAI model's interaction telemetry, you might simulate user queries and model responses, then verify that this data is correctly logged, transmitted, and analyzed. For instance, in Python, you might use a mock library to simulate API calls and then assert that the expected data is collected and processed.\n\n```python\nimport unittest\nfrom unittest.mock import Mock\nimport json\n\nclass TestTelemetry(unittest.TestCase):\n\n    def test_telemetry_data_collection(self):\n        # Mock user interaction\n        user_input = \"Hello, how are you?\"\n        model_response = \"I'm doing well, thanks.\"\n\n        # Mock the API call\n        api_call_mock = Mock(return_value=model_response)\n\n        # Simulate the interaction\n        response = api_call_mock(user_input)\n\n        # Verify telemetry data is collected correctly\n        expected_data = {\n            'user_input': user_input,\n            'model_response': model_response,\n            # Other relevant data points\n        }\n\n        # Assuming `collect_telemetry_data` is your function to collect telemetry\n        collected_data = collect_telemetry_data(user_input, response)\n\n        self.assertEqual(collected_data, expected_data)\n\nif __name__ == '__main__':\n    unittest.main()\n```\n\nThis example simplifies the process and focuses on the concept. Actual implementation details may vary based on your specific requirements, the technologies you're using, and the OpenAI APIs or models you're interacting with.",
              "refusal": null,
              "role": "assistant",
              "annotations": null,
              "audio": null,
              "function_call": null,
              "tool_calls": []
            },
            "seed": 3434953141173799400
          }
        ],
        "created": 1758820620,
        "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo-Free",
        "object": "chat.completion",
        "service_tier": null,
        "system_fingerprint": null,
        "usage": {
          "completion_tokens": 791,
          "prompt_tokens": 40,
          "total_tokens": 831,
          "completion_tokens_details": null,
          "prompt_tokens_details": null,
          "cached_tokens": 0
        },
        "prompt": []
      }
    },
    "is_streaming": false
  }
}