From 2ee898cc4c1537db7ed2d656023d1c2df1a0869a Mon Sep 17 00:00:00 2001
From: Matthew Farrellee

+ To get started, create your first agent. Each agent is
+ configured with specific instructions, models, and tools to
+ help you with different tasks.
+
---------
Signed-off-by: Charlie Doern
diff --git a/tests/integration/recordings/responses/390f0c7dac96.json b/tests/integration/recordings/responses/390f0c7dac96.json
new file mode 100644
index 000000000..e8c9528fb
--- /dev/null
+++ b/tests/integration/recordings/responses/390f0c7dac96.json
@@ -0,0 +1,39 @@
+{
+ "request": {
+ "method": "POST",
+ "url": "http://localhost:11434/api/generate",
+ "headers": {},
+ "body": {
+ "model": "llama3.2:3b-instruct-fp16",
+ "raw": true,
+ "prompt": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nTest metrics generation 1<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
+ "options": {
+ "temperature": 0.0
+ },
+ "stream": false
+ },
+ "endpoint": "/api/generate",
+ "model": "llama3.2:3b-instruct-fp16"
+ },
+ "response": {
+ "body": {
+ "__type__": "ollama._types.GenerateResponse",
+ "__data__": {
+ "model": "llama3.2:3b-instruct-fp16",
+ "created_at": "2025-08-11T15:51:18.170868Z",
+ "done": true,
+ "done_reason": "stop",
+ "total_duration": 5240614083,
+ "load_duration": 9823416,
+ "prompt_eval_count": 21,
+ "prompt_eval_duration": 21000000,
+ "eval_count": 310,
+ "eval_duration": 5209000000,
+ "response": "This is the start of a test. I'll provide some sample data and you can try to generate metrics based on it.\n\n**Data:**\n\nLet's say we have a dataset of user interactions with an e-commerce website. The data includes:\n\n| User ID | Product Name | Purchase Date | Quantity | Price |\n| --- | --- | --- | --- | --- |\n| 1 | iPhone 13 | 2022-01-01 | 2 | 999.99 |\n| 1 | MacBook Air | 2022-01-05 | 1 | 1299.99 |\n| 2 | Samsung TV | 2022-01-10 | 3 | 899.99 |\n| 3 | iPhone 13 | 2022-01-15 | 1 | 999.99 |\n| 4 | MacBook Pro | 2022-01-20 | 2 | 1799.99 |\n\n**Task:**\n\nYour task is to generate the following metrics based on this data:\n\n1. Average order value (AOV)\n2. Conversion rate\n3. Average revenue per user (ARPU)\n4. Customer lifetime value (CLV)\n\nPlease provide your answers in a format like this:\n\n| Metric | Value |\n| --- | --- |\n| AOV | 1234.56 |\n| Conversion Rate | 0.25 |\n| ARPU | 1000.00 |\n| CLV | 5000.00 |\n\nGo ahead and generate the metrics!",
+ "thinking": null,
+ "context": null
+ }
+ },
+ "is_streaming": false
+ }
+}
diff --git a/tests/integration/recordings/responses/4de6877d86fa.json b/tests/integration/recordings/responses/4de6877d86fa.json
new file mode 100644
index 000000000..b30c7c451
--- /dev/null
+++ b/tests/integration/recordings/responses/4de6877d86fa.json
@@ -0,0 +1,56 @@
+{
+ "request": {
+ "method": "POST",
+ "url": "http://localhost:11434/v1/v1/chat/completions",
+ "headers": {},
+ "body": {
+ "model": "llama3.2:3b",
+ "messages": [
+ {
+ "role": "user",
+ "content": "OpenAI test 0"
+ }
+ ],
+ "stream": false
+ },
+ "endpoint": "/v1/chat/completions",
+ "model": "llama3.2:3b"
+ },
+ "response": {
+ "body": {
+ "__type__": "openai.types.chat.chat_completion.ChatCompletion",
+ "__data__": {
+ "id": "chatcmpl-843",
+ "choices": [
+ {
+ "finish_reason": "stop",
+ "index": 0,
+ "logprobs": null,
+ "message": {
+ "content": "I don't have any information about an \"OpenAI test 0\". It's possible that you may be referring to a specific experiment or task being performed by OpenAI, but without more context, I can only speculate.\n\nHowever, I can tell you that OpenAI is a research organization that has been involved in various projects and tests related to artificial intelligence. If you could provide more context or clarify what you're referring to, I may be able to help further.\n\nIf you're looking for general information about OpenAI, I can try to provide some background on the organization:\n\nOpenAI is a non-profit research organization that was founded in 2015 with the goal of developing and applying advanced artificial intelligence to benefit humanity. The organization has made significant contributions to the field of AI, including the development of the popular language model, ChatGPT.\n\nIf you could provide more context or clarify what you're looking for, I'll do my best to assist you.",
+ "refusal": null,
+ "role": "assistant",
+ "annotations": null,
+ "audio": null,
+ "function_call": null,
+ "tool_calls": null
+ }
+ }
+ ],
+ "created": 1755891518,
+ "model": "llama3.2:3b",
+ "object": "chat.completion",
+ "service_tier": null,
+ "system_fingerprint": "fp_ollama",
+ "usage": {
+ "completion_tokens": 194,
+ "prompt_tokens": 30,
+ "total_tokens": 224,
+ "completion_tokens_details": null,
+ "prompt_tokens_details": null
+ }
+ }
+ },
+ "is_streaming": false
+ }
+}
diff --git a/tests/integration/recordings/responses/5db0c44c83a4.json b/tests/integration/recordings/responses/5db0c44c83a4.json
new file mode 100644
index 000000000..058478a11
--- /dev/null
+++ b/tests/integration/recordings/responses/5db0c44c83a4.json
@@ -0,0 +1,56 @@
+{
+ "request": {
+ "method": "POST",
+ "url": "http://localhost:11434/v1/v1/chat/completions",
+ "headers": {},
+ "body": {
+ "model": "llama3.2:3b",
+ "messages": [
+ {
+ "role": "user",
+ "content": "OpenAI test 1"
+ }
+ ],
+ "stream": false
+ },
+ "endpoint": "/v1/chat/completions",
+ "model": "llama3.2:3b"
+ },
+ "response": {
+ "body": {
+ "__type__": "openai.types.chat.chat_completion.ChatCompletion",
+ "__data__": {
+ "id": "chatcmpl-726",
+ "choices": [
+ {
+ "finish_reason": "stop",
+ "index": 0,
+ "logprobs": null,
+ "message": {
+ "content": "I'm ready to help with the test. What language would you like to use? Would you like to have a conversation, ask questions, or take a specific type of task?",
+ "refusal": null,
+ "role": "assistant",
+ "annotations": null,
+ "audio": null,
+ "function_call": null,
+ "tool_calls": null
+ }
+ }
+ ],
+ "created": 1755891519,
+ "model": "llama3.2:3b",
+ "object": "chat.completion",
+ "service_tier": null,
+ "system_fingerprint": "fp_ollama",
+ "usage": {
+ "completion_tokens": 37,
+ "prompt_tokens": 30,
+ "total_tokens": 67,
+ "completion_tokens_details": null,
+ "prompt_tokens_details": null
+ }
+ }
+ },
+ "is_streaming": false
+ }
+}
diff --git a/tests/integration/recordings/responses/6cb0285a7638.json b/tests/integration/recordings/responses/6cb0285a7638.json
new file mode 100644
index 000000000..60ad9f66d
--- /dev/null
+++ b/tests/integration/recordings/responses/6cb0285a7638.json
@@ -0,0 +1,56 @@
+{
+ "request": {
+ "method": "POST",
+ "url": "http://localhost:11434/v1/v1/chat/completions",
+ "headers": {},
+ "body": {
+ "model": "llama3.2:3b",
+ "messages": [
+ {
+ "role": "user",
+ "content": "OpenAI test 4"
+ }
+ ],
+ "stream": false
+ },
+ "endpoint": "/v1/chat/completions",
+ "model": "llama3.2:3b"
+ },
+ "response": {
+ "body": {
+ "__type__": "openai.types.chat.chat_completion.ChatCompletion",
+ "__data__": {
+ "id": "chatcmpl-581",
+ "choices": [
+ {
+ "finish_reason": "stop",
+ "index": 0,
+ "logprobs": null,
+ "message": {
+ "content": "I'm ready to help. What would you like to test? We could try a variety of things, such as:\n\n1. Conversational dialogue\n2. Language understanding\n3. Common sense reasoning\n4. Joke or pun generation\n5. Trivia or knowledge-based questions\n6. Creative writing or storytelling\n7. Summarization or paraphrasing\n\nLet me know which area you'd like to test, or suggest something else that's on your mind!",
+ "refusal": null,
+ "role": "assistant",
+ "annotations": null,
+ "audio": null,
+ "function_call": null,
+ "tool_calls": null
+ }
+ }
+ ],
+ "created": 1755891527,
+ "model": "llama3.2:3b",
+ "object": "chat.completion",
+ "service_tier": null,
+ "system_fingerprint": "fp_ollama",
+ "usage": {
+ "completion_tokens": 96,
+ "prompt_tokens": 30,
+ "total_tokens": 126,
+ "completion_tokens_details": null,
+ "prompt_tokens_details": null
+ }
+ }
+ },
+ "is_streaming": false
+ }
+}
diff --git a/tests/integration/recordings/responses/7bcb0f86c91b.json b/tests/integration/recordings/responses/7bcb0f86c91b.json
new file mode 100644
index 000000000..4c9a55153
--- /dev/null
+++ b/tests/integration/recordings/responses/7bcb0f86c91b.json
@@ -0,0 +1,39 @@
+{
+ "request": {
+ "method": "POST",
+ "url": "http://localhost:11434/api/generate",
+ "headers": {},
+ "body": {
+ "model": "llama3.2:3b-instruct-fp16",
+ "raw": true,
+ "prompt": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nTest metrics generation 0<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
+ "options": {
+ "temperature": 0.0
+ },
+ "stream": false
+ },
+ "endpoint": "/api/generate",
+ "model": "llama3.2:3b-instruct-fp16"
+ },
+ "response": {
+ "body": {
+ "__type__": "ollama._types.GenerateResponse",
+ "__data__": {
+ "model": "llama3.2:3b-instruct-fp16",
+ "created_at": "2025-08-11T15:51:12.918723Z",
+ "done": true,
+ "done_reason": "stop",
+ "total_duration": 8868987792,
+ "load_duration": 2793275292,
+ "prompt_eval_count": 21,
+ "prompt_eval_duration": 250000000,
+ "eval_count": 344,
+ "eval_duration": 5823000000,
+ "response": "Here are some common test metrics used to evaluate the performance of a system:\n\n1. **Accuracy**: The proportion of correct predictions or classifications out of total predictions made.\n2. **Precision**: The ratio of true positives (correctly predicted instances) to the sum of true positives and false positives (incorrectly predicted instances).\n3. **Recall**: The ratio of true positives to the sum of true positives and false negatives (missed instances).\n4. **F1-score**: The harmonic mean of precision and recall, providing a balanced measure of both.\n5. **Mean Squared Error (MSE)**: The average squared difference between predicted and actual values.\n6. **Mean Absolute Error (MAE)**: The average absolute difference between predicted and actual values.\n7. **Root Mean Squared Percentage Error (RMSPE)**: The square root of the mean of the squared percentage differences between predicted and actual values.\n8. **Coefficient of Determination (R-squared, R2)**: Measures how well a model fits the data, with higher values indicating better fit.\n9. **Mean Absolute Percentage Error (MAPE)**: The average absolute percentage difference between predicted and actual values.\n10. **Normalized Mean Squared Error (NMSE)**: Similar to MSE, but normalized by the mean of the actual values.\n\nThese metrics can be used for various types of data, including:\n\n* Regression problems (e.g., predicting continuous values)\n* Classification problems (e.g., predicting categorical labels)\n* Time series forecasting\n* Clustering and dimensionality reduction\n\nWhen choosing a metric, consider the specific problem you're trying to solve, the type of data, and the desired level of precision.",
+ "thinking": null,
+ "context": null
+ }
+ },
+ "is_streaming": false
+ }
+}
diff --git a/tests/integration/recordings/responses/bf79a89cc37f.json b/tests/integration/recordings/responses/bf79a89cc37f.json
new file mode 100644
index 000000000..2373c1d6a
--- /dev/null
+++ b/tests/integration/recordings/responses/bf79a89cc37f.json
@@ -0,0 +1,56 @@
+{
+ "request": {
+ "method": "POST",
+ "url": "http://localhost:11434/v1/v1/chat/completions",
+ "headers": {},
+ "body": {
+ "model": "llama3.2:3b",
+ "messages": [
+ {
+ "role": "user",
+ "content": "OpenAI test 3"
+ }
+ ],
+ "stream": false
+ },
+ "endpoint": "/v1/chat/completions",
+ "model": "llama3.2:3b"
+ },
+ "response": {
+ "body": {
+ "__type__": "openai.types.chat.chat_completion.ChatCompletion",
+ "__data__": {
+ "id": "chatcmpl-48",
+ "choices": [
+ {
+ "finish_reason": "stop",
+ "index": 0,
+ "logprobs": null,
+ "message": {
+ "content": "I'm happy to help, but it seems you want me to engage in a basic conversation as OpenAI's new chat model, right? I can do that!\n\nHere's my response:\n\nHello! How are you today? Is there something specific on your mind that you'd like to talk about or any particular topic you'd like to explore together?\n\nWhat is it that you're curious about?",
+ "refusal": null,
+ "role": "assistant",
+ "annotations": null,
+ "audio": null,
+ "function_call": null,
+ "tool_calls": null
+ }
+ }
+ ],
+ "created": 1755891524,
+ "model": "llama3.2:3b",
+ "object": "chat.completion",
+ "service_tier": null,
+ "system_fingerprint": "fp_ollama",
+ "usage": {
+ "completion_tokens": 80,
+ "prompt_tokens": 30,
+ "total_tokens": 110,
+ "completion_tokens_details": null,
+ "prompt_tokens_details": null
+ }
+ }
+ },
+ "is_streaming": false
+ }
+}
diff --git a/tests/integration/recordings/responses/c31a86ea6c58.json b/tests/integration/recordings/responses/c31a86ea6c58.json
new file mode 100644
index 000000000..b8d109ddd
--- /dev/null
+++ b/tests/integration/recordings/responses/c31a86ea6c58.json
@@ -0,0 +1,39 @@
+{
+ "request": {
+ "method": "POST",
+ "url": "http://localhost:11434/api/generate",
+ "headers": {},
+ "body": {
+ "model": "llama3.2:3b",
+ "raw": true,
+ "prompt": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nTest metrics generation 0<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
+ "options": {
+ "temperature": 0.0
+ },
+ "stream": false
+ },
+ "endpoint": "/api/generate",
+ "model": "llama3.2:3b"
+ },
+ "response": {
+ "body": {
+ "__type__": "ollama._types.GenerateResponse",
+ "__data__": {
+ "model": "llama3.2:3b",
+ "created_at": "2025-08-11T15:56:06.703788Z",
+ "done": true,
+ "done_reason": "stop",
+ "total_duration": 2722294000,
+ "load_duration": 9736083,
+ "prompt_eval_count": 21,
+ "prompt_eval_duration": 113000000,
+ "eval_count": 324,
+ "eval_duration": 2598000000,
+ "response": "Here are some test metrics that can be used to evaluate the performance of a system:\n\n1. **Accuracy**: The proportion of correct predictions made by the model.\n2. **Precision**: The ratio of true positives (correctly predicted instances) to total positive predictions.\n3. **Recall**: The ratio of true positives to the sum of true positives and false negatives (missed instances).\n4. **F1-score**: The harmonic mean of precision and recall, providing a balanced measure of both.\n5. **Mean Squared Error (MSE)**: The average squared difference between predicted and actual values.\n6. **Mean Absolute Error (MAE)**: The average absolute difference between predicted and actual values.\n7. **Root Mean Squared Percentage Error (RMSPE)**: A variation of MSE that expresses the error as a percentage.\n8. **Coefficient of Determination (R-squared, R2)**: Measures how well the model explains the variance in the data.\n9. **Mean Absolute Percentage Error (MAPE)**: The average absolute percentage difference between predicted and actual values.\n10. **Mean Squared Logarithmic Error (MSLE)**: A variation of MSE that is more suitable for skewed distributions.\n\nThese metrics can be used to evaluate different aspects of a system's performance, such as:\n\n* Classification models: accuracy, precision, recall, F1-score\n* Regression models: MSE, MAE, RMSPE, R2, MSLE\n* Time series forecasting: MAPE, RMSPE\n\nNote that the choice of metric depends on the specific problem and data.",
+ "thinking": null,
+ "context": null
+ }
+ },
+ "is_streaming": false
+ }
+}
diff --git a/tests/integration/recordings/responses/dc8120cf0774.json b/tests/integration/recordings/responses/dc8120cf0774.json
new file mode 100644
index 000000000..cf6b8c4d3
--- /dev/null
+++ b/tests/integration/recordings/responses/dc8120cf0774.json
@@ -0,0 +1,56 @@
+{
+ "request": {
+ "method": "POST",
+ "url": "http://localhost:11434/v1/v1/chat/completions",
+ "headers": {},
+ "body": {
+ "model": "llama3.2:3b",
+ "messages": [
+ {
+ "role": "user",
+ "content": "OpenAI test 2"
+ }
+ ],
+ "stream": false
+ },
+ "endpoint": "/v1/chat/completions",
+ "model": "llama3.2:3b"
+ },
+ "response": {
+ "body": {
+ "__type__": "openai.types.chat.chat_completion.ChatCompletion",
+ "__data__": {
+ "id": "chatcmpl-516",
+ "choices": [
+ {
+ "finish_reason": "stop",
+ "index": 0,
+ "logprobs": null,
+ "message": {
+ "content": "I'm happy to help with your question or task. Please go ahead and ask me anything, and I'll do my best to assist you.\n\nNote: I'll be using the latest version of my knowledge cutoff, which is December 2023.\n\nAlso, please keep in mind that I'm a large language model, I can provide information on a broad range of topics, including science, history, technology, culture, and more. However, my ability to understand and respond to specific questions or requests may be limited by the data I've been trained on.",
+ "refusal": null,
+ "role": "assistant",
+ "annotations": null,
+ "audio": null,
+ "function_call": null,
+ "tool_calls": null
+ }
+ }
+ ],
+ "created": 1755891522,
+ "model": "llama3.2:3b",
+ "object": "chat.completion",
+ "service_tier": null,
+ "system_fingerprint": "fp_ollama",
+ "usage": {
+ "completion_tokens": 113,
+ "prompt_tokens": 30,
+ "total_tokens": 143,
+ "completion_tokens_details": null,
+ "prompt_tokens_details": null
+ }
+ }
+ },
+ "is_streaming": false
+ }
+}
diff --git a/tests/integration/recordings/responses/f6857bcea729.json b/tests/integration/recordings/responses/f6857bcea729.json
new file mode 100644
index 000000000..404bfb987
--- /dev/null
+++ b/tests/integration/recordings/responses/f6857bcea729.json
@@ -0,0 +1,39 @@
+{
+ "request": {
+ "method": "POST",
+ "url": "http://localhost:11434/api/generate",
+ "headers": {},
+ "body": {
+ "model": "llama3.2:3b",
+ "raw": true,
+ "prompt": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nTest metrics generation 2<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
+ "options": {
+ "temperature": 0.0
+ },
+ "stream": false
+ },
+ "endpoint": "/api/generate",
+ "model": "llama3.2:3b"
+ },
+ "response": {
+ "body": {
+ "__type__": "ollama._types.GenerateResponse",
+ "__data__": {
+ "model": "llama3.2:3b",
+ "created_at": "2025-08-11T15:56:13.082679Z",
+ "done": true,
+ "done_reason": "stop",
+ "total_duration": 2606245291,
+ "load_duration": 9979708,
+ "prompt_eval_count": 21,
+ "prompt_eval_duration": 23000000,
+ "eval_count": 321,
+ "eval_duration": 2572000000,
+ "response": "Here are some test metrics that can be used to evaluate the performance of a system:\n\n1. **Accuracy**: Measures how close the predicted values are to the actual values.\n2. **Precision**: Measures the proportion of true positives among all positive predictions made by the model.\n3. **Recall**: Measures the proportion of true positives among all actual positive instances.\n4. **F1-score**: The harmonic mean of precision and recall, providing a balanced measure of both.\n5. **Mean Squared Error (MSE)**: Measures the average squared difference between predicted and actual values.\n6. **Mean Absolute Error (MAE)**: Measures the average absolute difference between predicted and actual values.\n7. **Root Mean Squared Percentage Error (RMSPE)**: A variation of MSE that expresses errors as a percentage of the actual value.\n8. **Coefficient of Determination (R-squared, R2)**: Measures how well the model explains the variance in the data.\n9. **Mean Absolute Percentage Error (MAPE)**: Measures the average absolute percentage difference between predicted and actual values.\n10. **Mean Squared Logarithmic Error (MSLE)**: A variation of MSE that is more suitable for skewed distributions.\n\nThese metrics can be used to evaluate different aspects of a system's performance, such as:\n\n* Classification models: accuracy, precision, recall, F1-score\n* Regression models: MSE, MAE, RMSPE, R2\n* Time series forecasting: MAPE, MSLE\n\nNote that the choice of metric depends on the specific problem and data.",
+ "thinking": null,
+ "context": null
+ }
+ },
+ "is_streaming": false
+ }
+}
diff --git a/tests/integration/recordings/responses/f80b99430f7e.json b/tests/integration/recordings/responses/f80b99430f7e.json
new file mode 100644
index 000000000..5b692f4ca
--- /dev/null
+++ b/tests/integration/recordings/responses/f80b99430f7e.json
@@ -0,0 +1,39 @@
+{
+ "request": {
+ "method": "POST",
+ "url": "http://localhost:11434/api/generate",
+ "headers": {},
+ "body": {
+ "model": "llama3.2:3b",
+ "raw": true,
+ "prompt": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nTest metrics generation 1<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
+ "options": {
+ "temperature": 0.0
+ },
+ "stream": false
+ },
+ "endpoint": "/api/generate",
+ "model": "llama3.2:3b"
+ },
+ "response": {
+ "body": {
+ "__type__": "ollama._types.GenerateResponse",
+ "__data__": {
+ "model": "llama3.2:3b",
+ "created_at": "2025-08-11T15:56:10.465932Z",
+ "done": true,
+ "done_reason": "stop",
+ "total_duration": 3745686709,
+ "load_duration": 9734584,
+ "prompt_eval_count": 21,
+ "prompt_eval_duration": 23000000,
+ "eval_count": 457,
+ "eval_duration": 3712000000,
+ "response": "Here are some test metrics that can be used to evaluate the performance of a system:\n\n**Primary Metrics**\n\n1. **Response Time**: The time it takes for the system to respond to a request.\n2. **Throughput**: The number of requests processed by the system per unit time (e.g., requests per second).\n3. **Error Rate**: The percentage of requests that result in an error.\n\n**Secondary Metrics**\n\n1. **Average Response Time**: The average response time for all requests.\n2. **Median Response Time**: The middle value of the response times, used to detect outliers.\n3. **99th Percentile Response Time**: The response time at which 99% of requests are completed within this time.\n4. **Request Latency**: The difference between the request arrival time and the response time.\n\n**User Experience Metrics**\n\n1. **User Satisfaction (USAT)**: Measured through surveys or feedback forms to gauge user satisfaction with the system's performance.\n2. **First Response Time**: The time it takes for a user to receive their first response from the system.\n3. **Time Spent in System**: The total amount of time a user spends interacting with the system.\n\n**System Resource Metrics**\n\n1. **CPU Utilization**: The percentage of CPU resources being used by the system.\n2. **Memory Usage**: The amount of memory being used by the system.\n3. **Disk I/O Wait Time**: The average time spent waiting for disk I/O operations to complete.\n\n**Security Metrics**\n\n1. **Authentication Success Rate**: The percentage of successful authentication attempts.\n2. **Authorization Success Rate**: The percentage of successful authorization attempts.\n3. **Error Rate (Security)**: The percentage of security-related errors.\n\n**Other Metrics**\n\n1. **Page Load Time**: The time it takes for a page to load.\n2. **Click-Through Rate (CTR)**: The percentage of users who click on a link or button after seeing an ad or notification.\n3. **Conversion Rate**: The percentage of users who complete a desired action (e.g., fill out a form, make a purchase).\n\nThese metrics can be used to evaluate the performance and effectiveness of various aspects of your system, from user experience to security and resource utilization.",
+ "thinking": null,
+ "context": null
+ }
+ },
+ "is_streaming": false
+ }
+}
diff --git a/tests/integration/telemetry/test_telemetry_metrics.py b/tests/integration/telemetry/test_telemetry_metrics.py
new file mode 100644
index 000000000..4ba2bd2d9
--- /dev/null
+++ b/tests/integration/telemetry/test_telemetry_metrics.py
@@ -0,0 +1,209 @@
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# All rights reserved.
+#
+# This source code is licensed under the terms described in the LICENSE file in
+# the root directory of this source tree.
+
+import time
+from datetime import UTC, datetime, timedelta
+
+import pytest
+
+
+@pytest.fixture(scope="module", autouse=True)
+def setup_telemetry_metrics_data(openai_client, client_with_models, text_model_id):
+ """Setup fixture that creates telemetry metrics data before tests run."""
+
+ # Skip OpenAI tests if running in library mode
+ if not hasattr(client_with_models, "base_url"):
+ pytest.skip("OpenAI client tests not supported with library client")
+
+ prompt_tokens = []
+ completion_tokens = []
+ total_tokens = []
+
+ # Create OpenAI completions to generate metrics using the proper OpenAI client
+ for i in range(5):
+ response = openai_client.chat.completions.create(
+ model=text_model_id,
+ messages=[{"role": "user", "content": f"OpenAI test {i}"}],
+ stream=False,
+ )
+ prompt_tokens.append(response.usage.prompt_tokens)
+ completion_tokens.append(response.usage.completion_tokens)
+ total_tokens.append(response.usage.total_tokens)
+
+ # Wait for metrics to be logged
+ start_time = time.time()
+ while time.time() - start_time < 30:
+ try:
+ # Try to query metrics to see if they're available
+ metrics_response = client_with_models.telemetry.query_metrics(
+ metric_name="completion_tokens",
+ start_time=int((datetime.now(UTC) - timedelta(minutes=5)).timestamp()),
+ )
+ if len(metrics_response[0].values) > 0:
+ break
+ except Exception:
+ pass
+ time.sleep(1)
+
+ # Wait additional time to ensure all metrics are processed
+ time.sleep(5)
+
+ # Return the token lists for use in tests
+ return {"prompt_tokens": prompt_tokens, "completion_tokens": completion_tokens, "total_tokens": total_tokens}
+
+
+@pytest.mark.skip(reason="Skipping this test until client is regenerated")
+def test_query_metrics_prompt_tokens(client_with_models, text_model_id, setup_telemetry_metrics_data):
+ """Test that prompt_tokens metrics are queryable."""
+ start_time = int((datetime.now(UTC) - timedelta(minutes=10)).timestamp())
+
+ response = client_with_models.telemetry.query_metrics(
+ metric_name="prompt_tokens",
+ start_time=start_time,
+ )
+
+ assert isinstance(response, list)
+
+ assert isinstance(response[0].values, list), "Should return a list of metric series"
+
+ assert response[0].metric == "prompt_tokens"
+
+ # Use the actual values from setup instead of hardcoded values
+ expected_values = setup_telemetry_metrics_data["prompt_tokens"]
+ assert response[0].values[-1].value in expected_values, (
+ f"Expected one of {expected_values}, got {response[0].values[-1].value}"
+ )
+
+
+@pytest.mark.skip(reason="Skipping this test until client is regenerated")
+def test_query_metrics_completion_tokens(client_with_models, text_model_id, setup_telemetry_metrics_data):
+ """Test that completion_tokens metrics are queryable."""
+ start_time = int((datetime.now(UTC) - timedelta(minutes=10)).timestamp())
+
+ response = client_with_models.telemetry.query_metrics(
+ metric_name="completion_tokens",
+ start_time=start_time,
+ )
+
+ assert isinstance(response, list)
+
+ assert isinstance(response[0].values, list), "Should return a list of metric series"
+
+ assert response[0].metric == "completion_tokens"
+
+ # Use the actual values from setup instead of hardcoded values
+ expected_values = setup_telemetry_metrics_data["completion_tokens"]
+ assert response[0].values[-1].value in expected_values, (
+ f"Expected one of {expected_values}, got {response[0].values[-1].value}"
+ )
+
+
+@pytest.mark.skip(reason="Skipping this test until client is regenerated")
+def test_query_metrics_total_tokens(client_with_models, text_model_id, setup_telemetry_metrics_data):
+ """Test that total_tokens metrics are queryable."""
+ start_time = int((datetime.now(UTC) - timedelta(minutes=10)).timestamp())
+
+ response = client_with_models.telemetry.query_metrics(
+ metric_name="total_tokens",
+ start_time=start_time,
+ )
+
+ assert isinstance(response, list)
+
+ assert isinstance(response[0].values, list), "Should return a list of metric series"
+
+ assert response[0].metric == "total_tokens"
+
+ # Use the actual values from setup instead of hardcoded values
+ expected_values = setup_telemetry_metrics_data["total_tokens"]
+ assert response[0].values[-1].value in expected_values, (
+ f"Expected one of {expected_values}, got {response[0].values[-1].value}"
+ )
+
+
+@pytest.mark.skip(reason="Skipping this test until client is regenerated")
+def test_query_metrics_with_time_range(llama_stack_client, text_model_id):
+ """Test that metrics are queryable with time range."""
+ end_time = int(datetime.now(UTC).timestamp())
+ start_time = end_time - 600 # 10 minutes ago
+
+ response = llama_stack_client.telemetry.query_metrics(
+ metric_name="prompt_tokens",
+ start_time=start_time,
+ end_time=end_time,
+ )
+
+ assert isinstance(response, list)
+
+ assert isinstance(response[0].values, list), "Should return a list of metric series"
+
+ assert response[0].metric == "prompt_tokens"
+
+
+@pytest.mark.skip(reason="Skipping this test until client is regenerated")
+def test_query_metrics_with_label_matchers(llama_stack_client, text_model_id):
+ """Test that metrics are queryable with label matchers."""
+ start_time = int((datetime.now(UTC) - timedelta(minutes=10)).timestamp())
+
+ response = llama_stack_client.telemetry.query_metrics(
+ metric_name="prompt_tokens",
+ start_time=start_time,
+ label_matchers=[{"name": "model_id", "value": text_model_id, "operator": "="}],
+ )
+
+ assert isinstance(response[0].values, list), "Should return a list of metric series"
+
+
+@pytest.mark.skip(reason="Skipping this test until client is regenerated")
+def test_query_metrics_nonexistent_metric(llama_stack_client):
+ """Test that querying a nonexistent metric returns empty data."""
+ start_time = int((datetime.now(UTC) - timedelta(minutes=10)).timestamp())
+
+ response = llama_stack_client.telemetry.query_metrics(
+ metric_name="nonexistent_metric",
+ start_time=start_time,
+ )
+
+ assert isinstance(response, list), "Should return an empty list for nonexistent metric"
+ assert len(response) == 0
+
+
+@pytest.mark.skip(reason="Skipping this test until client is regenerated")
+def test_query_metrics_with_granularity(llama_stack_client, text_model_id):
+ """Test that metrics are queryable with different granularity levels."""
+ start_time = int((datetime.now(UTC) - timedelta(minutes=10)).timestamp())
+
+ # Test hourly granularity
+ hourly_response = llama_stack_client.telemetry.query_metrics(
+ metric_name="total_tokens",
+ start_time=start_time,
+ granularity="1h",
+ )
+
+ # Test daily granularity
+ daily_response = llama_stack_client.telemetry.query_metrics(
+ metric_name="total_tokens",
+ start_time=start_time,
+ granularity="1d",
+ )
+
+ # Test no granularity (raw data points)
+ raw_response = llama_stack_client.telemetry.query_metrics(
+ metric_name="total_tokens",
+ start_time=start_time,
+ granularity=None,
+ )
+
+ # All should return valid data
+ assert isinstance(hourly_response[0].values, list), "Hourly granularity should return data"
+ assert isinstance(daily_response[0].values, list), "Daily granularity should return data"
+ assert isinstance(raw_response[0].values, list), "No granularity should return data"
+
+ # Verify that different granularities produce different aggregation levels
+ # (The exact number depends on data distribution, but they should be queryable)
+ assert len(hourly_response[0].values) >= 0, "Hourly granularity should be queryable"
+ assert len(daily_response[0].values) >= 0, "Daily granularity should be queryable"
+ assert len(raw_response[0].values) >= 0, "No granularity should be queryable"
From 7519b73fcc79c9eef3e83cfd453f2087c300dc96 Mon Sep 17 00:00:00 2001
From: Ashwin Bharambe

Bump actions/setup-node from v4 to v4.4.0. Sourced from actions/setup-node's releases (abridged):

v4.4.0
- Bug fixes: make the eslint-compact matcher compatible with Stylelint by @FloEdelmann in actions/setup-node#98; support indented eslint output by @fregante in actions/setup-node#1245
- Enhancement: support private mirrors by @marco-ippolito in actions/setup-node#1240
- Dependency update: bump @actions/cache from 4.0.2 to 4.0.3 by @aparnajyothi-y in actions/setup-node#1262
- Full Changelog: https://github.com/actions/setup-node/compare/v4...v4.4.0

v4.3.0
- Dependency updates: @actions/glob from 0.4.0 to 0.5.0 (#1200), @actions/cache from 4.0.0 to 4.0.2 by @gowridurgad (#1251), @vercel/ncc from 0.38.1 to 0.38.3 (#1203), @actions/tool-cache from 2.0.1 to 2.0.2 (#1220)
- Full Changelog: https://github.com/actions/setup-node/compare/v4...v4.3.0

v4.2.0
- Changes by @aparnajyothi-y (#1174), @benwells (#1193), @HarithaVattikuti (#1192), and @priyagupta108 (update @actions/cache to ^4.0.0, #1191); Dependabot bumps in #1194, #1195, #1196, @types/jest from 29.5.12 to 29.5.14 (#1201), and undici from 5.28.4 to 5.28.5 (#1205)
- Full Changelog: https://github.com/actions/setup-node/compare/v4...v4.2.0

Bump remeda. Sourced from remeda's releases (abridged): v2.30.0 (2025-08-07), v2.29.0 (2025-08-07), v2.28.0 (2025-08-03), and v2.27.0 (2025-07-28) add features; v2.27.2 (2025-08-01) and v2.27.1 (2025-08-01) contain bug fixes.

Commits:
- 729ead3 feat(isFunction): stricter Function type (#1161)
- 5691380 feat: migrate build from tsup to tsdown (#1172)
- e870653 chore: manual version bumps (#1173)
- 6bd6f98 chore(deps-dev): bump eslint-plugin-jsdoc from 51.3.3 to 52.0.2 (#1170)
- 92449ef feat(defaultTo): introduce defaultTo (#1159)
- 2029326 chore(deps-dev): bump eslint-plugin-unicorn from 59.0.1 to 60.0.0 (#1169)
- 4c5bc73 fix(const): prefer narrow typing for literals (#1160)
- 7c37e39 fix: prevent redundant type computation paths (#1163)
- 55e5c8c fix(sample): revamp typing (#1162)
- e455924 chore(deps): bump the minor group with 9 updates (#1168)

Another bumped dependency, with a single commit:
- 225a3e4 fix(deps): replace chalk with picocolors (#1341)

Bump eslint-plugin-prettier. Sourced from eslint-plugin-prettier's releases and changelog (abridged):

v5.5.4
- 723f7a8 Thanks @kbrilla! - fix: add 'oxc', 'oxc-ts' and 'hermes' parsers to parserBlocklist (#755)
- cf52b30 Thanks @andreww2012! - fix: disallow extra properties in rule options (#751)

v5.5.3 / v5.5.2
- Republish the latest version (cdfcefd, d8c303e)

v5.5.1
- Patch changes

v5.5.0
- 92f2c9c Thanks @dotcarmen! - feat: support non-js languages like css for @eslint/css and json for @eslint/json; @dotcarmen made their first contribution in prettier/eslint-plugin-prettier#743

v5.4.1
- c21521f Thanks @JounQin! - fix(deps): bump synckit to v0.11.7 to fix a potential "TypeError: Cannot read properties of undefined (reading 'message')" error

Other commits: e2c31d2 chore: release eslint-plugin-prettier (#756), 98a8bfd chore(deps): update all dependencies (#750), 3e87f2e chore: release eslint-plugin-prettier (#749), bfd1e95 fix: use prettierRcOptions directly for prettier 3.6+ (#748), 9c4b792 chore: release eslint-plugin-prettier (#744), 78e41ec chore(deps): update all dependencies (#745)

Bump amannn/action-semantic-pull-request. Sourced from amannn/action-semantic-pull-request's releases and changelog (abridged): 6.1.1 (2025-08-22) bug fixes; 6.1.0 (2025-08-19) features and bug fixes; 6.0.1 (2025-08-13) bug fixes; 6.0.0 (2025-08-13) breaking changes and features; 5.5.3 (2024-06-28) bug fix for the braces dependency by @EelcoLos (2d952a1); 5.5.2 (2024-04-24) bug fixes by @EelcoLos (9a90d5a); 5.5.1 (2024-04-24).

A further dependency bump. Commits:
- 4959332 Bump dependencies (#532)
- adeb286 Add support for .tools-versions (#531)
- fce199e Add log message before long API calls to GitHub (#530)
- f758a4a chore: update known versions for 0.8.12 (#529)
- c0e7e93 chore: update known versions for 0.8.11 (#526)
- fda2399 chore: update known versions for 0.8.10 (#525)

Bump @testing-library/jest-dom. Sourced from @testing-library/jest-dom's releases (abridged): 6.8.0 (2025-08-20) and 6.7.0 (2025-08-13) add features; 6.6.4 (2025-07-26) contains performance improvements.

Commits:
- 779b712 feat: add toBePartiallyPressed matcher (#203) (#692)
- cfdf8ae feat: add toBePressed matcher (#203) (#658)
- f00d94d chore: add dependebot.yml (#456)
- 476c30b refactor: drop lodash entirely (#676)
- fafd8ca chore: add tests for Node 22 & 24 (#678)
- 707e647 perf: replace chalk with picocolors (#659)
- e15f789 docs: add kretajak as a contributor for code, and test (#691)
- d9babb1 docs: fix typo (#667)
- f0f31bb docs: adopt the new build-badge URL (#497)
- 918b6fb docs: add InfiniteXyy as a contributor for code, and bug (#650)

- `@dependabot show n!lXxSR8;Lr#b!hUwX)g0LO-&(mVbe^RtY~yBYLUmW(phFw
z)ilCnD$A`Zh0&rYY^ADHsW_`m;*Z~b-_87%laKMfTbFLV)V=XlFOv%*$%Jf@xY2o8
z%DAq|Cbnr}lT>F-rIWm>RL+Mrr7V<8id;!u7-Lit+e{>lKfZh8JH&zV_M^v-)SHLd
zzDJJaM{{{~vLTE8hmWy?PL-Sco8v-zatXVU+KpHs*EQR6jf=Fbc(1aG)269nnX5Dw
zrKx0*m9|lW4_wg;W4BHC3!EKO`n4Ty*eL!ims8?9ilU
z@Ia><(8J^~;TO!P)T*f8Y9K10nw!E`SM3E2O9R&=tP8nuG;Wj)A!ZDT987K|vBR#)
zGqeP8uB(9c)4P4r!vooVD4*i94@KzQuXGCzouGS?v?Id74}*Dz`YR|mU}|i#2C_Do
zstoycSYk@pL|Qm;Q&8~@2MgSvgYUG{fD)(tt;SogAPS^{90V;qa|O{DM>ts*)RggD
zI|>&tzR-{)aEs=ETMJ0l$d1oGzmvw8hVX
8QoY0?iu5!dUlINvE_urhpc&!cd7w
zhuNGe8tJ;5LsCd>CS30PR?y*>{`erNpaPFR$Zv!b6P1M)|M}zl2sb%l+P&YhK0B3B
z)elJ4(3jg01<*b{hFt;Jy=xFalV`xTINN~7bBC@4Q%hC1qOF|eEaVWZz}WAO?<3V@
zArm<8&+eP(3O6gIb9GETE!F0r`EzjNF84wmF8yjXB=_(gxcUt0iW~z3ZW=Aop%
+ Create an Agent with Llama Stack
+
+
+ No vector databases available. Create one to use RAG tools.
+ ) : (
+ availableVectorDBs.map(vectorDB => (
+ ))
+ )}
+ ⚠️ RAG tool selected but no vector databases chosen.
+ Create or select a vector database.
+ )}