llama-stack-mirror/llama_stack/apis
Charlie Doern fb553f3430 feat: implement query_metrics
query_metrics currently has no implementation, meaning that once a metric is emitted there is no way in Llama Stack to query it from the store.

Implement query_metrics for the meta_reference provider, following a similar style to `query_traces`: the trace_store is used to format an SQL query and execute it.

In this case the parameters for the query are `metric.METRIC_NAME`, `start_time`, and `end_time`.
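
As a rough illustration of the intended usage (hypothetical only, since the client does not yet expose `query_metrics`, and the exact argument and response shapes may differ), a call might look like:

```python
# Hypothetical client-side usage sketch; method name, argument types, and
# response structure are assumptions for illustration.
from datetime import datetime, timedelta, timezone

end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(hours=1)

response = client.telemetry.query_metrics(
    metric_name="prompt_tokens",            # assumed metric name
    start_time=int(start_time.timestamp()),  # assumed epoch-seconds window
    end_time=int(end_time.timestamp()),
)
for series in response.data:                 # assumed response layout
    for point in series.values:
        print(point.timestamp, point.value, point.unit)
```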

This required client-side changes since the client had no `query_metrics` method or any associated resources, so any tests here will fail, but I will provide manual execution logs for the new tests I am adding.

The returned metrics are ordered by timestamp.

Additionally, add `unit` to the `MetricDataPoint` class, since it adds much more context to the metric being queried.
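
A minimal sketch of what the updated data point could look like (field names other than `unit` are assumptions, not the exact class definition):

```python
# Sketch only; the actual MetricDataPoint lives in the telemetry API module
# and may carry additional fields.
from pydantic import BaseModel

class MetricDataPoint(BaseModel):
    timestamp: int  # assumed epoch-seconds timestamp of the data point
    value: float    # the metric value at this point
    unit: str       # newly added: e.g. "tokens", giving context to the value
```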

These metrics can also be aggregated via a `granularity` parameter, pre-defined as a string such as `1m`, `1h`, or `1d`; metrics occurring within the same timespan are aggregated together.
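
As an illustration of the intended bucketing (not the provider's actual code, which performs the aggregation via SQL against the trace store), a `granularity` string maps to a window size and values falling into the same window are combined:

```python
# Illustrative aggregation sketch under the assumption that granularity maps
# to a fixed window in seconds and values within a window are summed.
from collections import defaultdict

_WINDOWS = {"1m": 60, "1h": 3600, "1d": 86400}  # granularity -> seconds

def aggregate(points, granularity):
    """Sum metric values that fall within the same granularity window."""
    window = _WINDOWS[granularity]
    buckets = defaultdict(float)
    for ts, value in points:            # points: iterable of (epoch_seconds, value)
        buckets[ts - ts % window] += value
    return sorted(buckets.items())      # ordered by bucketed timestamp

# e.g. aggregate([(0, 1.0), (30, 2.0), (61, 5.0)], "1m") -> [(0, 3.0), (60, 5.0)]
```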

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-08-22 15:47:04 -04:00
agents feat(responses): add MCP argument streaming and content part events (#3136) 2025-08-13 16:34:26 -07:00
batch_inference chore: remove nested imports (#2515) 2025-06-26 08:01:05 +05:30
batches feat: add batches API with OpenAI compatibility (with inference replay) (#3162) 2025-08-15 15:34:15 -07:00
benchmarks docs: Add detailed docstrings to API models and update OpenAPI spec (#2889) 2025-07-30 16:32:59 -07:00
common feat: add batches API with OpenAI compatibility (with inference replay) (#3162) 2025-08-15 15:34:15 -07:00
datasetio chore: remove nested imports (#2515) 2025-06-26 08:01:05 +05:30
datasets docs: Add detailed docstrings to API models and update OpenAPI spec (#2889) 2025-07-30 16:32:59 -07:00
eval chore: remove nested imports (#2515) 2025-06-26 08:01:05 +05:30
files feat: add batches API with OpenAI compatibility (with inference replay) (#3162) 2025-08-15 15:34:15 -07:00
inference chore: indicate to mypy that InferenceProvider.rerank is concrete (#3238) 2025-08-22 12:02:13 -07:00
inspect docs: Add detailed docstrings to API models and update OpenAPI spec (#2889) 2025-07-30 16:32:59 -07:00
models docs: Add detailed docstrings to API models and update OpenAPI spec (#2889) 2025-07-30 16:32:59 -07:00
post_training fix: remove unused DPO parameters from schema and tests (#2988) 2025-07-31 09:11:08 -07:00
providers docs: Add detailed docstrings to API models and update OpenAPI spec (#2889) 2025-07-30 16:32:59 -07:00
safety chore: Change moderations api response to Provider returned categories (#3098) 2025-08-13 09:47:35 -07:00
scoring docs: Add detailed docstrings to API models and update OpenAPI spec (#2889) 2025-07-30 16:32:59 -07:00
scoring_functions docs: Add detailed docstrings to API models and update OpenAPI spec (#2889) 2025-07-30 16:32:59 -07:00
shields feat: create unregister shield API endpoint in Llama Stack (#2853) 2025-08-05 07:33:46 -07:00
synthetic_data_generation docs: Add detailed docstrings to API models and update OpenAPI spec (#2889) 2025-07-30 16:32:59 -07:00
telemetry feat: implement query_metrics 2025-08-22 15:47:04 -04:00
tools docs: Add detailed docstrings to API models and update OpenAPI spec (#2889) 2025-07-30 16:32:59 -07:00
vector_dbs docs: Add detailed docstrings to API models and update OpenAPI spec (#2889) 2025-07-30 16:32:59 -07:00
vector_io chore: Enabling Integration tests for Weaviate (#2882) 2025-07-31 20:29:50 -04:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00
datatypes.py feat: add batches API with OpenAI compatibility (with inference replay) (#3162) 2025-08-15 15:34:15 -07:00
resource.py feat: drop python 3.10 support (#2469) 2025-06-19 12:07:14 +05:30
version.py llama-stack version alpha -> v1 2025-01-15 05:58:09 -08:00