llama-stack-mirror/llama_stack/providers/inline
Charlie Doern 6389bf5ffb
fix: make telemetry optional for agents (#3705)
# What does this PR do?

There is a lot of code in the agents API that uses the telemetry API and its
helpers without checking whether that API is even enabled.

Agents is the only API besides inference that actively uses telemetry code, so
after this change telemetry can be made optional for the entire stack.
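
For illustration, a minimal sketch of the guard pattern described above, assuming the agents implementation receives the telemetry API as an optional dependency. The names (`TelemetryApi`, `AgentTurnRunner`, `log_event`) are hypothetical and not the actual llama-stack identifiers:

```python
# Hypothetical sketch: telemetry calls are made only when the telemetry API
# was actually wired into the agents provider, so agents keep working when
# telemetry is disabled. Identifiers below are illustrative, not llama-stack's.
from __future__ import annotations

from typing import Protocol


class TelemetryApi(Protocol):
    async def log_event(self, name: str, attributes: dict) -> None: ...


class AgentTurnRunner:
    def __init__(self, telemetry_api: TelemetryApi | None = None) -> None:
        # telemetry_api is None when no telemetry provider is configured.
        self.telemetry_api = telemetry_api

    async def _maybe_log(self, name: str, attributes: dict) -> None:
        # No-op when telemetry is disabled, instead of assuming it is present.
        if self.telemetry_api is None:
            return
        await self.telemetry_api.log_event(name, attributes)

    async def run_turn(self, prompt: str) -> str:
        await self._maybe_log("agent.turn.start", {"prompt_len": len(prompt)})
        result = f"echo: {prompt}"  # placeholder for real inference
        await self._maybe_log("agent.turn.complete", {"result_len": len(result)})
        return result
```

With this shape, running a turn with `telemetry_api=None` simply skips the logging calls instead of raising.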


resolves #3665


## Test Plan

Existing agent tests.

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-10-07 16:09:03 +02:00
| Path | Latest commit | Date |
|------|---------------|------|
| agents | fix: make telemetry optional for agents (#3705) | 2025-10-07 16:09:03 +02:00 |
| batches | feat(batches, completions): add /v1/completions support to /v1/batches (#3309) | 2025-09-05 11:59:57 -07:00 |
| datasetio | chore(misc): make tests and starter faster (#3042) | 2025-08-05 14:55:05 -07:00 |
| eval | feat: update eval runner to use openai endpoints (#3588) | 2025-09-29 13:13:53 -07:00 |
| files/localfs | fix(expires_after): make sure multipart/form-data is properly parsed (#3612) | 2025-09-30 16:14:03 -04:00 |
| inference | chore: remove deprecated inference.chat_completion implementations (#3654) | 2025-10-03 07:55:34 -04:00 |
| ios/inference | feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) | 2025-10-02 15:12:03 -07:00 |
| post_training | chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) | 2025-08-20 07:15:35 -04:00 |
| safety | feat: use /v1/chat/completions for safety model inference (#3591) | 2025-09-30 11:01:44 -07:00 |
| scoring | chore: use openai_chat_completion for llm as a judge scoring (#3635) | 2025-10-01 09:44:31 -04:00 |
| telemetry | chore: Remove debug logging from telemetry adapter (#3643) | 2025-10-01 15:16:23 -07:00 |
| tool_runtime | feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) | 2025-10-02 15:12:03 -07:00 |
| vector_io | feat(api): Add vector store file batches api (#3642) | 2025-10-06 16:58:22 -07:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |