llama-stack-mirror/tests/unit/providers
Latest commit: 606b9f0ca4 by skamenan7 — "Enable streaming usage metrics for OpenAI providers" (2025-12-01 09:12:21 -05:00)
Inject stream_options for telemetry, add completion streaming metrics, fix params mutation, remove duplicate provider logic. Add unit tests.
| Directory / file | Last commit | Date |
| --- | --- | --- |
| agents/meta_reference | feat(responses)!: implement support for OpenAI compatible prompts in Responses API (#3965) | 2025-11-19 11:48:11 -08:00 |
| batches | refactor(storage): make { kvstore, sqlstore } as llama stack "internal" APIs (#4181) | 2025-11-18 13:15:16 -08:00 |
| files | refactor(storage): make { kvstore, sqlstore } as llama stack "internal" APIs (#4181) | 2025-11-18 13:15:16 -08:00 |
| inference | feat!: change bedrock bearer token env variable to match AWS docs & boto3 convention (#4152) | 2025-11-21 09:48:05 -05:00 |
| inline | fix: rename llama_stack_api dir (#4155) | 2025-11-13 15:04:36 -08:00 |
| nvidia | feat!: standardize base_url for inference (#4177) | 2025-11-19 08:44:28 -08:00 |
| utils | Enable streaming usage metrics for OpenAI providers | 2025-12-01 09:12:21 -05:00 |
| vector_io | fix: Pydantic validation error with list-type metadata in vector search (#3797) (#4173) | 2025-11-19 10:16:34 -08:00 |
| test_bedrock.py | fix: rename llama_stack_api dir (#4155) | 2025-11-13 15:04:36 -08:00 |
| test_configs.py | feat!: standardize base_url for inference (#4177) | 2025-11-19 08:44:28 -08:00 |
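The latest commit above mentions injecting `stream_options` for telemetry and fixing a params-mutation bug. As a rough illustration of what such a fix might look like (the helper name and exact logic here are hypothetical, not taken from the repository), OpenAI-compatible chat completions only report token usage on streams when the request carries `stream_options={"include_usage": True}`, and the injection should copy the request dict rather than mutate the caller's:

```python
def inject_stream_options(params: dict) -> dict:
    """Hypothetical sketch: ensure streaming requests ask the provider to
    report token usage, without mutating the caller's params dict."""
    if not params.get("stream"):
        # Non-streaming requests don't take stream_options; leave untouched.
        return params
    # Shallow-copy so the caller's dict is never modified (the mutation bug).
    updated = dict(params)
    stream_options = dict(updated.get("stream_options") or {})
    # Respect an explicit caller setting; only default include_usage on.
    stream_options.setdefault("include_usage", True)
    updated["stream_options"] = stream_options
    return updated
```

With this shape, a provider can apply the helper just before forwarding the request, and any user-supplied `stream_options` (e.g. `include_usage: False`) is preserved.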