llama-stack-mirror/src/llama_stack
Latest commit: 606b9f0ca4 by skamenan7 (2025-12-01 09:12:21 -05:00): Enable streaming usage metrics for OpenAI providers
Inject stream_options for telemetry, add completion streaming metrics,
fix params mutation, and remove duplicate provider logic. Add unit tests.
cli/            fix: bind to proper default hosts (#4232)                                                     2025-11-26 06:16:28 -05:00
core/           Enable streaming usage metrics for OpenAI providers                                           2025-12-01 09:12:21 -05:00
distributions/  feat!: change bedrock bearer token env variable to match AWS docs & boto3 convention (#4152)  2025-11-21 09:48:05 -05:00
models/         refactor: remove dead inference API code and clean up imports (#4093)                         2025-11-10 15:29:24 -08:00
providers/      Enable streaming usage metrics for OpenAI providers                                           2025-12-01 09:12:21 -05:00
testing/        fix: MCP authorization parameter implementation (#4052)                                       2025-11-14 08:54:42 -08:00
__init__.py     chore: Stack server no longer depends on llama-stack-client (#4094)                           2025-11-07 09:54:09 -08:00
env.py          chore(package): migrate to src/ layout (#3920)                                                2025-10-27 12:02:21 -07:00
log.py          chore(package): migrate to src/ layout (#3920)                                                2025-10-27 12:02:21 -07:00