llama-stack-mirror/tests/unit/providers
Last updated: 2025-11-10 11:37:14 -05:00
Name             Date                        Last commit
batches          2025-10-20 13:20:09 -07:00  feat(stores)!: use backend storage references instead of configs (#3697)
files            2025-10-20 13:20:09 -07:00  feat(stores)!: use backend storage references instead of configs (#3697)
inference        2025-11-06 17:18:18 -08:00  feat: add OpenAI-compatible Bedrock provider (#3748)
inline           2025-11-08 14:33:19 -05:00  feat: implement OpenAI chat completion for meta_reference provider
nvidia           2025-11-08 14:33:18 -05:00  refactor: remove dead inference API code and clean up imports
utils            2025-11-08 14:33:19 -05:00  feat: implement OpenAI chat completion for meta_reference provider
vector_io        2025-11-09 00:05:00 -05:00  fix: Vector store persistence across server restarts (#3977)
test_bedrock.py  2025-11-06 17:18:18 -08:00  feat: add OpenAI-compatible Bedrock provider (#3748)
test_configs.py  2025-07-30 23:30:53 -07:00  chore(rename): move llama_stack.distribution to llama_stack.core (#2975)