llama-stack-mirror/tests/unit/providers
Matthew Farrellee 1263448de2
fix: allowed_models config did not filter models (#4030)
# What does this PR do?

Closes #4022.

## Test Plan

CI with new unit tests.
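The reported bug is that setting `allowed_models` on a provider config did not actually restrict which models the provider exposed. Below is a minimal sketch of the intended filtering behavior, assuming a config object with an optional `allowed_models` list; the `ProviderConfig` class and `filter_models` helper are illustrative only, not the actual llama-stack API.

```python
from dataclasses import dataclass


@dataclass
class ProviderConfig:
    # Hypothetical stand-in for a provider config with an allowed_models option.
    # None means "no restriction": every available model is exposed.
    allowed_models: list[str] | None = None


def filter_models(available: list[str], config: ProviderConfig) -> list[str]:
    """Return only the models permitted by the config.

    With allowed_models unset, all available models pass through;
    otherwise, models not named in the list are dropped.
    """
    if config.allowed_models is None:
        return list(available)
    allowed = set(config.allowed_models)
    return [m for m in available if m in allowed]


# Example: only the explicitly allowed model survives filtering.
assert filter_models(
    ["llama-3.1-8b", "llama-3.1-70b"],
    ProviderConfig(allowed_models=["llama-3.1-8b"]),
) == ["llama-3.1-8b"]
```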

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2025-11-03 11:43:39 -08:00
| Name | Last commit | Date |
|---|---|---|
| agent | chore(mypy): part-04 resolve mypy errors in meta_reference agents (#3969) | 2025-10-29 13:37:28 -07:00 |
| agents | fix!: Enhance response API support to not fail with tool calling (#3385) | 2025-10-27 09:33:02 -07:00 |
| batches | feat(stores)!: use backend storage references instead of configs (#3697) | 2025-10-20 13:20:09 -07:00 |
| files | feat(stores)!: use backend storage references instead of configs (#3697) | 2025-10-20 13:20:09 -07:00 |
| inference | feat: add provider data keys for Cerebras, Databricks, NVIDIA, and RunPod (#3734) | 2025-10-27 13:09:35 -07:00 |
| inline | feat: Add responses and safety impl extra_body (#3781) | 2025-10-15 15:01:37 -07:00 |
| nvidia | feat: Add rerank API for NVIDIA Inference Provider (#3329) | 2025-10-30 21:42:09 -07:00 |
| utils | fix: allowed_models config did not filter models (#4030) | 2025-11-03 11:43:39 -08:00 |
| vector_io | fix!: remove chunk_id property from Chunk class (#3954) | 2025-10-29 18:59:59 -07:00 |
| test_bedrock.py | fix: AWS Bedrock inference profile ID conversion for region-specific endpoints (#3386) | 2025-09-11 11:41:53 +02:00 |
| test_configs.py | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |