llama-stack-mirror/tests/unit/providers
Matthew Farrellee 466ef6f490 feat: add static embedding metadata to dynamic model listings for providers using OpenAIMixin
- remove auto-download of ollama embedding models
- add embedding model metadata to dynamic listing with a unit test (see the sketch below)
- add support and tests for allowed_models
- remove inference provider models.py files where dynamic listing is enabled
- store embedding metadata in embedding_model_metadata field on inference providers
- make model_entries optional on ModelRegistryHelper and LiteLLMOpenAIMixin
- make OpenAIMixin a ModelRegistryHelper
- skip base64 embedding test for remote::ollama, which always returns floats
- only use OpenAI client for ollama model listing
- remove unused build_model_entry function
- remove unused get_huggingface_repo function
2025-09-25 04:56:54 -04:00
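
For orientation, a minimal sketch of the listing flow the commit bullets describe. The field names `embedding_model_metadata` and `allowed_models` come from the commit message; everything else (class names, helper methods, metadata values) is a placeholder for illustration, not the actual llama-stack implementation.

```python
# Illustrative sketch only -- not the real llama-stack code. Only the
# embedding_model_metadata / allowed_models field names come from the commit;
# the rest is hypothetical.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ListedModel:
    """A dynamically listed model plus any static metadata attached to it."""
    identifier: str
    model_type: str = "llm"  # "llm" or "embedding"
    metadata: dict = field(default_factory=dict)


class ExampleDynamicListingProvider:
    # Static metadata keyed by provider model id. A provider that serves
    # embedding models declares dimensions/context length here so the dynamic
    # listing can surface them without probing each model.
    embedding_model_metadata: dict = {
        "nomic-embed-text": {"embedding_dimension": 768, "context_length": 8192},
    }
    # Optional allow-list; when set, any model id not in it is filtered out.
    allowed_models: Optional[list] = None

    def _remote_model_ids(self) -> list:
        # Stand-in for the provider's real "list models" API call.
        return ["llama3.2:3b", "nomic-embed-text"]

    def list_models(self) -> list:
        models = []
        for model_id in self._remote_model_ids():
            if self.allowed_models is not None and model_id not in self.allowed_models:
                continue  # honor the allow-list
            meta = self.embedding_model_metadata.get(model_id)
            if meta is not None:
                # Embedding model: tag it and attach the static metadata.
                models.append(ListedModel(model_id, "embedding", dict(meta)))
            else:
                models.append(ListedModel(model_id))
        return models


if __name__ == "__main__":
    for m in ExampleDynamicListingProvider().list_models():
        print(m)
```

A unit test along the lines of the one referenced above could assert that the embedding model appears with `model_type == "embedding"` and its static metadata, and that setting `allowed_models` drops unlisted ids.
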
agent fix: Fix list_sessions() (#3114) 2025-08-13 07:46:26 -07:00
agents fix: ensure assistant message is followed by tool call message as expected by openai (#3224) 2025-08-22 10:42:03 -07:00
batches feat(batches, completions): add /v1/completions support to /v1/batches (#3309) 2025-09-05 11:59:57 -07:00
files feat(files, s3, expiration): add expires_after support to S3 files provider (#3283) 2025-08-29 16:17:24 -07:00
inference feat: add static embedding metadata to dynamic model listings for providers using OpenAIMixin 2025-09-25 04:56:54 -04:00
nvidia feat: add static embedding metadata to dynamic model listings for providers using OpenAIMixin 2025-09-25 04:56:54 -04:00
utils feat: add static embedding metadata to dynamic model listings for providers using OpenAIMixin 2025-09-25 04:56:54 -04:00
vector_io feat: migrate to FIPS-validated cryptographic algorithms (#3423) 2025-09-12 11:18:19 +02:00
test_bedrock.py fix: AWS Bedrock inference profile ID conversion for region-specific endpoints (#3386) 2025-09-11 11:41:53 +02:00
test_configs.py chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00