llama-stack-mirror/llama_stack/providers/utils/inference
Latest commit: 2025-07-24 16:41:17 -04:00
__init__.py              chore: enable pyupgrade fixes (#1806)  2025-05-01 14:23:50 -07:00
embedding_mixin.py       feat(registry): more flexible model lookup (#2859)  2025-07-22 15:22:48 -07:00
inference_store.py       feat: support auth attributes in inference/responses stores (#2389)  2025-06-20 10:24:45 -07:00
litellm_openai_mixin.py  feat: implement dynamic model detection support for inference providers using litellm  2025-07-24 09:49:32 -04:00
model_registry.py        feat(registry): make the Stack query providers for model listing (#2862)  2025-07-24 10:39:53 -07:00
openai_compat.py         chore: remove nested imports (#2515)  2025-06-26 08:01:05 +05:30
openai_mixin.py          chore: create OpenAIMixin for inference providers with an OpenAI-compat API that need to implement openai_* methods (#2835)  2025-07-23 06:49:40 -04:00
prompt_adapter.py        fix(ollama): Download remote image URLs for Ollama (#2551)  2025-06-30 20:36:11 +05:30
stream_utils.py          feat: drop python 3.10 support (#2469)  2025-06-19 12:07:14 +05:30