llama-stack-mirror/llama_stack/providers/utils/inference
Last updated: 2025-07-31 15:37:04 -05:00
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `embedding_mixin.py` | feat(registry): more flexible model lookup (#2859) | 2025-07-22 15:22:48 -07:00 |
| `inference_store.py` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `litellm_openai_mixin.py` | feat: Add clear error message when API key is missing (#2992) | 2025-07-31 16:33:16 -04:00 |
| `model_registry.py` | feat(starter)!: simplify starter distro; litellm model registry changes (#2916) | 2025-07-25 15:02:04 -07:00 |
| `openai_compat.py` | move SambaNovaInferenceAdapter from bespoke convert_message_to_openai_dict_with_b64_images to common convert_message_to_openai_dict_new | 2025-07-31 15:37:04 -05:00 |
| `openai_mixin.py` | chore: create OpenAIMixin for inference providers with an OpenAI-compat API that need to implement openai_* methods (#2835) | 2025-07-23 06:49:40 -04:00 |
| `prompt_adapter.py` | fix(ollama): Download remote image URLs for Ollama (#2551) | 2025-06-30 20:36:11 +05:30 |
| `stream_utils.py` | feat: drop python 3.10 support (#2469) | 2025-06-19 12:07:14 +05:30 |
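For context on the mixin pattern the `openai_mixin.py` and `litellm_openai_mixin.py` entries refer to (shared openai_* method implementations for providers that expose an OpenAI-compatible endpoint), here is a minimal, hypothetical sketch. The class and method names (`OpenAICompatMixin`, `get_api_key`, `get_base_url`, `ExampleProvider`) are illustrative assumptions, not the actual llama_stack API; only the `openai` SDK calls are real.

```python
# Hypothetical sketch of an OpenAI-compat mixin; names below are illustrative
# and are NOT taken from the llama_stack source.
from abc import ABC, abstractmethod

from openai import AsyncOpenAI


class OpenAICompatMixin(ABC):
    """Provides openai_* style methods for providers whose backend speaks the
    OpenAI-compatible HTTP API; the concrete provider only supplies credentials
    and the endpoint URL."""

    @abstractmethod
    def get_api_key(self) -> str:
        """Return the provider's API key (e.g. from config or environment)."""

    @abstractmethod
    def get_base_url(self) -> str:
        """Return the provider's OpenAI-compatible base URL."""

    @property
    def _client(self) -> AsyncOpenAI:
        # Build an SDK client from the concrete provider's settings.
        return AsyncOpenAI(api_key=self.get_api_key(), base_url=self.get_base_url())

    async def openai_chat_completion(self, model: str, messages: list[dict], **kwargs):
        # Delegate the chat completion call to the OpenAI SDK; the mixin owns
        # the openai_* surface, the subclass owns configuration.
        return await self._client.chat.completions.create(
            model=model, messages=messages, **kwargs
        )


class ExampleProvider(OpenAICompatMixin):
    """Toy provider wiring the mixin to a made-up endpoint."""

    def get_api_key(self) -> str:
        return "sk-example"

    def get_base_url(self) -> str:
        return "https://api.example.com/v1"
```

Under this pattern, an adapter such as the one touched in the `openai_compat.py` commit above would only need to convert its messages into OpenAI-style dicts and inherit the shared request plumbing, rather than re-implementing each openai_* method per provider.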